return2ozma@lemmy.world to politics@lemmy.world · 19 hours ago

OpenAI wants to stop ChatGPT from validating users' political views (arstechnica.com)

cross-posted to: technology@lemmy.world
Sandbar_Trekker@lemmy.today · 12 hours ago

Close, but not always. It will give out the answer based on the data it's been trained on. There is also a bit of randomization with a "seed". So, in general it will give out the most average answer, but that seed can occasionally direct it down the path of a less common answer.
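The seed-and-sampling behavior described above can be sketched with a toy example. This is a hypothetical illustration of temperature sampling over token probabilities, not OpenAI's actual implementation; the function name, logits, and vocabulary are all made up for demonstration.

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Pick one token index from raw logits, optionally with a fixed seed."""
    rng = random.Random(seed)
    # Softmax with temperature: higher temperature flattens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted draw: the most likely ("most average") token wins most often,
    # but the random seed can occasionally steer us to a rarer one.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy vocabulary where token 0 is the dominant "average" answer.
logits = [3.0, 1.0, 0.5]
picks = [sample_token(logits, seed=s) for s in range(1000)]
print(picks.count(0) / len(picks))  # token 0 dominates, but not every time
```

Running with the same seed reproduces the same pick, which is roughly what API "seed" parameters are for; across many seeds the common answer wins most of the time while less common ones still appear.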
SpikesOtherDog@ani.social · 11 hours ago

Fair. I tell a lot of lies-to-children. It helps when talking to end users.