Pro@programming.dev to Technology@lemmy.world · English · 15 days ago
AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots. (www.cmu.edu)
Passerby6497@lemmy.world · 15 days ago
Ah, well then, if he tells the bot not to hallucinate and to validate its output, there’s no reason not to trust the output. After all, you told the bot not to, and we all know that self-regulation works without issue all of the time.