• tal@lemmy.today · 7 hours ago

    An April MIT study found that AI large language models (LLMs) encourage delusional thinking, likely due to their tendency to flatter and agree with users rather than push back or provide objective information.

    If all it takes to get someone to believe something is to flatter them and agree with them, it does kind of explain how people manage to sell others on all kinds of crazy things.