• MystikIncarnate@lemmy.ca · 1 day ago

    Nope. There’s no cognition, no cognitive functions at all in LLMs. They are incapable of understanding actions, reactions, consequences and outcomes.

    Literally all it’s doing is handing you a loosely assembled string of words that correlate with the symbols (ideas/intents) contained in the prompt you entered, chosen because they scored highly against that context.

    Literally that’s fucking it.
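
    To make that concrete, here’s a deliberately tiny Python sketch of the loop being described: pick the next word in proportion to how strongly it co-occurred with the current context. The word table and probabilities here are invented for illustration; a real LLM does this over tens of thousands of tokens with learned weights, but the step is the same pick-by-correlation move, not comprehension.

        import random

        # Invented toy probability table: "how often did word X follow this
        # context in the training text." Real models learn this from data.
        next_word_probs = {
            "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "moon": 0.1},
            "the cat sat on the mat": {"and": 0.6, "quietly": 0.4},
        }

        def generate(prompt, steps=2):
            text = prompt
            for _ in range(steps):
                probs = next_word_probs.get(text)
                if not probs:
                    break
                words, weights = zip(*probs.items())
                # Sample the next word by correlation score, nothing more.
                text += " " + random.choices(words, weights=weights)[0]
            return text

        print(generate("the cat sat on the"))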

    You’re not “talking with an AI”; you’re interacting with an LLM that is an amalgam of the collective responses to every inquiry, statement, reply, question, etc. that is accessible on the public Internet. It’s a dilution of the “intelligence” that can be derived from everything everyone on the Internet has ever said, and of what that cacophony of mixed messages would, on average, reply with.

    The reason LLMs have gotten better is that they’ve absorbed more data than previous attempts, and some of the outlying extremist messages have been carefully pruned from the training library, so the resulting model trends toward the median person’s predicted reply instead of weighting everyone’s voice evenly.
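
    A rough way to picture that pruning effect, with made-up numbers: treat each training reply as a value on some scale, drop the extremes, and the average lands right on the median voice.

        # Made-up "replies" on an arbitrary scale; 95 and -80 stand in for the
        # extremist outliers that get pruned from the training set.
        replies = [2, 3, 3, 4, 4, 5, 5, 6, 95, -80]

        def average(values):
            return sum(values) / len(values)

        print(average(replies))                           # every voice weighted evenly: 4.7
        pruned = [r for r in replies if abs(r - 4) <= 3]  # crude outlier pruning
        print(average(pruned))                            # trends toward the median reply: 4.0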

    It only seems like “AI” because the responses are derived from real, legitimate human replies that were posted somewhere on the Internet at some point in time.