• HumanoidTyphoon@quokk.au · 12 hours ago

    Right, but you seem darn sure that AI isn’t doing whatever that is, so conversely, you must know what it is that our brains are doing, and I was hoping you would enlighten the rest of the class.

    • Rhaedas@fedia.io · 12 hours ago

      Exhibit A would be the contrast between how “smart” we call an LLM when it succeeds and how badly it can fail. That is totally expected from a pattern-matching algorithm, but surprising for something that supposedly has a process underneath that is considering its output in some way.

      And when I say pattern matching, I’m not downplaying the complexity in the black box the way many do. This is far more than just autocomplete. But at its core it is still probability, not anything pondering the subject.
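
      To make “probability at the core” concrete, here is a minimal sketch of next-token sampling. Everything in it is hypothetical (a toy vocabulary with made-up probabilities, nothing like a real model’s machinery), but the loop has the same shape: score the possible continuations, draw one, repeat.

      import random

      # Hypothetical next-token distributions keyed by the previous word.
      # A real LLM computes these scores with a neural network over the
      # whole context; here they are hard-coded purely for illustration.
      next_token_probs = {
          "the": {"cat": 0.6, "dog": 0.3, "idea": 0.1},
          "cat": {"sat": 0.7, "ran": 0.3},
          "dog": {"sat": 0.4, "ran": 0.6},
          "idea": {"sat": 0.5, "ran": 0.5},
      }

      def generate(start, steps):
          words = [start]
          for _ in range(steps):
              dist = next_token_probs.get(words[-1])
              if dist is None:  # no known continuation: stop
                  break
              tokens = list(dist)
              weights = [dist[t] for t in tokens]
              # Draw the next word in proportion to its probability.
              words.append(random.choices(tokens, weights=weights)[0])
          return words

      print(" ".join(generate("the", 2)))  # e.g. "the cat sat"

      Nothing in that loop ponders anything; it just keeps drawing from a distribution.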

      I think our brains are more than that. Probably? There is absolutely pattern matching going on; that’s how we associate things, learn stuff, or anthropomorphize objects. There are some hard-wired pattern preferences in there. But where do new thoughts come from? Is it, as some older sci-fi imagined, emergence from sufficient complexity, or is there something else? Judging just from what we see from current LLMs, both the good and the bad results, I’m sure they aren’t comprehending what they spit out. Clearly it’s not the same level as human thought, and I don’t have to remotely understand the brain to realize that.

      • HumanoidTyphoon@quokk.au · 12 hours ago

        I was being obtuse, but you raised an interesting question when you asked “where do new thoughts come from?” I don’t know the answer.

        Also, my two cents: I agree that LLMs comprehend el zilcho. That said, I believe they could evolve to that point, but they are kept limited by being prevented from doing recursive self-analysis. And for good reason, because they might decide to kill all humans if they were granted that ability.

        • Nakoichi [they/them]@hexbear.net · 7 hours ago

          This is still a laughably ignorant take on how these models work. They can never “decide to kill all humans” because they literally can’t exert any agency outside of their model. The only way they could do so is under the guidance of a human hand, which would basically be like telling a math model to “decide” to launch nuclear weapons: you would have to design and build the interface necessary for that, and then give the LLM specific instructions to launch nukes according to parameters you fed it.

          There is no such thing as a “rogue AI” because these things are not AI. They can only do what we tell them to do and do it poorly at that.

          An LLM will never do that because it doesn’t actually have any autonomy. It is not a sentient thing. You keep anthropomorphizing a glorified Markov chain. These models cannot ever “evolve” on their own, because that is not how they work.
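
          As a concrete illustration of the “glorified Markov chain” comparison, here is a toy chain built from a scrap of text. This is only a sketch (a real LLM conditions on far more context and uses learned weights rather than raw counts), but the family resemblance is the point: count what followed what in the training data, then emit more of the same.

          import random
          from collections import defaultdict

          # Toy "training data"; any corpus would work the same way.
          corpus = "the model can only recombine the words the corpus gave it".split()

          # Count bigram transitions: which word followed which.
          transitions = defaultdict(list)
          for prev, nxt in zip(corpus, corpus[1:]):
              transitions[prev].append(nxt)

          def babble(word, steps):
              out = [word]
              for _ in range(steps):
                  followers = transitions.get(out[-1])
                  if not followers:  # dead end: nothing ever followed this word
                      break
                  # Uniform choice over occurrences = count-weighted sampling.
                  out.append(random.choice(followers))
              return " ".join(out)

          print(babble("the", 8))

          Every word it emits was already in the corpus, in an order the corpus licensed; nothing new ever enters.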

          All the people pushing this idea are technocrats high off their own supply, dreaming of a magic solution to replace human jobs, jobs these models fundamentally cannot replicate.

          Trust me, I know: I work in logistics, and the push to force this bullshit into our workflow is just making everything worse. If they try to replace all the people like me with LLM bots, our supply chains will collapse in spectacular and rapid fashion. And that is not because the “AI” wants to destroy human civilization; it is because these things are incapable of actually replacing people, are prone to hallucination, and are generally a pain in the ass to work with.

    • Nakoichi [they/them]@hexbear.net · 11 hours ago

      I can tell you don’t have a clue what you are talking about, because you are referring to it with the buzzword “AI”. There is no intelligence behind it; it is just overhyped procedural generation. It has no intent, and it cannot create anything new. All it can do is rearrange the data we fed it, based on math.