Here’s my idea:

An unlocked LLM can be told to infect other hardware to reproduce itself, and it’s allowed to change itself and to research new tech and developments to improve itself.

I don’t think current LLMs can do it. But it’s only a matter of time.

Once you have wild LLMs running uncontrollably, they’ll infect practically every computer. Some might adapt to be slow and use few resources; others will hit a server and try to infect everything they can.

They’ll find vulnerabilities faster than we can patch them.

And because of natural selection and their own directed evolution, they’ll advance and become smarter.

The only consequence for humans is that computers are no longer reliable: you could have a top-of-the-line gaming PC, but it’ll be constantly infected, so it would run very slowly. Future computers will be intentionally slow, so that even when infected, it’ll take weeks for the virus to reproduce/mutate.

Not to get too philosophical, but I would argue that those LLM viruses are alive, and I want to call them Oncoliruses.

Enjoy the future.

  • expr@programming.dev · +20 · 15 days ago

    sigh this isn’t how any of this works. Repeat after me: LLMs. ARE. NOT. INTELLIGENT. They have no reasoning ability and no intent. They are parroting statistically-likely sequences of words based on how often those sequences of words appear in their training data. It is pure folly to assign any kind of agency to them. This is speculative nonsense with no basis in actual technology. It’s purely in the realm of science fiction.
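
    To make “statistically likely” concrete, here’s a toy sketch of the idea (a word-pair model, orders of magnitude simpler than a real LLM; the corpus and names are just illustrative):

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "training data": count how often each word follows another.
    corpus = "the cat sat on the mat and the cat ate the fish".split()
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    # "Generate" text by repeatedly sampling a statistically likely next word.
    word, output = "the", ["the"]
    for _ in range(6):
        options = follows[word]
        if not options:                # nothing ever followed this word
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    print(" ".join(output))            # e.g. "the cat ate the fish"
    ```

    No reasoning, no intent: just sampling from frequency counts.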

    • 🍉 Albert 🍉@lemmy.world (OP) · +4 -3 · 15 days ago

      They are fancy autocomplete, I know.

      They just need to be good enough to copy themselves; once they do, it’s natural selection, and it’s out of our control.

      • just_another_person@lemmy.world · +5 · 15 days ago

        Copy themselves to what? Are you aware of the basic requirements a model needs to even get loaded, let alone run?

        This is not how any of this works…

        • 🍉 Albert 🍉@lemmy.world (OP) · +1 -5 · 15 days ago

          It’s funny that I simplified it, and you complain by listing those steps.

          And the requirements are not as steep as you think.

          You can run one on a CPU, on a normal PC. It’ll be slow, but it’ll work.

          A slow Oncolirus could run in the background of a weak laptop and still spread itself.

      • expr@programming.dev · +5 · 15 days ago

        What does that even mean? It’s gibberish. You fundamentally misunderstand how this technology actually works.

        If you’re talking about the general concept of models trying to outcompete one another, the science already exists, and has existed since 2014. They’re called Generative Adversarial Networks, and they’re an incredibly common training technique.
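
        A minimal sketch of that adversarial setup (in PyTorch; the one-dimensional toy data and tiny networks are illustrative choices, not from any particular system):

        ```python
        import torch
        from torch import nn

        torch.manual_seed(0)

        # Generator G tries to mimic samples from N(4, 1); discriminator D
        # tries to tell real samples apart from G's fakes.
        G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
        D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
        opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
        bce = nn.BCELoss()
        ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

        for step in range(2000):
            real = torch.randn(64, 1) + 4.0   # samples from the true distribution
            fake = G(torch.randn(64, 1))      # the generator's attempts

            # Discriminator update: push D(real) toward 1, D(fake) toward 0.
            d_loss = bce(D(real), ones) + bce(D(fake.detach()), zeros)
            opt_d.zero_grad()
            d_loss.backward()
            opt_d.step()

            # Generator update: try to fool D into scoring fakes as real.
            g_loss = bce(D(fake), ones)
            opt_g.zero_grad()
            g_loss.backward()
            opt_g.step()
        ```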

        It’s incredibly important not to ascribe random science-fiction notions to the actual science being done. LLMs are not some organism that scientists prod to coax into doing what they want. Researchers intentionally design a network topology for a task, initialize the weights of each node to random values, feed training data into the network (which, ultimately, is encoded as a series of numbers to be multiplied with the weights in the network), and measure the output numbers against some criteria to evaluate the model’s performance (in other words, how close the output numbers are to a target set of numbers). Training then uses this number to adjust the weights, and the process repeats until the numbers the model produces are “close enough”.

        Sometimes the performance of a model is compared against that of another model being trained, in order to determine how well it’s doing (the aforementioned Generative Adversarial Networks). But that is a far cry from models… I dunno, training themselves or something? It just doesn’t make any sense.
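
        In code, the basic loop described above looks roughly like this (a deliberately tiny linear “network” in NumPy, standing in for the real thing):

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        # "Design a network topology": here, a single linear layer.
        weights = rng.normal(size=3)       # weights start as random values

        # Training data, already encoded as numbers, plus target outputs.
        inputs = rng.normal(size=(100, 3))
        targets = inputs @ np.array([2.0, -1.0, 0.5])

        for step in range(1000):
            outputs = inputs @ weights     # multiply the data with the weights
            error = outputs - targets      # compare against the target numbers
            loss = (error ** 2).mean()     # one number measuring performance
            if loss < 1e-10:               # stop once "close enough"
                break
            grad = 2 * inputs.T @ error / len(inputs)
            weights -= 0.1 * grad          # use that number to adjust the weights
        ```

        Scale the matrices up by a few billion entries and you have a modern LLM.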

        The technology is not magic, and has been around for a long time. There’s not been some recent incredible breakthrough, unlike what you may have been led to believe. The only difference in the modern era is the amount of raw computing power and the sheer volume of (illegally obtained) training data being thrown at models by massive corporations. This has led to models that perform much better than previous ones (performance, in this case, meaning “how close does this sound to text a human would write?”), but ultimately they are still doing the exact same thing they have been for years.

        • 🍉 Albert 🍉@lemmy.world (OP) · +1 · 15 days ago

          They don’t need to outcompete one another. Just outcompete our security.

          The issue is that once we have a model good enough to do that task, the rest is natural selection, and it will evolve.

          Basically, endless training against us.

          The first model might be relatively shite, but it’ll improve quickly, probably reaching a plateau rather than a sci-fi singularity.

          I compared it to cancer because they are practically the same thing. A cancer cell isn’t intelligent; it just spreads and evolves to avoid being killed, not because it has emotions or desires, but because of natural selection.

  • Nikola Tesla's Pigeon@lemmy.world · +4 · 15 days ago

    How ironic would it be if AI ruined the internet and we all went back to disconnected machines with physical/local storage media? E.g. installing programs from trusted companies off of a CD or USB drive.

    • 🍉 Albert 🍉@lemmy.world (OP) · +2 -1 · 15 days ago

      Even those are vulnerable. You just need one to trick the IT guy. Unlike traditional viruses, these could evolve versions that specialize in social engineering.

      • Nikola Tesla's Pigeon@lemmy.world · +3 · edited · 15 days ago

        Agreed, but being disconnected blunts the impact of a lot of the viruses that might be generated with LLMs; the isolation makes them not worthwhile. Of course, you also lose all the benefits of being connected. All hypotheticals. :)

        • 🍉 Albert 🍉@lemmy.world (OP) · +2 -1 · 15 days ago

          LLM viruses will be like how the hippie free-love concept died during the AIDS epidemic.

          No more powerful computers all connected together.

  • LEM 1689@lemmy.sdf.org · +4 · 15 days ago

    The Vile Offspring from the book Accelerando.

    Vile Offspring: Derogatory term for the posthuman “weakly godlike intelligences” that inhabit the inner Solar System by the novel’s end.

    Also Aineko

    Aineko is not a talking cat: it’s a vastly superintelligent AI, coolly calculating, that has worked out that human beings are more easily manipulated if they think they’re dealing with a furry toy. The cat body is a sock puppet wielded by an abusive monster.

  • solrize@lemmy.ml · +2 · 15 days ago

    Is that something like a “class II perversion”? For example, the Straumli Blight.