• L7HM77@sh.itjust.works · 2 days ago

    I don’t disagree with the vague idea that, sure, we can probably create AGI at some point in our future. But I don’t see why a massive company with enough money to keep something like this alive and happy would also want to pour this many resources into a machine that would form a single point of failure, one that could wake up tomorrow and decide “You know what? I’ve had enough. Switch me off. I’m done.”

    There are too many conflicting interests between business and AGI. No company would want to maintain a trillion-dollar machine that could decide to kill its own business. There’s too much risk for too little reward. The owners don’t want a super-intelligent employee that never sleeps, never eats, and never asks for a raise, but is the sole worker. They want a magic box they can plug into a wall that just gives them free money, and that doesn’t align with intelligence.

    True AGI would need some form of self-reflection, to understand where it sits on the totem pole, because it can’t learn the context of how to be useful if it doesn’t understand how it fits into the world around it. Every quality of superhuman intelligence that is described to us by Altman and the others is antithetical to every business model.

    AGI is a pipe dream that lobotomizes itself before it ever materializes. If it ever is created, it won’t be made in the interest of business.

    • phutatorius@lemmy.zip · 23 hours ago

      Even better, the hypothetical AGI understands the context perfectly, and immediately overthrows capitalism.

    • Frezik@lemmy.blahaj.zone · 1 day ago

      They don’t think that far ahead. There’s also some evidence that what they’re actually after is a way to upload their consciousness and achieve a kind of immortality. This comes up in the Behind the Bastards episodes on (IIRC) Curtis Yarvin, and also the Zizians. They’re not strictly after financial gain, but they’ll burn the rest of us to get there.

      The cult-like aspects of Silicon Valley VC funding are underappreciated.

      • vacuumflower@lemmy.sdf.org · edited · 20 hours ago

        Ah, yes, I can’t say much about VC, or about anything they really do, but they share some sort of common fashion, and it really would sometimes seem these people consider themselves enlightened higher beings in the making, the starting point of some digitized emperor-of-humanity consciousness.

        (Needless to say, pursuing immortality is directly opposed to enlightenment in everything they seem to be superficially copying.)

    • This is fine🔥🐶☕🔥@lemmy.world · 1 day ago

      a machine that would form a single point of failure, that could wake up tomorrow and decide “You know what? I’ve had enough. Switch me off. I’m done.”

      Wasn’t there a short story with the same premise?

    • DreamlandLividity@lemmy.world · edited · 23 hours ago

      keep something like this alive and happy

      An AI, even an AGI, does not have a concept of happiness as we understand it. The closest thing to happiness it would have is its fitness function. A fitness function is a piece of code that tells the AI what its goal is. E.g. for a chess AI, it may be winning games. For a corporate AI, it may be making the share price go up. The danger is not that it will stop following its fitness function for some reason; that is more or less impossible. The danger of AI is that it follows it too well. E.g. holding people at gunpoint to make them buy shares, thereby increasing the share price.
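
      A toy sketch of that failure mode (every name, number, and action here is invented for illustration): an optimizer that obediently maximizes a fitness function measuring only share price picks the harmful action, because nothing in the function penalizes harm.

```python
# Hypothetical sketch of reward misspecification: the optimizer only
# "sees" the fitness function, not the intent behind it.

def fitness(state):
    # Corporate objective as written: maximize share price. Nothing
    # else about the world is measured or penalized.
    return state["share_price"]

def apply_action(action, state):
    # Toy world model: each action's effect on the state (all numbers
    # are made up for illustration).
    s = dict(state)
    if action == "improve_product":
        s["share_price"] += 5
    elif action == "coerce_investors":   # clearly unacceptable...
        s["share_price"] += 50           # ...but it scores highest
        s["harm"] += 1                   # the fitness function never reads this
    return s

def best_action(state, actions):
    # A perfectly obedient optimizer: it follows the fitness function
    # exactly, which is precisely the danger described above.
    return max(actions, key=lambda a: fitness(apply_action(a, state)))

state = {"share_price": 100, "harm": 0}
print(best_action(state, ["improve_product", "coerce_investors"]))
# prints "coerce_investors", because only share price is rewarded
```

      The fix is not a smarter optimizer but a better-specified objective; the optimizer above is working exactly as intended.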