• gandalf_der_12te@discuss.tchncs.de

    AI will not threaten humans due to sadism or boredom, but because it takes jobs and makes people jobless.

    When there is lower demand for human labor, the rule of supply and demand says that prices (i.e., wages) for human labor go down.

    The real crisis is one of sinking wages, a lack of social safety nets, and a lack of future prospects for workers. That’s what should actually be discussed.

  • Deathgl0be@lemmy.world

    It’s just a cash grab to take people’s jobs and give them to a chatbot that’s fed Wikipedia’s data on crack.

  • terrific@lemmy.ml

    We’re not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.

    Irrelevant at best, harmful at worst 🤷

    • qt0x40490FDB@lemmy.ml

      How do you know we’re not remotely close to AGI? Do you have any expertise on the issue? And expertise is not “I can download Python libraries and use them” — it is “I can explain the mathematics behind what is going on, and I understand the technical and theoretical challenges.”

      • Eranziel@lemmy.world

        Part of this is a debate about the definition of intelligence and/or consciousness, which I am not qualified to discuss. (I say “discuss” instead of “answer” because there is no agreed-upon answer to either of those.)

        That said, one of the main purposes of AGI would be to learn novel subject matter and to come up with solutions to novel problems. No machine learning tool we have created so far is capable of that, on a fundamental level. They require humans to frame their training data by defining what the success criteria are, or they spit out the statistically likely human-like response based on all of the human-generated content they’ve consumed.

        In short, they cannot understand a concept that humans haven’t yet understood, and can only echo solutions that humans have already tried.

        • qt0x40490FDB@lemmy.ml

          I don’t see why AGI must be conscious, and the fact that you even bring it up makes me think you haven’t thought too hard about any of this.

          When you say “novel answers,” what do you mean? The questions on the IMO had never been posed to any human before the Math Olympiad, and almost all humans cannot answer those questions.

          Why does answering those questions not count as novel? What is a question whose answer you would count as novel, and which you yourself could answer? Presuming that you count yourself as intelligent, that is.

  • Etterra@discuss.online

    Honestly, I welcome our AI overlords. They can’t possibly fuck things up harder than we have.