• Big Tech is lying about some AI risks to shut down competition, a Google Brain cofounder has said.
  • Andrew Ng told The Australian Financial Review that tech leaders hoped to trigger strict regulation.
  • Some large tech companies didn’t want to compete with open source, he added.
  • MudMan@kbin.social
    11 months ago

    Oh, you mean it wasn’t just coincidence that the moment OpenAI, Google and MS were in position, they started caving to oversight and claiming that any further development should be licensed by the government?

    I’m shocked. Shocked, I tell you.

    I mean, I get that many people were just freaking out about it and it’s easy to lose track, but they were not even a little bit subtle about it.

    • Kaidao@lemmy.ml
      11 months ago

      Exactly. This is classic strategy for first movers. Once you hold the market, use legislation to dig your moat.

  • henfredemars@infosec.pub
    11 months ago

    Some days it looks to be a three-way race between AI, climate change, and nuclear weapons proliferation to see who wipes out humanity first.

    But on closer inspection, you see that humans are playing all three sides, and still we are losing.

    • xapr@lemmy.sdf.org
      11 months ago

      AI, climate change, and nuclear weapons proliferation

      One of those is not like the others. Nuclear weapons can wipe out humanity at any minute right now. Climate change has been starting the job of wiping out humanity for a while now. When and how is AI going to wipe out humanity?

      This is not a criticism directed at you, by the way. It’s just a frustration that I keep hearing about AI being a threat to humanity and it just sounds like a far-fetched idea. It almost seems like it’s being used as a way to distract away from much more critically pressing issues like the myriad of environmental issues that we are already deep into, not just climate change. I wonder who would want to distract from those? Oil companies would definitely be number 1 in the list of suspects.

      • P03 Locke@lemmy.dbzer0.com
        11 months ago

        Agreed. This kind of debate is about as pointless as declaring self-driving cars are coming out in 5 years. The tech is way too far behind right now, and it’s not useful to even talk about it until 50 years from now.

        For fuck’s sake, just because a chatbot can pretend it’s sentient doesn’t mean it actually is sentient.

        Some large tech companies didn’t want to compete with open source, he added.

        Here. Here’s the real lead. Google has been scared of open-source AI because they can’t profit off of freely available tools. Now they want to change the narrative so that the government steps in and regulates their competition. Of course, their highly-paid lobbyists will be right there to write plenty of loopholes and exceptions to make sure only the closed-source corpos come out on top.

        Fear. Uncertainty. Doubt. Oldest fucking trick in the book.

      • afraid_of_zombies@lemmy.world
        11 months ago

        I don’t think the oil companies are behind these articles. That is very much wheels-within-wheels thinking that corporations don’t generally invest in. It’s easier to just deny climate change than to distract everyone with something else.

        • xapr@lemmy.sdf.org
          11 months ago

          You’re probably right, but I just wonder where all this AI panic is coming from. There was a story in the Washington Post a few weeks back saying that millions are being invested in university groups that are studying the risks of AI. It just seems that something is afoot that doesn’t look like a natural reaction or overreaction. Perhaps this story itself explains it: the Big Tech companies trying to tamp down competition from startups.

          • afraid_of_zombies@lemmy.world
            11 months ago

            It’s coming from a ratings- and click-based economy. Panic sells, so they sell panic. No one is going to click an article titled “everything mostly fine”.

  • Margot Robbie@lemmy.world
    11 months ago

    Why do you think Sam Altman is always using FUD to push for more AI restrictions? He already got his data collection, so he wants to make sure “Open”AI is the only game in town and prevent any future competition from obtaining the same amount of data they collected.

    Still, I have to give Zuck his credit here: the existence of open models like LLaMa 2 that can be fine-tuned and run locally has really put a damper on OpenAI’s plans.

  • Elias Griffin@lemmy.world
    11 months ago

    “Ng said the idea that AI could wipe out humanity could lead to policy proposals that require licensing of AI”

    Otherwise stated: Pay us to overregulate and we’ll protect you from extinction. A Mafia perspective.

    • ohlaph@lemmy.world
      11 months ago

      Right?! The lines are obvious. They’d only try it if they thought they could get away with it, and they might, actually. But what if?!

  • Uriel238 [all pronouns]@lemmy.blahaj.zone
    11 months ago

    Restricting open source offerings only drives them underground where they will be used with fewer ethical considerations.

    Not that big tech is ethical in its own right.

    Bot fight!

    • Buddahriffic@lemmy.world
      11 months ago

      I don’t think there’s any stopping the “fewer ethical considerations”, banned or not. For each angle of AI that some people want to prevent, there are others who specifically want it.

      Though there is one angle that does affect all of that. The more AI stuff happening in the open, the faster the underground stuff will come along because they can learn from the open stuff. Driving it underground will slow it down, but then you can still have it pop up when it’s ready with less capability to counter it with another AI-based solution.

  • DarkThoughts@kbin.social
    11 months ago

    Enforce privacy friendliness & open source through regulation and all three of those points are likely moot.

  • people_are_cute@lemmy.sdf.org
    11 months ago

    All the biggest tech/IT consulting firms that used to hire engineering college freshers by the millions each year have declared they either won’t be recruiting at all this month, or will only be recruiting for senior positions. If AI were to wipe out humanity, it’d probably be through unemployment-related poverty, thanks to our incompetent policymakers.

    • Socsa@sh.itjust.works
      11 months ago

      A technological revolution that disrupts the current capitalist order by eliminating labor scarcity, ultimately rendering the capital class obsolete, isn’t far off from Marx’s original speculative endgame for historical materialism. All the other stuff beyond that is kind of wishy-washy, but the original point about technological determinism has some legs, imo.

  • JadenSmith@sh.itjust.works
    11 months ago

    Lol how? No seriously, HOW exactly would AI ‘wipe out humanity’???

    All this fear mongering bollocks is laughable at this point, or it should be. Seriously there is no logical pathway to human extinction by using AI and these people need to put the comic books down.
    The only risks AI poses are to traditional working patterns, which have always been exploited to further a numbers game between billionaires (and their assets).

    These people are not scared of losing their livelihoods, but of losing the ability to control yours. Something that makes life easier and more efficient while requiring less work? Time to crack out the whips, I suppose?

    • Plague_Doctor@lemmy.world
      11 months ago

      I mean, I don’t want an AI to do what I do as a job. They don’t have to pay the AI, and in a lot of places food and housing aren’t seen as a human right, but a privilege you’re allowed if you have the money to buy it.

    • BrianTheeBiscuiteer@lemmy.world
      11 months ago

      Working in a corporate environment for 10+ years I can say I’ve never seen a case where large productivity gains turned into the same people producing even more. It’s always fewer people doing the same amount of work. Desired outputs are driven less by efficiency and more by demand.

      Let’s say Ford found a way to produce F150s twice as fast. They’re not going to produce twice as many, they’ll produce the same amount and find a way to pocket the savings without benefiting workers or consumers at all. That’s actually what they’re obligated to do, appease shareholders first.

  • Socsa@sh.itjust.works
    11 months ago

    Ok, you know what? I’m in…

    If all the crazy people in the world collectively stop spending their crazy energy on sky wizards and climate skepticism, and put all of it into AI doomerism instead, I legitimately think the world might be a better place.

        • theneverfox@pawb.social
          11 months ago

          No, it means some of it is nonsense, some of it is eerily accurate, and most of it is in between.

          Sci-fi has not been very accurate with AI… at all. Turns out it’s naturally creative and empathetic, but struggles with math and precision.

          • photonic_sorcerer@lemmy.dbzer0.com
            11 months ago

            Dude, this kind of AI is in its infancy. Give it a few years. You act like you’ve never come across a nascent technology before.

            Besides, it struggles with math? Pff, the base models, sure, but have you tried GPT4 with Code Interpreter? These kinds of problems are easily solved.

          • photonic_sorcerer@lemmy.dbzer0.com
            11 months ago

            …sure. But the chances your grandmother will suddenly sprout wheels are close to zero. The possibility of us all getting buttfucked by some AI with a god complex (other scenarios are available) is very real.

            • DarkThoughts@kbin.social
              11 months ago

              Have you ever talked to generative AI? They’re nothing but glorified chatbots with access to a huge dataset to pull from. They don’t think, they’re not even intelligent, let alone sentient. They don’t even learn on their own without help or guidance.

              • photonic_sorcerer@lemmy.dbzer0.com
                11 months ago

                I mostly agree, but just five years ago we had nothing as sophisticated as these LLMs. They really are useful in many areas of work. I use them constantly.

                Just try and imagine what a few more years of work on these systems could bring.