• megopie@lemmy.blahaj.zone · 64 points · 2 days ago

    The reality is that it’s often stated that generative AI is an inevitability, that regardless of how people feel about it, it’s going to happen and become ubiquitous in every facet of our lives.

    That’s only true if it turns out to be worth it: if the cost of using it is lower than the alternative, and the market willing to buy it stays the same. If the current cloud-hosted tools cease to be massively subsidized and consumers choose to avoid them, then it’s inevitably a historical footnote, like turbine-powered cars, Web 3.0, and LaserDisc.

    Those heavily invested in it, either literally through shares of Nvidia or figuratively through the potential to deskill and shift power away from skilled workers at their companies, don’t want that to be a possibility; they need to prevent consumers from having a choice.

    If it were an inevitability in its own right, if it were just as good and easily substitutable, why would they care about consumers knowing before they paid for it?

    • U7826391786239@lemmy.zip · 48 points · 2 days ago

      Relevant article: https://www.theringer.com/2025/11/04/tech/ai-bubble-burst-popping-explained-collapse-or-not-chatgpt

      AI storytelling is an amalgam of several different narratives, including:

      Inevitability: AI is the future; its eventual supremacy is both imminent and certain, and therefore anyone who doesn’t want to be left behind had better embrace the technology. See Jensen Huang, the CEO of Nvidia, insisting earlier this year that every job in the world will be impacted by AI “immediately.”

      Functionality: AI performs miracles, and the AI products that have been released to the public wildly outperform the products they aim to replace. To believe this requires us to ignore the evidence obtained with our own eyes and ears, which tells us in many cases that the products barely work at all, but it’s the premise of every TV ad you watch out of the corner of your eye during a sports telecast.

      Grandiosity: The world will never be the same; AI will change everything. This is the biggest and most important story AI companies tell, and as with the other two narratives, big tech seems determined to repeat it so insistently that we come to believe it without looking for any evidence that it’s true.

      As far as I can make out, the scheme is essentially: Keep the ship floating for as long as possible, keep inhaling as much capital as possible, and maybe the tech will get somewhere that justifies the absurd valuations, or maybe we’ll worm our way so far into the government that it’ll have to bail us out, or maybe some other paradigm-altering development will fall from the sky. And the way to keep the ship floating is to keep peddling the vision and to seem more confident that the dream is inevitable the less it appears to be coming true.

      Speaking for myself, MS can thank AI for being the thing that finally made me completely ditch Windows after using it for 30+ years.

    • Katana314@lemmy.world · 31 points · 2 days ago

      Don’t forget, “Turns out it was a losing bet to back DEI and Trans people”.

      This is something scared, pathetic, loser, feral, spineless, sociopathic, moronic fascists come up with to try to win over a crowd larger than an elevator: assume the outcome is a foregone conclusion and try to talk around it, or claim it’s already happened.

      Respond directly. “What? That’s ridiculous. I’ve never even seen ANY AI that I liked. Who told you it was going to pervade everything?”

    • WanderingThoughts@europe.pub · 16 points · 2 days ago

      That reminds me of how McDonald’s and other fast food chains are struggling. People figure it’s too expensive for what you get, after years of prices going up and quality going down. They forgot that people only buy if the price and quality are good. Same with AI. It’s all fun if it’s free or dirt cheap, but people don’t buy expensive slop.

    • Riskable@programming.dev · +4 / -1 · 2 days ago

      If the cost of using it is lower than the alternative, and the market willing to buy it stays the same. If the current cloud-hosted tools cease to be massively subsidized and consumers choose to avoid them, then it’s inevitably a historical footnote, like turbine-powered cars, Web 3.0, and LaserDisc.

      There’s another scenario: it turns out that if Big AI doesn’t buy up all the available stock of DRAM and GPUs, running local AI models on your own PC becomes a lot more realistic.

      I run local AI stuff all the time, from image generation to code assistance. My GPU fans spin up for a bit as my PC’s power draw increases, but other than that it’s not much of an impact on anything.

      I believe this is the future: local AI models will eventually take over, just like PCs took over from mainframes. There are a few thresholds that need to be met for that to happen, but it seems inevitable. It’s already happening for image generation, where the local AI tools are so vastly superior to the cloud stuff that there’s no contest.
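      (A minimal sketch of what that local image generation can look like in practice, assuming the Hugging Face diffusers library, PyTorch, and a CUDA-capable GPU; the specific model checkpoint named here is an illustrative choice, not something the commenter specified. The weights download once to the local cache, after which generation runs entirely on your own hardware.)

      # Local image generation sketch -- library and model choices are illustrative assumptions.
      import torch
      from diffusers import StableDiffusionPipeline

      # Downloads the checkpoint once to the local Hugging Face cache, then runs offline.
      pipe = StableDiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-1",
          torch_dtype=torch.float16,  # half precision keeps VRAM usage modest
      )
      pipe = pipe.to("cuda")  # this is the step that spins the GPU fans up

      image = pipe("a turbine-powered concept car on a showroom floor").images[0]
      image.save("local_generation.png")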

    • CatsPajamas@lemmy.dbzer0.com · +1 / -1 · 2 days ago

      MIT, only about two years out from a study saying there is no tangible business benefit to implementing AI, just released a study saying AI is now capable of taking over more than 10% of jobs. Maybe that’s hyperbolic, but you can see that it would require a massssssive amount of cost to make that not be worth it. And we’re still pretty much just starting out.

      • Jayjader@jlai.lu · 1 point · 21 hours ago

        I would love to read that study, as going off of your comment I could easily see it being a case of “more than 10% of jobs are bullshit jobs à la David Graeber so having an « AI » do them wouldn’t meaningfully change things” rather than “more than 10% of what can’t be done by previous automation now can be”.

        • CatsPajamas@lemmy.dbzer0.com · 1 point · 21 hours ago

          Summarized by Gemini

          The study you are referring to was released in late November 2025. It is titled “The Iceberg Index: Measuring Workforce Exposure in the AI Economy.” It was conducted by researchers from MIT and Oak Ridge National Laboratory (ORNL). Here are the key details from the study regarding that “more than ten percent” figure:

          • The Statistic: The study found that existing AI systems (as of late 2025) already have the technical capability to perform the tasks of approximately 11.7% of the U.S. workforce.
          • Economic Impact: This 11.7% equates to roughly $1.2 trillion in annual wages and affects about 17.7 million jobs.
          • The “Iceberg” Metaphor: The study is named “The Iceberg Index” because the researchers argue that visible AI adoption in tech roles (like coding) is just the “tip of the iceberg” (about 2.2%). The larger, hidden mass of the iceberg (the other ~9.5%) consists of routine cognitive and administrative work in other sectors that is already technically automatable but not yet fully visible in layoff stats.
          • Sectors Affected: Unlike previous waves of automation that hit blue-collar work, this study highlights that the jobs most exposed are in finance, healthcare, and professional services. It specifically notes that entry-level pathways in these fields are collapsing as AI takes over the “junior” tasks (like drafting documents or basic data analysis) that used to train new employees.
          • Why it is different from previous studies: Earlier MIT studies (like one from early 2024) focused on economic feasibility (i.e., it might be possible to use AI, but it’s too expensive). This new 2025 study focuses on technical capacity, meaning the AI can do the work right now, and for many of these roles it is already cost-competitive.

          https://www.csail.mit.edu/news/rethinking-ais-impact-mit-csail-study-reveals-economic-limits-job-automation?hl=en-US#%3A~%3Atext=This+important+result+commands+a%2Cthe+barriers+are+too+high.
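          (A quick back-of-the-envelope check, using only the figures quoted above, shows they are at least internally consistent with a U.S. workforce of roughly 151 million; the arithmetic below is my own, not part of the study.)

          # Sanity check of the quoted Iceberg Index figures (values copied from the summary above).
          exposed_share = 0.117      # 11.7% of the U.S. workforce
          exposed_jobs  = 17.7e6     # 17.7 million jobs
          exposed_wages = 1.2e12     # $1.2 trillion in annual wages

          implied_workforce = exposed_jobs / exposed_share   # ~151 million workers
          implied_avg_wage  = exposed_wages / exposed_jobs   # ~$67,800 per exposed job

          print(f"implied workforce: {implied_workforce / 1e6:.0f}M")
          print(f"implied average wage: ${implied_avg_wage:,.0f}")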

          • Jayjader@jlai.lu · 1 point · 17 hours ago

            I’ll be honest, that “Iceberg Index” study doesn’t convince me just yet. It’s entirely built on using LLMs to simulate human beings, and the studies they cite to back up the effectiveness of such an approach are in paid journals that I can’t access. I also can’t figure out how exactly they mapped which jobs could be taken over by LLMs, other than looking at 13k available “tools” (from MCPs to Zapier to OpenTools) and deciding which of the Bureau of Labor’s 923 listed skills they were capable of covering. Technically, they asked an LLM to look at each tool and decide which skills it covers, but they claim they manually reviewed this LLM’s output, so I guess that counts.

            Project Iceberg addresses this gap using Large Population Models to simulate the human–AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 counties and interacting with thousands of AI tools

            from https://iceberg.mit.edu/report.pdf

            “Large Population Models” is https://arxiv.org/abs/2507.09901, which mostly references https://github.com/AgentTorch/AgentTorch, which gives the following as an example of use:

            user_prompt_template = "Your age is {age} {gender},{unemployment_rate} the number of COVID cases is {covid_cases}."
            # Using Langchain to build LLM Agents
            agent_profile = "You are a person living in NYC. Given some info about you and your surroundings, decide your willingness to work. Give answer as a single number between 0 and 1, only."
            

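            (To make concrete what that example implies, here is an illustrative sketch of the pattern, under the assumption that each simulated worker is just an LLM prompted with those strings; call_llm below is a hypothetical stand-in, not AgentTorch’s or Langchain’s actual API.)

            # Illustrative sketch of the "LLM as simulated worker" pattern quoted above.
            # call_llm() is a hypothetical placeholder, not a real AgentTorch/Langchain function.

            agent_profile = ("You are a person living in NYC. Given some info about you and your "
                             "surroundings, decide your willingness to work. Give answer as a "
                             "single number between 0 and 1, only.")
            user_prompt_template = ("Your age is {age} {gender},{unemployment_rate} "
                                    "the number of COVID cases is {covid_cases}.")

            def call_llm(system: str, user: str) -> str:
                """Stub standing in for the actual model call; returns a canned reply."""
                return "0.7"

            def simulate_worker(age, gender, unemployment_rate, covid_cases):
                """Ask the LLM to role-play one worker and parse its 0-1 willingness to work."""
                prompt = user_prompt_template.format(
                    age=age, gender=gender,
                    unemployment_rate=unemployment_rate, covid_cases=covid_cases,
                )
                reply = call_llm(system=agent_profile, user=prompt)
                try:
                    return max(0.0, min(1.0, float(reply.strip())))  # clamp to [0, 1]
                except ValueError:
                    return None  # the model did not return a bare number

            # Scale this up to 151 million such agents and you have the simulated labor market.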
            For someone like me who hasn’t been near academia in 7 years, the whole thing perfectly straddles the line between bleeding-edge research and junk science. Most of the procedure looks like they know what they’re doing, but if the entire thing is built on a faulty premise then there’s no guaranteeing any of their results.

            In any case, none of the authors of the recent study are listed in that article on the previous study, so this isn’t necessarily a case of MIT as a whole changing its tune.

            (The recent article also feels like a DOGE-style ploy to curry favor with the current administration and/or AI corporate circuit, but that is a purely vibes-based assessment I have of the tone and language, not a meaningful critique)