

  • If I were to try and play up his argument, I might appeal to ‘we can shorten the dark times’, Asimov’s Foundation style. But I admit my heart’s not in it. Things will very likely get worse before they get better, partly because I don’t particularly trust anyone with the ability to influence things even a bit to actually use that influence productively.

    I do think this oligarchy has very different tools than those of old: far fewer mercenary assassinations of labor leaders, a very different and weirdly shaped stranglehold on media, and I put lower odds on a hot conflict with strikers.

    I don’t know the history of hubris among oligarchs; were the Tsars or robber barons also excited about (absurd and silly) infrastructure projects explicitly for the masses? I guess there were the Ford towns in the Amazon?


  • I am primarily trying to restate or interpret Schneier’s argument and bring the link into the comments. I’m not sure I’m very good at it.

    He points out a problem which is more or less exactly as you describe it. AI is on a fast track to be exploited by oligarchs and tyrants. He then makes an appeal: we should not let this technology, which is a tool just as you say, be defined by the evil it does. His fear is: “that those with the potential to guide the development of AI and steer its influence on society will view it as a lost cause and sit out that process.”

    That’s the argument, afaict. I think the “so what” is something like: scientists will do experiments and analysis and write papers that inform policy, inspire subversive use, and otherwise use the advantages of the quick to make gains against the strong. See the four action items they call for.




  • Success would lead to AI use that properly accounted for its environmental impact and had to justify its costs. That likely means much AI use stopping, and broader reuse of the models we’ve already invested in (less competition in the space, please).

    The main suggestion in the article is regulation, so I don’t feel particularly understood atm. The practical problem is that, like oil, LLM use can happen locally at a variety of scales. It also provides things that some people want a lot:

    • Additional (poorly done) labor. Sometimes that’s all you need for a project
    • Emulation of proof of work for existing infrastructure (e.g., job applications)
    • Translation and communication customization

    It’s thus extremely difficult to regulate into non-existence globally (and would probably be bad if we did). So effective regulation must include persuasion and support for the folks who would most benefit from using it (or you need a huge enforcement effort, which I think has its own downsides).

    The problem is that even if everyone else leaves the hole, these users will remain. Just like drug use, piracy, or gambling, it’s easier to regulate when we provide a central, easy-to-access service and do harm reduction. To do that, you need a product that meets the needs and mitigates the harms.

    Persuading me I’m directionally wrong would require such evidence as:

    • Everyone does want to leave the hole (hard: I know people who don’t, and anti-AI messaging thus far has been more about signaling than persuasion)
    • That LLM use really can’t be done locally, or can be made prohibitively difficult to do locally (hard: the Internet gives too much data, and making computing time expensive has a lot of downsides)
    • Proposed regulation that would actually be enforceable at reasonable cost (I haven’t thought hard about it; maybe this is easy?)


  • I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that takes novel text generation as proof of humanity.

    So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility. Translation, and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make insurance companies deal with the slop), come immediately to mind.