AI one-percenters seizing power forever is the real doomsday scenario, warns AI godfather::The real risk of AI isn’t that it’ll kill you. It’s that a small group of billionaires will control the tech forever.
Business Insider warning about late stage capitalism feels more than a little ironic.
As does being warned of technological oligarchs monopolizing AI by someone who works for fucking Meta.
Not to mention they're the reason we can all fuck around with llama models, despite all that. Props to Yann and the other Meta AI researchers. Also eager to see future JEPA stuff.
If only openAI was so open.
Today on PBS, we got an insider warning from a lifelong Republican that the fascism got out of hand and is going for full autocracy, even though he'd been pushing through pro-fash policies for the last thirty years.
Everyone thinks The One Ring will be theirs to control.
And in other news, the Leopards Eating Faces Party continues to eat faces, confusing Leopards Eating Faces voters…
Was that the Adam Kinzinger one? It's a low bar, but I'll give him a modicum of credit for saying his vote against the first impeachment was cowardice and that he'd vote for Biden in 2024 if Trump is the Republican nominee. It doesn't totally feel like a lesson learned when he still considers himself a Republican, though.
They should rename themselves to Business Balls Deep Insider.
Business Insider is run by college students making minimum wage.
That’s how they got inside.
This is why we need large-scale open-source AI efforts, even if it scares the everliving shit out of me.
AI safety experts are worried that capitalists will be too eager to get AGI first and will discard caution (friendly AI principles) for mad science.
And I, for one, welcome our new robot overlords!
Any AI safety expert who believes these oligarchs are going to get AGI and not some monkey's paw is also drinking the Kool-Aid.
If we have to choose between corporations or the government ruling us with AI I think I’m gonna just take a bullet.
Might be one of the key democratizing forces us plebs will have…I do suggest people try out some of the open solutions out there already just to have that skill in their back pockets (e.g. GPT4All).
I’ve been thinking about how to do that. The code for most AI is pretty basic and uninteresting. It’s mostly modifying the input for something usable. Companies could open source their entire code base without letting anything important out.
The dataset is the real problem. Say you want to classify fruit to check if it’s ripe enough for harvesting. You’ll need a whole lot of pictures of your preferred fruit where it’s both ripe and not ripe. You’ll want people who know the fruit to classify those images, and then you can feed it into a model. It’s a lot of work, and needs to attract a bunch of people to volunteer their time. Largely the sort of people who haven’t traditionally been a part of open source software.
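That label-then-train loop can be sketched with a toy example. This is purely illustrative: it assumes each image has already been reduced to a couple of color features (a stand-in for real fruit photos), fakes the "volunteer labels" with synthetic data, and picks scikit-learn as the library, none of which comes from the thread:

```python
# Toy sketch of the "collect labeled images, then train a classifier" workflow.
# Real work would use actual fruit photos labeled by people who know the fruit;
# here we fabricate simple color features so the example is self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each photo is reduced to (mean_red, mean_green); ripe fruit skews red.
ripe = rng.normal(loc=[0.8, 0.3], scale=0.1, size=(200, 2))
unripe = rng.normal(loc=[0.3, 0.8], scale=0.1, size=(200, 2))
X = np.vstack([ripe, unripe])
y = np.array([1] * 200 + [0] * 200)  # 1 = ripe, as judged by the labelers

# Hold out some labeled examples to check the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"holdout accuracy: {clf.score(X_test, y_test):.2f}")
```

The model itself is the boring part, as the comment above says; the expensive part is getting a few hundred honest labels per class, which is exactly the volunteer effort being discussed.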
If we set up some kind of blockchain to just pay people to honestly differentiate between pictures, it could be done.
Nah, using reCAPTCHA is the way to get free labor for that training.
There is no problem in this world so serious that someone will not suggest blockchain as a potential solution.
You're being hyperbolic and silly. Find me a solution to mass shootings or racism using blockchain.
deleted by creator
Me running various models that outperform gpt or bard just fine on a 4080: 👌👍
Yann LeCun the Godfather of AI? He feels more like a Fredo to me.
This is the best summary I could come up with:
He named OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei in a lengthy weekend post on X.
“Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment,” LeCun wrote, referring to these founders’ role in shaping regulatory conversations about AI safety.
That’s significant since, as almost everyone who matters in tech agrees, AI is the biggest development in technology since the microchip or the internet.
Altman, Hassabis, and Amodei did not immediately respond to Insider’s request for comment.
Thanks to @RishiSunak & @vonderleyen for realizing that AI xrisk arguments from Turing, Hinton, Bengio, Russell, Altman, Hassabis & Amodei can’t be refuted with snark and corporate lobbying alone.
In March, more than 1,000 tech leaders, including Elon Musk, Altman, Hassabis, and Amodei, signed a letter calling for a minimum six-month pause on AI development.
Those risks include worker exploitation and data theft that generates profit for “a handful of entities,” according to the Distributed AI Research Institute (DAIR).
The original article contains 768 words, the summary contains 163 words. Saved 79%. I’m a bot and I’m open source!
No one can fucking run it locally right now; only people with 1%er money can run it.
Uhh what? You can totally run LLMs locally.
Inference, yes. Training, no. Derived models don’t count.
I have Llama 2 running on localhost, you need a fairly powerful GPU but it can totally be done.
I’ve run one of the smaller models on my i7-3770 with no GPU acceleration. It is painfully slow but not unusably slow.
To get the same level as something like ChatGPT?