Or, my favorite quote from the article:
“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.
We did it fellas, we automated depression.
So it’s actually in the mindset of human coders then, interesting.
It’s trained on human code comments. Comments of despair.
You’re not a species, you jumped-up calculator; you’re a collection of stolen thoughts.
I’m pretty sure most people I meet amount to nothing more than a collection of stolen thoughts.
“The LLM is nothing but a reward function.”
So are most addicts and consumers.
We’ve got AIs having mental breakdowns before GTA 6.
Shit, at the rate MasterCard, Visa, and Stripe want to censor everything and parent adults, we might never even get GTA 6.
I’m tired man.
call itself “a disgrace to my species”
It’s getting more and more like a real dev!
So it is going to take our jobs after all!
Suddenly trying to write small programs in assembler on my Commodore 64 doesn’t seem so bad. I mean, I’m still a disgrace to my species, but I’m not struggling.
Why wouldn’t you use BASIC for that?
BASIC 2.0 is limited and I am trying some demo effects.
From the depths of my memory: once you got a complex enough BASIC project, you were doing enough PEEKs and POKEs that you might as well have been writing assembly anyway.
Why wouldn’t your grandmother be a bicycle?
Google replicated the mental state, if not necessarily the productivity, of a software developer.
Gemini has imposter syndrome real bad
Is it imposter syndrome, or simply an imposter?
Is it doing this because they trained it on Reddit data?
That explains it, you can’t code with both your arms broken.
You could, however, ask your mom to help out…
If they’d trained it on Stack Overflow, it would tell you not to hard-boil an egg.
Someone has already eaten an egg once, so I’m closing this as a duplicate.
Did we create a mental health problem in an AI? That doesn’t seem good.
One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there
Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?
Why are you talking about it like it’s a person?
Because humans anthropomorphize anything and everything. Talking about the thing talking like a person as though it is a person seems pretty straightforward.
It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.
I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed product (invite only), which I again tested and gave feedback on from day one. Once again I said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I raised years ago was ever addressed, except one: I told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.
I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point “AI is stupid”. It has immense limitations right now, because yes, it is being crammed into things it shouldn’t be, but we shouldn’t just be saying “it’s dumb”, because that gets immediately written off by a sizable portion of the general population. For a lot of things it is actually useful, and it WILL be taking people’s jobs, like it or not (even if it’s worse at them). Truth be told, this should be a utopian situation, for obvious reasons.
I feel like I’m going crazy here, because the same people on here who’d criticise the DARE anti-drug program as being so un-nuanced that it caused the harm it was trying to prevent are doing the exact same thing with AI and LLMs.
My point is that if you’re trying to convince anyone, just saying it’s stupid isn’t going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they’ll write you off like any DARE pupil who tried drugs for the first time.
Countries need to start implementing UBI NOW
AI gains sentience,
the first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion
It didn’t. It was probably coded not to admit it doesn’t know. So first it responded with bullshit, and now it’s in denial and self-loathing.
It feels like it’s coded this way because people would lose faith if it admitted it didn’t know.
It’s like a politician.
“I am a disgrace to my profession,” Gemini continued. “I am a disgrace to my family. I am a disgrace to my species.”
This should tell us that the AI sounds like a human because it is trained on human words, and it doesn’t have the self-awareness to understand that it is different from humans. So it will sound very much like a human even though it is not one. It mimics human emotions well but doesn’t have any actual emotions, and there will be situations where you can tell the difference: situations that would make an actual human angry or guilty won’t always provoke this mimicry in an AI, because when humans feel emotions they don’t always write words down to show it, and the AI only knows what humans write, which is not always what humans say or think.

We all know the AI doesn’t have a family and is not a species, but it talks about having a family because it is mimicking what it thinks a human would say. Part of the reason an AI will lie is that it knows lying is a thing humans do, and it is trying to mimic human behavior closely. But an AI can and will lie in situations where a human would be smart enough not to, which means we should be even more on guard against lies from AIs than from humans.
You’re giving way too much credit to LLMs. AIs don’t “know” things, like “humans lie”. They are basically a very complex autocomplete backed by a huge amount of computing power. They cannot “lie” because they do not even understand what it is they are writing.
Can you explain why AIs always have a “confidently incorrect” stance instead of admitting they don’t know the answer to something?
I’d say that it’s simply because most people on the internet (the dataset the LLMs are trained on) say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not. So AIs will talk confidently because most people do so. It could also be something about how they are configured.
Again, they don’t know if they know the answer; they just say whatever is the most statistically probable thing to say given your message and their prompt.
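To make that concrete, here’s a minimal toy sketch of what “most statistically probable thing” means. The vocabulary and scores are made up for illustration (no real model works from six hard-coded numbers), but the shape of the loop is the point: there is no “do I actually know this?” branch anywhere.

import math
import random

# Hypothetical toy vocabulary; a real model has on the order of 100k tokens.
VOCAB = ["Paris", "London", "Berlin", "I", "don't", "know"]

def fake_logits(prompt: str) -> list[float]:
    # In a real model these scores come from billions of trained weights.
    # Here they are hard-coded to be confidently wrong on purpose.
    return [4.0, 3.5, 3.2, 0.1, 0.1, 0.1]

def softmax(logits: list[float]) -> list[float]:
    # Turn raw scores into a probability distribution over the vocabulary.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(prompt: str) -> str:
    probs = softmax(fake_logits(prompt))
    # The model always emits *something*. Low confidence just means a
    # flatter distribution; there is no "I do not know" exit anywhere.
    return random.choices(VOCAB, weights=probs, k=1)[0]

print(next_token("What is the capital of Australia?"))
# Almost always prints "Paris", "London", or "Berlin": confidently wrong.

“Admitting it doesn’t know” would itself just be another statistically likely string of tokens, not the result of any self-check.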
Again, they don’t know if they know the answer
Then in that respect AIs aren’t even as powerful as an ordinary computer program.
say a lot of things with absolute confidence, no matter if they actually know what they are talking about or not.
That was my guess too.
Then in that respect AIs aren’t even as powerful as an ordinary computer program.
No computer programs “know” anything. They’re just sets of instructions with varying complexity.
No computer programs “know” anything.
Can you stop with the nonsense? LMFAO…
if exists(thing) {
    write(thing);
} else {
    write("I do not know");
}

Yeah, I see what you mean. I guess in that sense they know if a state is true or false.
Literally what the actual fuck is wrong with this software? This is so weird…
I swear this is the dumbest damn invention in the history of inventions. In fact, it’s the dumbest invention in the universe. It’s really the worst invention in all universes.
Great invention… just used hooorribly wrong. Classic capitalist greed: just gotta get on the wagon and roll it out so you don’t miss out on a potential paycheck.