The article talks of ChatGPT “inducing” this psychotic/schizoid behavior.
ChatGPT can’t do any such thing. It can’t change your personality organization. Those people were already there, at risk, masking high enough to get by until they could find their personal Messiahs.
It’s very clear to me that LLM training needs to include protections against getting dragged into a paranoid/delusional fantasy world. People who are significantly on that spectrum (as well as borderline personality organization) are routinely left behind in many ways.
This is just another area where society is not designed to properly account for or serve people with “cluster” disorders.
I mean, I think ChatGPT can “induce” such schizoid behavior in the same way a strobe light can “induce” seizures. Neither machine is twisting its mustache while hatching a dastardly plan; they’re dead machines that produce stimuli that aren’t healthy for certain people.
Thinking back to college psychology class and reading about horrendously unethical studies that definitely wouldn’t fly today. Well, here’s one: let’s issue every Anglophone a sniveling yes-man and see what happens.
Yet more arguments against commercial LLMs and in favour of at-home uncensored LLMs.
What do you mean?
Local LLMs won’t necessarily enforce restrictions against de-realization spirals, whereas the commercial ones do.