It’s less that you think differently, and more that people are questioning how much you’re thinking in the first place.
I’ve used AI enough to know that it struggles with basic concepts in things I understand well, which means I can’t trust it much on topics I’m unfamiliar with, since the likelihood that it’s bad in my field (technology/programming) but good in others is drastically low. We regularly see how bad AI is in all kinds of fields, and the problems are always the same archetypal ones: hallucinations, basic misunderstanding of concepts, obsequiousness that is out of control and regularly applied to incorrect user responses and corrections, and an inability to consistently follow basic directives.
I struggle to see how this isn’t a bubble, given that all of the money being pumped into AI is predicated on it becoming something useful, and so far the main uses I’m seeing out of LLMs are revenge porn, copyright infringement, mass-market propaganda, and lowering the quality of output across tons of sectors because people think this shit is useful when it spits out trash.
I work in IT too, and everyone I know uses it constantly at work, daily. It has flaws and sometimes hallucinates, yeah. But humanity has never had anything like this, where you can ask questions and get replies that are often completely correct, just not always.
I don’t think it’s a bubble, but that’s the point of having a discussion about it. I think downvoting people just because they like AI is borderline retarded.
If you’ve read threads about this on Hacker News, you’ll see both sides: people who like AI and people who don’t. But you get arguments instead of downvotes, because people there are actually smart enough to value the discussion.
Typical dumb response of “you’re stupid because you don’t think the way I do”. :)
Oh no, no no, people can disagree with me all the time. It’s a good sign.
You’re stupid because you like AI.
Lol ok. :)