Whenever any advance is made in AI, AI critics redefine AI so it’s not achieved yet according to their definition. Deep Blue, the chess computer, was an AI, an artificial intelligence. If you mean general intelligence at or beyond the human level, you’re probably talking about AGI or ASI (artificial general or super intelligence, respectively).
And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work. The early chatbots were actual parrots, repeating prewritten sentences that they had either been preprogrammed with or picked up from their users. LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all the tokens that came before it and on parameters fine-tuned during training. Their temperature can be changed to give more or less predictable output, and as such they have the potential for genuinely original output, unlike their parrot predecessors.
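To make that concrete, here is a minimal sketch of what “statistically predicting the next token” with a temperature setting looks like. The token scores below are made-up toy numbers for illustration, not output from any real model:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # Scale the scores by temperature: low values sharpen the distribution
        # (more predictable output), high values flatten it (more varied output).
        scaled = {tok: score / temperature for tok, score in logits.items()}
        # Softmax: turn the scaled scores into a probability distribution.
        top = max(scaled.values())
        exps = {tok: math.exp(s - top) for tok, s in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}
        # Draw one token at random according to those probabilities.
        return random.choices(list(probs), weights=list(probs.values()))[0]

    # Hypothetical scores for what might follow 'two plus two is'.
    logits = {"four": 4.0, "five": 1.0, "a": 0.5, "something": 0.2}
    print(sample_next_token(logits, temperature=0.2))  # almost always "four"
    print(sample_next_token(logits, temperature=2.0))  # noticeably more varied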
Whenever any advance is made in AI, AI critics redefine AI so it’s not achieved yet according to their definition.
That stems from the fact that AI is an ill-defined term that has no actual meaning. Before Google Maps became popular, any route-finding algorithm utilizing A* was considered “AI”.
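For reference, this is roughly the kind of thing that used to be filed under “AI”: a toy grid-based A* route finder. The grid, start, and goal below are invented purely for illustration:

    import heapq

    def astar(grid, start, goal):
        """Find a shortest path on a 2D grid (0 = free cell, 1 = wall)."""
        rows, cols = len(grid), len(grid[0])

        def h(node):  # admissible heuristic: Manhattan distance to the goal
            return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

        open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
        best_g = {start: 0}
        while open_heap:
            f, g, node, path = heapq.heappop(open_heap)
            if node == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                r, c = node[0] + dr, node[1] + dc
                if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                    ng = g + 1
                    if ng < best_g.get((r, c), float("inf")):
                        best_g[(r, c)] = ng
                        heapq.heappush(open_heap, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
        return None  # no route exists

    grid = [
        [0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
    ]
    print(astar(grid, (0, 0), (2, 0)))  # a list of (row, col) steps around the walls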
And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work.
LLMs reproduce the form of language without any meaning being transmitted. That’s called parroting.
Even if (and that’s a big if) an AGI is achieved at some point, there will be people calling it parroting by that definition. That’s the Chinese room argument.
You completely missed the point. The point is that people have been led to believe LLMs can do jobs that humans do because the output of LLMs sounds like those jobs, when in reality speech is just one small part of them. It turns out reasoning is a big part of these jobs, and LLMs simply don’t reason.
LLMs work differently, statistically predicting the next token (roughly equivalent to a word) based on all the tokens that came before it and on parameters fine-tuned during training.
Yeah, this is the exact criticism. They recombine language pieces without really doing language. The end result looks like language, but it lacks the important characteristics of language, such as meaning and intention.
If I say “Two plus two is four” I am communicating my belief about mathematics.
If an LLM emits “two plus two is four”, it is outputting a stochastically selected series of tokens linked by probabilities derived from training data. Whether the statement turns out to be true or false is accidental.
Say I train an LLM to do math, and for the training data I generate “a + b = c” statements, never showing it the same one twice.
It would be pointless for it to “memorize” every single question and answer it gets, since it would never see that question again. The only way it could generate correct answers would be if it gained a concept of what numbers are, and of how the addition operation acts on them to produce a new number.
Rather than memorizing and parroting, it would have to actually understand the arithmetic in order to generate responses.
It’s called generalization, and it’s why large amounts of data are required (if you show the same data again and again, memorizing becomes a viable strategy).
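A rough sketch of the kind of synthetic training set being described here; the value range and the exact “a + b = c” formatting are arbitrary choices for illustration:

    import random

    def make_addition_dataset(n_examples, max_value=1_000_000, seed=0):
        """Generate n_examples unique 'a + b = c' training strings."""
        rng = random.Random(seed)
        seen = set()
        examples = []
        while len(examples) < n_examples:
            a = rng.randrange(max_value)
            b = rng.randrange(max_value)
            if (a, b) in seen:  # never show the model the same statement twice
                continue
            seen.add((a, b))
            examples.append(f"{a} + {b} = {a + b}")
        return examples

    # With pairs never repeating, memorizing question/answer pairs buys the
    # model nothing at test time; getting c right requires picking up the
    # underlying pattern, i.e. generalizing.
    for line in make_addition_dataset(5):
        print(line)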
If I say “Two plus two is four” I am communicating my belief about mathematics.
Seems like a pointless distinction. You were told it, so you believe it to be the case? Why can’t we say the LLM outputs what it believes is the correct answer? You’re both just making a statement based on your prior experiences, which may or may not be true.
You’re arguing against a position I didn’t put forward. Also:
Seems like a pointless distinction. You were told it, so you believe it to be the case? Why can’t we say the LLM outputs what it believes is the correct answer? You’re both just making a statement based on your prior experiences, which may or may not be true.
This is what excessive reduction does to a mfer. That is just such a hysterically absurd take.
Well, yeah. People are acting like language models are full-fledged AI instead of just parrots repeating stuff said online.
Spicy autocomplete is a useful tool.
But these things are nothing more.
And the second comment about LLMs being parrots arises from a misunderstanding of how LLMs work.
Bullshit. These people know exactly how LLMs work.
Even if (and that’s a big if) an AGI is achieved at some point, there will be people calling it parroting by that definition. That’s the Chinese room argument.
You’re moving the goalposts.
Me? How can I move goalposts in a single sentence? We’ve had no previous conversation… And I’m not agreeing with the previous poster either…
By entering the discussion, you also engaged with the previous context. The discussion was about LLMs being parrots.
You completely missed the point. The point is that people have been led to believe LLMs can do jobs that humans do because the output of LLMs sounds like those jobs, when in reality speech is just one small part of them. It turns out reasoning is a big part of these jobs, and LLMs simply don’t reason.
Which is what a parrot does.
If an LLM emits “two plus two is four”, it is outputting a stochastically selected series of tokens linked by probabilities derived from training data. Whether the statement turns out to be true or false is accidental.
Hence, stochastic parrot.