What always irks me about those “emergent behavior” articles: no one ever really defines what those amazing “skills” are supposed to be.
The term “emergent behavior” is used in a very narrow and unusual sense here. According to the common definition, pretty much everything that LLMs and similar AIs do is emergent. We can’t figure out what a neural net does by studying its parts, just like we can’t figure out what an animal does by studying its cells.
We know that bigger models perform better on benchmarks. When we train bigger and bigger models of the same type, we can predict how good they will be as a function of their size. But some skills seem to appear suddenly.
Think about someone starting to exercise. Maybe they can’t do a pull-up at first, but they try every day. Until one day they can. They were improving the whole time in the various exercises they did, but it could not be seen in this particular thing. The sudden, unpredictable emergence of this ability is, in a sense, an illusion.
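To make that concrete, here is a small, purely illustrative sketch (the model sizes, the competence curve, and the 0.9 threshold are all made-up assumptions, not measurements): a model whose underlying competence improves smoothly with scale can still look like it gained an ability overnight when the benchmark only awards pass/fail.

    import numpy as np

    # Assumed toy numbers: competence grows smoothly with parameter count,
    # but the benchmark is exact-match (pass/fail), so it stays at 0 until
    # competence crosses a threshold, then "suddenly" flips to 1.
    model_sizes = np.logspace(7, 11, 9)                  # 10M .. 100B parameters
    competence = 1 - 1 / np.log10(model_sizes)           # smooth, monotonic improvement
    benchmark = (competence > 0.9).astype(int)           # pass/fail task metric

    for size, c, b in zip(model_sizes, competence, benchmark):
        print(f"{size:14.0f} params   competence={c:.3f}   benchmark={b}")

The competence column creeps up in small steps, while the benchmark column sits at 0 for seven rows and then jumps to 1: exactly the kind of “sudden” emergence the pull-up analogy describes.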
For a literal answer, I will quote:
Emergent behavior is pretty much anything an old model couldn’t do that a new model can. Simple reasoning, creating coherent sentences, “theory of mind”, basic math, and translation are a few examples, I think.
They aren’t “amazing” in the sense that a human can’t do them, but they are in the sense that a computer is doing them.
… without specifically being trained for it, to be precise.
One of those things I remember reading about was ChatGPT’s ability to translate texts. It was trained on texts in multiple languages, but never on translation specifically. Still, it’s quite good at it.
That is just its core function doing its thing: transforming inputs to outputs based on learned pattern matching.
It may not have been trained on translation explicitly, but it very much has been trained on matched pairs via its training material. Consider what its training set most likely contained: dictionaries, which is about as good as asking it to learn translation. Another thing most likely in the training data: language course books, with matching translated sentences in them. So you didn’t explicitly tell it to learn to translate, but in practice the training data selection did it for you.
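As a purely hypothetical sketch (the corpus lines and the little parsing helper below are invented for illustration, not taken from any real training set): dictionary-style entries that end up in a pretraining corpus are effectively labeled translation pairs, even though nobody framed them as a translation task.

    # Invented example lines in the style of scraped dictionary entries.
    corpus_lines = [
        "chien (French, noun): dog",
        "Haus (German, noun): house",
        "The cat sat on the mat.",        # ordinary monolingual text
        "gato (Spanish, noun): cat",
    ]

    def implicit_pairs(lines):
        """Collect (foreign word, English gloss) pairs from dictionary-style lines."""
        pairs = []
        for line in lines:
            if "):" in line:              # crude marker for a dictionary entry
                head, gloss = line.split("):", 1)
                word = head.split(" (", 1)[0]
                pairs.append((word.strip(), gloss.strip()))
        return pairs

    print(implicit_pairs(corpus_lines))
    # [('chien', 'dog'), ('Haus', 'house'), ('gato', 'cat')]

The monolingual line contributes nothing to translation, but every dictionary-style line quietly supplies a supervised word pair.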
The data is there, but simpler models just couldn’t do it, even when trained with that data.
Bilingual human children also often can’t translate between their two (or more) native languages until they get older.
That’s interesting. My trilingual kids definitely translate individual words, but I guess the real bar here is to translate sentences such that the structure is correct for the languages?
A lot of the training set was probably Wiktionary and Wikipedia, which include translations, grammar, syntax, semantics, cognates, etc.