I think of an LLM as extraordinarily lossy compression. All the training data is essentially encoded in the model. You can get an approximation of the data back out again with the right input.
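The lossy-compression analogy can be illustrated with a deliberately tiny stand-in for an LLM, a word-level Markov chain (my own toy example, not anything from this thread): the "model" is just transition counts distilled from the training text, and the right prompt pulls an approximation of that text back out.

```python
import random
from collections import defaultdict

def train(text):
    # "Compress" the corpus into word -> possible-next-words counts.
    model = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def complete(model, prompt, length=8, seed=0):
    # Decode: walk the chain from the prompt's last word.
    random.seed(seed)
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat ate the fish .")
model = train(corpus)
# Every generated transition occurred in the training data, so the output
# is a plausible-but-lossy reconstruction of the corpus, not a copy of it.
print(complete(model, "the cat"))
```

The point of the sketch: the model stores no verbatim text, yet a suitable input recovers something statistically close to the training data, which is the sense in which it behaves like very lossy compression.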
I don’t think it’s any less reliable than random blogs on the web, and I don’t have to wade through SEO tripe either.
The annoying thing, though, is that all the random blogs on the web are written using these LLMs now. It makes it much harder to be critical of your sources, because they’re all coming from an unnamed, proprietary LLM with no information about who owns it or what the training data was. At least before, I could look up the author or check out their other articles; now every article is randomly generated from some unknown prompt.
I would argue this isn’t only a bad thing, though. Even before AI, plenty of bogus articles and information existed, e.g. the claim that people swallow spiders in their sleep, which many outlets parroted.
I would guess most people never checked (m)any sources for most information they found, so long as the ‘vibe’ felt trustworthy. There is no cure that makes reality simple, and the more pressure there is to teach people to think critically, the better.
No disagreement here. I’m simply saying that because you are more likely to be misled now than ever, being lazy about it isn’t an option anymore, and teachers can use that fact to drive the point home harder. In the past, if you were lazy about checking sources and verifying information, chances were much higher that you still got somewhat valid information that didn’t harm your life down the road. Now you might just hurt yourself by putting glue on your pizza. Not that I desire that, but the consequences of intellectual laziness have never been greater, so the emphasis on teaching understanding must match that, since the alternative is being taken advantage of.
#3 is very important, as this is the core thing a school should teach. But let’s not kid ourselves that kids haven’t been cheating their way out of homework since the start of time 😄
I don’t mean to come off as too aggressive, because I don’t think we’re really arguing with each other. But I tend to see statements like this as a kind of handwaving apologia for something that, to be clear, real people are doing to us on purpose, the same way people might lament the coming of a hurricane season: nothing really to be done about it.
It can certainly be used for that, I will admit. But no, that isn’t my intention. I hear many good stories on that front of teachers who have developed a really good nose for AI and are using it as a learning moment for their students. The world is filled with ways to cheat, and teachers are well aware of that. In the end, the process of getting students to stop cheating with AI is the same as with conventional cheating, is all I’m saying.
That’s what makes them shitty though.
When I have a hard technical problem I often search for and read through a dozen different sources. Many of them are wrong, or are right but not covering exactly the situation I’m looking at. Eventually I’ll find one that’s either right and answers my problem, or gives me the clue I need so I can figure out the solution for myself.
If I ask an LLM to solve the problem, it will make up an answer that would seamlessly blend in with all its training data. In other words, it’s most likely to produce something that’s wrong, or something that’s right but not for my particular case, or something that’s close but incomplete. That’s effectively useless. At worst it blends in with its training data enough to convince me it’s right, while not actually being right. At best it’s something that is close enough to give me the clue I need. Most of the time it’s going to be something that’s wrong and I know it’s wrong because if it were that simple I wouldn’t have had to resort to the AI bullshit generator.