I remember this happening with Uber too. All that VC money dried up, their prices skyrocketed, people stopped using them, and they went bankrupt. A tale as old as time.
That was the intended path.
A lot of those things have a business model that relies on putting the competition out of business so you can jack up the price.
Uber broke taxis in a lot of places. It completely broke that industry by simply ignoring the laws. Uber had a thing that it could actually sell that people would buy.
It took years before it started making money, in an industry that already made money.
LLMs don’t even have a path to profitability unless they can either functionally replace a human job or at least reliably perform a useful task without human intervention.
They’ve burned all these billions and they still don’t have something that can function as well as the search engines that preceded them, no matter how much they want to force you to use it.
And yet a great many people are willingly, voluntarily using them as replacements for search engines and more. If they were worse, then why are they doing that?
I suspect it’s because search results require manually parsing through them for what you are looking for, with the added headwinds of widespread, and in many ways intentional, degradation of conventional search.
Searching with an LLM is thought-terminating and therefore effortless. You ask it a question and it authoritatively states a verbose answer. People like it better because it is easier, but they have no ability to evaluate whether it is actually any better in that context.
So it has advantages, then.
BTW, all the modern LLMs I’ve tried that do web searching provide citations for the summaries they generate. You can indeed evaluate the validity of their responses.
These kinds of questions are strange to me.
A great many people are using them voluntarily; a lot of people are using them because they don’t know how to avoid using them and feel that they have no alternative.
But the implication of the question seems to be that people wouldn’t choose to use something that is worse.
In order to make that assumption you have to first assume that they know qualitatively what is better and what is worse, that they have the appropriate skills or opportunity necessary to choose to opt in or opt out, and that they are making their decision on what tools to use based on which one is better or worse.
I don’t think you can make any of those assumptions. In fact I think you can assume the opposite.
The average person doesn’t know how to evaluate the quality of research information they receive on topics outside of their expertise.
The average person does not have the technical skills necessary to engage with non-AI-augmented systems, presuming they want to.
The average person does not choose their tools based on which one is most effective at arriving at the truth, but instead on which one is the most usable, user-friendly, convenient, generally accepted, and relatively inexpensive.