There are a few replies here about humans misrepresenting the news. That's true, but part of the problem is that most people at least understand the concept of bias - even if only to the extent of “my people neutral, your people biased”. That's less true for LLMs. There's research showing that because LLMs present information so authoritatively, people not only tend to trust them but are also less likely to check the sources an LLM provides than they would be with other ways of receiving the same information.
And it's not just news. I've seen people seriously argue that fringe pseudoscience is correct because they fed a chatbot a very leading prompt and got exactly the answer they were looking for.
I wonder if people trust ChatGPT more or less than an international celebrity who is also their best friend.