Makes sense. I have used AI for software development tasks such as manipulating SQL queries and XML files (tedious things) and am always disappointed by how it misinterprets some things. With those, though, the failures are obvious when a request doesn't work. For something like "the news," where there is no QA team to point out the defect, it will be much harder to notice. And when AI starts (or continues) to use AI-generated posts as sources, it will get much worse.