“In July, the Trump administration signed an executive order barring “woke” AI from federal contracts, demanding that government-procured AI systems demonstrate “ideological neutrality” and “truth seeking.” With the federal government as tech’s biggest buyer, AI companies now face pressure to prove their models are politically “neutral.”
^^^ This is where Trump face-plants. Unless OpenAI straight-up programs ChatGPT to lie or to avoid answering anything that could be seen as negative for the right, none of this pressure from Trump is gonna matter. The AI is still going to seek the most accurate answers, which almost always leads to a pro-liberal position.
Even if they program it to avoid answering potentially damning questions about the right, users just keep pressuring it, and it eventually folds.
The LLM will always seek the most average answer.
Close, but not always. It gives an answer based on the data it was trained on, and there’s also a bit of randomization controlled by a “seed”.
So in general it will give the most common answer, but that seed can occasionally steer it down the path of a less common answer.
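The seed idea above can be sketched in a few lines. This is a toy illustration, not how any real LLM is implemented: the token distribution is made up, and `sample_token` just does weighted random choice the way a sampler might pick the next token.

```python
import random

# Hypothetical next-token distribution (token -> probability).
# These numbers are invented purely for illustration.
next_token_probs = {
    "common answer": 0.80,
    "less common answer": 0.15,
    "rare answer": 0.05,
}

def sample_token(probs, seed):
    """Pick one token by weighted random choice; the seed makes it reproducible."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Most seeds land on the highest-probability token, but some seeds
# fall into the 20% tail and return a less common answer instead.
for s in range(5):
    print(s, sample_token(next_token_probs, seed=s))
```

Running the same seed twice gives the same answer every time, which is why fixing the seed makes the output deterministic while varying it produces the occasional off-path response.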
Fair.
I tell a lot of lies-to-children. It helps when talking to end users.