Pocketpair Publishing boss John Buckley says we're already starting to see a flood of 'really low-quality, AI-made games' on Steam and other storefronts.
You’d think that that’s the one thing LLMs should be good at – having characters respond to arbitrary input, in character, according to the game state. Unfortunately, strictly constraining output to match the game state is mathematically impossible with LLMs; hallucinations are inevitable and can cause characters to randomly start lying or talking about things they can’t know about. Plus, LLMs are very heavy on resources.
There are non-generative AI techniques that could be interesting for games, of course, especially ones that can afford to run at a slower pace, on the order of seconds or tens of seconds. For example, something that makes characters dynamically adapt their medium-term action plan to the situation every once in a while could work well. But I don’t think we’re going to see useful AI-driven dialogue anytime soon.
You seem to imply we can only use the raw output of the LLM, but that’s not true. We can add deterministic safeguards afterwards to reduce hallucinations and increase relevancy. For example, if you use an LLM to generate SQL, you can verify that the answer respects the data schemas and the relationship graph. That’s a pretty hot subject right now, and I don’t see why it couldn’t be done for video game dialogue.
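To make the SQL example concrete, here is a minimal sketch of such a deterministic safeguard: reject any generated query that references tables or columns outside a known schema. The schema, table names, and regex-based identifier extraction are all invented for illustration; a real system would use a proper SQL parser rather than regexes.

```python
import re

# Hypothetical game database schema: table name -> set of its columns.
SCHEMA = {
    "players": {"id", "name", "level"},
    "items": {"id", "owner_id", "kind"},
}

def validate_sql(query: str) -> bool:
    """Crude deterministic check on LLM-generated SQL: every table named
    after FROM/JOIN must exist in the schema, and every dotted reference
    table.column must match a known table and column."""
    tables = re.findall(r"\b(?:FROM|JOIN)\s+(\w+)", query, re.IGNORECASE)
    if not tables or any(t not in SCHEMA for t in tables):
        return False
    for table, column in re.findall(r"\b(\w+)\.(\w+)", query):
        if table not in SCHEMA or column not in SCHEMA[table]:
            return False
    return True

# A query that respects the schema passes; one that invents a table fails.
print(validate_sql(
    "SELECT players.name FROM players JOIN items ON items.owner_id = players.id"
))  # True
print(validate_sql("SELECT users.email FROM users"))  # False
```

The point is that the check itself is ordinary deterministic code, so the LLM's output can be rejected and regenerated until it passes, without trusting the model at all.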
Indeed, and I also agree that the resources it consumes may not be worth the output.
If you could define a formal schema for what appropriate dialogue options would be you could just pick from it randomly, no need for the AI
It would not be a fully determining schema that could apply to random outputs; I would guess that’s impossible for natural language, and if it were possible, it might as well be used for procedural generation directly. It would just need to be enough to make the LLM’s output good enough. It doesn’t need to be perfect, because human output isn’t perfect either.
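A "good enough, not fully determining" filter for dialogue could look something like the sketch below: accept a generated line unless it names a world entity the character has no way of knowing about. The entity list, the character's knowledge set, and the word-splitting heuristic are all invented for illustration; by design it only blocks the worst hallucinations rather than guaranteeing correctness.

```python
# Hypothetical set of entities that exist in the game world.
WORLD_ENTITIES = {"dragon", "amulet", "king", "hidden_passage"}

def line_is_plausible(line: str, known_entities: set[str]) -> bool:
    """Reject a generated dialogue line if it mentions a world entity
    outside this character's knowledge. Deliberately imperfect: it
    catches lore leaks, not every possible hallucination."""
    words = {w.strip(".,!?\"'").lower() for w in line.split()}
    leaked = (words & WORLD_ENTITIES) - known_entities
    return not leaked

# An assumed guard character who knows about the king and the dragon,
# but not about the hidden passage.
guard_knows = {"king", "dragon"}
print(line_is_plausible("The king fears the dragon.", guard_knows))  # True
print(line_is_plausible("Take the hidden_passage behind the inn!", guard_knows))  # False
```

On a failed check, the game could simply re-prompt the model or fall back to a canned line, which is exactly the "good enough" standard being argued for.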
Yeah that’s kind of my point. That’s a vastly more complicated thing than SQL.
But it also doesn’t need to be as exact as SQL, which removes some of the complexity.