Pocketpair Publishing boss John Buckley says we're already starting to see a flood of 'really low-quality, AI-made games' on Steam and other storefronts.
I think it could work to give dynamic and varied answers to secondary characters, given good prompts and other guardrails to preserve the immersion. As long as the core elements of the game are not AI-generated slop, and developers are honest about where it was used.
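To make the "guardrails" idea concrete, here is a minimal sketch (every name and rule in it is hypothetical, not any shipping system): the raw model reply only reaches the player if it passes a few deterministic checks, and otherwise an authored fallback line is used, so the model can never override the core content.

```python
# Hypothetical guardrail for LLM-generated NPC small talk: reject replies
# that are empty, too long, or that mention plot points this character
# cannot know yet, and fall back to an authored line instead.
FORBIDDEN_TOPICS = {"the hidden treasure", "the king's betrayal"}  # unrevealed plot
FALLBACK = "Hm? I have work to do."

def guard_reply(model_reply: str, max_len: int = 200) -> str:
    reply = model_reply.strip()
    if not reply or len(reply) > max_len:
        return FALLBACK
    lowered = reply.lower()
    if any(topic in lowered for topic in FORBIDDEN_TOPICS):
        return FALLBACK  # the NPC "knows" something it shouldn't
    return reply

guard_reply("Nice weather for the harvest, isn't it?")        # passed through
guard_reply("They say the king's betrayal started it all.")   # replaced by FALLBACK
```

A real game would need much richer checks than substring matching, but the shape is the same: the model proposes, deterministic code disposes.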
As an amateur game dev, I believe AI will crash out with the public before it becomes truly useful for programming. I've heard colleagues try to use AI, but it often just creates more work. When the AI doesn't know the answer, which is often, it makes something up, leading to errors, crashes, or hidden issues like memory leaks. I'd rather write the code correctly from the start and understand how it works than spend hours hunting down problems in AI-generated code, only to never find the issue. Full disclosure: I use ChatGPT to edit my dialogue, as my English is not great.
I don’t think AI code generation is going to be a revolution anytime soon, but AI voice and AI image generation are likely here to stay.
Your anecdote checks out with a study I heard about: office teams that used LLMs for a few months reported faster results, but editing the output took longer than doing the work conventionally in the first place. Generating boilerplate code and documentation could be another very useful use case in software dev, and I don’t really care if it’s used for that. Like your use case, spell/grammar checking with LLMs is a natural development of tools we already had. Your text processor marks errors; who cares if it’s powered by an LLM or by a huge set of heuristic rules?
I am using them as a side tool for development. I think LLMs are already very performant for web knowledge search (e.g. replacing a search on Stack Overflow), suggestions, explanations, and error detection. Is it worth the resource consumption? Not sure, but I can’t afford not to stay on top of the tooling available for my job. However, I agree: in my experience, the edit/agent modes are not efficient for coding, for now.
Generating secondary dialogue for a video game has much lower quality requirements than software engineering, so I think it could work there. The dialogue needs to sound natural, not be exact, and LLMs are good at that.
Yeah, for low-stakes how-tos I’ve been asking one of the free LLMs more and more. For higher stakes, I ask for their sources if they can give them and go from there on my own.
It’s been really nice to be able to type a plain question (in any language) into Google and receive a concise answer before scrolling down to confirm with more trustworthy sources. In particular it’s been very good for solving annoyances with UI options by directing me to exactly what I’m looking for. A traditional search will often conflate my search with synonyms (even when using quotations, which is some bullshit), and even ignore what language my search was in.
e: Also, you should be careful when clicking any links provided by an LLM, because they can accidentally send you to phishing sites.
SEO destroyed Google’s usefulness. AI is a cope for that, but AI also kills the incentive for the very thing it depends on for its usefulness: user-generated content.
My anecdote for AI and coding is that it’s a good replacement for Google searching, especially when you’re learning a new language.
You need to understand the fundamentals first, but asking the AI how to do a task in C when you’ve only coded in JS is very helpful. It’s still wrong, but it’s not like Stack Overflow is more accurate.
You’d think that’s the one thing LLMs should be good at: have characters respond to arbitrary input in-character, according to the game state. Unfortunately, restricting output to match the game state is mathematically impossible with LLMs; hallucinations are inevitable and can cause characters to randomly start lying or talking about things they can’t know about. Plus, LLMs are very heavy on resources.
There are non-generative AI techniques that could be interesting for games, of course; especially ones that can afford to run at a slower pace like seconds or tens of seconds. For example, something that makes characters dynamically adapt their medium-term action plan to the situation every once in a while could work well. But I don’t think we’re going to see useful AI-driven dialogue anytime soon.
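One classic non-generative technique of that kind is utility scoring. A toy sketch (all the action names and weights here are made up for illustration): every few seconds, score a handful of authored actions against the current situation and pick the best one. It is cheap, deterministic, and easy to inspect, unlike an LLM.

```python
# Toy utility-based action selection for an NPC: each authored action gets
# a hand-tuned score as a function of the current state, and the character
# re-evaluates its medium-term plan only every few seconds.
def pick_action(state: dict) -> str:
    scores = {
        "flee":   2.0 * state["danger"] - state["courage"],
        "patrol": 1.0 - state["danger"],
        "rest":   state["fatigue"] - 0.5 * state["danger"],
    }
    return max(scores, key=scores.get)  # highest-utility action wins

pick_action({"danger": 0.9, "courage": 0.2, "fatigue": 0.1})  # -> "flee"
pick_action({"danger": 0.0, "courage": 0.5, "fatigue": 0.9})  # -> "patrol"
```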
You seem to imply we can only use the raw output of the LLM, but that’s not true. We can add deterministic safeguards afterwards to reduce hallucinations and increase relevance. For example, if you use an LLM to generate SQL, you can verify that the answer respects the data schema and the relationship graph. That’s a pretty hot subject right now; I don’t see why it couldn’t be done for video game dialogue.
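A minimal sketch of that SQL safeguard, assuming a small known schema (the table names and the regex-based table check are illustrative, not a production validator): reject a generated query that references unknown tables, then let SQLite compile it against the real schema to catch bad column names, without ever executing it.

```python
import re
import sqlite3

# Hypothetical application schema the generated SQL must respect.
SCHEMA = {
    "players": ["id", "name", "level"],
    "quests": ["id", "title", "giver_id"],
}

def validate_generated_sql(sql: str) -> bool:
    """Deterministic post-check on (hypothetically LLM-generated) SQL."""
    # 1. Crude whitelist check: every FROM/JOIN target must be a known table.
    referenced = set(re.findall(r"\b(?:FROM|JOIN)\s+(\w+)", sql, re.IGNORECASE))
    if not referenced <= set(SCHEMA):
        return False
    # 2. Let SQLite's planner resolve names against an in-memory copy of the
    #    schema; EXPLAIN compiles the statement but runs nothing.
    db = sqlite3.connect(":memory:")
    for table, cols in SCHEMA.items():
        db.execute(f"CREATE TABLE {table} ({', '.join(cols)})")
    try:
        db.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error:
        return False

validate_generated_sql("SELECT name FROM players WHERE level > 10")  # -> True
validate_generated_sql("SELECT secret FROM admin_tokens")            # -> False
```

The same shape would apply to dialogue: the model generates freely, and a cheap deterministic layer decides whether the output is admissible.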
Indeed, and I also agree that the resources it consumes may not be worth the output.
If you could define a formal schema for what appropriate dialogue options would be, you could just pick from it randomly - no need for the AI.
It wouldn’t be a fully determining schema that could apply to arbitrary outputs - I’d guess that’s impossible for natural language, and if it were possible, it might as well be used for procedural generation directly. It would just be enough to make the LLM’s output good enough. It doesn’t need to be perfect, because human output isn’t perfect either.
Yeah that’s kind of my point. That’s a vastly more complicated thing than SQL.
But it also doesn’t need to be as exact as SQL, which removes some of the complexity.
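For what the fully-formal version of this would look like, here is a toy sketch (the grammar contents are invented): if the space of acceptable lines can be written down as a small template grammar, you can sample from it procedurally with no model in the loop at all - which is exactly why a *complete* schema makes the LLM redundant.

```python
import random

# A hand-written toy dialogue grammar: "{symbol}" placeholders are
# expanded recursively by picking a random alternative for that symbol.
GRAMMAR = {
    "line": ["{greeting}, {address}. {rumor}"],
    "greeting": ["Well met", "Greetings", "Good day"],
    "address": ["traveler", "stranger", "friend"],
    "rumor": [
        "Wolves were seen near the mill.",
        "The ferry is late again.",
        "Nobody goes to the old tower anymore.",
    ],
}

def expand(symbol: str, rng: random.Random) -> str:
    template = rng.choice(GRAMMAR[symbol])
    # Replace each {placeholder} with a random expansion of that symbol.
    while "{" in template:
        start = template.index("{")
        end = template.index("}", start)
        inner = template[start + 1:end]
        template = template[:start] + expand(inner, rng) + template[end + 1:]
    return template

print(expand("line", random.Random(0)))  # e.g. a random greeting + rumor
```

The catch, as noted above, is that writing such a grammar for open-ended conversation is exactly the hard part; a partial schema used as a filter on LLM output is the more realistic middle ground.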
I am really praying for the day corporate drops this foolish nonsense of foisting it on their companies and employees - maybe even, gasp, enabling their teams to access and use the tools that help them do better, more creative jobs.
Because AI can fit into a lot of people’s toolsets really nicely, especially in creative fields like game design. Just need to drop the idea that AI is an authoritative final answer to our design problems and instead realize that it’s just another tool to help us get to those solutions.