There’s nothing to complain about here. Games require tons of placeholders, in art, dialogue, and code. They will iterate dozens of times before the final product, and given Larian’s own production standards, there’s no chance anything but the most inconsequential or forgotten items made by an LLM will stay in.
They honestly should have expected this, given people's visceral reaction to anything AI. Personally, I have huge problems with AI and refuse to play most games that have used it. I think it's poisoning every creative industry and replacing important jobs, using the vague excuse that it makes things "easier" while making games soulless in the process. I'm willing to give Larian the benefit of the doubt simply because their previous games were amazing, but I'm gonna wait for the reviews on this one. This game is still going to be in development for another 4 years and none of us know what'll happen between now and then, but for now I'll remain cautiously optimistic.
Most people—even obsessive gamers—don’t give two shits about AI. There’s a very loud minority that gets in everyone’s face saying all AI is evil like we’re John Connor or something. They are so obsessive and extreme about it, it often makes the news (like this article).
The market has already determined that if a game is fun, people will play it. How much AI was used to make it is irrelevant.
Such a nuanced, unique opinion you have
Yeah, the outrage is overblown.
This doesn’t mean they’re enforcing a Copilot quota or vibe coding the game or shipping slop; it could be simple autocompletion, or (say) a component that makes the mocap pipeline easier.
Don’t let Tech Bros poison dumb tools that could help out devs like Larian.
…Now, if they ship slop into the final game or announce an “OpenAI partnership,” that’s a different story.
Nothing wrong with using AI to organize or supplement workflow. That’s literally the best use for it.
Except for the ethical question of how the AI was trained, or the environmental aspect of using it.
There is no ethics under capitalism, so that’s a moot point.
There are AIs that are ethically trained. There are AIs that run on local hardware. We’ll eventually need AI ratings to distinguish use types, I suppose.
There are AIs that are ethically trained
Can you please share examples and criteria?
Adobe’s image generator (Firefly) is trained on Adobe Stock images and other licensed or public-domain content.
Does it only use that, or does it use an LLM as well?
The Firefly image generator is a diffusion model, and the Firefly video generator is a diffusion transformer. LLMs aren’t involved in either process; rather, the models learn image-text relationships from metadata tags. I believe there are some ChatGPT integrations with Reader and Acrobat, but that’s unrelated to Firefly.
Surprising, I would expect it’d rely at some point on something like CLIP in order to be prompted.
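To make the CLIP point concrete: text-conditioned diffusion models typically run the prompt through a frozen text encoder (CLIP's text tower is the classic choice) and feed that embedding into the denoiser at every reverse step. Here's a toy sketch of that wiring; the hash-based "encoder" and the random linear "denoiser" are purely illustrative stand-ins, not Firefly's (or anyone's) actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8    # toy text-embedding size
IMG_DIM = 16   # toy "image" is just a flat vector

def encode_text(prompt: str) -> np.ndarray:
    """Stand-in for a frozen text encoder (e.g. CLIP's text tower):
    maps each token to a fixed random vector, then mean-pools."""
    vecs = []
    for tok in prompt.lower().split():
        tok_rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        vecs.append(tok_rng.normal(size=EMB_DIM))
    return np.mean(vecs, axis=0)

# Toy "denoiser": a real model is a U-Net or transformer; here it is
# a single random linear map over [noisy image | text embedding | t].
W = rng.normal(scale=0.1, size=(IMG_DIM, IMG_DIM + EMB_DIM + 1))

def predict_noise(x_t, t, text_emb):
    inp = np.concatenate([x_t, text_emb, [t]])
    return W @ inp

def denoise_step(x_t, t, text_emb, alpha=0.9):
    """One simplified DDPM-style reverse step: subtract a fraction of
    the predicted noise, rescale. The text embedding conditions every step."""
    eps_hat = predict_noise(x_t, t, text_emb)
    return (x_t - (1 - alpha) * eps_hat) / np.sqrt(alpha)

emb = encode_text("a castle at sunset")
x = rng.normal(size=IMG_DIM)      # start from pure noise
for t in range(10, 0, -1):        # a few reverse-diffusion steps
    x = denoise_step(x, t / 10, emb)
print(x.shape)
```

The key structural point is the last loop: the same prompt embedding is injected at every denoising step, which is why these systems need a text encoder but not a full LLM.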