Study shows 97% of developers believe gen AI is transforming the industry, with a focus on creating more dynamic worlds, intelligent non-player characters (NPCs), and more efficient workflows...
I want to see it myself, real bad. The reason is actually very simple: more traditional hand-coded worldgen algorithms usually operate with some basic noise functions controlled by parameters like “biome,” “temperature,” or “height,” and then slap some heuristics on top to smooth rough edges or introduce a bit more interest. Those heuristics you code there are rather limited. You could of course spend a lot of effort and hardcode a lot of stuff, but it’s still limited, and in practice it usually is very limited.

With AI, though, what developers can hope for is multistep generation with self-feedback. We could manually model some prefabs and modular pieces and ask the AI to stitch them together so the layout resembles some special symbol unique to each map, possibly generating intermediate pieces itself if some are missing, then come up with enemy placements, look at the whole thing, and try to rebalance it for a given difficulty, and so on. It’s more flexible and it’s potentially unbounded. You can ask it to reprompt itself as many times as needed if it sees problematic places or missed opportunities in the map it generated. You can give it a list of gimmicks and ask it to compose and balance every map around a random gimmick picked from that list. You can also ask it to roll a die and, with a probability of 15%, invent a gimmick itself instead of picking from the list. The possibilities are wild.
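The gimmick-selection part of that idea doesn't even need AI to prototype; here's a minimal sketch of the 15% dice roll, where `pick_gimmick` and the `"INVENT_NEW"` sentinel are hypothetical names standing in for a call out to a model:

```python
import random

def pick_gimmick(gimmick_list, rng, invent_prob=0.15):
    """With probability `invent_prob`, signal that a brand-new gimmick
    should be invented (in a real pipeline, by prompting the model);
    otherwise pick one from the handcrafted list."""
    if rng.random() < invent_prob:
        return "INVENT_NEW"  # placeholder for an AI-invented gimmick
    return rng.choice(gimmick_list)

rng = random.Random(42)  # seeded so the roll is reproducible
gimmicks = ["low gravity", "fog of war", "one-way doors"]
picks = [pick_gimmick(gimmicks, rng) for _ in range(1000)]
```

Over many maps, roughly 15% of picks come back as the "invent your own" branch; the rest are drawn uniformly from the list.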
Nothing of what you suggested is particularly difficult with real dev work. You basically just said, “I want to vibe code it all.” It’s trivially easy to set up pseudorandom generators; deciding where enemies and objects go should not be left up to chance through some black-box algorithmic “magic.” Game theory exists for a reason, and AI doesn’t “know” about it, because it’s just a complex pattern generator at the end of the day.
Also, what happens when the model generates an environment that can’t be traversed? What if it places invisible walls in weird places? What about an environment that’s rife with bugs? What if the code is plain wrong? Now you have to go into the code, learn how it works, and debug it manually. Thank god you saved yourself some time by vibe coding. /s
I can see we won’t agree, so you’re welcome to get the last word, but I won’t reply afterwards.
Also, what happens when the model generates an environment that can’t be traversed? What if it places invisible walls in weird places?
That’s also one of the reasons why it’s interesting. This happens a lot when implementing regular mapgen, and you have to keep fixing it until it only generates correct maps. AI can perceive what it generated, check that certain invariants hold, and if they don’t, modify the map to fix them, and keep iterating. You can ask it to start with noise, carve out space for villages, and carve roads between them. You can ask it to start with noise and quests and generate roads based on what makes sense for progression, and so on.
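The traversability invariant in particular is mechanically checkable, which is exactly what makes the check-and-retry loop workable. A minimal sketch, assuming a tile grid of floors (`.`) and walls (`#`), with the "fix" step simplified to rejection sampling where an AI pipeline would instead edit the map:

```python
import random
from collections import deque

def reachable(grid, start):
    """Flood-fill (BFS) over walkable '.' tiles starting from `start`."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

def fully_traversable(grid):
    """Invariant: every floor tile is reachable from every other one."""
    floors = {(r, c) for r, row in enumerate(grid)
              for c, t in enumerate(row) if t == "."}
    return bool(floors) and reachable(grid, next(iter(floors))) == floors

def generate_until_valid(rng, size=8, wall_prob=0.2, max_tries=200):
    """Regenerate noise maps until the invariant holds -- the same loop
    an AI-driven pipeline would run, with 'modify the map' replaced here
    by simply rolling a fresh map."""
    for _ in range(max_tries):
        grid = [["." if rng.random() > wall_prob else "#"
                 for _ in range(size)] for _ in range(size)]
        if fully_traversable(grid):
            return grid
    raise RuntimeError("no valid map found within max_tries")

level = generate_until_valid(random.Random(0))
```

The interesting part of the AI pitch is swapping the regenerate-from-scratch step for a targeted repair: the checker reports *which* pockets are cut off, and the model is asked to carve a connection rather than throw the map away.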
Another bad faith / inexperienced take.