• KingGimpicus@sh.itjust.works · 17 hours ago

        Nintendo wasn’t “inspired” by shit. They made an ice cream cone a Pokémon. Keys on a ring? Pokémon. 8 varieties of elemental flavored dog? Check. Oh hey cool look a 2d image on a computer oh wait it’s actually a Pokémon. Dog? Cat? Snake? Bird? Horse? All Pokémon. IMO nothing in Pokémon is actually “inspired”, only ripped off.

        • Zahille7@lemmy.world · 16 hours ago

          They made Goth Mommy GF into a pokemon with “Gothita”

          How many sentient clouds are also pokemon? Or that one that’s literally just a balloon?

          I swear pokemon ran out of creativity by gen 3 - and I’m not even a pokemon fan.

        • omarfw@lemmy.world · 8 hours ago

          Yeah because they spent money making it. If you spend millions of dollars making a video game, you can’t just shut it down and refund every order of it without putting your studio in serious jeopardy financially. That’s not an option for them, so the best they can do is just not rely on AI anymore. Clearly they tried it and learned that it actually sucks ass.

          • Tattorack@lemmy.world · 5 hours ago

            It’s a small game. They have Palworld to rake in the big bucks, so if their stance on AI use has changed, they can just remove that game.

  • oce 🐆@jlai.lu · 1 day ago

    I think it could work for giving secondary characters dynamic and varied responses, given good prompts and other guardrails to preserve immersion. As long as the core elements of the game aren’t AI-generated slop, and developers are honest about where it was used.

    • OttoVonNoob@lemmy.ca · 1 day ago

      As an amateur game dev, I believe AI will crash out with the public before it becomes truly useful for programming. I’ve heard colleagues try to use AI, but it often just creates more work. When the AI doesn’t know the answer, which is often, it makes something up, leading to errors, crashes, or hidden issues like memory leaks. I’d rather write the code correctly from the start and understand how it works than spend hours hunting down problems in AI-generated code, only to never find the issue. Full disclosure: I use ChatGPT to edit my dialogue, as my English is not great.

      • ryathal@sh.itjust.works · 11 hours ago

        I don’t think AI code generation is going to be a revolution anytime soon, but AI voice and image generation are likely here to stay.

      • dukemirage@lemmy.world · 1 day ago

        Your anecdote checks out with a study I heard about. Office teams that used LLMs for a few months reported that they got results faster, but editing those results took longer than doing the work conventionally in the first place. Generating boilerplate code and documentation could be another very useful use case in software dev, and I don’t really care if it’s used for that. Like spell/grammar checking in your case, using LLMs there is a natural evolution of tools we already had. Your word processor marks errors; who cares whether it’s powered by an LLM or by a huge set of heuristic rules?

      • oce 🐆@jlai.lu · 1 day ago

        I am using them as a side tool for development. I think LLMs are already very performant for web knowledge search (e.g. replacing a search on Stack Overflow), suggestions, explanations and error detection. Is it worth the resource consumption? Not sure, but I can’t afford not to stay on top of the tooling available for my job. However, I agree that, in my experience, the edit/agent modes are not efficient for coding, for now.

        Generating secondary dialogue for a video game has much lower quality requirements than software engineering, so I think it could work there. It needs to sound natural rather than be exact, and LLMs are good at that.

        • JayGray91🐉🍕@piefed.social · 1 day ago

          web knowledge search

          Yeah, for low-stakes how-tos I’ve been asking one of the free LLMs more and more. For higher stakes, I ask for its sources if it can give them and follow up on those on my own.

          • stray@pawb.social · 1 day ago

            It’s been really nice to be able to type a plain question (in any language) into Google and receive a concise answer before scrolling down to confirm with more trustworthy sources. In particular, it’s been very good for solving annoyances with UI options by directing me to exactly what I’m looking for. A traditional search will often conflate my search with synonyms (even when using quotation marks, which is some bullshit), and even ignore what language my search was in.

            e: Also you should be careful when clicking on any links provided by an LLM because they can accidentally send you phishing links.

            • Kindness is Punk@lemmy.ca · 12 hours ago

              SEO destroyed Google’s usefulness. AI is a cope for that, but AI also kills the incentive for the very thing its usefulness depends on: user-generated content.

      • the16bitgamer@programming.dev · 1 day ago

        My anecdote for AI and coding is that it’s a good replacement for Google searching, especially when you’re learning a new language.

        You need to understand the fundamentals first, but asking the AI how to do a task in C when you’ve only coded in JS is very helpful. It’s still wrong, but it’s not like Stack Overflow is more accurate.

    • Jesus_666@lemmy.world · 1 day ago

      You’d think that’s the one thing LLMs should be good at – have characters respond to arbitrary input in-character according to the game state. Unfortunately, restricting output to match the game state is mathematically impossible with LLMs; hallucinations are inevitable and can cause characters to randomly start lying or talking about things they can’t know about. Plus, LLMs are very heavy on resources.

      There are non-generative AI techniques that could be interesting for games, of course, especially ones that can afford to run at a slower pace, like seconds or tens of seconds. For example, something that makes characters dynamically adapt their medium-term action plan to the situation every once in a while could work well. But I don’t think we’re going to see useful AI-driven dialogue anytime soon.
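
      As a rough illustration of that non-generative direction, here’s a minimal utility-style sketch; the plans, state fields and scoring weights are all invented for illustration, not any engine’s API. Each character periodically re-scores a handful of hand-authored plans against the current game state and switches to the best one:

```python
# Minimal utility-AI sketch: hand-authored plans re-scored against game state.
# All names, plans and scoring weights here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict

GameState = Dict[str, float]  # e.g. {"player_distance": 12.0, "health": 0.3}

@dataclass
class Plan:
    name: str
    score: Callable[[GameState], float]  # deterministic, hand-written scoring rule

PLANS = [
    Plan("patrol", lambda s: 0.5),                                 # baseline behaviour
    Plan("flee",   lambda s: 1.0 - s["health"]),                   # more attractive when hurt
    Plan("ambush", lambda s: 2.0 / (1.0 + s["player_distance"])),  # more attractive when the player is near
]

def pick_plan(state: GameState) -> Plan:
    """Re-evaluate every few seconds; fully deterministic and inspectable."""
    return max(PLANS, key=lambda p: p.score(state))

print(pick_plan({"player_distance": 2.0, "health": 0.9}).name)   # -> ambush
print(pick_plan({"player_distance": 30.0, "health": 0.2}).name)  # -> flee
```

      Because every plan and every scoring rule is hand-written, the behaviour can surprise the player without ever stepping outside what the designer authored.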

      • oce 🐆@jlai.lu · 22 hours ago

        You seem to imply we can only use the raw output of the LLM, but that’s not true. We can add deterministic safeguards afterwards to reduce hallucinations and increase relevancy. For example, if you use an LLM to generate SQL, you can verify that the answer respects the data schemas and the relationship graph. That’s a pretty hot subject right now, and I don’t see why it couldn’t be done for video game dialogue.
        That said, I do agree that the resource consumption it requires may not be worth the output.
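
        In a game-dialogue setting, the same idea might look something like the minimal sketch below. The roster, the stubbed model call and the retry policy are all made-up assumptions, not a real engine’s or library’s API: generate freely, run a deterministic check against what the character is allowed to know, and fall back to a hand-written line otherwise.

```python
# Sketch of a deterministic guardrail around LLM-generated NPC dialogue.
# GAME_ROSTER, llm_generate and the retry policy are illustrative assumptions.

GAME_ROSTER = {"Mira", "Oldtown", "The Warden", "Sunken Vault"}  # every nameable entity in the game

def llm_generate(prompt: str) -> str:
    # Stub standing in for the actual model call.
    return "Mira was seen near the Sunken Vault last night."

def validate_reply(reply: str, npc_knows: set[str]) -> bool:
    """Allow a reply only if every game entity it names is one this NPC can know about."""
    mentioned = {name for name in GAME_ROSTER if name.lower() in reply.lower()}
    return mentioned <= npc_knows

def npc_reply(prompt: str, npc_knows: set[str], fallback: str, retries: int = 3) -> str:
    for _ in range(retries):
        reply = llm_generate(prompt)
        if validate_reply(reply, npc_knows):
            return reply
    return fallback  # hand-written line if nothing passes the check

# This NPC doesn't know about the Sunken Vault, so the stubbed reply is rejected
# and the hand-written fallback is used instead.
print(npc_reply("The player asks about rumours.",
                npc_knows={"Mira", "Oldtown"},
                fallback="Can't say I've heard anything new."))
```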

        • porous_grey_matter@lemmy.ml · 21 hours ago

          If you could define a formal schema for what appropriate dialogue options would be, you could just pick from it randomly; no need for the AI.

          • oce 🐆@jlai.lu · 13 hours ago

            It would not be a fully determining schema that could be applied to arbitrary outputs; I’d guess that’s impossible for natural language, and if it were possible, it might as well be used for procedural generation directly. It would just be constraints tight enough to make an LLM’s output good enough. It doesn’t need to be perfect, because human output isn’t perfect either.

    • Kühlschrank@lemmy.world · 1 day ago

      I am really praying for the day corporate drops this foolish nonsense of foisting it on their companies and employees - and maybe even, gasp, enables their teams to access and use the tools that help them do better, more creative work.

      Because AI can fit into a lot of people’s toolsets really nicely, especially in creative fields like game design. We just need to drop the idea that AI is an authoritative final answer to our design problems and instead realize that it’s just another tool to help us get to those solutions.

  • Jerkface (any/all)@lemmy.ca · 1 day ago

    The difference between “generative AI” and “procedural generation” cannot be meaningfully nailed down.

    • pankuleczkapl@lemmy.dbzer0.com · 1 day ago

      I think it can. Procedural generation consists of procedures, that is, elements designed by humans which are just connected into a bigger structure. Every single template, rule and atomic object (e.g. a single room in a generated house) is hand-designed, and as such, no matter what comes out, the elements and connections were considered by a real human. Generative AI, on the other hand, is almost always some sort of machine learning - an approximation of what a good structure of something should be, and only a very rough, randomised approximation at that. You have absolutely no guarantees or constraints on what might pop out of the model. That is my main concern with genAI: though the output looks reasonable as a whole, upon closer inspection it has a lot of inconsistencies.
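
      To make the house example concrete, a toy generator in that spirit might look like the sketch below (room names and connection rules are invented for illustration): the output can still surprise you, but it can only ever be a combination of pieces a human wrote and approved.

```python
# Toy procedural generation: every room template and connection rule is hand-written;
# randomness only chooses among human-authored pieces. Templates and rules are made up.
import random

ROOM_TEMPLATES = ["hall", "kitchen", "bedroom", "bathroom", "study"]  # hand-designed atoms
ALLOWED_NEXT = {                                                      # hand-designed rules
    "hall": ["kitchen", "bedroom", "study"],
    "kitchen": ["hall"],
    "bedroom": ["bathroom", "hall"],
    "bathroom": ["bedroom"],
    "study": ["hall"],
}
assert set(ALLOWED_NEXT) <= set(ROOM_TEMPLATES)  # rules only reference approved rooms

def generate_house(n_rooms: int, seed: int) -> list[str]:
    """Every output is guaranteed to be a chain of valid, human-approved connections."""
    rng = random.Random(seed)  # same seed -> same house, fully reproducible
    rooms = ["hall"]
    while len(rooms) < n_rooms:
        rooms.append(rng.choice(ALLOWED_NEXT[rooms[-1]]))
    return rooms

print(generate_house(6, seed=42))  # e.g. ['hall', 'study', 'hall', ...]
```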

      • Jerkface (any/all)@lemmy.ca · 1 day ago

        I think you’re reading a lot into the “designed by humans” part. Even when that is nominally true, the whole point of procedural generation is to create a level of complexity and emergence such that the outputs are surprising and novel. Things no one expected are desirable. I think the distinction being drawn is not meaningful; in both cases, it is entirely possible and likely that no human being understands how a given output was arrived at.

    • luciole (he/him)@beehaw.org · 1 day ago

      Nonsense. Procedural generation is a rule-based, deterministic system, while generative AI is probabilistic and data-driven. They’re fundamentally different.

      • Jerkface (any/all)@lemmy.ca · 1 day ago

        Markov chains, for example, are both probabilistic and data-driven, and LLMs are not that far removed from Markov chains. Should game developers be allowed to use latent spaces, or is that too much like slop AI?
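
        To make that concrete, a word-level Markov chain fits the “probabilistic and data-driven” description exactly, yet it’s decades-old tech. A minimal sketch (corpus and seed are arbitrary examples):

```python
# Tiny word-level Markov chain: trained on data, sampled probabilistically.
import random
from collections import defaultdict

corpus = "the old door creaks when the wind blows and the wind blows often".split()

# "Training": record which word follows which (data-driven).
transitions: dict[str, list[str]] = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:                      # dead end: no observed successor
            break
        words.append(rng.choice(options))    # probabilistic: sample a successor
    return " ".join(words)

print(generate("the", 8))
```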

      • Jerkface (any/all)@lemmy.ca · 1 day ago

        Okay, but (ignoring that procedural generation can also be probabilistic) what is the functional difference? The point I’m getting at is that you cannot banish the one without necessarily limiting the other.

  • CallMeAnAI@lemmy.world · 1 day ago

    🤣🤣🤣👌👍

    Yeah the company that ripped off Nintendo (I couldn’t give two shits, don’t screech) totally hates AI 🤣👌👍.