• capital@lemmy.world · 7 months ago

    People keep saying this but it’s just wrong.

    Maybe I haven’t tried the language you have, but it’s pretty damn good at code.

    Granted, whatever it puts out needs to be tested and possibly edited, but that’s the same thing we had to do with Stack Overflow answers.

    • CeeBee@lemmy.world · 7 months ago

      I’ve tried a lot of scenarios and languages with various LLMs. The biggest takeaway I have is that AI can get you started on something or help you solve some issues. I’ve generally found that anything beyond a block or two of code becomes useless. The more it generates, the more weirdness starts popping up, or it outright hallucinates.

      For example, today I used an LLM to help me tighten up an incredibly verbose bit of code. Today was just not my day and I knew there was a cleaner way of doing it, but it just wasn’t coming to me. A quick “make this cleaner: <code>” and I was back to the rest of the code.
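      To give a sense of what that kind of cleanup looks like (this is a hypothetical Python example, not the commenter's actual code), here's the sort of before/after an LLM typically produces for a "make this cleaner" request:

      ```python
      # Hypothetical verbose version: the kind of block worth tightening up.
      def even_squares_verbose(numbers):
          result = []
          for n in numbers:
              if n % 2 == 0:
                  square = n * n
                  result.append(square)
          return result

      # A typical "make this cleaner" rewrite: same behavior, one comprehension.
      def even_squares_clean(numbers):
          return [n * n for n in numbers if n % 2 == 0]

      print(even_squares_clean([1, 2, 3, 4]))  # → [4, 16]
      ```

      The point of the anecdote holds either way: the rewrite still has to be read and tested, since the two versions are only equivalent if the LLM preserved every branch.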

      This is what LLMs are currently good for. They are just another tool, like tab completion or code linting.