• jordanlund@lemmy.world · 14 hours ago

    I wish they had broken it out by AI. The article states:

    “Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”

    But I don’t see that anywhere in the linked PDF of the “full results”.

    This sort of study should also be redone from time to time to track AI version numbers.

    • Rothe@piefed.social · 11 hours ago

      It doesn’t really matter; “AI” is being asked to do a task it was never meant to do. It isn’t good at it, and it will never be good at it.

      • Cocodapuf@lemmy.world · edited · 2 hours ago
        Wow, way to completely ignore the content of the comment you’re replying to. Clearly, some are better than others… so, how do the others perform? It’s worth knowing before we make assertions.

        The excerpt they quoted said:

        “Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”

        So that implies “the other assistants” performed more than twice as well; presumably that means they encountered serious issues less than 38% of the time (still not great, but better). But “more than double the other assistants” is ambiguous: does it mean double the rate of one specific assistant, or double the average across the others? If it’s the average, some models probably performed better while others performed worse.

        This was the point: what was reported was insufficient information.
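
        To make the ambiguity concrete, here’s a toy sketch in Python with made-up numbers (none of them are from the article): both scenarios satisfy “76% is more than double the average of the others”, but only one of them keeps every individual assistant under 38%.

        ```python
        # Made-up numbers, purely illustrative -- not from the article.
        # "76% is more than double X" only tells us X < 38%.
        scenario_a = [35, 36, 37]  # every other assistant under 38%
        scenario_b = [20, 30, 55]  # average still under 38%, but one assistant is worse

        for others in (scenario_a, scenario_b):
            average = sum(others) / len(others)
            # Both scenarios make the headline claim true if "double" means the average...
            print(f"avg={average:.1f}%, 76 > 2*avg: {76 > 2 * average}")
            # ...but only scenario_a guarantees each assistant stays under 38%.
            print(f"all under 38%: {all(rate < 38 for rate in others)}")
        ```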

      • snooggums@piefed.world · 11 hours ago

        Using an LLM to return accurate information is like using a shoe to hammer a nail.