Wow, way to completely ignore the content of the comment you’re replying to. Clearly, some are better than others… so, how do the others perform? It’s worth knowing before we make assertions.
The excerpt they quoted said:
“Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.”
So that implies “the other assistants” had serious issues at less than half Gemini’s rate, presumably under 38% of the time (still not great, but better). But “more than double the other assistants” is ambiguous: double the rate of each of the others individually, or double their average? If it’s an average, some models probably performed better while others performed worse.
That was the point: what was reported was insufficient information.
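To make that ambiguity concrete, here’s a rough sketch with made-up numbers (not from the report) showing how the two readings lead to different conclusions:

```python
# Purely illustrative: Gemini at 76%, "more than double the other assistants".
gemini_rate = 0.76
ceiling = gemini_rate / 2  # 38%

# Reading 1: double the rate of *each* other assistant.
# -> every other assistant is individually below 38%.
print(f"Reading 1: each other assistant < {ceiling:.0%}")

# Reading 2: double the *average* of the others.
# -> only their mean is below 38%; individual models can still be worse.
hypothetical_rates = [0.20, 0.35, 0.55]  # made-up values for illustration
avg = sum(hypothetical_rates) / len(hypothetical_rates)
print(f"Reading 2: average {avg:.0%} < {ceiling:.0%}, "
      f"yet one model sits at {max(hypothetical_rates):.0%}")
```

Under the second reading, a model with issues in 55% of responses would still be consistent with the quoted claim.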
It doesn’t really matter; “AI” is being asked to do a task it was never meant to do. It isn’t good at it, and it will never be good at it.
Using an LLM to return accurate information is like using a shoe to hammer a nail.
We’ve all done it?
Nope, my soles are too soft.
Except that a shoe is vaguely hammer-ish. More like pounding in a screw with your forehead.