• Lodespawn@aussie.zone · 15 days ago

    Why is a researcher with a PhD in social sciences researching the accuracy confidence of predictive text? How has this person gotten to where they are without understanding that LLMs don’t think? Surely that came up when he started even considering this brainfart of a research project?

      • Lodespawn@aussie.zone · 15 days ago

        I guess, but it’s like proving your phone’s predictive text has confidence in its suggestions regardless of accuracy. Confidence is not an attribute of a math function; they are attributing intelligence to a predictive model.

        • FanciestPants@lemmy.world · 15 days ago

          I work in risk management, but don’t really have a strong understanding of LLM mechanics. “Confidence” is something that I quantify in my work, but different terms are associated with it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving our budget objective are 60%. Again, I’m not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then representing its result with 100% confidence means that the LLM didn’t model any possible outcomes other than the answer it is providing, which does seem troubling.
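For context on the mechanics being debated above: an LLM does internally produce a full probability distribution over its vocabulary for each next token (via a softmax over logits), even though the chat interface usually shows only the single sampled answer. A minimal sketch of that step, with made-up logits and token strings purely for illustration:

```python
import math

# Hypothetical logits for a few candidate next tokens. Real models score
# tens of thousands of vocabulary entries; these values are invented.
logits = {"Paris": 5.0, "Lyon": 2.0, "Berlin": 0.5}

def softmax(scores):
    """Convert raw logits into a probability distribution summing to 1."""
    exps = {k: math.exp(v) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)
top = max(probs, key=probs.get)

# The model assigns probability mass to every candidate, so reporting
# only the top token as if it were "100% confident" discards the rest
# of the distribution the model actually computed.
```

So the distribution the risk-management framing asks about does exist at the token level; whether those token probabilities are calibrated to real-world accuracy is a separate question, and that gap is arguably what the thread is arguing over.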