• tfowinder@beehaw.org · 3 months ago

    LLMs just mirror the real-world data they are trained on.

    Other than censorship, I don’t think there is a way to make it stop. It doesn’t understand moral good or bad; it just spits out what it was trained on.

    • themurphy@lemmy.ml · 3 months ago

      You mean *people. Even though I might not agree, ChatGPT is better at financial advice than a lot of people. Just don’t ask it how to become rich, because you can’t.

  • hoshikarakitaridia@lemmy.world · 3 months ago

    Now I really want to know whether that’s actually the best advice or just sexism, because I could see our society being bad enough that this is genuinely good advice.

      • Tenderizer78@lemmy.ml · 3 months ago

        That’s not the question.

        It wasn’t about whether the LLM’s reasoning was sound; it was about whether its conclusion was (pragmatically speaking) correct.

          • Tenderizer78@lemmy.ml · 3 months ago

            Again, that wasn’t the original question.

            The question was whether women are genuinely more likely to be passed over for a job offer if they ask for as much pay as a man would, or whether it’s as you described, or both. A broken clock is right twice a day, and explaining why you can’t rely on said broken clock misses the point of the question.

            Are hiring managers actually less likely to hire women who ask for market-rate pay than men who do the same?