- cross-posted to:
- technology@lemmy.world
LLMs just mirror the real-world data they are trained on.
Other than censorship, I don't think there is a way to make it stop. It doesn't understand moral good or bad; it just spits out what it was trained on.
Women who ask ChatGPT for financial advice should make less money.
You mean *people. Even though I might not agree, ChatGPT is better at financial advice than a lot of people. Just don't ask it how to become rich, because you can't.
Now I really wanna know if that's actually the best advice or just sexism, because I could see our society being so bad that this is genuinely good advice.
I can't believe we'd ever say this. No, the chat machine is the problem.
Absolutely. So who is building a study that uses it for the wrong thing and then publishing articles about it?
Society is also the problem.
That’s not the question.
It wasn't about whether the LLM was well reasoned; it was about whether the conclusion was (pragmatically speaking) correct.
Again, that wasn’t the original question.
The question was about whether women are genuinely more likely to be passed over for a job offer if they ask for as much pay as a man would, or whether (as you described) the model is just mirroring biased training data, or both. A broken clock is right twice a day, and explaining why you can't rely on said broken clock misses the point of the question.
Are hiring managers actually less likely to hire women if they ask for market-rate pay, as opposed to men when they do the same?




