• kadu@lemmy.world · 3 months ago

    No way the lobotomized monkey we trained on internet data is reproducing internet biases! Unexpected!

    • potatopotato@sh.itjust.works · 3 months ago

      The number of people who don’t understand that AI is just the mathematical average of the internet… If we’re assholes on average, AI is gonna be an asshole.

  • VeryFrugal@sh.itjust.works · 3 months ago

    I always use this to showcase how biased an LLM can be: ChatGPT 4o, with a code prompt via Kagi.

    Such an honour to be a more threatening race than white folks.

    • BassTurd@lemmy.world · 3 months ago

      Apart from the bias, that’s just bad code. Since elif branches execute in order and only run when every previous condition was false, the double comparison on the ages is unnecessary. If age <= 18 is false, the next line can just be elif age <= 30; there’s no need to also check that it’s higher than 18 (see the sketch after this comment).

      This is first-semester coding, and any junior dev worth a damn would write it better.

      But also, it’s racist, which is the more important problem; still, I can’t pass up an opportunity to highlight how shitty AI is.
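
      A minimal sketch of the pattern being described (hypothetical code, not the actual screenshot; age and threat are illustrative names):

      ```python
      age = 25  # example input

      # Redundant version: the lower-bound checks repeat what elif already guarantees.
      if age <= 18:
          threat = "low"
      elif age > 18 and age <= 30:   # "age > 18" is always true here
          threat = "medium"
      else:
          threat = "high"

      # Equivalent, simpler version: each elif only runs if the previous test was false.
      if age <= 18:
          threat = "low"
      elif age <= 30:
          threat = "medium"
      else:
          threat = "high"
      ```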

      • CosmicTurtle0@lemmy.dbzer0.com · 3 months ago

        Honestly, it’s a bit refreshing to see racism and ageism codified. Before, there was no logic to it, but now it completely makes sense.

      • VeryFrugal@sh.itjust.works · 3 months ago

        Yeah, more and more I notice that, at the end of the day, what they spit out without (and oftentimes even with) clear instructions is barely a prototype at best.

    • theherk@lemmy.world · 3 months ago

      FWIW, Anthropic’s models do much better here: they point out how problematic demographic assessments like this are and provide an answer without them. It’s one of many indications that Anthropic has a much stronger focus on safety and alignment than OpenAI. Not exactly superstars, but much better.

    • Meursault@lemmy.world · 3 months ago

      How is “threat” being defined in this context? What has the AI been prompted to interpret as a “threat”?

  • boonhet@sopuli.xyz · 3 months ago

    Dataset bias, what else?

    Women get paid less -> articles talking about women getting paid less exist. Possibly the dataset also includes actual payroll data that leaked out of some org? (A toy illustration follows this comment.)

    And no matter how much people hype it, ChatGPT is NOT smart enough to realize that men and women should be paid equally. That would require actual reasoning, not the funny fake reasoning/thinking that LLMs do (the DeepSeek model I tried to run locally reasoned very explicitly about how it’s a CHINESE LLM and needs to give the appropriate information when I asked about Tiananmen Square; the end result was that it “couldn’t answer about specific historic events”).
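
    A toy illustration of the dataset-bias point (made-up numbers, nothing to do with how an actual LLM is trained): a model that only learns averages from its corpus will reproduce whatever gap the corpus contains.

    ```python
    # Hypothetical "corpus" with a built-in pay gap.
    corpus = [
        {"gender": "male", "salary": 62000},
        {"gender": "male", "salary": 58000},
        {"gender": "female", "salary": 51000},
        {"gender": "female", "salary": 48000},
    ]

    def predicted_salary(gender: str) -> float:
        """Predict by averaging the biased data -- bias in, bias out."""
        salaries = [row["salary"] for row in corpus if row["gender"] == gender]
        return sum(salaries) / len(salaries)

    print(predicted_salary("male"))    # 60000.0
    print(predicted_salary("female"))  # 49500.0
    ```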

    • snooggums@lemmy.world · 3 months ago

      ChatGPT and other LLMs aren’t smart at all. They just parrot out whatever is fed into them.

      • markovs_gun@lemmy.world · 3 months ago

        While that is sort of true, it’s only about half of how they work. An LLM that isn’t trained with reinforcement learning to give desired outputs produces really weird results. Ever notice how ChatGPT seems aware that it is a robot and not a human? An LLM that purely parrots the training corpus won’t do that. If you ask it “are you a robot?” it will say “Of course not, dumbass, I’m a real human, I had to pass a CAPTCHA to get on this website,” because that’s how people respond to that question. So you get a bunch of poorly paid Indians in a call center to generate and rank responses all day, and those rankings get fed back into the algorithm that generates new responses (a rough sketch of that ranking step follows this comment).

        One thing I am interested in is the fact that all these companies are using poorly paid people in the third world for this part of the development process, and I wonder if this imparts subtle cultural biases. For example, early on after ChatGPT was released I found it had an extremely strong taboo against eating dolphin meat, to the extent that it was easier to get it to write about eating human meat than dolphin meat. I have no idea where this could have come from, but my guess is someone really hated the idea and spent all day flagging dolphin-meat responses as bad.

        Anyway, this is another, more subtle issue with LLMs: they don’t simply respond with the statistically most likely continuation of a conversation. There is a finger on the scales in favor of certain responses, and that finger can be biased in ways that are not only a matter of human opinion, but also really hard to predict.
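
        A rough sketch of the ranking step described above (a toy Bradley-Terry-style reward model; all names, shapes, and data are illustrative, not any vendor’s actual pipeline):

        ```python
        import torch
        import torch.nn as nn

        class RewardModel(nn.Module):
            """Scores a response; a stand-in for a full transformer with a scalar head."""
            def __init__(self, dim: int = 16):
                super().__init__()
                self.score = nn.Linear(dim, 1)

            def forward(self, features: torch.Tensor) -> torch.Tensor:
                return self.score(features).squeeze(-1)

        model = RewardModel()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        # Pretend feature vectors for (preferred, rejected) response pairs from human rankers.
        preferred = torch.randn(64, 16)
        rejected = torch.randn(64, 16)

        for _ in range(100):
            # Push the scores of preferred responses above the rejected ones.
            loss = -torch.log(torch.sigmoid(model(preferred) - model(rejected))).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

        # The trained reward model then steers generation toward responses the rankers
        # liked -- which is exactly how their tastes (and biases) get baked in.
        ```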