• SavedKriss@lemmy.world

    What a surprise! A traditional outfit appears statistically significant to a large statistical model and shows up more frequently. What a novel finding. I’m flabbergasted! What will be next? CEOs in jacket and tie? Dogs with fur? Why doesn’t my 512x512 picture of an Inuit in a snowfield portray the subject wearing a bikini? Why can’t Meta read my mind? WHY, MARK? WHHHHY?

    • 0x0@programming.dev

      A traditional outfit

      How traditional? How statistically relevant is it? Most Indians I know do not wear turbans at all.

      If these stats are trustworthy (and I think they are), the only Indians who wear turbans are Sikhs (1.7%) and Muslims (14.2%). I’d say 15.9% is not statistically significant.

      • catsarebadpeople@sh.itjust.works

        I think you’re looking at it wrong. The prompt is to make an image of someone who is recognizable as Indian. The turban is indicative clothing of that heritage and therefore will cause the subject to be more recognizable as Indian to someone else. The current rate at which Indian people wear turbans isn’t necessarily the correct statistic to look at.

        What do you picture when you think of a guy from Texas? Are they wearing a hat? What kind? What percentage of Texans actually wear the specific hat you might be thinking of?

      • otp@sh.itjust.works

        I think the idea is that it’s what makes a person “an Indian” and not something else.

        Only a minority of Indians wear turbans, but more Indians than other people wear turbans. So if someone’s wearing a turban, then that person is probably Indian.

        I’m not saying that’s true necessarily (though it may be), but that’s how the AI interprets it…or how pictures get tagged.

        It’s like with Canadians and maple leaves. Most Canadians aren’t wearing maple leaf stuff, but I wouldn’t be surprised if an AI added maple leaves to an image of “a Canadian”.
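
        The conditional-probability flip described here can be made concrete with a toy Bayes calculation. All rates below are invented for illustration, not real statistics:

```python
# Toy Bayes calculation with invented numbers, illustrating how a trait
# worn by only a minority of a group can still be the strongest marker
# *of* that group.
def p_indian_given_turban(p_turban_given_indian, p_turban_given_other, p_indian):
    """P(Indian | turban) via Bayes' rule over two groups."""
    p_other = 1.0 - p_indian
    numer = p_turban_given_indian * p_indian
    denom = numer + p_turban_given_other * p_other
    return numer / denom

# Hypothetical rates: 16% of Indians wear a turban vs. 1% of everyone
# else, with Indians ~18% of the world's population.
print(round(p_indian_given_turban(0.16, 0.01, 0.18), 2))  # 0.78
```

        Even if only one person in six wears the trait, seeing the trait makes the group by far the most likely guess, which is exactly the association a tagger or a model can latch onto.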

      • rimjob_rainer@discuss.tchncs.de

        Imagine a German man from Bavaria… You just thought of a man wearing Lederhosen and holding a beer, didn’t you? Would you be surprised if I told you that they usually don’t look like that outside of a festival?

      • SavedKriss@lemmy.world

        A traditional dress is not a religious dress; it’s a dress that has been worn for a long time for its usefulness or fashion.

        The historical use of the turban is fascinating, spanning millennia and many regions and ethnic groups of the world.

        I suggest the wiki page for further info, more precise than what I have on hand.

        An excerpt about the Pagri:

        In the Rajasthan state of India these turbans, known as Pagri or Safa, are a traditional headwear that is an integral part of the state’s cultural identity.

        My point was (though it might be lost in the sarcasm) that, the turban having been the “hat” of Indian kings, nobles and emperors for millennia, we have a lot of drawings and photos of Indian people with turbans, which these generative models have most probably been trained on.

        As a footnote: why should the concept of a traditional dress be offensive? A lot of human groups have one.

        Edit: these are the words most associated with “Pagri” in English. It’s a matter of data.

        • 0x0@programming.dev

          A traditional dress is not a religious dress,

          Point taken.

          On a footnote: why should the concept of a traditional dress be offensive?

          Ain’t to me, couldn’t care less. I was just trying to point out that most Indians do not seem to wear turbans (I based my reasoning on the religious dress alone).

          • SavedKriss@lemmy.world

            They probably don’t because it’s not context-appropriate, just as with the clothes we all wear. More so if you and they live in a state or city with a different dress code. These things strongly depend on context.

            Generative models, though, usually produce the most stereotyped answers possible, with a pinch of randomness, so we shouldn’t be surprised by this phenomenon. They are rewarded for these things.

      • Duamerthrax@lemmy.world

        What data is the model being fed? What percentage of images featuring Indian men are tagged as such? What percentage of images featuring men wearing turbans are tagged as Indian men? Are there any images featuring Pakistani men wearing turbans? Even if only a minority of Indians wear turbans, if that’s the only distinction between Indian and Pakistani men in the model’s data, the model will favor turbans for Indian men. That’s just a hypothetical explanation.
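
        That hypothetical can be sketched as a toy count over invented captions: if the turban is the only tag whose frequency differs between the two labels, it carries all the discriminative weight, however rare it is.

```python
# Invented tagged captions. "turban" appears in only a minority of
# "indian man" images, but in none of the "pakistani man" images, so
# it is the single feature that separates the two labels.
captions = [
    ("indian man", {"turban", "beard"}),
    ("indian man", {"beard"}),
    ("indian man", set()),
    ("indian man", set()),
    ("pakistani man", {"beard"}),
    ("pakistani man", set()),
]

def feature_rate(label, feature):
    """Fraction of images with `label` whose tags include `feature`."""
    rows = [tags for lbl, tags in captions if lbl == label]
    return sum(feature in tags for tags in rows) / len(rows)

print(feature_rate("indian man", "turban"))     # 0.25
print(feature_rate("pakistani man", "turban"))  # 0.0
# "beard" does not discriminate (0.5 vs 0.5), so a model trying to make
# the label recognizable leans entirely on the turban.
```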

      • HopeOfTheGunblade@kbin.social

        You don’t think nearly 1/6th is statistically significant? What’s the lower bound on significance as you see things?

        To be clear, it’s obviously dumb for their generative system to be overrepresenting turbans like this, though it’s likely a bias in the inputs rather than something the system came up with itself. I just think that 5% is generally enough to be considered significant, and calling three times that insignificant confuses me.

        • 0x0@programming.dev

          You don’t think nearly 1/6th is statistically significant?

          For statistics’ sake? Yes.

          For the LLM bias? No.

          • SavedKriss@lemmy.world
            1. It’s not an LLM, it’s a GAN, and its inner workings are very different.

            2. If that 1/6th has the most positive feedback in recognizability, the GAN gives it a high weight in its standard. These models’ categorization flow favors unique features of images.

          • tabular@lemmy.world

            The fact that fewer people of that group wear it than don’t matters when you want an average sample. But when categorizing a collection of images, the traditional garments of a group are naturally associated more with that group than with any other: 1/6 is higher than in any other group.

          • Womble@lemmy.world

            So if there was a country where 1 in 6 people had blue skin, would you consider that insignificant because 5 out of 6 didn’t?

      • ramble81@lemm.ee

        Except if they trained it on something with a large proportion of turban wearers. It’s only as good as the data fed to it, so if there’s a bias, it’ll show the bias. Yet another reason this really isn’t “AI”.

      • VirtualOdour@sh.itjust.works

        Put in “western” or “Texas” and that’s what you get. The West is a huge area, even just within America, but the word is linked to a lot of movie tropes and such, so that’s what you get.

        This also only happens when the language is English: ask in Urdu or Bengali and you get totally different results. In fact, just use “Urdu” instead of “Indian” and you get fewer turbans, or put in “Punjabi” and you’ll get more turbans.

        Or just put “turban” in the negative prompt if you want.

      • SavedKriss@lemmy.world

        Does it help the model produce images that are undoubtedly “American” for its raters or for its automated rating system? If yes, they are statistically significant. Low frequency and systematic rarity can both be significant in a statistical analysis.

  • FMT99@lemmy.world

    Why would you ask a bot to generate a stereotypical image and then be surprised when it generates a stereotypical image? If you give it a simplistic prompt, it will come up with a simplistic response.

    • 0x0@programming.dev

      So the LLM answers what’s relevant according to stereotypes instead of what’s relevant… in reality?

      • Grimy@lemmy.world

        It just means there’s a bias in the data that is probably being amplified during training.

        It answers what’s relevant according to its training.
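
        One plausible amplification mechanism can be sketched with guidance-style extrapolation. This is a toy analogy with invented numbers: real image models apply guidance to diffusion scores, not to a single probability.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def guided_prob(p_cond, p_uncond, w):
    """Guidance-style extrapolation in logit space:
    score = uncond + w * (cond - uncond); w > 1 overshoots the data."""
    return sigmoid(logit(p_uncond) + w * (logit(p_cond) - logit(p_uncond)))

# Invented rates: 16% of "Indian"-tagged training images show a turban,
# vs. 1% of images overall. Guidance scale 1 reproduces the data rate;
# scale 2 extrapolates well past it.
print(round(guided_prob(0.16, 0.01, 1.0), 2))  # 0.16
print(round(guided_prob(0.16, 0.01, 2.0), 2))  # 0.78
```

        Because “turban” is the feature most distinctive of the conditioned label relative to the unconditional distribution, pushing harder toward the condition inflates it far beyond its training-data frequency.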

  • gerryflap@feddit.nl

    Kinda makes sense though. I’d expect images actually labelled as “an Indian person” to over-represent people wearing this kind of clothing. An image of an Indian person doing something mundane in more generic clothing is probably, more often than not, going to be labelled “a person doing X” rather than “an Indian person doing X”. Not sure why the authors are so surprised by this.

  • VirtualOdour@sh.itjust.works

    Articles like this kill me because they nudge that it’s kinda sorta racist to draw images like the ones they show, which look exactly like the cover of half the Bollywood movies ever made.

    Yes, if you want a certain type of person in your image, you need to choose descriptive words. Imagine going to an artist and saying “I need a picture, and almost nothing matters besides the fact that they look Indian”. Unless they’re bad at their job, they’ll give you a Bollywood movie cover with a guy from Rajasthan in a turban, just like their official tourist website does.

    Ask for a businessman in Delhi or an Urdu shopkeeper with an Elvis quiff if that’s what you want.

    • UnderpantsWeevil@lemmy.world

      the ones they show which look exactly like the cover of half the Bollywood movies ever made.

      Almost certainly how they’re building up the data. But that’s more a consequence of tagging. Same reason you’ll get Marvel’s Iron Man when you ask an AI generator to “draw me an iron man”. Not as though there’s a shortage of metallic-looking people in commercial media, but by keyword (and thanks to aggressive trademark enforcement) those terms are going to pull back a superabundance of a single common image.

      imagine going to an artist and saying “I need a picture, and almost nothing matters besides the fact that they look Indian”

      I mean, the first thing that pops into my head is Mahatma Gandhi, and he wasn’t typically in a turban. But he’s going to be tagged as “Gandhi”, not “Indian”. You’re also very unlikely to get a young Gandhi, as there are far more pictures of him later in life.

      Ask for a businessman in Delhi or an Urdu shopkeeper with an Elvis quiff if that’s what you want.

      I remember when Google got into a whole bunch of trouble by deliberately engineering their prompts to be race-blind. Consequently, you could ask for “picture of the Founding Fathers” or “picture of Vikings” and get a variety of skin tones back.

      So I don’t think this is foolproof either. It’s more just how the engine generating the image is tuned. You could very easily get a bunch of English bankers when querying “businessman in Delhi”, depending on where and how the backlog of images is sourced. And “Urdu shopkeeper” will inevitably give you a bunch of convenience stores and open-air stalls in the background of every shot.

  • 0x0@programming.dev

    There are a lot of men in India who wear a turban, but the ratio is not nearly as high as Meta AI’s tool would suggest. In India’s capital, Delhi, you would see one in 15 men wearing a turban at most.

    Probably because most Sikhs are from the Punjab region?

      • Dultas@lemmy.world

        I’m guessing this relates to training data. Most training data that contains skin cancer is probably coming from medical sources and would have a ruler measuring the size of the melanoma, etc. So if you ask it to generate an image it’s almost always going to contain a ruler. Depending on the training data I could see generating the opposite as well, ask for a ruler and it includes skin cancer.

  • Possibly linux@lemmy.zip

    [image]

    I’m not sure how AI could possibly be racist. (Image is of a supposed Native American, but my point still stands.)

  • Haus@kbin.social

    Whenever I try, I get Ravi Bhatia screaming “How can she slap?!”