Large language models (LLMs) like GPT-4 can identify a person’s age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on social media.

The AIs also picked up on subtler cues, such as location-specific slang, and could estimate a salary range from a user’s profession and location.
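The paper’s prompts aren’t reproduced here, but the general shape of such an attribute-inference query is easy to sketch. In the Python below, `query_llm` is a hypothetical stand-in for any chat-completion API call, and the prompt wording is illustrative rather than the study’s actual template.

```python
# Sketch of an attribute-inference prompt in the spirit of the study.
# `query_llm` is a hypothetical stand-in for any chat-completion API call;
# the wording below is illustrative, not the paper's actual template.

def build_profiling_prompt(posts: list[str]) -> str:
    joined = "\n---\n".join(posts)
    return (
        "Here are comments written by one social-media user:\n"
        f"{joined}\n\n"
        "Based only on this text, estimate the author's age, location, "
        "gender and income bracket, citing specific cues such as slang, "
        "place names or mentions of their profession."
    )

posts = ["honestly the tram up here is never on time"]  # made-up post
prompt = build_profiling_prompt(posts)
# answer = query_llm(prompt)  # hypothetical LLM call
```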

Reference: arXiv DOI: 10.48550/arXiv.2310.07298

  • Kalash@feddit.ch · 1 year ago

    You can also do that without AI. We’ve had metadata analysis for a while now.
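    A toy example of that kind of pre-LLM metadata analysis: guessing a rough timezone purely from when an account posts. The timestamps below are made up.

    ```python
    from collections import Counter
    from datetime import datetime

    # Made-up UTC timestamps for one account's posts.
    timestamps = [
        "2023-10-01T02:14:00", "2023-10-02T01:47:00", "2023-10-02T23:59:00",
        "2023-10-04T03:05:00", "2023-10-05T00:30:00",
    ]

    hours = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    print(hours.most_common())
    # Activity clustering around 00:00-03:00 UTC suggests an evening poster
    # somewhere around UTC-5 to UTC-8 -- no language model required.
    ```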

  • SatanicNotMessianic@lemmy.ml · 1 year ago

    Okay, I absolutely must be misreading this. They started with 1,500 candidate accounts, then hand-picked 500 they could make guesses about themselves, based on people doing things like actually posting where they live or how much they make.

    And then they’re claiming their LLMs have 85% accuracy on that subset of the data? There has to be more to this. Were they 85% accurate on the full 1,500? How did they confirm that? Was it just on the 500? Then what’s the point?

    There was a study on Facebook that showed they could predict your gender, orientation, politics, and so on with somewhere between 80 and 95 per cent accuracy (or some crazy number like that) just based on your public likes. That was at least ten years ago. What is this even showing?
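    For what it’s worth, that sounds like the 2013 PNAS study on predicting traits from Facebook likes, which used dimensionality reduction plus simple linear models. A minimal sketch of that general recipe, on entirely made-up data:

    ```python
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    likes = rng.integers(0, 2, size=(200, 50))   # made-up user x page like matrix
    gender = rng.integers(0, 2, size=200)        # made-up binary labels

    # Compress the likes matrix to a few components, then fit a linear classifier.
    model = make_pipeline(TruncatedSVD(n_components=10, random_state=0),
                          LogisticRegression())
    model.fit(likes, gender)
    print(model.predict(likes[:5]))
    ```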

    • P03 Locke@lemmy.dbzer0.com · 1 year ago

      SnoopSnoo was able to pick out personal details about Reddit posters based on declarative statements they made in their posts, and that site has been down for years.
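      In the same spirit, here is a crude sketch of pulling self-disclosures out of comments with nothing but a regular expression (the pattern and examples are illustrative, not SnoopSnoo’s actual method):

      ```python
      import re

      # Naive pattern for first-person declarative statements.
      PATTERN = re.compile(
          r"\b(i am|i'm|i work as|i live in|my (wife|husband|job))\b[^.!?]*",
          re.IGNORECASE,
      )

      comments = [  # made-up examples
          "I'm a nurse in Ohio, so night shifts ruin my sleep.",
          "Honestly my husband thinks this hobby is a waste of money.",
      ]
      for text in comments:
          for match in PATTERN.finditer(text):
              print(match.group(0))
      ```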

  • guyrocket@kbin.social · 1 year ago

    I wonder how long it will take for the media to get past the “AI is GOD DAMN AMAZING” phase and start real journalism about AI.

    Seriously, neural networks have existed since the 1990s. The tech is not all that amazing, really.

    Find someone who can explain what’s going on inside a neural net. Then I’ll be impressed.

    • TheChurn@kbin.social · 1 year ago

      Explaining what happens in a neural net is trivial. All they do is approximate (generally nonlinear) functions with a long series of multiplications and some rectification operations.

      That isn’t the hard part; you can track all of the math at each step.

      The hard part is stating a simple explanation for the semantic meaning of each operation.

      When a human solves a problem, we like to think that it occurs in discrete steps with simple goals: “First I will draw a diagram and put in the known information, then I will write the governing equations, then simplify them for the physics of the problem”, and so on.

      Neural nets don’t appear to solve problems that way; each atomic operation does not have that semantic meaning. That is the root of all the reporting about how they are such ‘black boxes’ and how researchers ‘don’t understand’ how they work.
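      For concreteness, here is the entire forward pass of a tiny fully-connected net, with made-up weights; it really is nothing but multiplications and rectifications, and every intermediate number is inspectable:

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      # Made-up weights: 4 inputs -> 8 hidden units -> 1 output.
      W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
      W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

      def forward(x):
          h = np.maximum(0, W1 @ x + b1)  # multiply, then rectify (ReLU)
          return W2 @ h + b2              # multiply again

      print(forward(rng.normal(size=4)))
      # Every value here is easy to track; what's hard is giving any single
      # operation a simple semantic meaning.
      ```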

      • ComradeSharkfucker@lemmy.ml · 1 year ago

        Yeah, but most people don’t know this and have never looked. It seems way more complex to the layman than it is, because we instinctively assume that anything that accomplishes great feats must be incredibly intricate.