• melsaskca@lemmy.ca · 1 day ago

    I think AI is positioned to make better decisions than execs. The money saved would be huge!

  • philpo@feddit.org · 1 day ago

    In other news: Meta pays another €3 billion for failing to comply with the DSA and gets banned in Europe.

  • Ulrich@feddit.org · 2 days ago

    Well hey that actually sounds like a job AI could be good at. Just give it a prompt like “tell me there are no privacy issues because we don’t care” and it’ll do just that!

  • AstralPath@lemmy.ca · 2 days ago

    Honestly, I’ve always thought the best use case for AI is moderating NSFL content online. No one should have to see that horrific shit.

    • towerful@programming.dev · 11 hours ago

      Yup.
      It’s a traumatic job/task that gets farmed out to the cheapest supplier, which is extremely unlikely to have suitable safeguards and care for its employees.

      If I were implementing this, I would use a safer/stricter model with a human backed appeal system.
      I would then use some metrics to generate an account reputation (verified ID, interaction with friends network, previous posts/moderation/appeals), and use that to either: auto-approve AI actions with no appeals (low rep); auto-approve AI actions with human appeal (moderate rep); AI actions must be approved by humans (high rep).

      This way, high reputation accounts can still discuss & raise awareness of potentially moderatable topics as quickly as they happen (think breaking news kinda thing). Moderate reputation accounts can argue their case (in case of false positives). Low reputation accounts don’t traumatize the moderators.
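The tiered scheme towerful describes could be sketched like this. This is a toy illustration only: the reputation signals, weights, and thresholds are all invented for the example, not taken from any real moderation system.

```python
# Toy sketch of reputation-tiered AI moderation routing.
# All signals, weights, and thresholds are invented for illustration.

def reputation_score(verified_id: bool, friend_interactions: int,
                     upheld_appeals: int, violations: int) -> float:
    """Combine a few account signals into a rough reputation score."""
    score = 2.0 if verified_id else 0.0
    score += min(friend_interactions / 100, 3.0)  # cap the network signal
    score += upheld_appeals * 0.5                 # upheld appeals build trust
    score -= violations * 1.0                     # confirmed violations erode it
    return score

def route_moderation(score: float) -> str:
    """Map reputation to a moderation tier, per the comment's scheme."""
    if score < 1.0:
        return "auto-action, no appeal"           # low rep
    elif score < 4.0:
        return "auto-action, human appeal"        # moderate rep
    else:
        return "human review before action"       # high rep
```

For example, a verified account with a large friend network and a couple of upheld appeals would land in the "human review before action" tier, matching the idea that trusted accounts can discuss breaking news without being silently auto-moderated.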

    • ouch@lemmy.world · 2 days ago

      What about false positives? Or a process to challenge them?

      But yes, I agree with the general idea.

      • blargle@sh.itjust.works · 2 days ago

        Not sufficiently fascist leaning. It’s coming, Palantir’s just waiting for the go-ahead…

  • TransplantedSconie@lemm.ee · 2 days ago

    Meta:

    Here, AI. Watch all the horrible things humans are capable of and more for us. Make sure nothing gets through.

    AI:

    becomes SKYNET

        • HowdWeGetHereAnyways@lemmy.world · 1 day ago

          No, they give you an answer that sounds correct enough to score a positive interaction.

          Why do you think so many GPT answers seem plausible but don’t work? Because there’s very, very little actual logic behind them.

          • And009@lemmynsfw.com · 1 day ago

            Expecting current-gen tools to be as smart as humans? Falling short doesn’t mean they’re useless. They can translate words into images and explain art in business terms.

            They add capabilities, not replace.

            • leftzero@lemmynsfw.com · 16 hours ago

              They add capabilities, not replace.

              They poison all repositories of knowledge with their useless slop.

              They are plummeting us into a dark age which we are unlikely to survive.

              Sure, it’s not the LLMs’ fault specifically, it’s the bastards who are selling them as sources of information instead of information-shaped slop, but they’re still being used to murder the future in the name of short-term profits.

              So, no, they’re not useless. They’re infinitely worse than that.

            • HowdWeGetHereAnyways@lemmy.world · 17 hours ago

              I don’t disagree, but this is a wildfire of interest right now, and a lot of people don’t recognize this facet of how GPTs operate. You have to be really vocal about their weaknesses so they can be mitigated (hopefully).

  • henfredemars@infosec.pub · 2 days ago

    Great move for Facebook. It’ll let them claim they’re doing something to curb horrid content on the platform without actually doing anything.

  • fullsquare@awful.systems · 2 days ago

    moderation on facebook? i’m sure it can be found right next to bigfoot

    (other than automated immediate nipple removal)

  • pelespirit@sh.itjust.works · 2 days ago

    This might be the one time I’m okay with this. It’s too hard on the humans that did this. I hope the AI won’t “learn” to be cruel from this though, and I don’t trust Meta to handle this gracefully.

    • chrash0@lemmy.world · 2 days ago

      pretty common misconception about how “AI” works. models aren’t constantly learning. their weights are frozen before deployment. they can infer from context quite a bit, but they won’t meaningfully change without human intervention (for now)
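The distinction chrash0 makes can be shown with a toy stand-in for a model (no real ML framework here, just plain Python): inference only *reads* the frozen weights, so nothing the model sees while serving traffic changes what it has learned.

```python
# Toy "model" with weights frozen at deployment time.
# A pedagogical stand-in, not a real ML framework.

FROZEN_WEIGHTS = (0.8, -0.3, 1.5)  # fixed when the model ships

def infer(inputs):
    """Inference reads the frozen weights; it never updates them."""
    return sum(w * x for w, x in zip(FROZEN_WEIGHTS, inputs))

before = FROZEN_WEIGHTS
for batch in [(1, 2, 3), (4, 5, 6)]:   # serving traffic
    infer(batch)
assert FROZEN_WEIGHTS == before        # weights unchanged by inference
```

Real frameworks draw the same line: training updates weights through an optimizer, while serving runs only the forward pass, so deployed behavior drifts only when humans ship new weights.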

    • masterofn001@lemmy.ca · 2 days ago

      I mean, you could hire people who would otherwise enjoy the things they moderate. Keep em from doing shit themselves.

      But, if all the sadists, psychos, and pedos were moderating, it would be reddit, I guess.

      • themurphy@lemmy.ml · 2 days ago

        My guess is you don’t know how bad it is. The people doing this at Meta have real PTSD, and it would absolutely benefit everyone if this could in any way be automated with AI.

        Next question, though: do you trust Meta to moderate? Nah, it should be an independent AI they couldn’t tinker with.

  • wwb4itcgas@lemm.ee · 2 days ago

    I’ve never had a horse in this race, and I never will - but I’m sure this will work out well for those who do. /s

  • CosmoNova@lemmy.world · 2 days ago

    Would be a shame if people had to sift through AI-generated gore before the bots like and comment on it. But seriously, good on them.