Artists have finally had enough of Meta’s predatory AI policies, but Meta’s loss is Cara’s gain. An artist-run, anti-AI social platform, Cara has grown from 40,000 to 650,000 users within the last week, catapulting it to the top of the App Store charts.

Instagram is a necessity for many artists, who use the platform to promote their work and solicit paying clients. But Meta is using public posts to train its generative AI systems, and only European users can opt out, since they’re protected by GDPR. Generative AI has become so front-and-center on Meta’s apps that artists reached their breaking point.

  • QuadratureSurfer@lemmy.world
    5 months ago

    Well, now’s a great time to let them know about Pixelfed, although explosive growth like this will be a strain on any website.

    • FaceDeer@fedia.io
      5 months ago

      I get the sense that a federated image hosting/sharing system would be counter to their goals, that being to lock away their art from AI trainers. An AI trainer could just federate with them and they’d be sending their images over on a silver platter.

      Of course, any site that’s visible to humans is also visible to AIs in training, so it’s not really any worse than their current arrangement. But I don’t think they want to hear that either.

        • FaceDeer@fedia.io
          5 months ago

          Aside from it not really working, though.

          Glaze attempts to “poison” AI training by using adversarial noise to trick image-recognition AIs into perceiving the image as something it’s not, so that when a description is generated for the image it’ll be incorrect and the AI will be trained on bad data. There are a couple of problems with this, though. The adversarial noise is tailored to specific image-recognition AIs, so it’s not future-proof. It also isn’t going to have an impact on the AI unless a large portion of the training images are “poisoned”, which isn’t the case for typical training runs with billions of images. And it’s relatively fragile against post-processing, such as rescaling the image, which is commonly done as an automatic part of preparing data for training. It also adds noticeable artefacts to the image, making it look a bit worse to the human eye as well.

          There’s a more recent algorithm called Nightshade, but I’m less familiar with its details since it got a lot less attention than Glaze, and IIRC the authors tried keeping some of its details secret so that AI trainers couldn’t develop countermeasures. There was a lot of debate over whether it even worked in the first place, since it’s not easy to test something like this when there’s little information about how it functions, and training a model just to see if it breaks is expensive. Given that these algorithms have been available for a while now and image AIs keep getting better, I think that shows that, whatever the details, they’re not having the desired effect.

          Part of the reason Cara’s probably facing such financial hurdles is that it’s computationally expensive to apply these things. They were also automatically running “AI detectors” on uploaded images, which are expensive and unreliable. It’s an inherently expensive site to run even if they’re doing everything efficiently.

          IMO they would have been much better served just adding “No AI-generated images allowed” to their ToS and relying on their users to police themselves and each other. Though given the witch-hunts I’ve seen and the increasing quality of AI art itself, I don’t think that would really work for very long either.