• General_Effort@lemmy.world · 10 months ago

    Explanation of how this works.

    These “AI models” (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual “image maker” (U-Net).

    A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed version (the latent representation), which means it needs a less powerful computer (and uses less energy). This is what makes it possible to run Stable Diffusion at home.
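    To put rough numbers on that compression: in Stable Diffusion 1.x, a 512×512 RGB image is commonly cited as mapping to a 64×64×4 latent. A tiny sketch of the size savings (shapes are illustrative, not read from any model file):

```python
# Rough sketch of the compression a Stable Diffusion-style VAE provides.
# Shapes are the commonly cited SD 1.x ones, purely illustrative.

image_shape = (512, 512, 3)   # height, width, RGB channels
latent_shape = (64, 64, 4)    # 8x spatial downsampling, 4 latent channels

def numel(shape):
    """Number of scalar values in a tensor of the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n

pixels = numel(image_shape)    # values per image
latents = numel(latent_shape)  # values per latent
print(f"compression factor: {pixels / latents:.0f}x")  # 48x fewer values
```

    The diffusion U-Net only ever sees the 64×64×4 tensor, which is why the heavy part of the computation shrinks so much.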

    This attack targets the VAE. The image is altered so that its latent representation is that of a very different image, while still looking roughly the same to humans. Say you take images of a cat and of a dog. You put both of them through the VAE to get their latent representations. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar to humans. So, what the actual image-maker AI “sees” is very different from the image the human sees.
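    The cat/dog procedure above boils down to a small optimization problem. The sketch below stands a random linear map in for the real VAE encoder and uses a plain L2 penalty to keep the altered image close to the original; the real attack does the same thing against the actual encoder network, with a perceptual similarity check instead of L2. All names and numbers here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))   # stand-in "encoder": 64-d image -> 16-d latent

cat = rng.normal(size=64)       # stand-in "cat" image (the one we perturb)
dog = rng.normal(size=64)       # stand-in "dog" image
target = W @ dog                # latent we want the altered cat to land on

delta = np.zeros(64)            # perturbation added to the cat image
lam = 0.1                       # weight keeping the perturbation small
lr = 0.003                      # gradient descent step size

for _ in range(3000):
    # loss = ||W(cat + delta) - target||^2 + lam * ||delta||^2
    residual = W @ (cat + delta) - target
    delta -= lr * (2 * W.T @ residual + 2 * lam * delta)

poisoned = cat + delta
latent_gap = np.linalg.norm(W @ poisoned - target)  # small: encoder "sees" the dog
pixel_gap = np.linalg.norm(poisoned - cat)          # bounded: human still sees the cat
print(latent_gap, pixel_gap)
```

    In the real attack the perceptual budget is what keeps the change invisible; here the lam term plays that role.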

    Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.


    I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g. the method to check that images still look similar to humans).

    It doesn’t seem to be a very effective attack but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists that threaten digital vandalism, you may be deterred. Well, my two cents.

    • barsoap@lemm.ee · 10 months ago

      So, it only works against open source AI; basically only Stable Diffusion at this point.

      I very much doubt it even works against the multitude of VAEs out there. There are not just the ones derived from StabilityAI’s models, but also ones simply intended to be faster (at a loss of quality): TAESD can also encode and has a completely different architecture, and is thus quite unlikely to be fooled by the same attack vector. That failing, you can use a simple affine transformation to convert between latent and RGB space (that’s what “latent2rgb” is) and compare outputs to know whether the big VAE model got fooled into generating something unrelated. That thing just doesn’t have any attack surface; there are several orders of magnitude too few weights in there.
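      To make the latent2rgb point concrete: it is just a per-latent-pixel affine map from the 4 latent channels to 3 RGB channels, far too small for the attack to latch onto. A sketch of using it as a consistency check (the matrix below is invented; real latent2rgb coefficients are fitted to a specific VAE):

```python
import numpy as np

# Invented 4x3 projection from latent channels to RGB; real latent2rgb
# matrices are fitted per VAE. This one just illustrates the mechanism.
L2RGB = np.array([
    [ 0.3,  0.2,  0.2],
    [ 0.2,  0.3, -0.1],
    [-0.1,  0.2,  0.3],
    [ 0.2, -0.1,  0.2],
])

def latent2rgb(latent):
    """Cheap RGB preview of a (H, W, 4) latent via an affine map."""
    return latent @ L2RGB

def looks_fooled(preview, reference, threshold=0.5):
    """Flag the input as adversarial if the cheap preview and a
    (downscaled) copy of the input image disagree strongly."""
    return float(np.mean((preview - reference) ** 2)) > threshold

# Demo on random data: a matching pair passes, a mismatched pair is flagged.
rng = np.random.default_rng(1)
latent = rng.normal(size=(64, 64, 4))
preview = latent2rgb(latent)
print(looks_fooled(preview, preview))                       # False: consistent
print(looks_fooled(preview, rng.normal(size=(64, 64, 3))))  # True: mismatch
```

      Because this map has only a dozen or so weights, an attacker can’t fool both it and the full VAE at once without the disagreement itself giving the attack away.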

      Which means that there’s an undefeatable way to detect that the VAE was defeated. Which means it’s only a matter of processing power until Nightshade is defeated, no human input needed. They’ll of course again train and try to fool the now hardened VAE, starting another round, ultimately achieving nothing but making the VAE harder and harder to defeat.

      It’s like with Russia: They’ve already lost the war but they haven’t noticed, yet – though I wouldn’t be too sure that Nightshade devs themselves aren’t aware of that: What they’re doing is a powerful way to grift a lot of money from artists without a technical bone in their body.

      • General_Effort@lemmy.world · 10 months ago

        Those companies don’t make the technical details public and I don’t follow the leaks and rumors. They almost certainly use, broadly, the same approach (latent diffusion). That is, their AIs work with a compressed version of the image to save on computing power.

    • LadyAutumn@lemmy.blahaj.zone · 10 months ago

      Yeah. Not that it’s the fault of artists that capitalism exists in its current form. Their art is the fruit of their labor, and therefore means should be taken to ensure that their labor is properly compensated. And I’m a Marxist anarchist; no part of me agrees with any part of the capitalist system. But artists are effectively workers, and we enjoy the fruits of their labor. They are rarely fairly compensated for their work. In this particular instance, under the system we live in, artists’ rights should be prioritized over

      I’m all for janky (getting less janky as time goes on) AI images, but I don’t understand why it’s so hard to ask artists permission first to use their data. We already maintain public domain image databases, and loads of artists have in the past allowed their art to be used freely for any purpose. How hard is it to gather a database of art whose creators have agreed to let it be used for AI? All the time we’ve (the collective we) been arguing over this could’ve been spent implementing a system to create such a database.

        • LadyAutumn@lemmy.blahaj.zone · 10 months ago

          Fair enough, and I can’t claim to be a fan of copyright law or how it’s used. Maybe what I’m really talking about is a standard of ethics? Or some laws governing the usage of image- and text-generating AI specifically, as opposed to copyright law. Like, just straight up a law making it mandatory for an AI to provide a list of all the data it used, as well as proof that the sources of that data consented to its use in training the AI.

          • Even_Adder@lemmy.dbzer0.com · 10 months ago

            There’s nothing wrong with being able to use others’ copyrighted material without permission though. For analysis, criticism, research, satire, parody and artistic expression like literature, art, and music. In the US, fair use balances the interests of copyright holders with the public’s right to access and use information. There are rights people can maintain over their work, and the rights they do not maintain have always been to the benefit of self-expression and discussion.

            It would be awful for everyone if IP holders could take down any review, finding, reverse engineering, or indexes they didn’t like. That would be the dream of every corporation, bully, troll, or wannabe autocrat. It really shouldn’t be legislated.

            • LadyAutumn@lemmy.blahaj.zone · 10 months ago

              I’m not talking about IP holders, and I do not agree with copyright law. I’m not having a broad discussion on copyright here. I’m only saying, and not saying anything more, that people who sit down and make a painting and share it with their friends and communities online should be asked before it is scanned to train a model. That’s it.

              • Even_Adder@lemmy.dbzer0.com · 10 months ago

                How are we supposed to have things like reviews, research findings, reverse engineering, or indexes if you have to ask first? Imagine the scams you could pull if you could attack anyone caught reviewing you. These rights exist to protect us from the monopolies on expression that would increase disparities and divisions, manipulate discourse, and, in the end, fundamentally alter how we interact online with each other for the worse.

                • LadyAutumn@lemmy.blahaj.zone · 10 months ago

                  I’m just gonna ask you to read my above comment again. What I’m suggesting is:

                  “Before you scrape and analyze art with the specific purpose of making an AI art generator model, you must ask permission from the original creating artist.”

                    • Even_Adder@lemmy.dbzer0.com · 10 months ago

                      I read that. That’s what I’ve been responding to the whole time. This is a way to analyze and reverse engineer images so you can make your own original works. In the US, the first major case that established reverse engineering as fair use was Sega Enterprises Ltd. v. Accolade, Inc. in 1992, later affirmed in Sony Computer Entertainment, Inc. v. Connectix Corporation in 2000. Do you think Sony or Sega would have allowed anyone to reverse engineer their stuff if they had asked nicely? Artists have already said they would deny anyone.

                      It’s not about the data: people having a way to make quality art themselves is an attack on their status, and when asked about generators that didn’t use their art, they came out overwhelmingly against, with the same condescending and reductive takes they’ve been using this whole time.

      • General_Effort@lemmy.world · 10 months ago

        That’s not quite right. A traditional worker is someone who operates machines they don’t own to make products they don’t own. Artists who are employed do not own the copyrights to what they make. These employed artists are like workers, in that sense.

        Copyrights are “intellectual property”. If one needed permission (mostly meaning, pay for it), then the money would go to the property owners. These worker-artists would not receive anything. Note that, on the whole, the owners already made what profit they could expect. Say, if it’s stills from a movie, then that movie already made a profit (or not).

        People who use their own tools and own their own product (e.g. artisans in Marx’s time) are members of the Petite Bourgeoisie. I think a Marxist analysis of the class dynamics would be fruitful here, but it’s beyond me.

        The spoilered bit is something I have written about the NYT lawsuit. I think it’s illuminating here, too.

        spoiler

        The NYT wants money for the use of its “intellectual property”. This is about money for property owners. When building rents go up, you wouldn’t expect construction workers to benefit, right?

        In fact, more money for property owners means that workers lose out, because where else is the money going to come from? (well, “money”)

        AI, like all previous forms of automation, allows us to produce more and better goods and services with the same amount of labor. On average, society becomes richer. Whether these gains go to the rich, or are more evenly distributed, is a choice that we, as a society, make. It’s a matter of law, not technology.

        The NYT lawsuit is about sending these gains to the rich. The NYT has already made its money from its articles. The authors were paid, in full, and will not get any more money. Giving money to these property owners will not make society any richer. It just moves wealth to property owners for being property owners. It’s about more money for the rich.

        If OpenAI has to pay these property owners for no additional labor, then it will eventually have to increase subscription fees to balance the cash flow. People who pay a subscription probably feel that it benefits them, whether they use it for creative writing, programming, or entertainment. They must feel that the benefit is worth at least that much in terms of money.

        So, the subscription fees represent a part of the gains to society. If a part of these subscription fees is paid to property owners who did not contribute anything, then that part of the social gains is funneled to property owners, i.e. mainly the ultra-rich, simply for being owners/ultra-rich.


        why it’s so hard to ask artists permission first to use their data.

        SD was trained on images from the internet. Anything. There are screenshots, charts and pure text jpgs in there. There’s product images from shopping sites and also just ordinary snapshots that someone posted. The people with the biggest individual contribution are almost certainly professional photographers. SD is not built on what one usually calls art (with apologies to photographers). An influencer who has a lot of good, well tagged images on the net has made a more positive contribution than someone who makes abstract art or stick figure comics. And let’s not forget the labor of those who tagged those images.

        You could not practically get permission from these tens or hundreds of millions of people. It would really be a shame, because the original SD reveals a lot about the stereotypes and biases on the net.

        Using permissively licensed images wouldn’t have helped a lot. I have seen enough outrage over datasets with exactly such material. People say, that’s not what they had in mind when they gave these wide permissions.

        Practically, look at wikimedia. There are so many images there which are “pirated”. Wikimedia can just take them down in response to a DMCA notice. Well, you can’t remove an image from a trained AI model. It’s not in there (if everything has worked). So what now? If that means that the model becomes illegal, then you just can’t have a model trained on such a database.

        • barsoap@lemm.ee · 10 months ago

          People who use their own tools and own their own product (e.g. artisans in Marx’s time) are members of the Petite Bourgeoisie. I think a Marxist analysis of the class dynamics would be fruitful here, but it’s beyond me.

          Please don’t. Marxists, at least Marxist-Leninists, tend to start talking increasing amounts of nonsense once the Petite Bourgeoisie and Lumpen get involved.

          In any case, the whole thing is (as Marx would tell you, but Marxists ignore) a function of one’s societal relations, not of the individual person or job. That relation might change from hour to hour (e.g. if you have a day job), and “does not have an employment contract” doesn’t imply “does not depend on capital for survival” – it’s perfectly possible as an artist, or pipe fitter, to own your own means of production (computer, metal tongs) and still be, as a contractor, in a very similar relationship to capital as the Lumpen day-labourer: to have no say in the greater work that gets created, to be told “do this, or starve”, to be treated as an easily replaceable cog. That may even be the case if you have employees of your own.

          The question, and that’s why Anarchist analysis >>> Marxist analysis, is whether you’re beholden to an unjust hierarchy – in this case, the one created by capital ownership – not whether you happen to own a screwdriver. As e.g. a farmer you might own millions upon millions in means of production; that doesn’t mean supermarket chains aren’t squeezing your bones dry and you can barely afford your utility bills. Capitalism is unjust hierarchy all the way up and down.

          Well, you can’t remove an image from a trained AI model. It’s not in there (if everything has worked). So what now? If that means that the model becomes illegal, then you just can’t have a model trained on such a database.

          I also can’t possibly unhear a song, but that doesn’t mean that my mind, or any music I might compose, is illegal. If it is overfitted in my mind and I want to compose and publish music, then I’ll have to pay attention that my stuff is sufficiently different – I have to run an adversarial model against myself, so to speak – if I don’t want to end up having to pay royalties. If I just want to have it bouncing around my head and sing it in the shower, then I might be singing copyrighted material, but there’s no obligation for me to pay royalties either, as many aspects of copyright require things such as publication or the ability to damage the original author’s income.

          • General_Effort@lemmy.world · 10 months ago

            Well, Marx believed that the Petite Bourgeoisie would disappear. Their members, unable to economically compete, would become employed workers. Hasn’t happened, though. He also observed that this class emulated the outlook of the Haute Bourgeoisie, the rich. IDK more about that. I find it interesting how vocally in favor of right-wing economic policies some artists are, even though these policies massively favor the rich. The phrase temporarily embarrassed millionaire comes to mind. I’m curious about that, is all.

            I like how empathic your anarchist take is but I’m not really sure what to do with it.