A sex offender convicted of making more than 1,000 indecent images of children has been banned from using any “AI creating tools” for the next five years in the first known case of its kind.

Anthony Dover, 48, was ordered by a UK court “not to use, visit or access” artificial intelligence generation tools without the prior permission of police as a condition of a sexual harm prevention order imposed in February.

The ban prohibits him from using tools such as text-to-image generators, which can make lifelike pictures based on a written command, and “nudifying” websites used to make explicit “deepfakes”.

Dover, who was given a community order and £200 fine, has also been explicitly ordered not to use Stable Diffusion software, which has reportedly been exploited by paedophiles to create hyper-realistic child sexual abuse material, according to records from a sentencing hearing at Poole magistrates court.

  • jkrtn@lemmy.ml · 8 months ago

    I thought pedophiles looking at CSAM were more likely to attack a child, not less. They are actively fantasizing about it, and that can escalate.

    I am basing this belief on what I remember of discussions about that “ask a rapist” Reddit megathread. Apparently psychologists found it horrifying.

    • Allero@lemmy.today · 8 months ago (edited)

      The bias with this approach is that it highlights those who did offend while telling us nothing about those who didn’t. This selection bias recurs throughout the research as well.

      It’s very likely that a lot of child abusers did watch CSAM (after all, if you see no issue in child abuse, you see no issue in the creation of such imagery). But how many CSAM viewers end up becoming abusers, and is there an elevated risk? That is the question.

      I guess if we’d make an “ask a pedophile” thread instead of “ask a rapist”, we could get some insights. Pedophiles, catch the idea!

      • jkrtn@lemmy.ml · 8 months ago

        But then we cannot say it in either direction. Without data, we simply don’t know whether they are more or less likely to attack a child.

        • Allero@lemmy.today · 8 months ago

          By “harmful ways” I meant consuming more real CSAM — something that is frustratingly under-researched as well, but one can guess.