Some thoughts on how useful Anubis really is. Combined with comments I’ve read elsewhere about scrapers starting to solve the challenges, I’m afraid Anubis will soon be outdated and we’ll need something else.

  • mfed1122@discuss.tchncs.de · 22 hours ago

    Yeah, you’re absolutely right and I agree. So do we have to resign ourselves to an eternal back-and-forth of inventing new challenges every time the scrapers adapt, like antibiotics and resistant bacteria? Maybe that is the way it is, and honestly that’s what I suspect. But Anubis feels so clever and so close to something that would work. The concept of making it about a cost that adds up, so that it intrinsically only affects massive operations in any significant way, is really smart: it’s not about coming up with a challenge a computer can’t complete, just one that makes completing it at scale economically not worth it. But it’s disappointing to see that, at least with the current wait times, it doesn’t seem to cost enough to dissuade scrapers. And worse, the per-request cost is so low that making it significant to scrapers would require really insufferable wait times for users.
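
    For anyone who hasn’t looked under the hood, the core mechanism is roughly the hash-based proof-of-work sketched below — a minimal illustration, assuming a SHA-256 challenge with a leading-zero-bit difficulty target; the function names and numbers are mine, not Anubis’s actual parameters.

    ```python
    import hashlib
    import os
    import time

    def solve_challenge(seed: bytes, difficulty_bits: int) -> int:
        """Find a nonce so that SHA-256(seed || nonce) has `difficulty_bits`
        leading zero bits. Expected work is about 2**difficulty_bits hashes."""
        target = 1 << (256 - difficulty_bits)  # hash must be numerically below this
        nonce = 0
        while True:
            digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    seed = os.urandom(16)  # stand-in for a server-issued challenge token
    start = time.time()
    nonce = solve_challenge(seed, difficulty_bits=20)  # ~a million hashes on average
    print(f"nonce={nonce}, solved in {time.time() - start:.2f}s")
    ```

    The point being: verifying the answer costs the server one hash, but solving it costs the client a tunable amount of CPU — a rounding error for one person loading one page, multiplied by millions for a bulk scraper.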

    • Guillaume Rossolini@infosec.exchange · 15 hours ago

      @mfed1122 yeah, that is my worry: what’s an acceptable wait time for users? A tenth of a second is usually not noticeable to a human, but is it useful in this context? What about half a second, and so on?

      I don’t know that I want a web where everything is artificially slowed by a full second for each document.
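
      A rough back-of-envelope on that trade-off — every number below is an assumption for illustration only (the solve time, the crawl size, and the CPU price):

      ```python
      # What does a per-request proof-of-work delay cost a bulk scraper?
      cpu_seconds_per_request = 0.5     # assumed average challenge solve time
      pages_scraped = 100_000_000       # assumed size of a large crawl
      cost_per_cpu_hour_usd = 0.02      # assumed cheap spot/cloud CPU price

      total_cpu_hours = pages_scraped * cpu_seconds_per_request / 3600
      print(f"{total_cpu_hours:,.0f} CPU-hours, "
            f"roughly ${total_cpu_hours * cost_per_cpu_hour_usd:,.0f}")
      # -> ~13,889 CPU-hours, on the order of a few hundred dollars
      ```

      A few hundred dollars per hundred million pages is noticeable but hardly ruinous for a well-funded crawler, while every human visitor still pays the same half-second on every page — which is exactly the asymmetry being worried about here.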