For instance, say I search for “The Dark Knight” on my Usenet indexer. It returns to me a list of uploads and where to get them via my Usenet provider. I can then download them, stitch them together, and verify that it is, indeed, The Dark Knight. All of this costs only a few dollars a month for me.
My question is: why can’t copyright holders do this as well? They could follow the same process, then send takedown requests for each individual article that makes up the movie. We already know they try to catch people torrenting, so why don’t they do this too?
I can think of a few reasons, but they all seem pretty shaky.
- The content is hosted in countries where providers don’t have to comply with takedown requests.
It seems unlikely to me that literally all of it is hosted in places like that. Plus, providers couldn’t operate in countries like the US at all without facing legal repercussions.
- The copyright holders think the upfront cost of indexer and provider access outweighs the losses from people pirating their content.
This also seems fishy. It’s cheap enough for me as an individual, and if Usenet weren’t an option, I’d have to pay for three or more streaming services to watch everything I currently do. They’d break even on this scheme even if it only cut off my access.
- They actually do this already, but on a scale small enough that I don’t notice.
The whole point of doing this would be to make Usenet a non-viable option for piracy. If I don’t care about it because it happens so rarely, then what’s the point of doing it at all?
Off topic, but is there any tutorial on how to do this Usenet thing? Feel free to contact me on Matrix, it’s on my profile.
You’ll need 3 things:
A Usenet client such as SABnzbd. This is the equivalent of a torrent client like qBittorrent.
An NZB indexer such as NZBGeek, again the equivalent of a torrent indexer, but for NZB files.
And finally, a Usenet provider such as FrugalUsenet. This is where you actually download articles from. (There are other providers listed in the photo in my other comment here.)
Articles are individual posts on Usenet servers. An NZB file contains a list of the articles that together make up the files you want. Extra parity articles are also included, so that if some articles are lost (taken down via DMCA/NTD requests) they can be rebuilt from the remaining data. Your Usenet client handles the whole process: reading the NZB file, trying to download each article from your configured providers, rebuilding any lost data, then decompressing and stitching everything together into the files you wanted.
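To make that concrete, here’s a rough sketch of what an NZB file looks like and how a client uses it. The NZB below is hand-written and the message-IDs are made up (real ones come from your indexer), and real repair uses par2 recovery articles (Reed-Solomon), not the trivial XOR parity shown at the end, which is only there to illustrate the idea of rebuilding a lost article:

```python
import xml.etree.ElementTree as ET

# Hand-written example NZB; the message-IDs are made up.
NZB = """<?xml version="1.0" encoding="utf-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="uploader@example.com" date="1700000000" subject="example [1/1]">
    <groups>
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <segment bytes="716800" number="1">part1of2.abc123@news.example</segment>
      <segment bytes="716800" number="2">part2of2.def456@news.example</segment>
    </segments>
  </file>
</nzb>"""

NS = "{http://www.newzbin.com/DTD/2003/nzb}"
root = ET.fromstring(NZB)

# Each <segment> names one article; the client fetches every
# message-ID from its configured providers and reassembles the file.
articles = [seg.text for seg in root.iter(f"{NS}segment")]
print(articles)

# Repair, conceptually: plain XOR parity standing in for par2's
# Reed-Solomon recovery data. If one article is taken down, it can
# be rebuilt from the surviving article plus the parity block.
a1, a2 = b"first article ", b"second article"
parity = bytes(x ^ y for x, y in zip(a1, a2))
rebuilt_a2 = bytes(x ^ y for x, y in zip(a1, parity))  # a2 lost to a takedown
print(rebuilt_a2 == a2)
```

This is also why takedowns are leaky: removing a few articles does nothing until more articles are gone than the recovery data can rebuild.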
I’d like to know too, but people are so cryptic about it every time this shit is brought up that I’m overwhelmed before I even begin. So I just stick to the tried and true methods I know.
Install the *arr programs that you want to manage your libraries - Radarr (for movies), Sonarr (for tv shows), Lidarr (for music)
Install NZBGet (for downloading the files)
Sign up to a usenet provider.
Sign up to an indexer like NZBGeek, NZBFinder, etc.
Put your Usenet provider details into NZBGet under the News-Server section.
Put your indexer details into the indexer settings in the *arr programs.
Put your NZBGet details into the Download Clients settings section in the *arr programs.
That’s pretty much the gist of it. Then you can just search for and add the content you’re after in Radarr/Sonarr/Lidarr and it will go looking.
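For the news-server step, what you type into NZBGet’s web UI ends up as plain key=value lines in its config file. A minimal sketch of what that amounts to, assuming NZBGet’s usual `Server1.*` key names; the host, username, and password here are placeholders, and your provider will give you the real values:

```python
# Illustrative only: render the provider details as the key=value
# lines NZBGet's config file uses for its first news server.
# Host and credentials are placeholders, not real values.
server = {
    "Name": "FrugalUsenet",
    "Host": "news.example.com",  # placeholder; your provider's address
    "Port": 563,                 # 563 = NNTP over SSL
    "Username": "your-username",
    "Password": "your-password",
    "Encryption": "yes",
    "Connections": 20,           # stay within your plan's limit
}

lines = [f"Server1.{key}={value}" for key, value in server.items()]
print("\n".join(lines))
```

The indexer and download-client settings in the *arr programs are the same idea: an address plus an API key, entered once, and everything talks to everything else from then on.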
If you need a hand I’m happy to help.
Find a Usenet provider. A quick web search and some reading should get you to the right place. I’m not sure if any good free servers are available anymore, but there’s probably one that’s cheap enough.
Looks like https://sabnzbd.org/ is a free and open-source Windows/macOS/Linux client that can download the files. I haven’t tried it, but it’s highly rated on alternativeto.net.
There are about 100 tutorials that’ll come up in a Google search.