Hello, I’m relatively new to self-hosting and recently started using Unraid, which I find fantastic! I’m now considering upgrading my storage capacity by purchasing either an 8TB or 10TB hard drive. I’m exploring both new and used options to find the best deal. However, I’ve noticed that prices vary based on the specific category of hard drive (e.g., Seagate’s IronWolf for NAS or Firecuda for gaming). I’m unsure about the significance of these different categories. Would using a gaming or surveillance hard drive impact the performance of my NAS setup?
Thanks for any tips and clarifications! 🌻
I’m a big fan of Backblaze’s failure statistics. https://www.backblaze.com/cloud-storage/resources/hard-drive-test-data
Annualized failure rates range from 0.3%/year to over 3%/year, even looking only at drives they have a million-plus drive-hours of data for, and I’d rather be at the lower end of that 10× range.
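To make that range concrete, here’s a small sketch of how an annualized failure rate works out from drive-days and failure counts (the way Backblaze reports it). The drive-day and failure figures below are made-up illustrations, not real Backblaze numbers.

```python
# Sketch: annualized failure rate (AFR) from drive-days of operation and
# failure counts. The input figures are hypothetical, not Backblaze data.

def annualized_failure_rate(drive_days: int, failures: int) -> float:
    """AFR as a percentage: failures per drive-year of operation."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Two hypothetical models with the same failure count but different fleet sizes:
print(round(annualized_failure_rate(drive_days=3_650_000, failures=30), 2))  # 0.3
print(round(annualized_failure_rate(drive_days=365_000, failures=30), 2))    # 3.0
```

Same 30 failures, a 10× difference in AFR, which is why the fleet size behind each statistic matters as much as the failure count.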
As you are looking for bulk data storage, the drive’s speed isn’t of too much concern. A 5400RPM drive is plenty.
If you are looking to put this drive into an array with other drives, make sure you get a CMR drive, as SMR drives can drop out of arrays when controllers find them unresponsive. If a drive doesn’t explicitly say it’s CMR, it’s best to assume it isn’t. Seagate publishes a handy CMR/SMR chart, for example.
Additionally, if there are multiple spinning drives in the same enclosure, getting drives with vibration resistance is a good bonus. Most drives listed for NAS use will have this extra vibration resistance.
Thanks for this, will read up and check out the links!
Yeah, you don’t want a surveillance drive. They are optimized for continuous writes, not random IO.
It’s probably worth familiarizing yourself with the difference between CMR and SMR drives.
If you expect this to keep growing, it might make sense to switch to SAS now - then you can find some really cheap enterprise class drives on ebay that will perform a bit better in this type of configuration. You’d just need a cheap HBA (like a 9211-8i) and a couple breakout cables. You can use SATA drives with a SAS HBA, but not the other way around.
Thanks for the tips! I doubt I’ll be going higher than 50TB at max. Would SAS still be necessary for that, you reckon?
Definitely isn’t necessary, but if you search for ‘3.5" SAS lot’ on ebay you might find all the drives you’ll need to get to 50TB for the price of a couple new SATA drives.
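Cost per TB is the easiest way to compare those listings side by side. A minimal sketch, with made-up prices for illustration:

```python
# Quick cost-per-TB comparison. All prices and listings below are
# hypothetical examples, not real eBay offers.

def cost_per_tb(price: float, capacity_tb: float) -> float:
    """Price divided by capacity, the usual bulk-storage yardstick."""
    return price / capacity_tb

listings = {
    "new 8TB SATA": (180.0, 8),
    "new 10TB SATA": (220.0, 10),
    "used 4TB SAS lot (6 drives)": (120.0, 24),
}

for name, (price, tb) in listings.items():
    print(f"{name}: {cost_per_tb(price, tb):.2f} per TB")
```

With numbers like these, the used SAS lot comes out several times cheaper per TB, which is the whole appeal of the lot listings.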
I live in Scandinavia, so eBay isn’t much of an option for us really. I also prefer to rely less on the big corporations when buying tech; while usually more expensive, it’s worth it for the better warranty and customer service. But thanks for the suggestion nonetheless!
10000RPM SAS drives are noisy (and expensive), something to keep in mind. If I needed this kind of performance I would probably go full SSD.
Apart from the SMR vs. CMR, if your NAS will run 24/7 you need to make sure to use 24/7 capable drives or find a way to flash a 24/7-specific firmware/setting to a consumer drive. Normal consumer drives (e.g. WD Green) tend to have a lot of energy saving features, e.g. they park the drive heads after a few seconds of inactivity. This isn’t a problem with normal use as an external drive that only gets connected once in a while. But in a 24/7 NAS the drive will wake up lots of times and park again, wake up, park again … and these cycles kill the drive pretty fast.
https://www.truenas.com/community/threads/hacking-wd-greens-and-reds-with-wdidle3-exe.18171/
Lots of good advice here. I’ve got a bunch of older WD Reds still in service (from before the SMR BS). I’ve also had good luck shucking drives from external enclosures as well as decommissioned enterprise drives. If you go that route, depending on your enclosure or power supply in these scenarios you may run into issues with a live 3.3V SATA power pin causing drives to reboot. I’ve never had this issue on mine but it can be fixed with a little kapton tape or a modified SATA adapter. It’s definitely cheaper to shuck or get used enterprise for capacity! I’m running at least a dozen shucked drives right now and they’ve been great for my needs.
Also, if you start reaching the point of going beyond the ports available on your motherboard, do yourself a favor and get a quality HBA card flashed in IT mode to connect your drives. The cheapo 4 port cards I originally tried would have random dropouts in Unraid from time to time. Once I got a good HBA it’s been smooth sailing. It needs to be in IT mode to prevent hardware raid from kicking in so that Unraid can see the individual identifiers of the disks. You can flash it yourself or use an eBay seller like ThArtOfServer who will preflash them to IT mode.
Finally, be aware that expanding your array is a slippery slope. You start with 3 or 4 drives and next thing you know you have a rack and 15+ drive array.
On the power disable feature topic, I’ve only bought a few used enterprise drives from Goharddrive.com and Serverpartsdeals.com, but they both included a handy little SATA power adapter with each drive for exactly that reason.
The first desktop I installed them in worked just fine with the factory PSU cables, but when I upgraded I was left scratching my head for a few minutes until I remembered those adapters!
I bought a small roll of kapton tape years ago and just use a sliver of it to cover the 3v3 pin.
Thanks for all the input and feedback - really appriciate it :) I still have quite the way to go to learn some of what these terms are. I have one PCI-e card for expanding the amount of Sata ports, wether is a cheapo card or not im not entierly sure(got it secondhand via a package deal), but have been using it for half a year now without any issues :)
The concern for the specific disk technology is usually around the use case. For example, surveillance drives you expect to be able to continuously write to 24/7 but not at crazy high speeds, maybe you can expect slow seek times or whatever. Gaming drives I would assume are disposable and just good value for storage size as you can just redownload your steam games. A NAS drive will be a little bit more expensive because it’s assumed to be for backups and data storage.
That said in all cases if you use them with proper redundancy like RAIDZ or RAID1 (bleh) it’s kind of whatever, you just replace them as they die. They’ll all do the same, just not with quite the same performance profile.
Things you can check are seek times / latency, throughput both on sequential and random access, and estimated lifespan.
I keep hearing good things about decommissioned HGST enterprise drives on eBay; they’re really cheap.
Other people have suggested good info for gaining nuanced knowledge. I recommend starting with a simple fact: with enough time and/or the right conditions, all storage will fail. Design your setup with redundancy. I personally had to replace 2x 12TB drives this year. I have RAIDZ3 (3 parity drives) and a hot spare, so I just bought cheap replacements from a reputable seller on eBay and consider it part of the cost of self-hosting.
It depends what your parameters are. For spinning hard disks, you want to look at total power cycles and mean time between failures (MTBF). Most enterprise drives have very long MTBF ratings.
In fact, for spinning hard disks, powering on is itself a likely failure mode: in enterprise data centers there are machines where, if you power them off, there’s a good chance some drives won’t come back up.
For solid-state drives, you want to look at mean time between failures, but also total write volume. Enterprise SSDs tend to have much, much greater write endurance.
All of these trade-offs cost money. If you’re looking at archival storage, where you write the data only once, you can go with a disk that has a low total write volume.
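The write-volume trade-off is easy to put numbers on: an SSD’s rated endurance (TBW, total bytes written) divided by your daily write volume gives a rough lifespan. A sketch with made-up endurance ratings and workload figures:

```python
# Sketch: rough SSD lifespan from rated write endurance (TBW) and daily write
# volume. The TBW and workload numbers below are hypothetical examples.

def endurance_years(tbw: float, writes_per_day_gb: float) -> float:
    """Years until the rated total-bytes-written figure is exhausted."""
    writes_per_year_tb = writes_per_day_gb * 365 / 1000
    return tbw / writes_per_year_tb

# A consumer drive rated 600 TBW vs. an enterprise drive rated 10,000 TBW,
# both written at 100 GB/day:
print(round(endurance_years(600, 100), 1))     # 16.4
print(round(endurance_years(10_000, 100), 1))  # 274.0
```

For a write-once archival workload the consumer rating is already far more than enough, which is the point above: don’t pay for endurance you won’t use.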
Read about the specific features of the “WD Red” drives. There are some pretty good articles out there, and you are going to learn a whole lot regarding your question.
I’ve got a bunch of them in my private server. I didn’t know all these details when I bought them LOL, but they do a good job: reliable and silent for 6 years and counting.
Thanks for the tip 🌻
I highly recommend watching this guy’s videos on his analysis of the Backblaze data https://www.youtube.com/watch?v=IgJ6YolLxYE&t=1
And a comparison of the different WD drive colours, which might not be what you expect https://www.youtube.com/watch?v=QDyqNry_mDo&t=2
Also remember that your parity drive has to be greater than or equal to the biggest drive in your array. If you buy a 10TB but don’t have another 10TB, you must use the 10TB as parity.
That is indeed my current issue haha, I was not aware of that when I got into this; so currently I have 10TB in parity and only use 3TB for storage… So I wanna get the most out of that parity by buying another disk.
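The parity-sizing rule from the exchange above can be sketched in a couple of lines. Drive sizes here are examples matching the situation described (a 10TB parity drive protecting a 3TB data drive):

```python
# Sketch of the Unraid-style parity rule discussed above: the parity drive must
# be at least as large as the largest data drive. Sizes (TB) are examples.

def parity_ok(parity_tb: float, data_tb: list[float]) -> bool:
    """True if the parity drive covers the largest data drive in the array."""
    return parity_tb >= max(data_tb)

print(parity_ok(10, [3]))      # True  - current setup
print(parity_ok(10, [3, 10]))  # True  - adding up to a 10TB data drive is fine
print(parity_ok(10, [3, 12]))  # False - a 12TB data drive would outgrow parity
```

So any new data drive up to 10TB makes use of the existing parity; anything bigger would force the new drive into the parity slot instead.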
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
- NAS: Network-Attached Storage
- PSU: Power Supply Unit
- SATA: Serial AT Attachment (interface for mass storage)
- SSD: Solid State Drive (mass storage)
- ZFS: Solaris/Linux filesystem focusing on data integrity
[Thread #750 for this sub, first seen 15th May 2024, 23:25]
Yes, there are differences, but you’re running a redundant array of independent disks precisely so you don’t have to care about those differences.
I think this really depends on what you’re storing. I have a large media collection and doing full redundancy would be extremely wasteful, but it’s fairly easy to repopulate things if something goes awry. If it’s irreplaceable or smaller files, redundancy definitely makes sense.
Sure but technically non-redundant schemes also fall under the category. E.g. RAID0, multiple non-redundant ZFS vdevs, etc. Those would be reducing the performance effects of single disks.
Wasn’t sure if that mattered or not in the case of Unraid. Had a feeling it only counted for the size of the disk. Just trying to make sure I’m not buying an expensive 10TB that I won’t be able to use :P
It can make quite a big difference. https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2024/
I’ve read advice against buying used storage unless you don’t mind a higher risk of losing the data on it.
I’ll buy it used if it has been properly checked by another vendor before being sold again. Otherwise I won’t.