Decided on the Dark Side and I want to build a 120TB Monster and WOW, do I have questions.

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I appreciate the input for my new NAS. I need to know why it wouldn't be a good idea to set up 2 drives per mirrored vdev and have 12 vdevs to accomplish 120TB.
Nothing wrong with it, just trade-offs. Mirrors with 2 disks will give you killer performance and fast rebuild times. RaidZ2 will give you a little better redundancy but less performance. You could also look at doing a 3-way mirror and get good redundancy and performance.

Sent from my Nexus 5X using Tapatalk
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The power usage is actually not that much different.

WD Red is 5.7/2.8 W and WD Enterprise is 7.1/5.0 W (active/idle).

The quality difference makes it worth it for me. Fewer failures, longer life and a better warranty are hard to beat when using large drives (>8TB).

It is significantly harder to keep 7200 RPM conventional drives cool than 5400 RPM ones. Perhaps the He-filled drives are different...

Meanwhile, mirrors vs RaidZ2/3 comes down to storage efficiency. RaidZ will typically have faster sequential write speeds than mirrors, BUT significantly worse random I/O. That is mainly because random I/O speed is a function of how many vdevs you have, and with mirrors being only 2 or 3 disks wide, you simply get more vdevs out of the same number of drives.

For a media storage system, RaidZ2 is probably where it's at. Two vdevs of an 8-way RaidZ2 made of 10TB drives gets you about 120TB of storage, with 4 drives of parity (2 per 8 drives). Of course, you never want to fill all of that 120TB; in fact, you don't want to go past 90% utilization. RaidZ1 would present, IMO, an unacceptable risk of total pool failure as soon as 1 drive failed. If you went with a 24-bay system, you could still add another 60TB of storage with another 8x10TB drives when you get to about 80% on the existing system.
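To sanity-check the capacity math, here's a quick sketch. The 16-drive / 10TB figures come from the layouts discussed above; real-world usable space will be a bit lower once ZFS metadata and slop space are accounted for:

```python
# Compare pool layouts for 16 x 10TB drives (numbers from the post).
DRIVE_TB = 10

def raidz_usable(vdevs, width, parity, size_tb=DRIVE_TB):
    """Raw usable capacity in TB, ignoring ZFS metadata/slop overhead."""
    return vdevs * (width - parity) * size_tb

def mirror_usable(vdevs, size_tb=DRIVE_TB):
    """Each N-way mirror vdev contributes one drive's worth of capacity."""
    return vdevs * size_tb

# Two 8-wide RAIDZ2 vdevs: 2 * (8 - 2) * 10 = 120 TB, 4 parity drives total.
raidz2 = raidz_usable(vdevs=2, width=8, parity=2)

# The same 16 drives as 2-way mirrors: 8 vdevs (better random I/O),
# but only 80 TB usable.
mirrors = mirror_usable(vdevs=8)

print(raidz2, mirrors)                           # 120 80
print(f"90% fill line: {raidz2 * 0.9:.0f} TB")   # 108 TB
```

Same drive count, so the trade is visible directly: RAIDZ2 gives 40TB more usable space, while mirrors give four times as many vdevs for random I/O.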

Alternatively, I think 8TB drives are cheaper per TB than the 10TB drives.

Regarding not caring about redundancy... how much of a pain would it be to reacquire/capture/digitize/etc. the 120TB of content if you suffered total pool failure? Is it worth investing in a couple of disks of redundancy to prevent that?
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
It is significantly harder to keep 7200 RPM conventional drives cool than 5400 RPM ones. Perhaps the He-filled drives are different...

He-filled drives run roughly 20% cooler. They use less energy than normal drives.

For a media storage system, RaidZ2 is probably where its at. Two vdevs of a 8-way RaidZ2 ...

If it is primarily for media, and as long as he has the DVDs to recover from, a Z1 would not be bad. It is all about trying to reduce failures. This is one of the few times I would think about going Z1.

Alternatively, I think 8TB drives are cheaper per TB than the 10TB drives.

WD Enterprise 10TB drives are about $6 per TB cheaper than 8TB.
WD Red 10TB drives are about $2.90 per TB MORE expensive than 8TB.
WD Red Pro 10TB drives are about $13.75 per TB MORE expensive than 8TB.
WD Red Pro 10TB drives are about $3.50 per TB MORE expensive than WD Enterprise!

Seagate 10TB Enterprise are $4.52 per TB cheaper than 8TB.
Seagate 10TB Ironwolf are $3.85 per TB more expensive than 8TB.
Seagate Ironwolf Pro 10TB are $2.50 per TB cheaper than 8TB.
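The deltas above come down to simple division. A tiny helper makes the comparison reproducible; note the prices below are hypothetical placeholders, not the actual Newegg figures behind the numbers quoted:

```python
# Price-per-TB comparison helper. The example prices are placeholders
# for illustration, not the real Newegg prices behind the quoted deltas.
def price_per_tb(price_usd, capacity_tb):
    return price_usd / capacity_tb

drives = {
    "WD Red 8TB":  (250.00, 8),    # hypothetical price
    "WD Red 10TB": (340.00, 10),   # hypothetical price
}

for name, (price, tb) in drives.items():
    print(f"{name}: ${price_per_tb(price, tb):.2f}/TB")
```

With these placeholder numbers the 10TB model works out $2.75/TB more expensive than the 8TB, which is the kind of delta the list above is reporting.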

Stux, check out the spreadsheet I made. Generally it is pretty reliable for http://www.newegg.com

Regarding not caring about redundancy... how much of a pain would it be to reacquire/capture/digitize/etc. the 120TB of content if you suffered total pool failure? Is it worth investing in a couple of disks of redundancy to prevent that?

Well said...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I appreciate the input for my new NAS. I need to know why it wouldn't be a good idea to set up 2 drives per mirrored vdev and have 12 vdevs to accomplish 120TB.
It has been a little while now; have you decided what you will do? Do you have any questions, or is it up and running already?
 

DeltaOscarMike

Dabbler
Joined
Aug 3, 2017
Messages
11
I built a NAS, though not the one I wanted, and I'm having some problems.
I picked up used Xeon E5-2670 CPUs.
System:
CPU: Dual Xeon E5-2670
MOB: ASRock EP2C602-4L/D16
RAM: 128 GB DDR 3 ECC
Chassis: Norco 4224 with hot-swappable bays
PSU: Corsair RMX 1000
Cache Drive: PCIe card for NVMe 960 EVO 1TB
GPU: Quadro P2000
HBA: 3x HP220 (but only 2 in the system)
Parity Drives: 2x 8TB Parity Drives (1 Parity drive is being RMA due to failure).
Storage: 14x 8TB JBOD Drives

(Photos attached: IMG_0272.JPG through IMG_0275.JPG)
I decided to stick with unRAID because I want to add disks to the pool later on and didn't have the cash to spend all that money upfront. Also, with the new unRAID plugin we can now leverage NVIDIA drivers in the Plex Docker container for decoding/encoding (with a script wrapper).

Problems
1. My experience is making me second-guess my CPU & motherboard choice. For some reason the motherboard IPMI records a false temp and the CPU keeps de-asserting itself from the system. This is my 3rd board RMA'd because of that. I find that once the CPU de-asserts, I start getting RAM errors or SEGFAULT errors. (Thinking of an upgrade to an unused 7700K system that is sitting around doing nothing.)
2. Cooling. The server is located in my basement where it is relatively cool, but the drives in the center tend to heat up above 45 degrees Celsius. Ever since I put a makeshift fan in front, my temps have been no higher than 38 degrees Celsius.
3. I'm short one backplane due to operator error: when I tried to insert the cable on the backplane, I ruined the connection, and I've been having a hard time getting a replacement through Norco. Their CSR is very poor at the job, with no response to emails. I ordered a replacement a few months ago and it never showed up, so I put a claim through PayPal; they decided in my favour and I got a refund. I tried to buy it again and there is still no response on them shipping the item, so it looks like another PayPal grievance.
4. HBA cards are my confusion. At first I didn't know the difference between a RAID card and an HBA card, and I have bought a few in the past only to find out they were RAID cards (they are going up for sale): a RocketRaid 2670A (16 lanes) and an Adaptec 72405 (24 lanes; this one also borked my current setup and lost all my data, so I'm scared to use it again). I also have the 3x HP220. This process is very confusing for me performance-wise. I currently only have two installed because the other slots are full with the Quadro P2000 and the NVMe drive.
My problem is that I want to use the third HP220, but I can't get it installed because the NVMe drive is in a PCIe 3.0 slot and the only slot available is PCIe 2.0. If I moved the NVMe to the 2.0 slot, how much of a performance loss would I have? I don't know the answer and didn't want to experiment either.
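For the PCIe question, a rough back-of-the-envelope sketch. This assumes x4 links and the 960 EVO's published ~3.2 GB/s sequential read; random I/O at low queue depths would barely notice the slower link:

```python
# Back-of-the-envelope PCIe link bandwidth for an x4 NVMe drive.
# Per-lane effective throughput after encoding overhead:
#   PCIe 2.0: 5 GT/s with 8b/10b encoding   -> 0.5 GB/s per lane
#   PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
GBPS_PER_LANE = {"pcie2": 0.5, "pcie3": 8 * 128 / 130 / 8}

def x4_bandwidth(gen):
    """Total usable bandwidth (GB/s) of a 4-lane link of the given generation."""
    return 4 * GBPS_PER_LANE[gen]

DRIVE_SEQ_READ = 3.2  # GB/s, typical sequential-read spec for a 960 EVO 1TB

for gen in ("pcie2", "pcie3"):
    link = x4_bandwidth(gen)
    # The drive is limited by whichever is slower: the link or the drive itself.
    print(gen, round(min(link, DRIVE_SEQ_READ), 2))  # pcie2 2.0 / pcie3 3.2
```

So the worst case is sequential transfers dropping from ~3.2 GB/s to the ~2 GB/s ceiling of a PCIe 2.0 x4 slot, roughly a one-third haircut on peak throughput.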
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I decided to stick with unRAID
It would seem that any questions about the system, issues with it, etc., would then be best directed to the unRAID forums or other support channels.
because I want to add disks to the pool later on
...which you can do with FreeNAS as well--you haven't had time to read the documentation in two years?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
HBA cards are my confusion. At first I didn't know the difference between a RAID card and an HBA card, and I have bought a few in the past only to find out they were RAID cards (they are going up for sale).
There is a lot of information on the forum, just use the search feature and it might answer some of your hardware questions.
Those cases are terrible for airflow if there are empty drive bays. There is a solution for that on this forum; you might want to read this thread:
https://www.ixsystems.com/community...ro-x10-sri-f-xeon-e5-1650v4.46262/post-315996

I really don't see how anyone here can help you with unRAID.
 

DeltaOscarMike

Dabbler
Joined
Aug 3, 2017
Messages
11
A mixture of Seagate drives and Western Digital shucked drives.
The one seen in the picture is one of the parity drives, and it is connected directly to the motherboard.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The Seagate Archive drives are "shingled" or "SMR" type. While there's some long-term potential for this type of drive to work very well in ZFS (once the entire filesystem has been rewritten to be optimized for them), there are some scenarios where things go pear-shaped.

In your case, the workload of long-term archival storage of movies/documents/backups is probably not too bad. Regular operation doesn't result in a lot of writes (hardly any, I'd wager) and as such the "cache" on the drives will never get overwhelmed. It will hopefully be able to reshingle itself at its own leisurely pace.

Where it will potentially fall down hard is if you ever have to replace a drive due to failure, or potentially when you expand the array in unRAID (if this causes data releveling/restriping) and your old drives end up with free space fragmentation. You could be asking for a lot of writes all over the disk, which makes SMR grind to a pretty big halt.

As long as you don't overwhelm the PMR "cache" on the Archive drives it should be fine, but keep a close eye on performance of sustained writes long-term. The initial load onto clean LBAs should be fine, it'll be when you go to rewrite data that it can choke up.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
But OP isn't using ZFS; he's using unRAID.
Skimming through the thread I thought OP was looking to move away from unRAID to ZFS.

If not, that changes things significantly in favor of SMR then. unRAID looks a lot more like the type of object-based storage that works well with SMR drives; although if the "dedicated parity drive" requires more random writes and rewrites as data is updated, SMR might not be the best choice for that specific spindle.

But for file-based redundancy where it's exclusively "write once, delete never" then SMR drives are exactly the ticket. Enjoy the lower $/TB.
 

tfran1990

Patron
Joined
Oct 18, 2017
Messages
294
...which you can do with FreeNAS as well--you haven't had time to read the documentation in two years?

I actually lol at that, some of the people in the office are looking at me....
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
...But that case does. The OP clearly needs a better case, one that is engineered for that many drives. Squeezing stuff in and compromising airflow to the point where a room fan is used to supplement the interior fans is a sure sign that not all is well.

Supermicro cases on eBay are cheap and can hold both the drives and the motherboard in happiness. Or go the Chenbro route. Or the Storinator at 45 drives.

With that much data/$$$ tied up in a single system, I'd be looking at redundant PSUs and UPSes. It's odd to spend literally thousands of bucks on the storage drives only to skimp on the underlying hardware.

My experience with Norco was so bad, the mere mention of the name is giving me flashbacks to that stinker of a case I bought from them.
 