Best RaidZ configuration for 36 x 3TB Drives?

Crm

Dabbler
Joined
Sep 14, 2015
Messages
27
Hi All,

Looking to build another NAS out of my Supermicro SC847 36 bay host.

I have 36 drives hooked up to a 9311-8i HBA card, and I am looking to figure out the best config for optimal speed while still retaining some redundancy.

They are all 3TB 7200RPM SATA drives; 24 drives are on one port of the HBA and 12 on the other port.

I was thinking 3 x 12-drive vdevs in RAIDZ1, which would give me 3 drives for redundancy (1 per vdev)?

Performance is key, but redundancy is of equal concern; just not enough to run purely mirrors (RAID 1).
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Performance is key, but redundancy is of equal concern; just not enough to run purely mirrors (RAID 1).
You probably want to be more specific about your intended purpose. "Performance is key" means virtualization to me, but might mean something different to every other person.

For virtualization, you need to consider using mirrored pairs in most cases (maybe not every single case). I'd do some serious testing before choosing a final config. Your system might be able to get good performance with a bunch of Z1s or something, but write performance is usually tied to the number of vdevs you have; more is better. **Edit: I was perfectly fine with using Z1 when I was using small SAS drives, since they resilver quickly. Multi-terabyte drives can take many hours, even a day or two. If all those drives are the same age, either go with more redundancy at the cost of some performance, or have really good backups and be willing to do a restore, if that's an option for your purposes.

If this is general file storage, then read https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/ just to make sure you are comfortable with using Z1. It's a matter of opinion whether you completely agree or not, but ultimately choose your own risk/performance ratio.

Have you checked out CyberJock's ppt? https://drive.google.com/file/d/0BzHapVfrocfwblFvMVdvQ2ZqTGM/view
Edit: This guide gives you some good recommendations for creating groups of disks. Your write performance needs might dictate how many vdevs you want to split those drives into. I'd try a couple of configs and test them. I think the guide will tell you not to go over 11 disks per vdev. I'm sure you're aware of the performance cost of calculating parity, so have fun trying some different pool configs.

I can't answer better without knowing what you intend to do with the system, but for file storage, I think a couple of vdevs at Z2 might be really nice. You could try 6 groups at Z2 with 6 drives each, or maybe risk it and try 3 groups at Z2 with 12. The six-vdev layout will write better, but give you less capacity (rough sketch below).
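Just to sketch that trade-off roughly, here is a quick back-of-the-envelope comparison. The 250 random IOPS per drive and the "one drive's worth of IOPS per RAIDZ vdev" rule of thumb are assumptions on my part, and it ignores ZFS metadata/padding overhead entirely:

```python
# Rough comparison of 36 x 3TB drives in two RAIDZ2 layouts.
DRIVE_TIB = 3e12 / 2**40          # a "3TB" drive is ~2.73 TiB
IOPS_PER_DRIVE = 250              # assumed for 7200RPM SATA, not measured

layouts = {
    "6 vdevs x 6-wide RAIDZ2":  (6, 6, 2),
    "3 vdevs x 12-wide RAIDZ2": (3, 12, 2),
}

for name, (vdevs, width, parity) in layouts.items():
    data_drives = vdevs * (width - parity)        # drives left after parity
    usable_tib = data_drives * DRIVE_TIB          # ignores metadata/padding
    est_iops = vdevs * IOPS_PER_DRIVE             # ~one drive's IOPS per vdev
    print(f"{name}: ~{usable_tib:.1f} TiB usable, ~{est_iops} random write IOPS")
```

Treat the numbers as relative, not absolute; benchmark the real layouts before committing.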
 

Crm

Dabbler
Joined
Sep 14, 2015
Messages
27
Hi there, I had a read through the PPT, but I was just curious, as I knew there is a calculation to work out the optimal config.

As far as use, at the moment it is file storage (media, data, etc.), but it needs to be quite performant as I have applications running from it as storage.

In regards to virtualization, I did try in the past to use it to store ESXi VMDKs via iSCSI/NFS, but the performance was awful, so I use local SAS SSD storage for that.

Thanks
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
In regards to virtualization, I did try in the past to use it to store ESXi VMDKs via iSCSI/NFS, but the performance was awful, so I use local SAS SSD storage for that.
Yeah, you have to build the system very specifically for that to work well. I run VM datastores, but I have 72GB RAM, quad SSDs for SLOG, three SSDs for cache, dual-fabric Fibre Channel connectivity, and I use mirrors. Z1/Z2 sucked hard. My pool is six vdevs of 2TB SATA drives. It performs well enough with caching for my purposes. Without caching, it's terrible. An all-SSD pool would shred! 128GB RAM would be nice too, along with a P3700, if money were no object.

A guy built a spreadsheet one time that listed all the possible drive-count/RAIDZ combinations, but I can't find it offhand.

Try a couple of the configs I edited into the first post. You can also try a couple of Z1 configs if you are slightly more risk-accepting. Personal choice.

I would record the read/write throughput and latency in these configs while doing exactly what you intend to use the system for. Try not to use synthetic testing like CrystalDiskMark or something like that unless you're going to compare results in a relative fashion, rather than expecting the results to translate directly to the performance you'll get for your files/apps. Usually the IO profile isn't quite the same.
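If it helps, here's a minimal sketch of the kind of relative comparison I mean: time a simple write-and-read pass against a file on the pool under test and record throughput and per-call latency. The path and sizes are just placeholders; run it against each candidate layout and compare the numbers against each other rather than treating them as absolute:

```python
import os
import time

TEST_FILE = "/mnt/tank/benchmark.tmp"   # placeholder path on the pool under test
BLOCK_SIZE = 1024 * 1024                # 1 MiB per write/read call
BLOCK_COUNT = 1024                      # ~1 GiB total

def bench_write(path):
    """Write BLOCK_COUNT blocks and fsync; return (MB/s overall, avg ms per write call)."""
    buf = os.urandom(BLOCK_SIZE)
    latencies = []
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(BLOCK_COUNT):
            start = time.perf_counter()
            f.write(buf)
            latencies.append(time.perf_counter() - start)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - t0
    return BLOCK_SIZE * BLOCK_COUNT / 2**20 / elapsed, 1000 * sum(latencies) / len(latencies)

def bench_read(path):
    """Read the file back in BLOCK_SIZE chunks; return (MB/s overall, avg ms per read call)."""
    latencies = []
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            start = time.perf_counter()
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            latencies.append(time.perf_counter() - start)
    elapsed = time.perf_counter() - t0
    return os.path.getsize(path) / 2**20 / elapsed, 1000 * sum(latencies) / len(latencies)

w_mb, w_lat = bench_write(TEST_FILE)
r_mb, r_lat = bench_read(TEST_FILE)
print(f"write: {w_mb:.0f} MB/s, {w_lat:.2f} ms/call")
print(f"read:  {r_mb:.0f} MB/s, {r_lat:.2f} ms/call")
os.remove(TEST_FILE)
```

Keep in mind a read pass right after the write will mostly be served from ARC, so use a file much larger than RAM (or remount the dataset between passes) if you want the disks themselves in the picture.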
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I was thinking 3 x 12-drive vdevs in RAIDZ1, which would give me 3 drives for redundancy (1 per vdev)?

Performance is key, but redundancy is of equal concern; just not enough to run purely mirrors (RAID 1).
The wider the vdev (12 drives), the lower the performance. Also, if you are running drives larger than 1TB, the recommended redundancy is RAIDz2. RAIDz1 is absolutely a bad idea, especially with 12 drives in a vdev. I have a server at work that suffered 3 drive faults in one vdev over a weekend. If not for hot spares, I would have lost the whole pool even running RAIDz2.
Your best performance, short of setting it all up as mirrors, would likely be 6 drive vdevs with each vdev being RAIDz2.
That would give you 6 vdevs of 6 drives with a "raw" capacity of 98.2TB and a usable capacity of 62.9TB. With the 20% free space suggestion to preserve performance, you would be able to use 49.9TB, and I would estimate that the IOPS potential would be around 1,500.
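Roughly how those numbers shake out, if you read the capacities as TiB (36 x 3TB drives come to about 98.2 TiB raw). The ~4% allowance for ZFS metadata/padding and the 250 random IOPS per drive are assumptions I'm using to illustrate the estimate, not exact figures:

```python
# 36 x 3TB drives as 6 x 6-wide RAIDZ2, back-of-the-envelope numbers.
DRIVES, VDEVS, WIDTH, PARITY = 36, 6, 6, 2
DRIVE_TIB = 3e12 / 2**40          # a "3TB" drive is ~2.73 TiB
OVERHEAD = 0.04                   # assumed allowance for ZFS metadata/padding
IOPS_PER_DRIVE = 250              # assumed for 7200RPM SATA

raw = DRIVES * DRIVE_TIB                                         # ~98.2 TiB
usable = VDEVS * (WIDTH - PARITY) * DRIVE_TIB * (1 - OVERHEAD)   # ~62.9 TiB
headroom = usable * 0.8                                          # keep 20% free
est_iops = VDEVS * IOPS_PER_DRIVE                                # ~one drive's IOPS per vdev

print(f"raw capacity:    {raw:.1f} TiB")
print(f"usable capacity: {usable:.1f} TiB")
print(f"80% fill limit:  {headroom:.1f} TiB")
print(f"random IOPS:     ~{est_iops}")
```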
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In regards to virtualization, I did try in the past to use it to store ESXi VMDKs via iSCSI/NFS, but the performance was awful, so I use local SAS SSD storage for that.
You would need to use a SLOG device, and possibly an L2ARC as well, to get good performance for iSCSI to ESXi, as that would be synchronous traffic.
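To show why synchronous traffic hurts so much without a SLOG, here's a minimal sketch (not ZFS-specific) that times the same stream of small writes with and without an fsync after every write, which is roughly what sync NFS/iSCSI traffic forces the pool to do. On spinning disks with the in-pool ZIL, the fsync'd case is typically dramatically slower; a fast SLOG device is what closes that gap. The file path and sizes are placeholders:

```python
import os
import time

PATH = "/mnt/tank/syncwrite.tmp"  # placeholder path on the pool under test
WRITES = 200
BLOCK = 16 * 1024                 # 16 KiB, small-sync-write territory

def timed_writes(path, sync_each):
    """Write WRITES blocks; optionally fsync after every write, like sync=always traffic."""
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(WRITES):
            f.write(buf)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())   # force the write to stable storage before continuing
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - start

buffered = timed_writes(PATH, sync_each=False)
per_write_sync = timed_writes(PATH, sync_each=True)
print(f"buffered writes + one final fsync: {buffered * 1000:.1f} ms")
print(f"fsync after every write:           {per_write_sync * 1000:.1f} ms")
os.remove(PATH)
```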
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Your best performance, short of setting it all up as mirrors, would likely be 6 drive vdevs with each vdev being RAIDz2.
...and for a sacrifice of some IOPS, but a gain of storage capacity, you could look at 9-disk vdevs instead, still in RAIDZ2.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Yep, 9-way RaidZ2 has optimal padding efficiency, last I checked.

Either performance is key and you go for mirrors or 6-way/7-way RaidZ2, or you want performant storage efficiency and go for 9-way RaidZ2.

Anything more or less is either too wide, or doesn’t factor well.
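A quick sketch of the padding arithmetic behind that, using the widely circulated RAIDZ allocation model (data sectors plus parity per stripe, with the total rounded up to a multiple of parity + 1), and assuming ashift=12 (4 KiB sectors) and the default 128 KiB recordsize; it's an approximation, not an exact reproduction of ZFS's allocator:

```python
import math

SECTOR = 4096                     # ashift=12
RECORDSIZE = 128 * 1024           # default 128 KiB records
PARITY = 2                        # RAIDZ2

def raidz_efficiency(width, parity=PARITY, recordsize=RECORDSIZE):
    """Approximate fraction of allocated sectors that hold data for one record."""
    data = math.ceil(recordsize / SECTOR)            # data sectors per record
    stripes = math.ceil(data / (width - parity))     # stripe rows needed
    allocated = data + stripes * parity              # data + parity sectors
    # ZFS rounds each allocation up to a multiple of (parity + 1) sectors.
    allocated = math.ceil(allocated / (parity + 1)) * (parity + 1)
    return data / allocated

for width in (6, 7, 8, 9, 10, 11, 12):
    eff = raidz_efficiency(width)
    ideal = (width - PARITY) / width
    print(f"{width:2d}-wide RAIDZ2: {eff:5.1%} of allocated space is data (ideal {ideal:5.1%})")
```

Under those assumptions, 9-wide comes out with no padding loss at all, and going wider doesn't actually buy you any more efficiency for 128 KiB records.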

5x 7-way gives you a spare bay... Which is always useful.
 

enoch85

Dabbler
Joined
Nov 30, 2016
Messages
33
I have 36 drives hooked up to a 9311-8i HBA card.

Hi Crm

A little bit off topic here, but I just bought a SAS 9311-8i card myself, which runs the 11.00.01.00-IR firmware. The problem is, I want to flash it to IT mode with the latest firmware, P16 (if possible) or P14.

Which FW do you run on your card, and if you run IT, how did you flash it?

Please let me know if this question is better suited for a new topic. Just wanted your input.

Thanks!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thank you very much Chris!

Does that work with 9311-8i as well? Just to be sure....
The process should be the same; you just need to get the latest version of the firmware from the manufacturer's (whatever their name is now) site.
 

enoch85

Dabbler
Joined
Nov 30, 2016
Messages
33
If it helps anyone, I bricked my 9311-8i card and bought a brand new LSI 9300-8i instead (IT mode from the factory). I learned from my mistakes and flashed it correctly this time with the latest FW. Works like a charm.
 