Best Configuration for 26 x 3TB SAS drives

Th3D0ct0r

Cadet
Joined
Apr 11, 2019
Messages
6
I have been using FreeNAS for a number of years on various HP MicroServers, but my storage needs now far exceed 4 disks (even with 3 servers), and I have recently splashed out on a used SuperMicro CS-847 36-bay storage server. I have got my hands on 22 x 3TB disks and am looking for advice on how best to configure them.
I was considering 2 pools of 11 disks using RAIDZ3, but I have seen various references about not using more than 9 disks in a pool, and now I am not sure what configuration to use.
Looking for some suggestions.
 
Joined
Jul 3, 2015
Messages
926
What's the use case?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
We're going to need a use case. Is it just bulk storage, is there an IOPS requirement, client count, Plex/Transcoding? Etc...

I will add: you need to carefully consider your replacement spares. I'm under the impression there are no more 3TB drives in production, and I expect sourcing them to start to become an issue. You may wish to consider the effects of replacing 3TB drives with 4TB replacements in your pool in the future, or hold several back as spares, etc... With 22 drives I would expect to do a drive replacement at least once a year, even with new disks.
 
Joined
Jul 3, 2015
Messages
926
VM Datastores
Mirrors, lots of them. Perhaps 1 pool, 10 vdevs of mirrors with two hot-spares.
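For illustration, that layout would look something like this at the command line (a sketch only; the pool name and `da0`..`da21` device names are placeholders for your actual disk IDs):

```shell
# 10 two-way mirror vdevs striped together, plus two hot spares = 22 disks.
zpool create tank \
  mirror da0  da1   mirror da2  da3   mirror da4  da5 \
  mirror da6  da7   mirror da8  da9   mirror da10 da11 \
  mirror da12 da13  mirror da14 da15  mirror da16 da17 \
  mirror da18 da19 \
  spare da20 da21
```

Writes are striped across all ten mirrors, which is where the IOPS advantage for VM workloads comes from.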

What hypervisor are you using? Will you use NFS or iSCSI?

What about your networking, 1Gb or 10Gb?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
VM Datastores, Media Server, Windows File Server, python, apache, MySQL

You're probably going to want multiple pools. It depends on your networking. If you have a 10GbE SAN for ESX, you might want to consider a mirror pool for the VMs, to get the IOPS count up, and then a RAIDZ2 pool for the bulk storage, things that are mostly read-only.
 

Th3D0ct0r

Cadet
Joined
Apr 11, 2019
Messages
6
This is not for production use, at best a proof of concept (lab use), so performance is not a concern.
 
Joined
Jul 3, 2015
Messages
926
performance is not a concern
Ok then, one pool, 2 x 11-disk Z2/Z3, depending on what's more important: capacity or resilience.
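As a sketch (device names `da0`..`da21` are placeholders; substitute your real disk IDs):

```shell
# One pool, two 11-disk RAIDZ3 vdevs = 22 disks, 3 parity disks per vdev.
zpool create tank \
  raidz3 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 \
  raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21
```

Swapping `raidz3` for `raidz2` trades one disk of parity per vdev for roughly one drive's worth of extra capacity in each.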
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I was considering 2 pools of 11 disks using RAIDZ3, but I have seen various references about not using more than 9 disks in a pool
I assume you intend "vdev" in place of "pool" here (if you don't, you should). An 11-disk-wide RAIDZ3 would generally be OK, but it's kind of on the edge. I'd instead suggest 3 vdevs of 8 disks each, in RAIDZ2. That gives you an additional vdev, and therefore more IOPS. Light VM usage on such a configuration isn't likely to be a problem (I run iSCSI on my system (see my sig) for my home lab), you have the same loss to redundancy, and increasing capacity doesn't require as many replacement disks.
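A sketch of that layout (the pool name and `da0`..`da23` device names are placeholders for your actual disk IDs):

```shell
# Three 8-disk RAIDZ2 vdevs = 24 disks, 2 parity disks per vdev.
# Same total parity as 2 x RAIDZ3, but one more vdev to stripe across.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7 \
  raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
  raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```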
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
This is not for production use, at best a proof of concept (lab use), so performance is not a concern.

It still depends on your networking. Remember, 1GbE can be saturated by a single disk. RAIDZ2 is going to have a write speed roughly the same as that of the individual drives in each vdev, round-robin between vdevs. Even in a lab environment, if you have 10GbE, you're going to want to use it just to move stuff around in a timely manner. In that case you'll want a couple of vdevs for the system to hop back and forth between to get the write rate above 1GbE speeds.
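To put rough numbers on that (a back-of-envelope estimate at raw line rate, ignoring protocol overhead; real transfers will be somewhat slower):

```shell
#!/bin/sh
# Time to move 1 TiB at wire speed: 125 MB/s for 1GbE, 1250 MB/s for 10GbE.
tib_bytes=1099511627776   # 1 TiB
for mbps in 125 1250; do
    secs=$((tib_bytes / (mbps * 1000000)))
    printf '%4s MB/s -> %s s (~%s min)\n' "$mbps" "$secs" "$((secs / 60))"
done
# 125 MB/s -> 8796 s (~146 min); 1250 MB/s -> 879 s (~14 min)
```

A single modern spinner can sustain 125 MB/s or more sequentially, which is why one disk is enough to fill a 1GbE link.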
 

Th3D0ct0r

Cadet
Joined
Apr 11, 2019
Messages
6
I would normally use VMware, as this is what we generally use in production, although this could be MS Hyper-V or Oracle Virtual Server, or even the virtualization now available in FreeNAS 11.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I would normally use VMware, as this is what we generally use in production, although this could be MS Hyper-V or Oracle Virtual Server, or even the virtualization now available in FreeNAS 11.

FWIW - I looked at the bhyve VMs supported by FreeNAS and found them somewhat lacking. They had quite a bit of trouble keeping time, literally drifting by minutes without NTP, and stepping every few minutes with NTP. I do run MySQL in a jail, and it works quite well. Jails share the kernel with the NAS, so they don't suffer the time wander/hop issue I experienced with the VMs.
 

Th3D0ct0r

Cadet
Joined
Apr 11, 2019
Messages
6
SuperMicro CS847 36 Bay Storage Server
SuperMicro X9DR3-LN4F+
128GB ECC RAM
2 x Intel Xeon E5-2630 @ 2.3GHz (Total 12 Cores, 24 with HT)
1 x LSI 9211-8i in IT mode
SAS Expanders for 36 SAS drives
26 x 3TB SAS drives
4 x 250GB SAS SSD
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
4 x 250GB SAS SSD
How were you thinking of using these?
26 x 3TB SAS drives
I would do 4 vdevs of six drives in RAIDZ2. That would only use 24 of the drives; keep the other two as cold spares, not hot spares, so you can swap them in manually when they are needed, and while they are waiting they are not accumulating power-on hours. That should give you more IOPS (more vdevs means more IOPS), which matters since you are doing virtualization, and you will have net usable storage of around 33TB after taking into account that you should not fill the pool beyond 80% capacity.

However, since you are doing virtualization, it might be better to keep the pool around 50% capacity. The closer to full it gets, the more it slows down; that is both a mechanical property of the drives and an effect of how ZFS stores data. When you need more capacity, you can replace one vdev at a time to grow the pool, so a capacity upgrade is only six drives away.

You will still want to do a burn-in on this system to ensure everything is working (including the drives that you will make cold spares) before you start putting data on it. Here is a good guide for all of that:

Uncle Fester's Basic FreeNAS Configuration Guide
https://www.familybrown.org/dokuwiki/doku.php?id=fester:intro

If you choose to use iSCSI or NFS for storage, you might need to handle sync writes, which means you will want a SLOG. Is that what you had in mind for the SSDs? Here is a forum thread where we discussed the relative merits of different hardware for that:

SLOG benchmarking and finding the best SLOG
https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/
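If that is the plan, attaching the SSDs looks something like this (a sketch; `da26`-`da28` are placeholder device IDs for the SSDs, and whether a mirrored SLOG or an L2ARC is worthwhile depends on your workload):

```shell
# Add a mirrored SLOG (protects in-flight sync writes if one SSD dies)...
zpool add tank log mirror da26 da27
# ...and, optionally, one SSD as an L2ARC read cache.
zpool add tank cache da28
```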

There are many other useful links that you can find from my signature.
 