4x M.2-2280 NVMe in a PCIe x16 card with x4x4x4x4 bifurcation in RAIDZ for speed?

petreza

Cadet
Joined
Jan 9, 2018
Messages
7
Hello, (first post, completely new to FreeNAS/ZFS)

I am building a new system based on a Supermicro X11DPG-QT, two Xeon Scalable Silver 4114 processors (10 cores each), and 64GB of RAM for mixed Home Lab / Desktop (VGA passthrough) use. I plan to have FreeNAS running as an ESXi VM (with JBOD passthrough), serving (iSCSI?) as a datastore for the other VMs on the same host as well as for VMs running on two diskless servers connected directly to this system by three ConnectX-3 Pro EN cards. Those servers are old, with PCIe 2.0 slots, so the maximum theoretical throughput is 4 GB/s each.
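(For the iSCSI part I am picturing something like a zvol carved out of the NVMe pool and exported to ESXi. Just a sketch with made-up pool/zvol names, since I have not set any of this up yet:)

    # carve a block device (zvol) out of the pool for ESXi to use as an iSCSI datastore
    # "tank" and "vmstore" are placeholder names; volblocksize can only be set at creation time
    zfs create -V 500G -o volblocksize=16K tank/vmstore
    # the iSCSI target/extent/portal itself would be configured through the FreeNAS GUI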

I want to set up a fast storage system so that a lot of VMs can run, and sometimes boot, in parallel.

I could set up an HDD RAIDZ2 for capacity and SSD caching for speed, but I want to keep my HDDs in an LSI-card RAID6, separate from FreeNAS, until I learn ZFS and am comfortable with it. I will set up automatic backups of the VMs from FreeNAS to the RAID6 several times a day, so that if I mess up I don't lose everything. (FreeNAS can do that, right?)
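(My rough plan for those backups, assuming the RAID6 volume ends up mounted at something like /mnt/raid6; all names and paths below are placeholders:)

    # snapshot the VM storage, then dump the snapshot to a file on the non-ZFS RAID6 volume
    zfs snapshot tank/vmstore@daily-1
    zfs send tank/vmstore@daily-1 | gzip > /mnt/raid6/backups/vmstore-daily-1.zfs.gz
    # FreeNAS has periodic snapshot tasks and cron jobs in the GUI that should cover the scheduling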

Anyway, one of the fastest flash-based storage devices right now is the NVMe M.2 stick, with over 3 GB/s sequential reads and a lot of IOPS. What do you think of sticking four of those in one card like the Quattro M.2 NVMe SSD 410 and setting them up in a RAIDZ1? Would FreeNAS be able to supply sustained speeds of, say, 5-6 GB/s? (The card is supposed to have some protection against power loss.)
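(In case it matters, the pool I have in mind would be something like this, assuming the four sticks show up as nvd0 through nvd3, which is just my guess at the device names:)

    # four NVMe devices in a single RAIDZ1 vdev; "tank" is a placeholder pool name
    zpool create tank raidz nvd0 nvd1 nvd2 nvd3
    zpool status tank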

Do I need this speed? No, not really. I just want to know what's possible, even if I don't end up doing this setup.

Thank you for any advice you might have!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I want to keep my HDDs in an LSI-card RAID6, separate from FreeNAS, until I learn ZFS and am comfortable with it.
Trust me, ZFS is far more capable, safer and easier than any hardware RAID6 solution. Don't be afraid.

I will set up automatic backups of the VMs from FreeNAS to the RAID6 several times a day, so that if I mess up I don't lose everything. (FreeNAS can do that, right?)
Sure.

Anyway, one of the fastest flash-based storage devices right now is the NVMe M.2 stick, with over 3 GB/s sequential reads and a lot of IOPS. What do you think of sticking four of those in one card like the Quattro M.2 NVMe SSD 410 and setting them up in a RAIDZ1? Would FreeNAS be able to supply sustained speeds of, say, 5-6 GB/s?
Maybe, it's going to depend a lot on the SSDs and the workload.

My primary concern is that the motherboard explicitly needs to support PCIe bifurcation and the specific configuration you'll be using. Do not assume this is the case, triple-check. You could also get an adapter with a PCIe switch, but that would cost you a truckload of cash.
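An easy way to verify that bifurcation is actually working is to check that all four SSDs enumerate from the FreeNAS shell, along these lines:

    # all four NVMe controllers should show up if the slot is really running as x4x4x4x4
    nvmecontrol devlist
    pciconf -lv | grep -i nvme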
 

petreza

Cadet
Joined
Jan 9, 2018
Messages
7
Thanks, Ericloewe!

Trust me, ZFS is far more capable, safer and easier than any hardware RAID6 solution. Don't be afraid.

Maybe it is just me, but the first time I read about ZFS I got the impression that it is flimsy. Sure, it is very safe and robust, but make sure you use Registered RAM or you might lose everything. Then there was the "don't run FreeNAS as a virtual machine", the "don't give it anything but 'real' drives to work with", etc. Now I know better, but that was my original impression.
I am sure it is not complicated to set up and get running, but don't I need some practice first to be able to save myself if I get into a bind?

Maybe, it's going to depend a lot on the SSDs and the workload.

My primary concern is that the motherboard explicitly needs to support PCIe bifurcation and the specific configuration you'll be using. Do not assume this is the case, triple-check. You could also get an adapter with a PCIe switch, but that would cost you a truckload of cash.

I was thinking of using Samsung 960 EVO NVMe drives. The workload will probably never need all that bandwidth on a regular basis, except when booting a large number of VMs in parallel or moving large chunks of data around. I am just trying to eliminate bottlenecks, hence the 56Gbit NICs.
This motherboard (and probably most, if not all, of the new X11-range motherboards from Supermicro) supports a good range of bi-/tri-/quad-furcation options, including x4x4x4x4. I just got an ARC2-PELY435 card that does x16 to x8x4x4 (in x16 slots) and a Thermaltake 200mm flexible x16 PCIe extension cable. I will test this setup as soon as I can.
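(When I test it, the plan is just a quick raw read check on each stick before building any pool; the device names are placeholders again:)

    # naive per-device read benchmark; repeat for nvd1, nvd2 and nvd3
    diskinfo -tv /dev/nvd0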

I am surprised that I could not find anyone who has done a setup like the one I describe. Yes, there are some benchmarks with software RAID under Windows where they get up to something like 11 GB/s, but there is nothing that I can find for FreeNAS/ZFS.

Thanks again!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I got the impression that it is flimsy
Not in the slightest. The major difference is that you don't "fix" problems after the fact; you prevent them. That means no repair tools, but also no real need for them.

make sure you use Registered RAM or you might lose everything
Registered RAM has nothing to do with data safety. ECC does, and it is a basic requirement for any reliable system, not a ZFS-specific thing.

I am surprised that I could not find anyone who has done a setup like the one I describe. Yes, there are some benchmarks with software RAID under Windows where they get up to something like 11 GB/s, but there is nothing that I can find for FreeNAS/ZFS.
We haven't seen much in the way of numbers for SSDs, partly because ZFS is hard to benchmark. Its neat features (caching, compression and so on) make it difficult to get universally comparable numbers.
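If someone does want comparable numbers, they would at least have to keep the ARC and compression from skewing the test, something along these lines (the dataset name is made up):

    # stop the ARC from caching file data and disable compression on the benchmark dataset
    zfs create tank/bench
    zfs set primarycache=metadata tank/bench
    zfs set compression=off tank/bench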
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Maybe it is just me, but the first time I read about ZFS I got the impression that it is flimsy. Sure, it is very safe and robust, but make sure you use Registered RAM or you might lose everything. Then there was the "don't run FreeNAS as a virtual machine", the "don't give it anything but 'real' drives to work with", etc. Now I know better, but that was my original impression.
I am sure it is not complicated to set up and get running, but don't I need some practice first to be able to save myself if I get into a bind?
ZFS isn't flimsy in the slightest. The concerns you list above are more round-peg, square-hole kinds of concerns, not "my data is on fire!!!" kinds of concerns. Most importantly, ZFS is designed around a particular use case, and if you don't fit that use case, then there's no reason to be using ZFS (or, by extension, FreeNAS). If your goal is maximal data protection and reliability, ZFS is a great choice.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
ZFS is the Ferrari of file systems. It's high-speed, low-drag... but, just as a Ferrari won't perform well on a diet of cheap gas and bulk oil, ZFS won't perform well on cheap hardware, minimal RAM, and no ECC.
 

JoeAtWork

Contributor
Joined
Aug 20, 2018
Messages
165
We haven't seen much in the way of numbers for SSDs, partly because ZFS is hard to benchmark. Its neat features (caching, compression and so on) make it difficult to get universally comparable numbers.

Well, we just need a few more graphs: one showing the pool stats during scrubs (or the scrub history), one showing previous and current iozone results with a test data set of half the box's RAM for the selected zpool, and one showing previous and current iozone results with a test data set of 4x the box's RAM for the selected zpool.
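Roughly these two runs, with sizes picked for the 64GB box in the first post (the record size and paths are just placeholders):

    # run 1: test file of half the RAM (32 GB), which largely measures the ARC
    iozone -i 0 -i 1 -r 128k -s 32g -f /mnt/tank/bench/iozone.tmp
    # run 2: test file of 4x the RAM (256 GB), which forces the pool itself to do the work
    iozone -i 0 -i 1 -r 128k -s 256g -f /mnt/tank/bench/iozone.tmp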

Now users would have a button/page and results to compare with everyone else on here...
 