ZFS Recommendations

Glacierdk

Cadet
Joined
Jun 12, 2019
Messages
6
Hi

I'm setting up a FreeNAS server as shared storage for my VM environment, and I am in need of some recommendations on which ZFS setup to go with.
I have a total of 10x 600GB 10K SAS drives and 2x 250GB enterprise Samsung SSDs (meant for cache). FreeNAS is running off a separate SSD.

Because it is shared VM storage, speed and redundancy are crucial. However, the total amount of storage is also important, since some of the VMs have large data drives.

What would be the best ZFS config for a setup like this?

/Glacierdk
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Best for iSCSI/block storage is mirrored vdevs, and as many of them as you can manage, to increase the IOPS.

Running SLOG will probably help there too, but a faster drive than those Samsung ones would be best... the Intel PCIe cards are the stuff to have.

The type of workload you have there probably isn't suited to L2ARC, so don't bother with that.

RAM will be the most important determining factor for performance... add more... much more (I can't say how much, since you didn't tell us what you have).
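To make that concrete, here is a minimal sketch of a pool built from mirrored vdevs with an optional log (SLOG) device added afterwards. The pool name tank and the device names (da0-da9 for the SAS drives, nvd0 for a hypothetical NVMe SLOG) are placeholders for illustration; on FreeNAS you would normally build this through the GUI.

# 5 mirrored vdevs from the 10x 600GB SAS drives - each extra vdev adds IOPS
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7 mirror da8 da9

# Optionally attach a dedicated log device (ideally a power-loss-protected PCIe/NVMe unit)
zpool add tank log nvd0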
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
What VM environment are you going to use? Some do iSCSI better than others.

The general rule of thumb is that striped mirror vdevs give the best VM performance. The mirror halves can perform reads in parallel, and writes are issued round-robin across the stripe sets. But with 10 x 600GB disks, you'll only end up with about 3TB of storage on a 5-vdev stripe. If you do 3-disk RAIDZ1 vdevs, you get ~1.2TB per vdev, round-robin the pool across three vdevs, and get to keep a disk for a hot spare. The final alternative is 2 vdevs of 5 disks in RAIDZ2, which has better redundancy against failure, but your write I/O rate will be slower still.
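For comparison with the mirror layout sketched above, here is roughly how the two RAIDZ alternatives would look (again with placeholder device names and pool name, and again something the FreeNAS GUI would normally do for you):

# Three 3-disk RAIDZ1 vdevs plus a hot spare - ~3.6TB usable
zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5 raidz1 da6 da7 da8 spare da9

# Two 5-disk RAIDZ2 vdevs - also ~3.6TB usable, survives any two failures per vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 raidz2 da5 da6 da7 da8 da9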

Your networking is a factor in your choice, as all three pool geometries described are going to saturate 1GbE with 10K drives. Finally... keep the ZFS 80% rule in mind as well. ZFS is copy-on-write, and performance drops sharply once the pool is more than 80% full.
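To keep an eye on that 80% threshold, the pool fill level can be checked at any time (pool name tank assumed):

# 'cap' is the percentage of pool space used - try to keep it under ~80%
zpool list -o name,size,alloc,free,cap tank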
 

Glacierdk

Cadet
Joined
Jun 12, 2019
Messages
6
Thanks a lot for the input, it was very helpful. :)

I think I'll be going with the striped mirror vdev setup.
The box is an HP DL380p with 2x E5-2660, 64GB RAM and a dual-port 10GbE NIC. I have the option to install up to 200GB of RAM in total if necessary.
It will either run Proxmox or Hyper-V.

Would it make sense to add a SLOG SSD even though it is not PCIe? And if so, should I add both as SLOG or only one (and maybe use the other for L2ARC)?
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
The thing about the SLOG... the SSDs used need to have power failure protection, or you can corrupt your pool. As for L2ARC, you're always better off adding more RAM if you can, as that's the "L1ARC".
 

Glacierdk

Cadet
Joined
Jun 12, 2019
Messages
6
Would redundant PSUs, a UPS and a diesel generator be enough to compensate for the lack of power failure protection on the SSDs?

Is there an upper limit for RAM or should I just use all 200GB?
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
Would redundant PSUs, a UPS and a diesel generator be enough to compensate for the lack of power failure protection on the SSDs?

No. Virtually all of us run some kind of UPS. But... one of the modes of failure is a CPU reset. It comes down to the drive guaranteeing a write will complete once it acknowledges the request. Consumer-grade SSDs may abort the transaction with the data still in a RAM buffer when they get hit by a reset signal. That can happen even if you never lose power.

Is there an upper limit for RAM or should I just use all 200GB?

There's a resource doc for builds with more than 512GB, but I can't say I've read it. The law of diminishing returns applies. Try 32/64GB and see if it meets your needs. It's easy to fall into the "more is better" trap. You're engineering a solution; engineers make measurements and decide based on the data.
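A rough way to make that measurement on FreeNAS/FreeBSD is to watch the ARC counters that ZFS exposes via sysctl while the VMs are running their normal workload; a hit ratio that stays high suggests the installed RAM is already enough:

# Standard ZFS ARC statistics (FreeBSD sysctl names)
sysctl kstat.zfs.misc.arcstats.size     # current ARC size in bytes
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
# hit ratio ~ hits / (hits + misses); if it stays low under load, more RAM may actually help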
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
The box is an HP DL380p with 2x E5-2660, 64GB RAM and a dual-port 10GbE NIC. It will either run Proxmox or Hyper-V.

Hold on a second. Is that DL380p your VM host or your FreeNAS machine?

If it's the latter, you almost surely don't have a supported Host Bus Adapter (HBA) - layering ZFS on top of a hardware RAID card is absolutely not a recommended setup.

Even when the onboard P420i is switched into "HBA mode", it hasn't been a good experience for other FreeNAS users:
https://www.ixsystems.com/community...etecting-2-drives-of-smart-array-p420i.69973/
https://www.ixsystems.com/community/threads/hp-dl360p-gen8-p420i-controller.47956/

If the DL380p is your FreeNAS machine, I'd strongly suggest getting an HP H220 HBA, flashing it with the LSI firmware, and letting it manage your drives instead.
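Once an LSI-based HBA (the H220, or a crossflashed card) is in place, it's worth confirming from the FreeNAS shell that it's really running IT firmware and presenting the disks individually; exact output varies by card and firmware version:

# List the LSI controller and its firmware (you want IT-mode firmware, not IR/RAID)
sas2flash -list

# The SAS drives should appear as individual da devices, not as a hardware RAID volume
camcontrol devlist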
 

Glacierdk

Cadet
Joined
Jun 12, 2019
Messages
6
Hold on a second. Is that DL380p your VM host or your FreeNAS machine?

If it's the latter, you almost surely don't have a supported Host Bus Adapter (HBA) - layering ZFS on top of a hardware RAID card is absolutely not a recommended setup.

Even when the onboard P420i is switched into "HBA mode", it hasn't been a good experience for other FreeNAS users:
https://www.ixsystems.com/community...etecting-2-drives-of-smart-array-p420i.69973/
https://www.ixsystems.com/community/threads/hp-dl360p-gen8-p420i-controller.47956/

If the DL380p is your FreeNAS machine, I'd strongly suggest getting an HP H220 HBA, flashing it with the LSI firmware, and letting it manage your drives instead.

Thanks for the heads up :)
I had read about the P420i issues. I have a Dell H200 I intend to flash to IT mode. Would that be sufficient, or would the HP H220 make a big difference?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
Thanks for the heads up :)
I had read about the P420i issues. I have a Dell H200 I intend to flash to IT mode. Would that be sufficient, or would the HP H220 make a big difference?
The HP card is based on a newer chipset, but since you're primarily using spinning disks for the vdevs I don't think the older card will bottleneck you any.

Regarding the SLOG question though, you will likely want to use an NVMe device in order to get the performance you're after for VMs (as well as forcing sync=always on your ZVOLs) - the Intel P-series or Optane drives are the units of choice these days.
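For reference, forcing synchronous writes on a zvol (so the SLOG is actually in the write path for VM traffic) is a one-line property change; tank/vm-disk1 is a placeholder dataset name:

# Force all writes to this zvol to be synchronous, then confirm
zfs set sync=always tank/vm-disk1
zfs get sync tank/vm-disk1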
 