Help with FreeNAS volume tradeoff issue

markgca

Dabbler
Joined
Nov 7, 2019
Messages
46
I need help with a FreeNAS volume tradeoff issue

I'm in the process of implementing a FreeNAS box intended to replace 3 ReadyNAS boxes that are well beyond their expiration date. I NEED about 24TB, which means the pool should be designed for at least 48TB (so it stays around half full), and 64TB would probably be better. I would also like the ability to expand IF I ever need more.

This is for a small office: mixed files with only a few people accessing them, but about 6-8 VMs running (being ported over from other servers), including some very demanding/real-time apps (which in total will fit in less than a terabyte). The new NAS has dual Intel E5-2620s, which is way more CPU than I had before, so that should not be an issue. The drives are 7 or 8 Seagate EXOS 16TB drives, so there is a concern about the time to rebuild the array if one dies (rough math below).
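A rough back-of-the-envelope check on rebuild time (the ~150 MB/s average resilver rate is just my guess; a busy or fragmented pool can be several times slower):

```python
# Best-case time to resilver one 16 TB drive at an assumed average rate.
# Real resilvers on a busy/fragmented pool are often much slower.
DRIVE_TB = 16
RESILVER_MB_S = 150  # assumed average sequential resilver rate

seconds = DRIVE_TB * 1_000_000 / RESILVER_MB_S
print(f"~{seconds / 3600:.0f} hours")  # roughly 30 hours, best case
```

So even in the best case a 16TB rebuild means more than a day running degraded.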

The last NAS was initially set up as an 8-drive RAID6, but it was way too slow for VMs, so I changed it to RAID10. Still too slow, but at that point I added a couple of SSDs in a separate volume just for the VMs, which worked out great. I figured that would be my solution here too, unless someone says I don't need them.

Without experimenting I don't really know what will work, and to be honest I was hoping I wouldn't have to do a bunch of trial and error this time, or could at least narrow it down to a manageable subset.

The options I have come up with are as follows; I have 15 data drive bays to work with and 7x16TB drives initially (but can buy another one if I need 8). A quick capacity sanity check is sketched after the list.

1. 7 drives in RAIDZ3 for 64TB. This is the way it is set up now, and probably the slowest of the options, but robust because I can survive 3 drive failures (not sure I need this since the NAS is local and a failed drive is quick to replace, but with these big drives I figured it might help).

2. 8 drives as 2 striped 4-drive RAIDZ2 vdevs for 64TB (which gets the IOPS up, but almost certainly not enough for VMs), with the ability to add 4 more drives later. Parity is 50% of the space, the same as RAID10 but safer? Though perhaps not as safe as option 1? I'm not sure if there is any tradeoff in safety between the total number of drives and the number of parity drives (rebuild time, etc.).

3. 5 drives in RAIDZ2 for 48TB (leaving 2 spare drives for now), with space for a 2nd and 3rd vdev. If I used this option, the third vdev would eat the bays reserved for the VM SSDs, and the budget won't allow 3 additional drives to complete the 2nd vdev at this time.
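Quick sanity check on the capacity numbers above (raw data space only, before ZFS overhead and the TB-vs-TiB difference, so real usable space will be lower):

```python
# Raw data capacity for each layout: (drives - parity) per vdev, striped.
DRIVE_TB = 16

def raidz_capacity(drives_per_vdev, parity, vdevs=1):
    return vdevs * (drives_per_vdev - parity) * DRIVE_TB

print("Option 1, 7-wide RAIDZ3:   ", raidz_capacity(7, 3), "TB")          # 64 TB
print("Option 2, 2x 4-wide RAIDZ2:", raidz_capacity(4, 2, vdevs=2), "TB") # 64 TB
print("Option 3, 5-wide RAIDZ2:   ", raidz_capacity(5, 2), "TB")          # 48 TB
```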

Even though it is now configured as option 1, I'm thinking option 2 (8 drives in 2 striped 4-drive RAIDZ2 vdevs) might be the better choice: it gives me a few slots for SSDs, still allows a third vdev for future expansion, and leaves a couple of bays for a SLOG if that would help the VMs.

Anything I'm missing? Any guidance?

Will a properly set up SLOG replace the need for SSDs for the VMs?

Thanks,

mark
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Keep your VMs on SSD mirrors, and leave your main pool as a 7-drive RAIDZ3 that you can double later by adding a second 7-drive RAIDZ3 vdev.

If you need more performance for VMs, switch from SATA SSD to NVMe SSD, or add more mirrors.
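Rough illustration of why more mirrors (or NVMe) help: random IOPS scale with the number of vdevs, and each mirror pair is one vdev. The per-device figures below are assumed ballpark numbers, not measurements:

```python
# Toy model: pool random-read IOPS vs. number of mirror pairs.
# Per-device IOPS are assumed ballpark figures, not measurements.
SATA_SSD_IOPS = 50_000    # assumed random-read IOPS per SATA SSD
NVME_SSD_IOPS = 300_000   # assumed random-read IOPS per NVMe SSD

def mirror_pool_read_iops(pairs, per_device_iops):
    # Reads can be served by either side of a mirror, so both devices
    # in a pair contribute; write IOPS scale with the pair count only.
    return pairs * 2 * per_device_iops

for pairs in (1, 2, 3):
    print(f"{pairs} SATA pair(s): ~{mirror_pool_read_iops(pairs, SATA_SSD_IOPS):,} read IOPS")
print(f"1 NVMe pair:   ~{mirror_pool_read_iops(1, NVME_SSD_IOPS):,} read IOPS")
```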
 

markgca

Dabbler
Joined
Nov 7, 2019
Messages
46
Jessep said:
If you need more performance for VMs, switch from SATA SSD to NVMe SSD, or add more mirrors.

Great idea. I have several PCI slots open, so that might be the best method, and that way I keep the bays open for expansion.
Thanks!
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Have a look at these numbers here: https://napp-it.org/doc/downloads/optane_slog_pool_performane.pdf

That tells you that you can speed up even an array of spinners quite nicely (for the sync writes VMs generate) if you add an Optane SLOG in front of them.
It's Napp-it, not FreeNAS, but the principle and the ballpark performance are the same, so that's what I'd recommend in your situation.
A P4800X if you have the cash, or a 900p 280GB (overprovisioned, or split into SLOG/L2ARC) if you are short on that.

Edit: the 900p 280GB is cheap enough to give it a try, and you can reuse it as a SLOG in front of something else (an SSD pool) if it doesn't work out as intended.
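On sizing: a common rule of thumb (my assumption here, not something from the PDF) is that the SLOG only ever holds a couple of transaction groups of in-flight sync writes, so even a fast network link doesn't need much of it:

```python
# Rule-of-thumb SLOG sizing: ZFS commits a transaction group roughly
# every 5 seconds by default, and the SLOG holds about two txgs of
# in-flight sync writes, so it only needs a few seconds of ingest.
TXG_SECONDS = 5      # default txg commit interval
TXGS_IN_FLIGHT = 2   # rough upper bound on outstanding txgs

def max_useful_slog_gb(link_gbit_per_s):
    ingest_gb_per_s = link_gbit_per_s / 8  # Gbit/s -> GB/s, assuming line rate
    return ingest_gb_per_s * TXG_SECONDS * TXGS_IN_FLIGHT

print("10 GbE: ", max_useful_slog_gb(10), "GB")  # ~12.5 GB
print("40 GbE: ", max_useful_slog_gb(40), "GB")  # ~50 GB
```

Which is why a 280GB 900p is massively overprovisioned as a pure SLOG, and why splitting off part of it for L2ARC can make sense.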
 