I need help with a FreeNAS volume tradeoff issue
I'm in the process of implementing a FreeNAS box intended to replace 3 ReadyNAS boxes that are well beyond their expiration date. I NEED about 24TB, which means it should be designed for at least 48TB, and 64TB would probably be better. I'd also like the ability to expand IF I ever need more.
This is for a small office: mixed files with only a few people accessing them, but about 6-8 VMs running (being ported over from other servers), including some very demanding/real-time apps (which in total will fit in less than a terabyte). The new NAS has dual Intel E5-2620s, which is way more CPU than I had before, so that shouldn't be an issue. The drives are 7 or 8 Seagate Exos 16TB drives, so there is a concern about rebuild time if one of them dies.
The last NAS was initially set up as an 8-drive RAID6, but it was way too slow for VMs, so I changed it to RAID10. Still too slow, but at that point I added a couple of SSDs in a separate volume just for the VMs, which worked out great. I figured that would be my solution here too, unless someone says I don't need them.
Without experimenting I don't really know what will work, and to be honest I was hoping I wouldn't have to do a bunch of trial and error this time, or could at least narrow it down to a manageable subset.
The options I have come up with are as follows. I have 15 data drive bays to work with and 7x16TB drives initially (but can buy another one if I need 8):
1. 7 drives in RAIDZ3 for 64TB. This is how it's set up now, and probably the slowest of the options, but robust because it can survive 3 drive failures. (Not sure I need that, since the NAS is local and a failed drive is quick to replace, but with drives this big I figured it might help.)
2. 8 drives in 2 striped 4-drive RAIDZ2 vdevs for 64TB. This gets the IOPS up (though almost certainly not enough for the VMs), and leaves the ability to add 4 more drives later. Parity is 50% of the space, same as RAID10 but safer? But perhaps not as safe as option 1? I'm not sure how safety trades off between the number of drives and the total number of parity drives (rebuild time, etc.).
3. 5 drives in RAIDZ2 for 48TB (leaving 2 spare drives for now), with room for a 2nd and 3rd vdev. With this option, though, the third vdev would eat the bays reserved for the VM SSDs, and the budget won't allow for the 3 additional drives needed to complete the 2nd vdev at this time.
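For what it's worth, here's a quick sketch of the capacity math behind the three options, assuming 16TB drives and the usual raidz rule (usable per vdev ≈ (drives − parity) × drive size, before ZFS metadata and slop overhead, so real numbers come in a bit lower):

```python
# Rough usable-capacity math for the three layouts (16 TB drives).
# Ignores ZFS metadata/slop-space overhead, so these are upper bounds.

DRIVE_TB = 16

def raidz_usable(drives_per_vdev, parity, vdevs=1):
    """Usable TB for a pool of identical raidz vdevs: (n - p) * size per vdev."""
    return (drives_per_vdev - parity) * DRIVE_TB * vdevs

opt1 = raidz_usable(7, 3)            # option 1: 7-wide raidz3
opt2 = raidz_usable(4, 2, vdevs=2)   # option 2: 2 x 4-wide raidz2
opt3 = raidz_usable(5, 2)            # option 3: 5-wide raidz2

print(opt1, opt2, opt3)  # 64 64 48
```

Options 1 and 2 land on the same 64TB; option 2 just pays its 50% parity tax across two vdevs instead of concentrating it in one.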
Even though it's currently configured as option 1, I'm thinking option 2 (8 drives in 2 striped 4-drive RAIDZ2 vdevs) might be the better choice: it leaves a few slots for SSDs, still allows a third vdev for future expansion, and keeps a couple of bays free for a SLOG if that would help the VMs.
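The IOPS side of that comparison can be sketched with the common rule of thumb that each raidz vdev performs roughly like a single drive for small random I/O. The ~170 IOPS per-drive figure below is an assumption for a 7200 rpm Exos, not a measured number:

```python
# Back-of-envelope random-IOPS comparison. Assumes ~170 random IOPS per
# 7200 rpm drive (an assumed figure) and the rule of thumb that each
# raidz vdev delivers about one drive's worth of random IOPS.

DISK_IOPS = 170  # assumed per-drive random IOPS

layouts = {
    "opt1: 7-wide raidz3 (1 vdev)": 1,
    "opt2: 2 x 4-wide raidz2":      2,
    "opt2 + 3rd vdev later":        3,
}

estimates = {name: vdevs * DISK_IOPS for name, vdevs in layouts.items()}

for name, iops in estimates.items():
    print(f"{name}: ~{iops} random IOPS")
```

Even if the per-drive number is off, the ratio is the point: option 2 is only about twice option 1 for random I/O, and a third vdev only adds one more drive's worth — which is why a pool of a few raidz vdevs is unlikely to approach SSD latency for VM workloads.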
Anything I'm missing? Any guidance?
Will a properly set up SLOG replace the need for SSDs for the VMs?
Thanks,
Mark