Good evening,
First time posting so be gentle with me. I’ve been using Linux MD in a RAID 6 configuration for my file server for nearly two years. After getting my hands on a ZFS array at work and reading mind-numbing amounts of ZFS documentation, I’m strongly considering moving my disks over to a FreeNAS box and transforming my current server into an ESXi or Hyper-V host. I still have work to do deciding on a hypervisor, but I could really use some help deciding how to set up my home zpools.
The hardware I’ll be running FreeNAS on is:
Intel Xeon E3-1220
Intel S1200BTL Server Board
16GB ECC memory (will upgrade to 32GB when money allows)
LSI SAS9211-8I flashed to, I believe, IT mode (flashed it for pass-through; can’t remember the exact name)
6x 4TB Seagate NAS disks
1x 4TB Western Digital Purple
My current case accommodates 12 properly mounted disks
Gigabit Cisco switches
10.5 TB of data
I currently use my server primarily for home media. I have a large collection of music, movies, TV shows, etc., that I enjoy across my home network and occasionally stream remotely. I imagine myself mirroring two SSDs for a future hypervisor datastore.
I understand that drives will fail. I’m currently backing up my data with a set of external hard drives. Not the best solution, I know, but it’s something! XD
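If you do move to ZFS, snapshot replication can make those external drives a much more reliable backup target than manual copies. A rough sketch of the idea, assuming a hypothetical `tank/media` dataset and a single-disk `backup` pool on an external drive (all pool, dataset, and snapshot names here are placeholders, not from my actual setup):

```shell
# Take a recursive snapshot of the dataset to be backed up
zfs snapshot -r tank/media@weekly-1

# Full send: replicate the snapshot (and child datasets, via -R)
# into the external-drive pool
zfs send -R tank/media@weekly-1 | zfs receive -F backup/media

# Later backups only need to ship the delta between snapshots
zfs snapshot -r tank/media@weekly-2
zfs send -R -i @weekly-1 tank/media@weekly-2 | zfs receive -F backup/media
```

The incremental send is the main win: after the first full replication, each subsequent backup transfers only the blocks that changed between snapshots.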
The options I’m toying with for my pool:
- Option A: Purchasing another 4TB WD Purple and striping four mirrored pairs for 16TB usable space.
I feel this will maximize performance at the cost of extra redundancy, and I like being able to easily expand the pool with another pair of drives as I grow. In my current case I would top out at 24TB in one pool. My fear is that with so many pairs I run the risk of getting unlucky with drive failures, losing two out of the same vDev, and with it the whole pool.
- Option B: Purchasing another 4TB WD Purple and striping three mirrored pairs in one pool, and eventually another three pairs in a second pool, giving me 12TB in my first pool and starting my second pool with 4TB.
I feel this will maximize performance similar to option A and share many of its weaknesses as well. My thinking with this option is reducing the risk of losing everything by separating my data into two pools.
- Option C: Purchasing another 4TB WD Purple and striping two RAIDZ2 vDevs of four disks each, giving me 16TB usable.
This option seems safer, as it adds an extra layer of redundancy in each vDev. I’m not sure how much the RAIDZ2 vDevs will affect performance. It would also be awkwardly expensive to purchase four drives and another controller all at once to grow the pool to its maximum 24TB. However, this option seems to be the best mix of safety and performance. It doesn’t put too much data into one vDev, so it should theoretically resilver quickly when something fails, and each vDev can take a hit and keep chugging.
- Option D: Placing all seven drives I currently own into a RAIDZ2 for 20TB of usable space, maxing out at 40TB.
Having a vDev that large is scary, as I imagine resilvering would take a long time. I would have to be diligent with manual backups. This option also doesn’t leave me with any real options for expansion other than tearing down the pool and rebuilding when it’s time to grow. It would, however, maximize storage space. I do not know if a RAIDZ2 is able to saturate gigabit. Also, I do not know how ZFS does with drives of different brands in the same vDev.
- Option E: Placing all seven drives into a RAIDZ3 for 16TB of usable space, maxing out at 36TB.
This compensates for a large vDev with an extra parity disk. Similar to option D, I wouldn’t have any options for expansion beyond tearing down the pool. I would get a lot of storage space, but potentially at the cost of significant performance. Again, this option raises the question of how that WD drive would do in the same vDev as the Seagates.
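For concreteness, here is roughly what each candidate layout looks like at pool-creation time. This is only a sketch: `da0`–`da9` and the pool name `tank` are placeholders, and on FreeNAS you would normally build the pool through the GUI (which uses GPT labels) rather than raw device names:

```shell
# Option A: four striped mirrors (4 x 4TB = 16TB usable)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7

# Option C: two 4-disk RAIDZ2 vDevs (2 x (4-2) x 4TB = 16TB usable)
zpool create tank \
  raidz2 da0 da1 da2 da3 \
  raidz2 da4 da5 da6 da7

# Option D: one 7-disk RAIDZ2 vDev ((7-2) x 4TB = 20TB usable)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6

# Option E: one 7-disk RAIDZ3 vDev ((7-3) x 4TB = 16TB usable)
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6

# Growing option A (or C) later just adds a vDev to the stripe:
zpool add tank mirror da8 da9
```

Note that `zpool add` is why options A and C expand gracefully while D and E do not: you can append whole vDevs to a pool, but you cannot add disks to an existing RAIDZ vDev.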
Thanks for your time