BUILD Optimal HDD Layout in 20 Bay enclosure

Status
Not open for further replies.

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
So, when I first built my NAS box, I used OpenIndiana with Napp-It. Did not really know a lot about ZFS at the time so I set my system up with 2 pools. The production pool is an 8 drive Z2 pool and the backup pool is a 5 drive Z1 pool (I am paranoid, I backup everything!). I use an 80GB L2ARC on each of these (these SSDs are located inside the chassis and do not take up a bay). Finally, I have a 480GB SSD that I store my VBOX machines on.

After doing a bit of reading, it seems that an 8 drive Z2 pool is not optimal according to the FreeNAS documentation. Luckily, the 5 drive Z1 does seem ideal. Small victories.
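For anyone curious why 8 drives in Z2 is called non-optimal: the old guideline was that the number of *data* disks in a RAID-Z vdev should be a power of two (total disks = 2^n + parity). A quick sketch of that rule (the function name is just illustrative):

```python
def follows_power_of_two_rule(total_disks: int, parity: int) -> bool:
    """Old FreeNAS sizing guideline: the data disks in a RAID-Z vdev
    (total minus parity) should be a power of two."""
    data = total_disks - parity
    # data is a power of two iff it has exactly one bit set
    return data > 0 and (data & (data - 1)) == 0

print(follows_power_of_two_rule(8, 2))  # False: 6 data disks
print(follows_power_of_two_rule(5, 1))  # True:  4 data disks
print(follows_power_of_two_rule(6, 2))  # True:  4 data disks
```

Which matches the plan below: 8-drive Z2 fails the rule, while 5-drive Z1 and 6-drive Z2 both pass.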

So, now that I know a little more, and I am running out of space, I am going to re-arrange my system. Much to my chagrin, my backup pool is not large enough to backup all of my production data anymore. So here is how I plan to update:
  1. Move Jails to SSD pool to get them off production pool (DONE, thanks to Dusan!!)
  2. Move System Dataset to SSD pool to get it off production pool (DONE)
  3. Mirror SSD pool as it is a simplex disk right now (SSD ordered)
  4. Ensure production pool is stable and in good health (scrub, SMART)
  5. Remove 5 drive Z1 backup pool (3TB drives) and create a new 5 drive Z1 backup pool using 4TB drives
  6. Backup all of my files to new pool (let this settle for a little while, scrub this new pool a couple times to hammer away at the drives)
  7. Destroy my 8 drive Z2 production pool (3TB drives, scares me a little..) and then use the 5 extra 3TB drives to create a 12 drive pool made up of two 6 drive Z2 vdevs (6 is supposed to be a good number for a Z2 array). This will leave me with one 3TB spare in case of a failure.
  8. Copy backup data back to the new production array
  9. Setup shares etc. from new production array (there is an amazing number of little items that you do to your system over the years, this is going to be a time vampire...)
The above steps will fill up all 20 available hot swap bays as well as the 2 internal 2.5" bays. So the case will be FULL.
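Back-of-the-envelope usable space for the layout change (raw capacity only, ignoring ZFS metadata and slop overhead, which eat a few percent):

```python
def raidz_usable_tb(vdevs: int, disks_per_vdev: int, parity: int,
                    disk_tb: float) -> float:
    """Rough usable TB for a pool of identical RAID-Z vdevs:
    each vdev contributes (disks - parity) data disks."""
    return vdevs * (disks_per_vdev - parity) * disk_tb

old_prod = raidz_usable_tb(1, 8, 2, 3)  # one 8-drive Z2 of 3TB drives
new_prod = raidz_usable_tb(2, 6, 2, 3)  # two 6-drive Z2 vdevs of 3TB drives
print(old_prod, new_prod)  # 18 24
```

So the rebuild goes from roughly 18TB to 24TB of raw usable space while also moving to the preferred 6-drive Z2 width.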

My question is, given the 20 bay system and the drives that I have to work with, does the above seem like a good plan? Any feedback and or suggestions on how to make this go well would be warmly received.

Thanks,
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
2. You can just replace the 3TB HDDs one by one with 4TB HDDs; if the autoexpand flag is set, the vdev will grow automatically once the last disk is replaced. But note that you're likely to see a read error during a rebuild if you use "consumer" disks rated at one read error per 10^14 bits; a RAID-Z2 would protect you from that too. If you go the Z2 route for backups as well, you'll have to destroy and rebuild the backup pool.
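To put a number on that read-error risk: resilvering one disk of the 5-drive Z1 means reading the 4 surviving 3TB disks end to end. Modelling errors as independent at the rated 1-per-10^14-bits (a simplifying assumption):

```python
import math

def p_read_error(tb_read: float, ure_per_bits: float = 1e14) -> float:
    """Probability of at least one unrecoverable read error (URE)
    while reading tb_read terabytes, assuming independent errors at
    one URE per ure_per_bits bits (typical consumer-disk spec)."""
    bits = tb_read * 1e12 * 8
    return 1 - math.exp(-bits / ure_per_bits)

# RAID-Z1 resilver: read the 4 surviving 3TB disks = 12TB
print(round(p_read_error(4 * 3), 2))  # 0.62
```

Roughly a 60% chance of hitting at least one URE per Z1 resilver at the rated spec, which is why the extra parity disk of Z2 matters during rebuilds. (Real disks usually do better than the spec sheet, so treat this as a worst-case estimate.)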

The rest of your plan sounds good. Look into ZFS send for that.
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
I thought about the replacement method. That will take a long time as the backup pool is 88% full, so each resilver takes a long time. Destroying the backups only puts me at risk of the Z2 production pool failing during the copy process, which will take some time, but the step that worries me is copying the data from the new backup array to the production array... :)

And again, due to my ignorance with ZFS, I currently use rsync for my backups. I need to look into this zfs send stuff as it does sound pretty cool!
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Hmm, resilvering the individual disks would take the time to resilver one 3TB disk, times 5. Acceptable for a 25% capacity gain without having to destroy anything. Also, you still have the backup pool during the upgrade.
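A rough estimate of what "times 5" means in hours (the ~100 MB/s sustained resilver rate is an assumption; a nearly full pool will usually run slower):

```python
def resilver_hours(disk_tb: float, mb_per_s: float = 100,
                   disks: int = 5) -> float:
    """Sequential one-at-a-time disk replacements: total hours to
    resilver `disks` drives of disk_tb TB at mb_per_s throughput."""
    seconds_per_disk = disk_tb * 1e12 / (mb_per_s * 1e6)
    return disks * seconds_per_disk / 3600

print(round(resilver_hours(3), 1))  # 41.7
```

So on the order of two days of resilvering, but the pool stays redundant (minus the disk being replaced) the whole time.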
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Hmm, resilvering the individual disks would take the time to resilver one 3TB disk, times 5. Acceptable for a 25% capacity gain without having to destroy anything. Also, you still have the backup pool during the upgrade.

I see what you are saying here. This way, I am never without a backup. I will consider this for sure.

Given that I am using rsync for my backups now, I may look into ZFS send/receive instead. Just not sure how that works but I have some time to do some reading!!

Cheers,
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
So I got to thinking. I can add the 5 new 4TB drives to my existing server without changing anything as I have the open slots now. Here is a revised order of operations:
  1. Move Jails to SSD pool to get them off production pool (DONE, thanks to Dusan!!)
  2. Move System Dataset to SSD pool to get it off production pool (DONE)
  3. Mirror SSD pool as it is a simplex disk right now (SSD ordered - arriving today!!)
  4. Ensure production pool is stable and in good health (scrub, SMART)
  5. Install 5 new 4TB drives and create a new backup pool (DONE)
  6. Backup all data to new backup pool (DONE)
  7. Turn off cron tasks and replications (DONE)
  8. Move share locations to new backup array (allows access to files when building new production array)
  9. Destroy the old 5 disk Z1 backup pool made from 3TB drives (DONE)
  10. Destroy the 8 drive production pool and create a 12 drive Z2 pool with two 6 drive vDevs (DONE)
  11. Copy data back to the new production pool from the new backup pool (zfs send/receive - actual average speed: 412MBps for 10.1TB)
  12. Setup shares, rsync/replications etc. from new production array (DONE)
  13. Crack a BEER!
Did some looking, and it seems that if you add a vdev to an existing pool, the data does not get rebalanced across all disks. So I will just set up the 12 drive pool all at once. It seems that my system can transfer data pretty fast, so I will only be at increased risk for about a day or so.
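For the record, the 412 MB/s average from step 11 works out to well under a day for the restore (decimal TB and MB assumed):

```python
def transfer_hours(tb: float, mb_per_s: float) -> float:
    """Hours to move `tb` terabytes at a sustained mb_per_s rate."""
    return tb * 1e12 / (mb_per_s * 1e6) / 3600

print(round(transfer_hours(10.1, 412), 1))  # 6.8
```

So the zfs send/receive of 10.1TB took roughly 7 hours, which is the window where only one copy of the data existed.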

Now I will have 2 spare 3TB drives. When will FreeNAS allow for hot spares?

Cheers,
 