Upgrade strategy for my FreeNAS

kirkdickinson

Contributor
Joined
Jun 29, 2015
Messages
174
I jumped into FreeNAS in January of 2016. I built a box for my office to store databases, documents, movies, etc. It has been happily purring along since then. No problems at all. I have done the point updates, but am still on 9.10 Stable. It is working fine... and if it ain't broke...

However, I am running out of space. I have 6 4TB WD Reds in RAIDZ2 and right now am running at 84%, which gives me a warning. I would also like to back up some other things to it that I currently can't.

I keep thinking the best option is just to order 6 8TB WD Reds and double my capacity. I may have room for 8 drives, and if I ordered 8 6TB drives they would be cheaper and have more usable capacity in RAIDZ2, but that would require a complete rebuild of the array.

Just trying to figure out the best way forward. Here is a spreadsheet calculation that I put together.

drive-calculations.jpg

Thanks,
Kirk
 
Joined
Oct 18, 2018
Messages
969
With the drive counts you're talking about, it may be worth sticking with RAIDZ2. Given that you're looking to grow your storage, I wouldn't consider the options with less than 32TB. Doubling your capacity will give you a bit of room to grow. I would lean toward the 6- or 8-drive 8TB RAIDZ2 array. If you go with 6 drives you don't have to rebuild your pool; you can just replace and resilver each disk in turn. If you want to go with 8, you'll need to rebuild the pool and restore from backup.

I personally avoid 7200rpm disks for temperature reasons. I also opt for RAIDZ2 vs RAIDZ1; the additional cost of 1 drive is well worth the reduction in anxiety for me. The rest is really up to your budget I think.

As a final option to consider . . . If you purchased a used 12-16 bay chassis (possibly server chassis) you could extend your pool by adding an additional vdev rather than replacing the old vdev. Here are some interesting options you may consider.

I think a better way to calculate the per-TB cost is not the per-TB cost of total space but the per-TB cost of additional space. For example, if you replaced all of your 4TB disks with 4 8TB disks in RAIDZ2, it would cost you over $800 but would not net you any additional space (RAIDZ2 usable capacity is the drive count minus two parity drives, so both layouts yield 16TB). This is clearly a bad option. Consider the following options to get a sense of how many additional TBs of storage you'll get per dollar spent. You'll notice that upgrading your chassis to one which supports more drives, and adding a vdev to your pool rather than replacing the drives you already own, comes out cheaper per TB of additional storage. Of course, the total cost grows, and you may want to consider additional memory if you're running on a low-memory system currently.

Replace 6 4TB disks with 6 8TB disks: 16TB storage increase @ $80.63 per TB of additional space
Replace 6 4TB disks with 8 8TB disks: 32TB storage increase @ $53.75 per TB of additional space
Add a 6-drive 8TB RAIDZ2 vdev to the current pool w/purchase of new chassis: 32TB storage increase @ $52.81 per TB of additional space
Add an 8-drive 8TB RAIDZ2 vdev to the current pool w/purchase of new chassis: 48TB storage increase @ $44.16 per TB of additional space
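The arithmetic behind those figures can be sketched in a few lines. The per-drive and chassis prices here are back-calculated guesses from the numbers above (roughly $215 per 8TB drive and $400 for a used chassis), not actual quotes:

```python
# Back-of-the-envelope cost per TB of *additional* usable space.
# Assumed prices (inferred from the figures above, not real quotes):
DRIVE_8TB = 215.00   # one 8TB drive
CHASSIS   = 400.00   # used 12-16 bay chassis

def raidz2_usable(n_drives, tb_per_drive):
    """RAIDZ2 loses two drives' worth of space to parity."""
    return (n_drives - 2) * tb_per_drive

current = raidz2_usable(6, 4)  # 16 TB usable today

options = {
    "replace with 6x8TB": (6 * DRIVE_8TB,           raidz2_usable(6, 8) - current),
    "replace with 8x8TB": (8 * DRIVE_8TB,           raidz2_usable(8, 8) - current),
    "add 6x8TB vdev":     (6 * DRIVE_8TB + CHASSIS, raidz2_usable(6, 8)),
    "add 8x8TB vdev":     (8 * DRIVE_8TB + CHASSIS, raidz2_usable(8, 8)),
}

for name, (cost, extra_tb) in options.items():
    print(f"{name}: +{extra_tb} TB @ ${cost / extra_tb:.2f}/TB")
```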
 

kirkdickinson

Contributor
Joined
Jun 29, 2015
Messages
174
With the drive counts you're talking about, it may be worth sticking with RAIDZ2. Given that you're looking to grow your storage, I wouldn't consider the options with less than 32TB. Doubling your capacity will give you a bit of room to grow. I would lean toward the 6- or 8-drive 8TB RAIDZ2 array. If you go with 6 drives you don't have to rebuild your pool; you can just replace and resilver each disk in turn. If you want to go with 8, you'll need to rebuild the pool and restore from backup.

I personally avoid 7200rpm disks for temperature reasons. I also opt for RAIDZ2 vs RAIDZ1; the additional cost of 1 drive is well worth the reduction in anxiety for me. The rest is really up to your budget I think.

As a final option to consider . . . If you purchased a used 12-16 bay chassis (possibly server chassis) you could extend your pool by adding an additional vdev rather than replacing the old vdev. Here are some interesting options you may consider.

I think a better way to calculate the per-TB cost is not the per-TB cost of total space but the per-TB cost of additional space. For example, if you replaced all of your 4TB disks with 4 8TB disks in RAIDZ2, it would cost you over $800 but would not net you any additional space (RAIDZ2 usable capacity is the drive count minus two parity drives, so both layouts yield 16TB). This is clearly a bad option. Consider the following options to get a sense of how many additional TBs of storage you'll get per dollar spent. You'll notice that upgrading your chassis to one which supports more drives, and adding a vdev to your pool rather than replacing the drives you already own, comes out cheaper per TB of additional storage. Of course, the total cost grows, and you may want to consider additional memory if you're running on a low-memory system currently.

Replace 6 4TB disks with 6 8TB disks: 16TB storage increase @ $80.63 per TB of additional space
Replace 6 4TB disks with 8 8TB disks: 32TB storage increase @ $53.75 per TB of additional space
Add a 6-drive 8TB RAIDZ2 vdev to the current pool w/purchase of new chassis: 32TB storage increase @ $52.81 per TB of additional space
Add an 8-drive 8TB RAIDZ2 vdev to the current pool w/purchase of new chassis: 48TB storage increase @ $44.16 per TB of additional space

Interesting way to look at it. Cost of the increase instead of the total.

Looks like my current mobo (SUPERMICRO MBD-X10SLL-F-O uATX Server Motherboard, LGA 1150) only has SATA ports. If I kept the 6 drives I have now, what would be the best way to add more ports? It seems like I had to do a BIOS flash to get all the ports into JBOD mode because the board was set up to use some of them for RAID. I have a Xeon E3 and 16GB of RAM in two slots, with more slots available.

I actually have an ancient server case that I purchased before deciding to go with the smaller and more consumer-grade Silverstone rack case that I ended up with. It is a 3U rack-mount chassis: a Supermicro CSE-PT933-PD382 15-bay storage server. I decided not to use that one because I didn't understand how all the internals worked, couldn't figure out how to fit a new power supply into it, and just decided to build with more familiar components.

It is still here and was supposed to be working. It has a mobo, RAM, and redundant power supplies in it.

If it is usable, it might be easier to just roll a second FreeNAS machine with it and install all new stuff. I suspect that the CPU is ancient and not nearly as good as the E3 that I currently have.

BTW, how easy is the "resilvering" process?

Thanks for responding to my post. :) Merry Christmas!
Kirk
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
Try my spreadsheet out
 
Joined
Oct 18, 2018
Messages
969
Looks like my current mobo (SUPERMICRO MBD-X10SLL-F-O uATX Server Motherboard, LGA 1150) only has SATA ports. If I kept the 6 drives I have now, what would be the best way to add more ports? It seems like I had to do a BIOS flash to get all the ports into JBOD mode because the board was set up to use some of them for RAID. I have a Xeon E3 and 16GB of RAM in two slots, with more slots available.
You could pick up a cheap HBA for ~$50-60. I rather like this eBay seller. That will give you access to 8 more drives with ease.

Chassis Supermicro Server CSE-PT933-PD382 15 Bay Storage. I decided not to do that one because I didn't understand how all the internals worked, couldn't figure out how to fit a new power supply into it and just decided to build with more familiar components.
This looks like it would work. Here are the things you'll want to check.
  1. Do the power supplies work? If not, can you find cheap replacements online?
  2. What kind of backplane is in the chassis? The "easiest" would be a direct attach chassis. You'll know it is direct-attach if the back of the backplane has 15 SAS or SATA plugs, one for each drive.
  3. Are the fans working?
  4. How many fans are there? You'll need either 1 fan header on your board per fan or consider picking up splitters.
  5. Does the chassis support the form factor of your motherboard?
  6. Do you have a place to put it which can tolerate the noise?
My bet is that all of those things are straightforward and will work out just fine.

It is still here and was supposed to be working. It has a mobo, RAM, and redundant power supplies in it.
What motherboard? You may be better putting your X10 board in instead.

If it is usable, it might be easier to just roll a second FreeNAS machine with it and install all new stuff. I suspect that the CPU is ancient and not nearly as good as the E3 that I currently have.
One idea for the old components is to put them in your current case with a few huge drives and then use it as a backup of your main system via regular snapshots and ZFS replication. The motivation for switching cases is to use the larger case for your "production" server.
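A rough sketch of that snapshot-and-replicate idea from the shell, assuming a pool named tank on the main box and backup on the old hardware (all names and dates here are placeholders; FreeNAS can also drive this from the GUI via periodic snapshot and replication tasks):

```shell
# On the main server: take a recursive snapshot of the dataset tree.
zfs snapshot -r tank@auto-2019-12-25

# Push the full stream to the backup machine over SSH.
zfs send -R tank@auto-2019-12-25 | ssh backup-nas zfs recv -F backup/tank

# Later runs only need to send the incremental delta between snapshots:
zfs snapshot -r tank@auto-2019-12-26
zfs send -R -i tank@auto-2019-12-25 tank@auto-2019-12-26 | \
    ssh backup-nas zfs recv -F backup/tank
```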

BTW, how easy is the "resilvering" process?
Quite easy. Check the User Guide for instructions. Specifically look for the instructions on resilvering a drive to increase the capacity of a vdev. Note that if you opt to add a vdev rather than replace each drive in your current vdev you won't need to resilver.
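For reference, the disk-by-disk upgrade boils down to repeating one command per drive. Pool and device names below are placeholders, and on FreeNAS you would normally do this through the GUI's Volume Status screen rather than the shell:

```shell
# Make sure the pool grows automatically once every disk is larger.
zpool set autoexpand=on tank

# Swap one 4TB disk for an 8TB disk and let it resilver.
zpool replace tank /dev/ada1 /dev/ada7   # old device, new device (placeholders)
zpool status tank                        # watch until the resilver completes

# Repeat for each remaining disk, one at a time. The extra capacity only
# appears after the last of the six disks has been replaced and resilvered.
```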

Anyway, hope this was helpful. I feel like I probably gave you more options to consider rather than helped you narrow down your original thoughts. :)

Thanks for responding to my post. :) Merry Christmas!
No problem. Merry Christmas!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
BTW, how easy is the "resilvering" process?
I have always found it easy.
if I kept the 6 drives I have now, what would be the best way to add more ports?
I would add a SAS HBA, like the one described here:

 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Doubling your space won't last as long as you expect, since your rate of increase will have... increased.
If you have room for (8) drives I would go that route, either 8TB or 10TB.
It would also make sense to upgrade your OS at the same time.

Suggested steps:
  • Add an HBA, attach all new drives to new HBA.
  • Test/burn-in all drives which will also burn-in the HBA.
  • Copy all data to new drives, if they all passed.
  • Unplug all old drives, remove old boot device, attach new boot device, install new OS.
  • Now you have a full backup while you test new OS, drives, HBA.
  • When you are satisfied that your new stuff is working, setup old drives as a backup of critical data.
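The test/burn-in step above can be as simple as a long SMART self-test plus a destructive write pass on each new drive before trusting it with data. Device names are placeholders, and note that badblocks in write mode erases the disk, so only run it on empty drives:

```shell
# Long SMART self-test (takes many hours on an 8TB drive).
smartctl -t long /dev/ada7
smartctl -a /dev/ada7        # check the results once the test finishes

# Destructive write/read pattern test -- this WIPES the drive.
badblocks -ws /dev/ada7
```

Running this on every new drive at once also exercises the HBA and cabling under sustained load, which is the point of burning in the whole chain together.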

Bonus step if you have the budget
  • Replace entire original system at the same time as the new drives.
  • This affords you a second system for backup of critical data.
  • It becomes very difficult to back up large arrays due to space requirements.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Bonus step if you have the budget
  • Replace entire original system at the same time as the new drives.
  • This affords you a second system for backup of critical data.
  • It becomes very difficult to back up large arrays due to space requirements.
This is the reason I ended up with three NAS systems a few years ago.
 

ddaenen1

Patron
Joined
Nov 25, 2019
Messages
318
I jumped into FreeNAS in January of 2016. I built a box for my office to store databases, documents, movies, etc. It has been happily purring along since then. No problems at all. I have done the point updates, but am still on 9.10 Stable. It is working fine... and if it ain't broke...

However, I am running out of space. I have 6 4TB WD Reds in RAIDZ2 and right now am running at 84%, which gives me a warning. I would also like to back up some other things to it that I currently can't.

I keep thinking the best option is just to order 6 8TB WD Reds and double my capacity. I may have room for 8 drives, and if I ordered 8 6TB drives they would be cheaper and have more usable capacity in RAIDZ2, but that would require a complete rebuild of the array.

Just trying to figure out the best way forward. Here is a spreadsheet calculation that I put together.

drive-calculations.jpg
Thanks,
Kirk

I wonder why this needs to be overthought. The system works trouble-free, so why upgrade hardware when it does the trick? If cost is a factor, wouldn't the easiest way be to replace the 6 4TB drives one by one with 6 8TB drives and let FreeNAS autoexpand do its work? You can probably sell off the 4TB drives to recover some of the investment. All in all, it took you 3 years to get to 84% capacity, and this way you don't need to rebuild the array from scratch; in the meantime, you can think about a strategy for a long-term solution.

Keep it simple, is my motto.
 