BUILD Advice / Critique on Hardware choice for 48TB+ ZFS array


cookiesowns

Dabbler
Joined
Jun 8, 2014
Messages
31
Heyo!

We're looking for around 64TB+ of storage to last us the next few years. We value data security more than performance, as most of our workloads are sequential. However, I believe we'll need at least the IOPS of an 8-drive RAID-6; I'm not sure how that compares in the ZFS world.

Originally I was thinking of one big RAID-Z3 of 11 drives, then adding another 11 drives later down the road, for a pool of 2x 11-drive Z3s. Drives would be enterprise SAS WD Re's or Hitachi Ultrastars. We're open to using Seagate drives if people here can share their experience with the enterprise series.

Maybe going with 2x 7-drive Z3s to start would be better, then adding another 7-drive Z3 later to expand to 48TB? Or would just buying all the drives we need now be a better option, so we have the full stripe width from day 1? I don't see us needing more than 64TB in a single pool.
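For reference, here's the rough usable-capacity math I'm working from, assuming 4TB drives and counting parity only (ZFS metadata and free-space overhead will shave off a bit more):

    # RAIDZ3 usable space is roughly (drives - 3) * drive size, in TB
    echo $(( (11 - 3) * 4 ))   # one 11-drive Z3 vdev -> ~32 TB, so two vdevs -> ~64 TB
    echo $(( (7 - 3) * 4 ))    # one 7-drive Z3 vdev  -> ~16 TB, so three vdevs -> ~48 TB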

We have 2 main departments, and each would prefer to have a machine dedicated to them. However, after doing some math, it seems that if we build two 64TB pools with 4x RAID-Z2 vdevs each, we could do replication between them, so we could survive one entire server going poof while having the departments share the servers.

However, I'm just not sure whether it's better to buy everything we think we'll use over the next ~3-5 years up front, or to expand as we go. Our data won't be moved around much, so existing data won't gain IOPS as we add more vdevs to the pool, which sort of defeats the purpose of expanding.

What are people's thoughts on using desktop drives? It would make building 2x 64TB servers much more feasible.

Hardware going into this would be:

CPU: Intel E5-1620 V2
MB: Supermicro X9SRH-7TF (yes, we have 10G Ethernet on our LAN)
HBA: LSI 9207-8i
Chassis: Supermicro SC846BA-R920B
RAM: 64GB DDR3-1866 ECC RDIMM (4x 16GB). I was originally planning on 32GB, but the more RAM the merrier, right?

Another question I have is about backups. What are some common ways users do backups on FreeNAS? Rsync? Snapshots + rsync?

How does FreeNAS handle external USB drives?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Overall, expanding as you go with hard drive purchases is the smarter way. Drives are only getting bigger and cheaper.

Snapshots are either stored on ZFS pools or encapsulated in files. Replicating snapshots from zpool to zpool is by far the most common and most recommended way. You can turn the snapshots into files to store offsite on other file systems, but that's the least common option and definitely not recommended. If you do decide to do snapshots in files and the shit hits the fan, you *will* have to restore all of your snapshots, so that's a major drawback with pools as big as yours.

Your long-term solution, in my opinion, is to build 2 identical systems and keep one offsite, or at least not physically located near the other, in case of fire, etc.
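A bare-bones sketch of zpool-to-zpool replication from the CLI, in case it helps; the pool, dataset, snapshot, and host names are just placeholders, and FreeNAS can schedule the same thing (periodic snapshots plus replication tasks) from the GUI:

    # Take a recursive snapshot of the dataset tree
    zfs snapshot -r tank/data@2014-06-10
    # First replication is a full send to the second box over SSH
    zfs send -R tank/data@2014-06-10 | ssh backuphost zfs recv -F backup/data
    # After that, incremental sends only ship the changes between snapshots
    zfs send -R -i tank/data@2014-06-10 tank/data@2014-06-11 | ssh backuphost zfs recv -F backup/data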

As for your pools, I wouldn't do 7-disk RAIDZ3s. 6-disk RAIDZ2s or 11-disk RAIDZ3s, sure, but 7-disk RAIDZ3s are a bit excessive in my opinion unless you are deliberately over-designing because you don't expect to be able to replace a failing disk in a reasonable period of time. Not that I'm against over-designing, but I definitely don't want to see people spend money on stuff that doesn't matter.
 

cookiesowns

Dabbler
Joined
Jun 8, 2014
Messages
31


Gotcha. Let's say we have 24x 4TB drives; what would you recommend as a good balance between security and performance?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
To be honest, I'd do either four RAIDZ2 vdevs of 6 disks each or two RAIDZ3 vdevs of 11 disks each (with the remaining 2 slots kept as spares or left empty). Four vdevs are probably better for performance if you plan to have lots of users (think 25 to hundreds).
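For illustration, the four-vdev option looks roughly like this from the CLI; the pool and device names are placeholders, and the FreeNAS volume manager builds the same layout from the GUI:

    # Four 6-disk RAIDZ2 vdevs striped into one pool
    # (with 4TB disks, roughly (6 - 2) * 4 = 16 TB usable per vdev before overhead)
    zpool create tank \
        raidz2 da0  da1  da2  da3  da4  da5  \
        raidz2 da6  da7  da8  da9  da10 da11 \
        raidz2 da12 da13 da14 da15 da16 da17 \
        raidz2 da18 da19 da20 da21 da22 da23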
 

cookiesowns

Dabbler
Joined
Jun 8, 2014
Messages
31

Do you mean 2 RAIDZ3s of 11 disks?

On a side note, given that the budget is there, wouldn't getting all the drives we'll ever need now be better than adding vdevs to the pool later? That way we have the performance and space from the get-go, as we fill up space pretty quickly.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
Keep in mind each vdev will have roughly the IOPS of the slowest device in the vdev. If you want more IOPS, you want more vdevs. I'd go with 4 RAIDZ2 vdevs of 6 drives each.
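Rough numbers to put that in perspective, assuming something like 150 random IOPS per 7200 RPM drive (a ballpark assumption, not a measurement):

    # A RAIDZ vdev delivers roughly the random IOPS of a single member drive
    echo $(( 4 * 150 ))   # four 6-disk RAIDZ2 vdevs -> ~600 random IOPS
    echo $(( 2 * 150 ))   # two 11-disk RAIDZ3 vdevs -> ~300 random IOPS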
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Do you mean 2 RAIDZ3s of 11 disks?

Yes, just like I said...

To be honest, I'd do either four RAIDZ2 vdevs of 6 disks each or two RAIDZ3 vdevs of 11 disks each (with the remaining 2 slots kept as spares or left empty). Four vdevs are probably better for performance if you plan to have lots of users (think 25 to hundreds).


On a side note, given that the budget is there, wouldn't getting all the drives we'll ever need now be better than adding vdevs to the pool later? That way we have the performance and space from the get-go, as we fill up space pretty quickly.

That's a personal question. Disks get cheaper and bigger all the time. Also, if you expect to keep this box for 5 years, you stand a good chance of seeing many of your disks fail before you reach the 5-year mark. So I'm all about expanding as necessary and less about building it once and being done with it. Even if you buy all of the disks now, you'll still end up with multiple vdevs, so whether you add them right now or add them every year is up to you and your personal preferences. This is very much a logistical problem that only you can solve for your situation.
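For what it's worth, growing the pool later is a one-liner per vdev (pool and device names are placeholders again); just remember that new writes get striped across the new vdev, while data already on the pool stays where it is:

    # Add another 6-disk RAIDZ2 vdev to an existing pool
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11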
 