Can someone double check my configuration

Status
Not open for further replies.

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
Hi,
I'm new to FreeNAS, so I apologize if I'm not understanding things quite right. I've been reading through a lot of forum posts and the manual, and I think I'm starting to get a grip on things. At my company we inherited a FreeNAS box, so I've had to get up to speed very quickly because we had a big Hyper-V failure and were moving data all over the place. The current storage configuration is really wacky, so this is my opportunity to clean things up.

I have a Hyper-V server with ~60 VMs, and it is connected to the FreeNAS box over 10Gb Ethernet. We currently have 24 1TB SSDs and a bunch of spinning disks; of the remaining 48 spinning disks, I plan to swap 24 of them for 1TB SSDs as well. I want all of the SSDs to be in a RAID 10 configuration (the first 24 already seem to be). This is the part I'm not totally sure about and want confirmation on.

When I go to volume status of the SSD volume, this is what it looks like:

[Screenshot: Capture.JPG]


Does that mean it is in fact striped mirrors (RAID 10)? The list continues all the way down to mirror-0, which accounts for all 24 1TB drives.
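For what it's worth, here's roughly what I'd expect the equivalent zpool status output to look like if it really is striped mirrors (output trimmed, and the gptid names below are just placeholders, not my actual disks):

zpool status SSDV1

  pool: SSDV1
 state: ONLINE
config:
        NAME            STATE     READ WRITE CKSUM
        SSDV1           ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gptid/aaaa  ONLINE       0     0     0
            gptid/bbbb  ONLINE       0     0     0
          mirror-1      ONLINE       0     0     0
            gptid/cccc  ONLINE       0     0     0
            gptid/dddd  ONLINE       0     0     0
          ...continuing through mirror-11

Twelve two-way mirrors striped together would account for all 24 drives.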

If that's the case, it looks like that volume has one Dataset and two 5TB zvols for the iSCSI connections:

[Screenshot: Capture.JPG]


Here's what I would like to do:
1. create a new 1GB zvol for the Hyper-V witness
2. get rid of SSD-2 (there is no data on it now)
3. extend HyperV-SSD1 to fill the volume (except for the witness disk above)
4. add 24 more 1TB SSDs and extend the size of the SSDV1 volume and also the HyperV-SSD1 zvol so we max out that zvol (actually just found this - http://doc.freenas.org/9.10/sharing.html#growing-luns). Is there a reason that the zvol can't use up more than 80% of the volume (per the link)?
5. is there a way to upgrade to larger 2TB drives?

Does all this seem doable? If I understand correctly, I should be able to grow the volume with the Volume Manager by extending the current volume with pairs of mirrors until all of the new drives are used up. To extend the zvol, I just update the size in the GUI and then use the command line to update the size of the associated extent. Can all of this be done without losing any data that already exists in HyperV-SSD1 (yes, I will have it backed up)?
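Just so I'm clear on the mechanics, here's the rough ZFS-level equivalent of what I think steps 1, 3, and 4 boil down to (I'd do it through the GUI where possible; the HyperV-witness name and the sizes are just examples I made up):

zfs create -V 1G SSDV1/HyperV-witness    # step 1: new 1GB zvol for the Hyper-V witness
zfs set volsize=8T SSDV1/HyperV-SSD1     # steps 3/4: grow the existing zvol (example size only)

Per the growing-LUNs link, after the zvol is resized the Hyper-V host would still need to rescan the disk to see the new size.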

Thanks!
Mike
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Is there a reason that the zvol can't use up more than 80% of the volume (per the link)?
80% is the rule of thumb to maintain a fast, healthy pool; that is the only reason I can think of.
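If you want to keep an eye on it, zpool list shows how full the pool is (pool name taken from your post, the numbers below are made up as an example, and the columns are trimmed):

zpool list SSDV1
NAME    SIZE   ALLOC   FREE   FRAG   CAP   HEALTH
SSDV1  10.9T   5.2T    5.7T    12%   48%   ONLINE

Keep the CAP column under 80% and you should be fine.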
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
80% is the rule of thumb to maintain a fast, healthy pool; that is the only reason I can think of.

Also, the fuller a pool gets, the more prone it is to fragmentation, and since there is no defragmentation in ZFS... this is the reason for the 50% recommendation with iSCSI.

Sounds like you could actually replicate with snapshots to some spinning-rust disks in RAIDZ2 as well.
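A Periodic Snapshot Task plus a Replication Task in the GUI will do that on a schedule, but at the command line it's essentially this (the pool and dataset names are just placeholders):

zfs snapshot SSDV1/HyperV-SSD1@backup-1
zfs send SSDV1/HyperV-SSD1@backup-1 | zfs receive backuppool/HyperV-SSD1

Subsequent runs can use an incremental send, e.g. zfs send -i backup-1 SSDV1/HyperV-SSD1@backup-2, so only the changes go across.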

But it also sounds like you should consider larger than 1TB SSDs, too.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
5. is there a way to upgrade to larger 2TB drives?
You replace each drive one at a time, per the manual's instructions, as if you were replacing a failed drive. Once both drives in a mirrored pair have been replaced, the pool will expand its capacity accordingly. Continue doing this until all drives are replaced or you reach the storage capacity you need.
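Use the Replace button under Volume Status in the GUI so FreeNAS handles the partitioning for you, but under the hood it amounts to roughly this (pool name from your post, device names are placeholders):

zpool set autoexpand=on SSDV1      # let the pool grow once both disks in a mirror are bigger
zpool replace SSDV1 gptid/old-disk gptid/new-disk
# wait for the resilver to complete, then replace the second drive in that mirror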
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You might want to consider upgrading in pairs to a larger capacity... for example, replace 1TB with 4TB. You would add 3TB to your pool after replacing just one pair of SSDs.

Continue replacing pairs of 1TB SSDs with larger (most bang for buck) SSDs as necessary.

One of the benefits of using mirrors is that you can replace just two disks to see a capacity increase.
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
So, I wouldn't mind going that route (4TB), but I was a bit hesitant about the long rebuild times should a drive fail (is a RAID 10 rebuild nearly as bad as RAID 5?). But seeing as it's a RAID 10 configuration, long rebuild times wouldn't be quite as much of an issue as they would be were we using a RAID 5 configuration, since we have more redundancy.

But it seems the main problem is sourcing 2.5" drives larger than 1TB. We can get 4TB SSDs, but they're not cheap, and we wouldn't be able to get as much storage space for the amount we'd like to spend. We currently have 80 slots available in our JBOD; many of them are already filled with 1TB drives (some SSD, some HDD), and 35 of them hold drives of varying capacities. The plan is to replace those 35 drives with new 1TB HDDs, which brings our total raw storage to 80TB; halve that for RAID 10 and we have 10TB of SSD storage for the VMs that require it and 30TB for the rest. So far the speed of the HDDs has been pretty good, and the SSD speed has been great.

The reason we decided not to go full SSD is price, and we really don't want to invest too much into this box as we're probably going to be purchasing a new SMB3 solution in the coming years. I inherited this box and I definitely need to keep it running, but right now the goal is to bring it up to speed with a good number of drives in a better configuration than it has currently, give us adequate space to do what we need to do, and provide rock-solid reliability, which I think we'll get. I think the proposed setup should be able to do that... would you agree?
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
It's really too bad this production box doesn't have a 3.5" JBOD enclosure, because a 4TB HDD is the same price as the 1TB 2.5" drives we're getting; 20 3.5" 4TB drives and we would have been golden!

We do have two other SuperMicro computers that have 24 3.5" slots - we might look at those to use as a future storage solution.
 