Mixed SSD/HDD zpool for VMs


FS_IT
Cadet · Joined Oct 16, 2018 · Messages: 3
I am exploring the possibility of building a VM server based on FreeNAS. My current system runs on Ubuntu server (not using ZFS), and I also have a FreeNAS build to handle my local file sharing needs, which I only mention as a means to say that I have just enough experience with FreeNAS to be both intrigued and dangerous.

On to my specific question. In my Ubuntu system, I run the VMs off of a single SSD that's backed up periodically to my FreeNAS network storage. From what I've read in cyberjock's extremely helpful slideshow in the Help & Support forum, it would not be advisable to create a single-disk zpool on FreeNAS to handle this function, due to the way that ZFS relies on redundancy to protect against data corruption.

That being said, I'd like to build an economical zpool to host these VMs that balances redundancy, speed, and cost. My initial thought is to build an array with two matched pairs of drives (one pair of HDDs and one pair of SSDs) placed in RAIDZ1, for example 2 x 1 TB SSDs + 2 x 1 TB HDDs. In my mind, the HDDs in this configuration would be a cheap way to add the necessary redundancy without incurring much of a speed penalty (compared with my current single-SSD setup), thanks to the data being spread across multiple drives. Am I correct in this line of thinking?

I've looked around to see if anyone else talks about doing something similar to this, but so far the only posts I've found in the forum were from people wanting to mix and match old drives that weren't the same nominal size, and none of them were planning to use the mixed zpool for VMs or jails. In those instances, I completely understand why mixing the drives is not recommended, but for my use case, the only drawback I can foresee is losing a little bit of usable space if the SSDs and HDDs aren't identically sized. Since I'm not all too worried about that (maybe I'll use the extra space for swap if I can figure out how to do that in FreeNAS), is there any other major reason I shouldn't try this disk arrangement? Am I making this more complicated than it has to be? Should I just go with the single SSD and keep regular backups on the network storage? Any experienced insights would be welcome!

P.S. Just for reference, the system I would be building on is an HP Z620 workstation with a single Xeon E5-2670 processor and 64 GB of RAM.
 

sretalla
Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
So, firstly, you can't have only 2 drives in RAIDZ1 (minimum 3... think RAID5 in other RAID equivalence)... what you mean is a mirror (RAID1 in other RAID language).

A VDEV is only as fast as the slowest device in it, and a pool of VDEVs (meaning stripes across the VDEVs inside it) can potentially benefit from the speed of the fastest VDEV and can build on the speed of each VDEV to be faster than any of the individual VDEVs working within it, but it may also be held back by the speed of the slowest VDEV.

Mixing HDDs and SSDs in the same pool is therefore folly in my opinion and would normally be a waste of those SSDs' speed, since you would only benefit from it when ZFS wants to write only to those devices and not to any others in the pool (which should not happen often, if at all, in the setup you're floating).

What you could do is have a mirror of SSDs (your main pool, for writing/reading at speed) and then set up a replication job to replicate from that pool to a second one, which would be a mirror of the HDDs.

I think that would be more like what you wanted to have happen.
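
In command-line terms, that layout would look something like the sketch below (device names like ada0 and the pool/snapshot names are placeholders; on FreeNAS you would normally create the pools and the replication task through the GUI rather than by hand):

```sh
# Fast pool: two SSDs in a mirror (hypothetical device names)
zpool create ssdpool mirror /dev/ada0 /dev/ada1

# Second pool: two HDDs in a mirror, used as the replication target
zpool create hddpool mirror /dev/ada2 /dev/ada3

# Replication is snapshot-based: snapshot the fast pool and send it
# to the slow pool (the GUI replication task automates the equivalent
# of this on a schedule)
zfs snapshot -r ssdpool@backup1
zfs send -R ssdpool@backup1 | zfs recv -F hddpool/ssdpool-copy
```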
 

FS_IT
Cadet · Joined Oct 16, 2018 · Messages: 3
sretalla said:
So, firstly, you can't have only 2 drives in RAIDZ1 (minimum 3... think RAID5 in other RAID equivalence)...

I did want to take advantage of RAIDZ1, so I was planning on 4 drives, not 2. I probably should have said 2 "identical pairs" instead of "matched pairs" and stated that all 4 drives would be in one VDEV in the zpool. That being said, I think you still answered my question.

sretalla said:
A VDEV is only as fast as the slowest device in it, and a pool of VDEVs (meaning stripes across the VDEVs inside it) can potentially benefit from the speed of the fastest VDEV, but may also be held back by the speed of the slowest VDEV.

I hadn't considered splitting each drive up into its own VDEV, but it sounds like that comes the closest to accomplishing what I had intended.

sretalla said:
Mixing HDDs and SSDs in the same pool is therefore folly in my opinion and would normally be a waste of those SSDs' speed, since you would only benefit from it when ZFS wants to write only to those devices and not to any others in the pool (which should not happen often, if at all, in the setup you're floating).

This makes total sense. If the speed of the slowest VDEV in the zpool is the bottleneck, then I think you're absolutely right. In this instance, I would expect most read/write ops to be spread across all 4 drives, making the extra speed of the 2 SSDs pointless.

So that raises another question: how does the speed of a 4-disk HDD zpool (let's say 5400 rpm WD Red drives, for argument's sake) set up in RAIDZ1 compare to the speed of a single SSD?
 

sretalla
Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
So if you're intending to use all of the drives in a single RAIDZ1 pool, we're back to a single vdev, with the slowest drive setting the maximum performance of the pool.

An SSD is probably between 10 and 100 times faster than an HDD, so even if you go for a striped pool (equivalent to RAID0, no fault tolerance), your performance will probably be no better than that of the 2 HDDs alone, as each write will need to wait for them to complete.

In RAIDZ1, 2 and 3, the performance of the pool would match the slowest drive in the pool, so again, not great.
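
To put rough numbers on that: a 5400 rpm drive manages something like 75-100 random IOPS, while even a SATA SSD delivers tens of thousands, which is why RAIDZ on spinning disks tends to feel slow under VM workloads regardless of sequential throughput. If you'd rather measure than estimate, a fio run along these lines approximates VM-style random 4K traffic against whatever pool you point it at (a sketch only, assuming fio is installed; the file path, size, and job parameters are just examples):

```sh
# 70/30 random 4K read/write mix against a test file on the pool,
# roughly approximating VM disk traffic (path and size are examples)
fio --name=vmtest --filename=/mnt/testpool/fio.dat --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=posixaio \
    --iodepth=16 --runtime=60 --time_based --group_reporting
```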

Depending on the workload and how mission critical you consider data protection and performance, you may want to consider my suggestion in order to get the best of both.
 

kdragon75
Wizard · Joined Aug 7, 2016 · Messages: 2,457
For VMs, you want lots of vdevs. Do you need speed more than capacity? Buy SSDs and run a mirror, or if you get three of them, RAIDZ1. Do you need capacity more? Skip the SSDs and get more drives. My old server (before downsizing) had 8 x 3 TB drives in striped mirrors ("RAID10"). That's 4 vdevs. I could read and write (async) at over 1.2 GB per second. My new setup, with 4 drives again in striped mirrors, will do a solid 350 MB per second on writes.
You could also have two pools and two datastores: one on a pair of SSDs for performance-critical needs, and one on spinning drives for capacity. This is more like what you would see in larger shops.
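
For reference, the striped-mirror ("RAID10") layout described above would be created something like this (device names are hypothetical, and on FreeNAS the GUI builds this for you):

```sh
# Four mirrored pairs striped together = 4 vdevs
# ZFS stripes writes across vdevs, so more vdevs = more IOPS
zpool create tank \
    mirror /dev/da0 /dev/da1 \
    mirror /dev/da2 /dev/da3 \
    mirror /dev/da4 /dev/da5 \
    mirror /dev/da6 /dev/da7
```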
 

FS_IT
Cadet · Joined Oct 16, 2018 · Messages: 3
Thanks for helping to straighten me out, guys!

Speed is more important than capacity for the zpool dedicated to running the VMs, and I have physical space for up to 8 drives (assuming 4 of those are 3.5" drives and the other 4 are 2.5" drives). Taking that into consideration along with y'all's advice, here's what I'm thinking now.

zpool 1: 3 x Samsung 860 EVO 500 GB SSDs in RAIDZ1
zpool 2: 4 x WD Red 4 TB HDDs in RAIDZ2

Honestly, zpool 1 will be much faster than I need, but using more, smaller drives in the zpool seems like the most economical way to use SSDs to reach my usable-space goal of ~1 TB with the necessary redundancy. RAIDZ1 gives roughly (N - 1) x drive size, so 3 x 500 GB works out to ~1 TB raw, or ~870 GB usable after overhead: the same as 2 mirrored 1 TB drives, at significantly lower cost. I think ~870 GB of usable space will be plenty to run 3 or 4 Windows 7 VMs.

I will probably have to build zpool 2 over a little bit of time, but once it's done I can create a replication task like you mentioned, sretalla. The rest of the space on zpool 2 would be used for local network storage. I really like this idea because it would free up my current FreeNAS build to be used as an off-site backup server, which would be a huge win. In the meantime, I'll just back up zpool 1 to my old FreeNAS machine periodically until I can finish building zpool 2.
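
As a sketch of what that interim backup could look like from the shell (pool, dataset, and host names here are made up; the FreeNAS GUI's replication tasks wrap the same mechanism):

```sh
# First full copy of zpool 1 to the old FreeNAS box over SSH
zfs snapshot -r ssdpool@weekly-1
zfs send -R ssdpool@weekly-1 | ssh old-nas zfs recv -F tank/vm-backup

# Later runs only send the changes since the previous snapshot
zfs snapshot -r ssdpool@weekly-2
zfs send -R -i ssdpool@weekly-1 ssdpool@weekly-2 | \
    ssh old-nas zfs recv -F tank/vm-backup
```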
 

sretalla
Powered by Neutrality · Moderator · Joined Jan 1, 2016 · Messages: 9,703
Don't forget that ZFS doesn't like to be more than 80% full, so 870 GB is really more like 700 GB in practice.
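
An easy way to keep an eye on that is zpool list, whose CAP column shows how full each pool is:

```sh
# CAP column shows percent of pool capacity in use; aim to stay under ~80%
zpool list
# Per-dataset space accounting, including snapshot usage
zfs list -o space
```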
 