RAIDZ2 pool faster than expected

tony95
Contributor · Joined Jan 2, 2021 · Messages: 117
I set up a z2 pool of 8x 8TB WD drives (white label red drives) with TrueNAS running under Hyper-V. I am copying 3x 8TB drives at the same time across my 10G network and they are all copying at full speed, a total of about 300MB/s. Is this expected behavior? I thought that z2 writes were supposed to be only as fast as a single drive in the pool? I removed two drives to make sure the pool was resilient and that seemed to work, but this is not what I was expecting. I am pleasantly surprised, but did I do something wrong, or is it supposed to write this fast?
 

artlessknave
Wizard · Joined Oct 29, 2016 · Messages: 1,506
You haven't followed the forum rules, and virtualizing TrueNAS is highly discouraged. I am not at all familiar with Hyper-V personally.
Since you have given no info about how the drives are connected, not much can be known about... how the drives are connected. If you have set them up behind some sort of virtualized access layer, unexpected behavior, like appearing to write faster than it should, is entirely possible.
In general, raidzX writes fastest with contiguous writes, like writing a large video file to disk. It has low IOPS, but a single logical write goes to every disk in one pass.
I thought that z2 writes were supposed to be only as fast as a single drive in the pool?
more like as fast as the slowest drive in each vdev, since the pool is striped across vdevs. If the pool has only one vdev then yes, pool performance would match vdev performance.
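
To make that concrete, here is a minimal rule-of-thumb sketch (Python); the per-disk speeds and IOPS below are assumed, illustrative figures, not measurements:

```python
# Rough rule of thumb from the points above: each RAIDZ vdev is paced by its
# slowest member, a streaming write touches every data disk at once, and a
# pool stripes (adds up) across its vdevs. Illustrative numbers only.

def raidz_vdev_estimate(member_mb_s, member_iops, parity=2):
    """Return (sequential-write ceiling in MB/s, rough IOPS) for one RAIDZ vdev."""
    slowest_mb_s = min(member_mb_s)         # the vdev is paced by its slowest disk
    data_disks = len(member_mb_s) - parity  # parity columns hold no user data
    seq_write = slowest_mb_s * data_disks   # one logical write hits every data disk
    iops = min(member_iops)                 # random IOPS stay near a single disk's
    return seq_write, iops

def pool_estimate(vdevs):
    """The pool stripes across vdevs, so per-vdev estimates add up."""
    return tuple(map(sum, zip(*vdevs)))

# Example: a single 8-wide RAIDZ2 vdev of ~150 MB/s, ~200 IOPS disks (assumed).
vdev = raidz_vdev_estimate([150] * 8, [200] * 8, parity=2)
print(vdev)                   # (900, 200): large streaming ceiling, single-disk-ish IOPS
print(pool_estimate([vdev]))  # one vdev -> the pool matches the vdev, as noted above
```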
 

jgreco
Resident Grinch · Joined May 29, 2011 · Messages: 18,680
I set up a z2 pool of 8x 8TB WD drives (white label red drives) with TrueNAS running under Hyper-V. I am copying 3x 8TB drives at the same time across my 10G network and they are all copying at full speed, a total of about 300MB/s. Is this expected behavior? I thought that z2 writes were supposed to be only as fast as a single drive in the pool? I removed two drives to make sure the pool was resilient and that seemed to work, but this is not what I was expecting. I am pleasantly surprised, but did I do something wrong, or is it supposed to write this fast?

Good luck with the Hyper-V. Not recommended.

You've misunderstood the rule: it is the IOPS capacity of a RAIDZ vdev, not its sequential throughput, that is tightly coupled to the performance of the member drives; an eight-drive RAIDZ will generally feel like about the IOPS of the slowest member device, though you can contrive specific instances where this is not true. A single drive may be capable of 200 IOPS. Using that, the RAIDZ will be capable of something in that range. It won't be 200 * 8 = 1600 IOPS like people wish. By comparison, a mirror vdev grows its read IOPS capacity with the number of drives, so a 3-way mirror may be capable of up to 600 read IOPS while writes are still going to be around 200 IOPS.

Note that the RAIDZ is going to get its best performance with a single client accessing large sequential data, while the mirror will require multiple clients doing, well, whatever.
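
As a worked version of those numbers (the 200 IOPS per drive is the same assumed figure, not a measurement):

```python
# Worked IOPS rule of thumb: a RAIDZ vdev behaves like roughly one drive's
# worth of IOPS regardless of width; an N-way mirror can spread reads across
# copies but must write every copy. Illustrative only; real workloads vary.

DRIVE_IOPS = 200  # assumed random IOPS for one spinning disk

def raidz_iops(n_drives):
    # Every drive participates in each logical I/O, so width doesn't add IOPS.
    return DRIVE_IOPS

def mirror_iops(n_way):
    # Reads can be serviced by any copy; writes must hit every copy.
    return {"read": DRIVE_IOPS * n_way, "write": DRIVE_IOPS}

print(raidz_iops(8))   # ~200, not 8 * 200 = 1600
print(mirror_iops(3))  # {'read': 600, 'write': 200}
```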
 

tony95
Contributor · Joined Jan 2, 2021 · Messages: 117
Good luck with the Hyper-V. Not recommended.

You've misunderstood the rule: it is the IOPS capacity of a RAIDZ vdev, not its sequential throughput, that is tightly coupled to the performance of the member drives; an eight-drive RAIDZ will generally feel like about the IOPS of the slowest member device, though you can contrive specific instances where this is not true. A single drive may be capable of 200 IOPS. Using that, the RAIDZ will be capable of something in that range. It won't be 200 * 8 = 1600 IOPS like people wish. By comparison, a mirror vdev grows its read IOPS capacity with the number of drives, so a 3-way mirror may be capable of up to 600 read IOPS while writes are still going to be around 200 IOPS.

Note that the RAIDZ is going to get its best performance with a single client accessing large sequential data, while the mirror will require multiple clients doing, well, whatever.

Yes, these are all video files with record size set to 1M, so that is probably why performance is better than expected.
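
A quick back-of-the-envelope check (assumed per-disk figures, and assuming a single 8-wide Z2 vdev) suggests why roughly 300MB/s of sequential video writes is unsurprising:

```python
# Back-of-the-envelope check, not a benchmark: with large sequential records
# spread across the data columns of an 8-wide RAIDZ2, ~300 MB/s aggregate
# leaves each member disk far below its sequential limit.

observed_mb_s = 300        # aggregate write rate reported above
data_disks = 8 - 2         # 8-wide RAIDZ2 leaves 6 data columns
per_disk = observed_mb_s / data_disks
print(per_disk)            # ~50 MB/s asked of each disk

# Assumption: a modern large HDD streams well over 100 MB/s sequentially,
# so the vdev is not the bottleneck here; the network and source drives are
# more likely limits.
typical_hdd_seq_mb_s = 150
print(per_disk < typical_hdd_seq_mb_s)  # True
```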

I guess people will have to get over the virtualization thing because in my testing the file system is plenty stable and resilient. If virtualization isn't supported in 2021, then I would say they are way behind the times. It is a valuable tool and asking people not to utilize it is not practical. The drives are attached as physical drives to the VM. I did several tests with small VHDX drives, even completely blew away TrueNAS and reimported, no issues. I am using TrueNAS to get more usable space out of my drives versus having a copy of every drive which is what I have now. Dumping a bunch of money into a dedicated system or buying expensive controller cards is not an option.
This guy has also tested the resilience with VMware.
 

jgreco
Resident Grinch · Joined May 29, 2011 · Messages: 18,680
It isn't a matter of "isn't supported", it's a matter of "extremely complex software house of cards #1" being set on top of "extremely complex software house of cards #2" and then things going wrong.

The problem is that ZFS is extremely demanding on platforms and if things aren't really 100.0000%, then bad things happen -- months, sometimes a year or more down the road. You're talking to someone who has helped more people out with virtualization than probably anyone else, so it might be worth a listen when I say "the fact that you got it to make a pool or it worked for a week is not evidence of success." And every single update of both the hypervisor and the NAS VM becomes an exercise in re-validating the whole thing.
 

tony95
Contributor · Joined Jan 2, 2021 · Messages: 117
It isn't a matter of "isn't supported", it's a matter of "extremely complex software house of cards #1" being set on top of "extremely complex software house of cards #2" and then things going wrong.

The problem is that ZFS is extremely demanding on platforms and if things aren't really 100.0000%, then bad things happen -- months, sometimes a year or more down the road. You're talking to someone who has helped more people out with virtualization than probably anyone else, so it might be worth a listen when I say "the fact that you got it to make a pool or it worked for a week is not evidence of success." And every single update of both the hypervisor and the NAS VM becomes an exercise in re-validating the whole thing.

Updates are always a risk, that is true. However, it is this or Storage Spaces; which do you think is the safer option? I could also use Unraid, but like Storage Spaces, performance is an issue. I am not using compression or encryption, and I am finding TrueNAS is not even maxing out 2 threads of a 3700X.
 

artlessknave
Wizard · Joined Oct 29, 2016 · Messages: 1,506
Storage Spaces
My experience with Storage Spaces was... underwhelming. Extremely. I tried to replace a basic storage disk (downloads, My Docs, some portable apps) with a two-drive mirrored Storage Space... the speed was unusable. Not just slow, unusably slow.

The problem with virtualization is that, like the Grinch says, it's a house of cards on a house of cards. Everything will work fine, every test you try will work, you'll put production data on it, and it'll work fine... and then it just won't. It's very similar to how ZFS itself tends to work perfectly and then just break because you missed something, but with virtualization there are more things you can miss, more things that can go wrong.

In the end, it's your data, your system, and the risks are yours. But if you do virtualize after being advised not to, and something does go wrong, people are generally much less inclined to devote their own spare time to helping you.
That is the biggest reason for discouraging it: not that it can't be done, but that if you do it and it goes sideways, you can't expect help with a config that the people best placed to help you have no interest in fixing.
 

ChrisRJ
Wizard · Joined Oct 23, 2020 · Messages: 1,919
[..] I guess people will have to get over the virtualization thing because in my testing the file system is plenty stable and resilient.
What have you done as part of your testing? What scenario types (hardware failure, Hyper-V failure, TrueNAS failure, Windows update, roll-back of updates at the various levels) have been covered? I am asking this way because it seems(!) that, rather than carrying out an extensive test plan, you did various things and have not run into issues so far. I apologize if I read things the wrong way, but that is the impression I got.

If virtualization isn't supported in 2021, then I would say they are way behind the times. It is a valuable tool and asking people not to utilize it is not practical.

Yes and no. Every tool has its purpose, and one can successfully "abuse" it up to a point. But there are limits, and knowing them is not always easy. There are people who make a very good living just telling others whether a given scenario is suitable for virtualization or not. Virtualization is a very old technology (over 50 years), and even on "PC-class systems" it has seen more than 15 years of mainstream use. And with increased hardware performance, the number of scenarios where it is not suitable has shrunk dramatically over the last 10 years. But nobody would, as a somewhat blunt example, suggest virtualization for a system where foreign currencies are being traded.

The drives are attached as physical drives to the VM. I did several tests with small VHDX drives, even completely blew away TrueNAS and reimported, no issues. I am using TrueNAS to get more usable space out of my drives versus having a copy of every drive which is what I have now. Dumping a bunch of money into a dedicated system or buying expensive controller cards is not an option.
This guy has also tested the resilience with VMware.

It is of course your decision what you consider appropriate for your needs. The reactions you have received come from people who have seen similar endeavors fail time and again. Many of those people are professionals who would normally charge a three-digit hourly rate when offering advice in a commercial setting. I don't want to be impolite or patronizing. But if you are simply looking for someone to confirm your opinion, you have come to the wrong place. What you get here is honest feedback from people who simply like to help and have no financial interest in the outcome of your decision. That is the best possible setup I can think of.
 

tony95
Contributor · Joined Jan 2, 2021 · Messages: 117
What have you done as part of your testing? What scenario types (hardware failure, Hyper-V failure, TrueNAS failure, Windows update, roll-back of updates at the various levels) have been covered? I am asking this way because it seems(!) that, rather than carrying out an extensive test plan, you did various things and have not run into issues so far. I apologize if I read things the wrong way, but that is the impression I got.

I tested removing and adding new drives to a z2 array, as well as completely blowing away the TrueNAS installation and reimporting the disks into a fresh TrueNAS install in the VM. I did these tests with VHDX files and then with physically attached disks, with the same results.

Honestly, I believe the thinking about VMs that I have heard here so far is completely backwards. Every PC has its own combination of hardware (motherboard, CPU, chipset, LAN, etc.), but inside a VM most of this is standardized. You can expect all VMs (per brand, Hyper-V in this case) to function pretty much the same regardless of the hardware they are built upon. In that light, it seems a virtualized system would be that much easier to support.

Chris Moore suggested I pass through a disk controller card, but that isn't an option for me. I do realize that in my case I am not going to get SMART information inside of TrueNAS, so I will have to monitor SMART at the host OS level, which is a limitation. I also understand the risk of Windows updates, but I would be shocked if one completely tanked Hyper-V, and even more shocked if it did so in a way that was not recoverable.
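
For host-side monitoring, something like the following minimal sketch could poll smartctl from smartmontools on the Hyper-V host. The device names are assumptions (check what `smartctl --scan` reports on your system), and this is an illustration rather than a finished tool:

```python
# Minimal sketch of host-side SMART polling with smartmontools, since the VM
# itself won't see SMART data through abstracted disks. Device names below
# are hypothetical; on a Windows host smartctl may address physical disks
# differently, so use `smartctl --scan` to find the right identifiers.

import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]   # hypothetical device list for illustration

def smart_health(device: str) -> str:
    """Return the raw output of a basic SMART health check for one disk."""
    result = subprocess.run(
        ["smartctl", "-H", device],  # -H: overall health self-assessment
        capture_output=True,
        text=True,
    )
    return result.stdout + result.stderr

if __name__ == "__main__":
    for disk in DISKS:
        report = smart_health(disk)
        status = "PASSED" if ("PASSED" in report or "OK" in report) else "CHECK OUTPUT"
        print(f"{disk}: {status}")
```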
 

jgreco
Resident Grinch · Joined May 29, 2011 · Messages: 18,680
Honestly, I believe the thinking about VMs that I have heard here so far is completely backwards. Every PC has its own combination of hardware (motherboard, CPU, chipset, LAN, etc.), but inside a VM most of this is standardized. You can expect all VMs (per brand, Hyper-V in this case) to function pretty much the same regardless of the hardware they are built upon. In that light, it seems a virtualized system would be that much easier to support.

No, it's your thinking that is completely backwards. Some of us here are professional infrastructure people who deal with massive amounts of virtualized stuff. Yes, virtualization is great in some ways. I'm glad you're aware of that. However, there are also sharp edges, and you're completely disregarding that.

Some of us come from the UNIX legacy that saw dozens of different UNIX variants on many different hardware platforms. That was difficult to support, and you would come across strange quirks when trying to compile your services. I'm really glad I no longer have to remember the ins and outs of obscure bits of hardware where I got an item with a serial number in the low 1000's. Most of the industry now runs on Intel-platform servers with just a handful of UNIX and Linux systems. That's akin to your "virtualization" comparison, and yes, it's nicer.

This doesn't mean that everything is perfect. We still have Realtek ethernet cards and lots of other problematic stuff.

We have *seen* problems here on the forums with people doing virtualization. I've experienced them myself, because most of my FreeNAS hosts are virtualized. There are issues with PCI passthru, there are issues with MSI/X, there are issues with compatibility with various hypervisors, someone over in some thread has been fighting with EFI, weird limits and things that don't work where you'd expect they would be fine. These are not imaginary things.

We *assume* that the reason people are using a ZFS NASware is that they value their bits. It is possible to virtualize FreeNAS, and that can work just fine, and there's even a guide to doing so, but we've seen lots of things go sideways for lots of people too, especially when they get a bit uppity about how awesome virtualization is and disregard the warnings. The reason I've spent so much time writing about virtualization is because there are significant potential hazards, and we've enumerated as many of them as possible. I do it so that you don't lose your valuable bits.
 

ChrisRJ
Wizard · Joined Oct 23, 2020 · Messages: 1,919
I guess some folks just need to learn things the hard way ...
 

HoneyBadger
actually does care · Administrator · Moderator · iXsystems · Joined Feb 6, 2014 · Messages: 5,112
Oof. I had my suspicions of this when I saw the "allocate memory" in a previous thread. That won't happen outside of the VM space.

@tony95 - note that in your linked video at 4:50 the YouTuber is passing through a storage controller as a PCI device. This is the recommended way to do things, and encouraged as the "safe for production" method of doing a virtual ZFS setup.


Allowing your disks to be abstracted (even as a "raw device") risks the hypervisor filtering out important SCSI commands, or deciding to tell half-truths about them. As said by several others, it will install, it will run, and it will work great - right up until it doesn't.

This isn't a matter of "don't do virtualization", it's "please take note of the pain experienced by others and do your best to avoid being an N+1".
 

Jailer
Not strong, but bad · Joined Sep 12, 2014 · Messages: 4,977
I think the pitfalls have all been clearly conveyed. If @tony95 chooses to ignore the warnings and continue on at this point it is entirely up to him.

If (when) you do run into trouble in the future with this setup I'm sure this thread will be dug up as a reminder.
 

artlessknave
Wizard · Joined Oct 29, 2016 · Messages: 1,506
Honestly, I believe the thinking about VMs that I have heard here so far is completely backwards.
Honestly, at this point, you sound like a VM fanboy, willfully blind to the possibility that it could have problems: problems that are being pointed out to you in a thread where you specifically asked for help. Since you are more interested in ignoring the advice on the topic you asked about than in getting information about that topic, I don't see you getting any more help, since you already know better.
 

jgreco
Resident Grinch · Joined May 29, 2011 · Messages: 18,680
@artlessknave let's keep it just a bit more gentle, please.

I realize that it is frustrating to impart knowledge on those who have preconceived notions, especially when such notions are not entirely unfounded. It's probably not helpful to go around calling people fanbois though.

I've had a number of clients over the years that I've pushed into virtualization, and I absolutely agree with the OP on the benefits of virtualization. I have one client who used to keep extensive notes about the exact components inside each of their servers, their quirks, when they were bought (think: HDD warranty), etc., and I moved them to a mostly virtualized setup back around 2015. One day while discussing that, the talk turned to how much nicer it was to be able to look at the virtual hardware manifest, how there weren't the same sort of "quirks", and of course how virtual hardware can trivially be edited. You do trade one set of problems for a new set: things like having a 1Gbps network with a dozen servers and two dozen clients and losing bandwidth when you consolidate onto a hypervisor that only connects at 1G, or backups saturating the hypervisor uplink, or stun times messing with VMs, etc.

But on the other hand, virtualization is a tricky topic. It's pretty trivial to set up many kinds of basic VMs. I manage thousands of VMs across many dozens of hypervisors at around a dozen sites. Many of these sites have nothing *but* hypervisors and switches, because I do infrastructure, even routers, as virtual machines. My environments typically have a minimum of a dozen networks, with routers, firewalls, VPN servers, DHCP servers, DNS servers, NTP servers, syslog servers, web servers, SQL servers, SLBs, MTAs, mail servers, netmon servers, CAs, and many other things. It's easy to become convinced that virtualization is always easy; it *is* often easy, and if you've deployed thousands of VMs without a counterexample, I can understand the errant attitude.

That's why I try to explain this from a technical perspective.

Virtualization is an incredibly tricky thing. This is why it took VMware years to master it, and why younger hypervisors like bhyve have many more pain points. It's easy to forget that virtualization is a highly complex house of cards, with the x64 platform being a minefield of legacy concessions and vendor quirks, where even the major vendors have problems. Consider the Intel X710 and ESXi, which for YEARS caused intermittent PSOD's (I still haven't seen a sufficient explanation why). The hypervisor then has to emulate an x64 platform for guests. Getting all the little details exactly right so that random guests will work has cost VMware hundreds of man-years of work. If it was easy, we'd have many hypervisors to choose from.

But stacking FreeNAS on top of that adds an incredibly complicated guest, one that demands problem-free access to the storage even in the case of disk errors, drive failures, or just plain ol' random glitches. In order for this to work correctly, you have to use PCI passthru, which means you also have to have correct support for that at the hardware, BIOS, *and* hypervisor levels, something that is far from guaranteed. The support in ESXi has gotten pretty good, but many platforms, especially older ones, still cannot do PCI passthru correctly.

To circle around to the OP, this simply hasn't been shown to work reliably in Hyper-V. Hyper-V didn't even support FreeBSD until around FreeBSD 10.3R (~2016?), which is around the same time they started offering PCI passthru too. So these are all relatively new things for Hyper-V. I'm fine with you being a guinea pig for Microsoft, but I do want you to be aware of that.
 

artlessknave
Wizard · Joined Oct 29, 2016 · Messages: 1,506
calling people fanbois though.
I specifically tried not to do this. I described that the behavior we could see looks like what a fanboy would look like, and described how, based on the literal words they have used, they do not appear to be interested in advice. And that was me being gentle... :/
 

jgreco
Resident Grinch · Joined May 29, 2011 · Messages: 18,680
It's too close to an ad hominem attack, which is forbidden on the forums, but I obviously understand the challenges involved with explaining and enlightening this stuff. It is not uncommon for people to arrive with notions like "any PC hardware should work" or "I've built my own gaming rig, how hard could a NAS be" or any of the other common misconceptions.
 

artlessknave
Wizard · Joined Oct 29, 2016 · Messages: 1,506
Mmm. Usually an ad hominem attack would be saying that their argument is invalid because [insert irrelevant attribute here], like "they are a fanboy", "they are a junior member", "their name is Tony", or "they were wrong in a previous unrelated argument, therefore they are wrong in this argument". I didn't really think to comment much more on their argument, since you... poked a number of holes in it.
I was trying to describe that the behavior I could see greatly resembles that of someone unwilling to listen, in the hope that they might see it and reach enlightenment.
Arriving with a misunderstanding isn't that frustrating, but sticking to it when presented with new information to consider is.
 