nickt
Contributor - Joined Feb 27, 2015 - Messages: 131
Hi all,
I'd appreciate some guidance on the best way to configure zvols and disks for optimal Debian guest performance.
TL;DR: I am forming the view that:
- LVM + bhyve is a bad mix
- zvols used for VM storage should have sync writes enforced, but doing so hammers performance, so I don't want to do it
My FreeNAS box has a number of VMs, first deployed in FreeNAS 9.10, using iohyve. These are running Debian guests, and - in the main - host services deployed in Docker containers. They work fantastically well, although at times, disk performance has not been wonderful. For the most part, this hasn't bothered me - per my signature, I have a well spec'd FreeNAS box, and light demand - more than 2 concurrent users of any service at any one time is uncommon. But I've decided it's time to dig into performance.
One of the key things I've noticed is that relatively modest disk activity on one VM can lead to significant iowait impacts on all VMs. What I often see is that a big write completes on the guest, but the iowait impact on all VMs persists for 10-15 seconds after the write has finished. In these cases, iowait can exceed 50%, sometimes reaching 90% or more, which obviously has nasty impacts on VM performance.
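For anyone wanting to see the same thing, the lingering flush activity is visible from the FreeNAS shell while the guest's write runs; a quick sketch (the pool name is just an example):
Code:
# watch per-vdev throughput once per second while the guest's dd runs
zpool iostat -v tank 1
# per-disk busy %, latency and queue depth on the host
gstat -p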
Most of my Debian VMs were set up using iohyve under FN 9.10, which means a zvol was automatically configured. Debian was built using the installer's default partitioning scheme with LVM (I wanted the ability to expand disks in the future). More recently, I have started building a new VM using the FreeNAS 11.2 GUI, which also gets the latest Debian (buster).
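If anyone wants to compare layouts, it's easy to confirm from inside a guest whether LVM sits in the write path; a quick sketch (device names will vary):
Code:
# inside the Debian guest: show the block device tree (virtio disks appear as vda, vdb, ...)
lsblk
# list LVM physical and logical volumes, if any are present
sudo pvs && sudo lvs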
My performance testing is a little simplistic, but it seems to be sufficient to reveal big differences. I am doing a large sequential (1 GB) write using dd. Typically:
Code:
# ~1 GiB sequential write: 1,048,576 blocks of 1 KiB, sourced from /dev/urandom
~$ dd bs=1024 count=1048576 </dev/urandom >test.dd2
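A caveat I should note: without a flush, dd in the guest largely measures the page cache, and /dev/urandom itself can cap the rate. A variant that avoids both (GNU dd in the guest; file names are just examples) would be something like:
Code:
# pre-generate 1 GiB of random data so urandom speed doesn't skew the write test
dd if=/dev/urandom of=random.bin bs=1M count=1024
# write it back out, flushing to disk before dd reports the rate
dd if=random.bin of=test.dd2 bs=1M conv=fdatasync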
So I've done this simple test in a bunch of different configurations. Here's what I found:
Flavour | Built with | Disk driver | Format | Sync writes | Write speed | iowait impact |
FreeNAS 11.2-U8 | - | - | ZFS dataset (host) | standard | 90 MB/s | light |
Debian 8.8 | iohyve | ? (iohyve assigned) | zvol / LVM / ext4 | standard | 12 MB/s | nil |
Debian 9.5 | iohyve | ? (iohyve assigned) | zvol / LVM / ext4 | standard | 40 MB/s | heavy |
Debian 10 | FreeNAS 11.2 GUI | virtio | zvol / LVM / ext4 | standard | 44 MB/s | heavy |
Debian 10 | FreeNAS 11.2 GUI | virtio | zvol / ext4 | always | 10 MB/s | nil |
Debian 10 | FreeNAS 11.2 GUI | virtio | zvol / ext4 | standard | 160 MB/s | light |
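For clarity, the "Sync writes" column above is the ZFS sync property on the zvol backing each VM, which can be checked and switched from the FreeNAS shell (the zvol path is just an example):
Code:
# show the current setting
zfs get sync tank/vms/debian10-disk0
# force every write to stable storage ("always"), or revert to POSIX behaviour ("standard")
zfs set sync=always tank/vms/debian10-disk0
zfs set sync=standard tank/vms/debian10-disk0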
What stands out for me:
- Newer Debians are better than older Debians
- LVM seems to be directly responsible for heavy iowait impacts
- Sync writes destroy performance
So I'm wondering how important sync writes really are.
I realise that I could deploy a SLOG device, but a good one isn't cheap. And then I start wondering whether I wouldn't be better off just using cheap SSD storage provisioned directly to the VMs, avoiding the complexity. I also have a UPS, which ensures an orderly shutdown if there is a power failure, meaning that there aren't many ways data loss could occur in practice.
Lastly, I am surprised that the best case from the Debian 10 VM (160 MB/s) is consistently faster than writes directly on FreeNAS (90 MB/s). I've repeated these two tests again and again and get this kind of difference every time.
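One thing I'd like to rule out is whether /dev/urandom on the FreeBSD side is simply slower than the Linux guest's, which would cap the host-side number regardless of the pool. A quick check from the FreeNAS shell:
Code:
# measure raw /dev/urandom throughput, discarding the output (FreeBSD dd uses lowercase size suffixes)
dd if=/dev/urandom of=/dev/null bs=1m count=1024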
Looking forward to your thoughts!
Nick