Proxmox Dataset Configuration for Proxmox VM/LXC Storage Dataset(s) Stored on TrueNAS Core?

sinisterpisces

Dabbler
Joined
Jan 14, 2024
Messages
17
Note: I originally posted this in the wrong sub-forum. I've deleted that post and recreated it here.

Hello,

TLDR: If you're running shared storage for your VM and LXC disks off of TrueNAS Core, how are you customizing your datasets compared to the default settings? I'm aware of the need to tune volblocksize and recordsize appropriately, and will disable sync writes. Anything else to be mindful of?

I'm experimenting with shared storage for the first time. I've set up an all-SATA-SSD pool in TrueNAS Core with two mirror vdevs, and disabled sync writes.

Aside from making sure recordsize and volblocksize are set appropriately for my VMs, is there any customization I need to do to the dataset properties? I've not compared the values set by Proxmox in its local datasets to the values set by TrueNAS Core when a dataset is created, but Proxmox is Linux and TrueNAS Core is BSD, so I wasn't sure if they were directly comparable or if I needed to do something additional to avoid having a bad time.
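
For what it's worth, my plan was to just compare the default properties on each side from the shell; something like this, where rpool/data and tank/vmstore are placeholders for the actual Proxmox and TrueNAS datasets:

# On the Proxmox host (Linux OpenZFS)
zfs get recordsize,compression,sync,atime,xattr rpool/data

# On TrueNAS Core (FreeBSD OpenZFS)
zfs get recordsize,compression,sync,atime,xattr tank/vmstore

The property names themselves are standard OpenZFS, so they should line up on both platforms.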
Thanks for any advice.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
You generally want sync writes for VM storage.
Losing an important write can toast a VM.

SSDs are also generally fast enough for sync writes to not matter overly much. The bottleneck is usually the network, and it's more important that a power blip not require restoring from backups.
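
if you've already turned sync off, flipping it back is a single property; something like this, with the dataset name as an example:

zfs set sync=standard tank/vmstore

standard honors whatever the client asks for, always forces every write to be synchronous, and disabled is the setting you want to avoid for VM disks.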
 

sinisterpisces

Dabbler
Joined
Jan 14, 2024
Messages
17
You generally want sync writes for VM storage.
Losing an important write can toast a VM.

SSDs are also generally fast enough for sync writes to not matter overly much. The bottleneck is usually the network, and it's more important that a power blip not require restoring from backups.
Thanks! I'll do some A/B testing, but sync writes will likely be plenty fast enough for my home server purposes.
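
In case it's useful to anyone else, my rough plan for the A/B test is a small fio sync-write run from the Proxmox side against the share, once with sync=standard and once with sync=disabled on the dataset (the mount path is just a placeholder for wherever the storage ends up mounted):

fio --name=synctest --directory=/mnt/pve/tn-vmstore --size=2G \
    --rw=randwrite --bs=16k --ioengine=sync --fsync=1 \
    --runtime=60 --time_based --group_reporting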

Aside from recordsize/volblocksize for the VMs and LXCs, anything else I need to tune on the dataset itself?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Because you're using enterprise SSDs with power-loss protection (Samsung PM883's) the sync writes shouldn't be a major factor. Even consumer SSDs can sometimes find themselves struggling to keep up with the sync-write demand of "Put this write on stable NAND. No, you aren't allowed to cache it. I'm going to stand here and wait until bits are on blocks."

Recordsize/volblocksize is likely the major factor to tune; larger blocks tend to improve sequential write speeds, at the cost of potentially causing a read-modify-write cycle if your guest OS is only changing small blocks of an existing record (eg: changing 4K out of a 128K record requires the whole 128K record to be rewritten, as opposed to a smaller 16K record).
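
As a rough sketch of where those two knobs live (pool/dataset/zvol names are placeholders):

# recordsize applies to datasets holding file-based disks (e.g. shared over NFS) and only affects newly written data
zfs set recordsize=16K tank/vmstore

# volblocksize applies per-zvol and is fixed at creation time, so it has to be set up front
zfs create -V 32G -o volblocksize=16K tank/vmstore/vm-100-disk-0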

Keep compression on - LZ4 is basically "free" for most workloads and can often give you some decent gains.
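
For example (dataset name is a placeholder):

zfs set compression=lz4 tank/vmstore
zfs get compressratio tank/vmstore    # see what you're actually getting back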
 

sinisterpisces

Dabbler
Joined
Jan 14, 2024
Messages
17
Because you're using enterprise SSDs with power-loss protection (Samsung PM883's) the sync writes shouldn't be a major factor. Even consumer SSDs can sometimes find themselves struggling to keep up with the sync-write demand of "Put this write on stable NAND. No, you aren't allowed to cache it. I'm going to stand here and wait until bits are on blocks."
Thanks! I didn't realize enterprise SSDs were actually potentially faster at sync writes.

Recordsize/volblocksize is likely the major factor to tune; larger blocks tend to improve sequential write speeds, at the cost of potentially causing a read-modify-write cycle if your guest OS is only changing small blocks of an existing record (eg: changing 4K out of a 128K record requires the whole 128K record to be rewritten, as opposed to a smaller 16K record).

I've read that for KVM/qemu/Proxmox VMs, 64k recordsize is the sweet spot, but with the latest ZFS update Proxmox uses 16k volblocksize by default, so as usual I remain completely confused about the interplay between volblocksize and recordsize for backing storage even though I keep thinking I have it figured out. :P

See: https://forum.proxmox.com/threads/proxmox-ve-8-1-released.136960/page-9#post-625443
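
If I'm reading that right, the 16k default is the blocksize option on the Proxmox storage definition, so something like this in /etc/pve/storage.cfg for a local zfspool storage (names are examples; I haven't actually changed mine):

zfspool: local-zfs
        pool rpool/data
        blocksize 16k
        sparse 1
        content images,rootdir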

Keep compression on - LZ4 is basically "free" for most workloads and can often give you some decent gains.

Nice to know that at least one of the default settings is actually optimized for what I'm doing. ;)

I've come a long way with it, but 60 percent of my tinkering time with Proxmox is still some variation of "ZFS does what?" :P
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Thanks! I didn't realize enterprise SSDs were actually potentially faster at sync writes.
I believe you have misunderstood. Sync writes will almost never be faster, as they always require two writes, and two writes are rarely faster than one. The point of sync writes is that they are always safer.
What I believe they are saying is that sync writes aren't needed because these drives have power-loss protection. I'm not sure I agree (power-loss protection only protects data already in the SSD's cache, NOT data sitting in ARC; sync writes ensure that ARC is never the only copy, BUT the write speed might be fast enough for that not to matter - I'm unsure), but that's what I read.

The absolute fastest you can get sync writes will be with a SLOG, which would require a damn fast SLOG device.
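
for reference, that's a dedicated log vdev on the pool; from the CLI it would look something like this (device names are examples, and on TrueNAS you'd normally add it through the UI instead):

zpool add tank log mirror /dev/nvd0 /dev/nvd1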
I've read that for KVM/qemu/Proxmox VMs, 64k recordsize is the sweet spot, but with the latest ZFS update Proxmox uses 16k volblocksize by default, so as usual I remain completely confused about the interplay between volblocksize and recordsize for backing storage even though I keep thinking I have it figured out. :P
I wouldn't bother trying to mess with this.
 