Manually set ashift=9 on a new pool with 512b drives

Undexter

Cadet
Joined
Jun 2, 2019
Messages
3
Setup:
Dell R510
TrueNAS-12.0-U2.1 (Virtualized in Proxmox 6.3-2)
H200 in IT mode (fully passed through to the VM)
10x Seagate Constellation ES 3.5" ST2000NM0001 2TB 7.2K SAS, Sector size 512

From everything I've read, TrueNAS should detect the 512-byte drives and automatically use ashift=9, but this does not seem to be happening. When I create a raidz2 pool, it gets created with an ashift of 12, which costs me almost a TB of storage space.
Code:
zpool get all test | grep ashift
    test    ashift    12

This will be a backup server for my homelab, so speed is not as important as storage space. Plus, I was not losing too much write speed in a simple dd test:
Code:
ashift=12 > Write - 1.14 GB/s, Read - 3.77 GB/s
ashift=9  > Write - 0.81 GB/s, Read - 3.91 GB/s
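
For context, a sequential test of the sort those numbers suggest might look like the following. The target path, block size, and count here are assumptions, not the poster's exact invocation:

```shell
# Path below is an assumption -- on real hardware, point it at a
# dataset on the pool under test (e.g. /mnt/test/ddtest).
target=${TARGET:-/tmp/ddtest}

# Write test: stream 64 MiB of zeros to the target (use a far larger
# count on real hardware so the run isn't absorbed entirely by RAM)
dd if=/dev/zero of="$target" bs=1M count=64

# Read test: stream the file back, discarding the data
dd if="$target" of=/dev/null bs=1M
```

One caveat: with ZFS's default lz4 compression enabled, all-zero input largely measures the compressor and the ARC rather than the disks; incompressible input (or a tool like fio) gives more honest numbers.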

To try to get around this, I added the following sysctl tunables (and rebooted after adding them, just to be sure):
Code:
vfs.zfs.max_auto_ashift=9
vfs.zfs.min_auto_ashift=9

But it makes no difference; the pools are still created with an ashift of 12. Creating the pool manually from the command line works fine, and I get the missing TB back, but the pool does not persist through a reboot since it was not created via the GUI. Is there any way to force the ashift value via the GUI?
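
For reference, the manual creation presumably looked something like the sketch below. The device names are assumptions; on TrueNAS 12 (OpenZFS 2.0), ashift can also be forced per-pool at creation time with -o, independently of the auto-ashift tunables:

```shell
# Pin the auto-detected ashift range before creation (FreeBSD tunables)
sysctl vfs.zfs.min_auto_ashift=9
sysctl vfs.zfs.max_auto_ashift=9

# Or force it explicitly as a pool property at creation
# (da0..da9 are placeholder device names)
zpool create -o ashift=9 test raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

# Verify
zpool get ashift test
```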

Thanks in advance!
 

glauco

Guru
Joined
Jan 30, 2017
Messages
526
Hi, yes, you can do it in the GUI:
[Screenshot attachment: 1619884794054.png]
 

Undexter

Cadet
Joined
Jun 2, 2019
Messages
3
Thanks, but I already tried that. I set both max and min to 9, and it still creates the pool with an ashift of 12. I also set sysctl vfs.zfs.vdev.larger_ashift_minimal=1, but again, it made no difference.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
When I create a raidz2 pool it gets created with an ashift of 12, which is losing me almost a TB of storage space.
I'd have to look at it carefully, but I don't think ashift=9 is going to magically improve your situation. Small blocks are always going to be inefficient with RAIDZ. Since RAIDZn only allocates space in multiples of n+1 sectors, some of the gains you'd get from the smaller sectors will be lost as padding to make every allocation a multiple of 3x512 bytes. If your data consistently compresses down to 7/8 of its uncompressed size, you wouldn't suffer much from that, but you also gain nothing beyond that unless it compresses down to 4/8.
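
The padding arithmetic above can be sketched numerically. This is an illustrative model, not code from the ZFS source: one parity sector per parity level per stripe row, with the total rounded up to a multiple of parity + 1 as described:

```python
import math

def raidz_alloc_sectors(data_bytes, sector_bytes, width, parity):
    """Approximate sectors one block consumes on a raidzN vdev (model)."""
    data = math.ceil(data_bytes / sector_bytes)   # data sectors
    rows = math.ceil(data / (width - parity))     # stripe rows needed
    total = data + rows * parity                  # add parity sectors
    mult = parity + 1                             # raidz2 allocates in multiples of 3
    return math.ceil(total / mult) * mult         # padding rounds up

# A 128 KiB record on a 10-wide raidz2:
for ashift, sector in ((12, 4096), (9, 512)):
    used = raidz_alloc_sectors(128 * 1024, sector, 10, 2)
    eff = (128 * 1024) / (used * sector)
    print(f"ashift={ashift}: {used} sectors, {eff:.1%} of raw space is data")
```

Under this model, a 128 KiB record comes out to roughly 76% space efficiency at ashift=12 versus about 80% (the 8/10 ideal) at ashift=9 on this layout; on 20 TB raw, that gap is on the order of 0.7 TB, which lines up with the "almost a TB" observed. Smaller records fare much worse at ashift=12.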
 