Hello,
I've done a fresh install of TrueNAS SCALE, and both general interaction with the system and fio testing show truly awful performance numbers: IOPS in the double digits whether on SSD or HDD. The IOPS and bandwidth reported by fio do not seem to depend on whether I test a single disk, a one-vdev mirror of the SSDs, or a six-way stripe of mirror vdevs on the HDDs. The only thing that makes a difference is adding a SLOG to the stripe of mirrors, which consistently adds about 50% to the IOPS (roughly 70-100 up to 120-150). Record size also seems to have no impact on IOPS until I go above 256K.
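In case the dataset settings matter, this is roughly how the pool layout and dataset properties can be confirmed before each run (tank/test is just a placeholder name, not my actual pool/dataset):

# Confirm the vdev layout and that nothing is degraded or resilvering
zpool status -v tank

# Confirm the properties the fio runs depend on
zfs get recordsize,sync,compression,primarycache tank/test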
Hardware is an HP ProLiant DL380 Gen9:
CPU: 2x Xeon E5-2660 v3 (Haswell, 20 cores)
RAM: 24x 16GB
HBA: HP Smart HBA H240, in "HBA mode" with an HP 12G SAS expander (a quick pass-through check is sketched just below this list)
Storage / SSD: Samsung 860 Pro 1TB
Storage / HDD: HP MB3000GCWDB (3 TB SATA @ 7200)
Storage / Boot: Internal HP 8GB microSD (have also tried booting off of the HDD)
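Since "HBA mode" on the H240 is doing the heavy lifting here, one thing worth sanity-checking is that the drives really are presented as plain SAS/SATA devices rather than hidden behind a logical volume; a rough check (the device name is just an example):

# Drives should show up by model, with ROTA=1 for the HDDs and 0 for the SSDs
lsblk -o NAME,MODEL,SIZE,ROTA,TYPE

# SMART data should be readable directly if the controller is truly passing disks through
smartctl -a /dev/sda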
I've been using the following fio settings:
Single disk:
fio --filename=test --ioengine=io_uring --sync=1 --bs=128k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=2G --runtime=60 --rw=randwrite && rm test
fio --filename=/dev/sdo --direct=1 --sync=1 --bs=4k --numjobs=1 --iodepth=4 --group_reporting --name=test --filesize=5G --runtime=60 --rw=randrw
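I've also been thinking about splitting the test into a pure sync-latency run and a pure throughput run, roughly along these lines (the paths and block sizes are just what I plan to try, not anything prescribed):

# Sync write latency: queue depth 1 with an fsync after every write, so IOPS roughly equals 1 / per-commit latency
fio --name=synclat --filename=/mnt/tank/test/synclat.bin --rw=randwrite --bs=4k --ioengine=psync --iodepth=1 --numjobs=1 --fsync=1 --size=1G --runtime=60 --time_based --group_reporting

# Async throughput: no sync flag, deeper queue, shows what the vdevs can actually sustain
fio --name=asyncbw --filename=/mnt/tank/test/asyncbw.bin --rw=randwrite --bs=128k --ioengine=io_uring --iodepth=32 --numjobs=4 --size=4G --runtime=60 --time_based --group_reporting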
Turning sync off makes the IOPS substantially better, but alarm bells are going off because there is *no* difference in performance between the six-vdev stripe of mirrors, a single HDD, and an SSD.
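In case the exact steps matter, toggling sync per dataset and watching whether the log vdev actually takes the writes can be done roughly like this (tank/test is again a placeholder):

# Disable/restore sync on just the test dataset rather than pool-wide
zfs set sync=disabled tank/test
zfs set sync=standard tank/test

# Watch per-vdev activity during a run; with a SLOG attached, sync writes should land on the log vdev
zpool iostat -v tank 1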
Interestingly, I also encountered this on Proxmox, which is likewise Debian- and OpenZFS-based, so I'm wondering if the issue lies somewhere in that stack. I had seen another thread suggesting that a high core count can hurt OpenZFS performance, and a number of posts describing the H240 as an excellent, high-performance adapter, so I'm at a loss. The server itself is in good condition: it's old decommissioned stock from our datacenter that was supporting some high-speed databases just fine, so I'm not expecting hardware issues.
I am planning to switch the adapter back to RAID mode just to check I/O performance on native RAID, but I would really prefer to use TrueNAS SCALE, since it hits every design spec for my homelab other than the terrible performance.
Any thoughts on tracing the root cause would be appreciated.