I originally set the server up with one RAIDZ2 vdev of six drives, and throughput over SMB was about 180 MB/s. I was happy with that, considering it's roughly a single drive's performance. After a lot of reading and testing around improving performance, my understanding was that the logical route is to add vdevs, since a pool's data is striped across its vdevs and throughput should scale with vdev count. So, I added another vdev.
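To sanity-check that writes are actually being spread across both vdevs, my plan is to watch per-vdev activity while a transfer is running, with something along these lines (5-second intervals):
Code:
zpool iostat -v Tank1 5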
The shocker to me was that adding the second vdev brought no discernible increase in throughput. I'm seeing 180–200 MB/s now, and I don't believe it ever went above 180 MB/s previously, whereas I expected something closer to 360 MB/s (understanding it may not be exactly double due to overhead). My use case is transferring very large files (20 GB – 80 GB) for archival; files sit there and get read, but are rarely if ever deleted. I'd prefer to keep the 2-vdev RAIDZ2 layout and track down the root issue rather than move to something like striped mirrors. I do believe there's a problem somewhere, I just can't figure out where. I've searched online extensively, but I'm still learning TrueNAS, and posting in a forum is always my last resort so as not to waste anyone's time. Any insight would be greatly appreciated.
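To try to rule out the network/SMB side of things, I'm assuming a raw iperf3 run between the client and the NAS (iperf3 appears to ship with TrueNAS) would show whether the 10G link itself can move more than ~200 MB/s; the client-side IP below is just an example:
Code:
# on the TrueNAS box
iperf3 -s
# on the client
iperf3 -c 192.168.1.100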
I’ve also tested an SSD cache drive; it made no difference at all, so I’ve since removed it.
- Note: Currently running as a Proxmox VM, but I saw the same results when testing on bare metal
- Version: TrueNAS-12.0-U8.1
- Motherboard: Supermicro X10SDV-4C-TLN2F
- CPU: Intel Xeon D-1520 4C/8T 2.2GHz
- RAM: 84GB ECC @ 2133MHz (Proxmox host has 128GB total)
- HBA: LSI SAS 9300-16i 12Gb/s SAS+SATA (PCI pass-through)
- POOL: 2 vdevs of 6x 14TB SATA HDDs in RAIDZ2
- HDD: 4 Seagate IronWolf Pro, 2 WD WD140EDGZ, 6 Seagate Exos – all are CMR with 14TB capacity
- NIC: onboard Intel 10G (PCI Pass-through)
DD results
Code:
root@truenas[/mnt/Tank1/Media]# dd if=/dev/zero of=testfile bs=1M count=1k
1073741824 bytes transferred in 0.601700 secs (1784513840 bytes/sec)
root@truenas[/mnt/Tank1/Media]# dd if=/dev/zero of=testfile2 bs=1M count=4k
4294967296 bytes transferred in 3.761309 secs (1141881115 bytes/sec)
root@truenas[/mnt/Tank1/Media]# dd if=/dev/zero of=testfile bs=1M count=10k
10737418240 bytes transferred in 19.120238 secs (561573461 bytes/sec)
root@truenas[/mnt/Tank1/Media]# dd if=/dev/zero of=testfile bs=1M count=100k
107374182400 bytes transferred in 223.981461 secs (479388705 bytes/sec)
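One caveat I've read about, so take it with a grain of salt: because /dev/zero writes nothing but zeros and the dataset most likely has lz4 compression enabled, dd may be overstating what the disks can actually sustain. I could repeat the test against a scratch dataset with compression turned off, roughly like this (the dataset name is just a placeholder):
Code:
zfs create -o compression=off Tank1/ddtest
dd if=/dev/zero of=/mnt/Tank1/ddtest/testfile bs=1M count=100k
zfs destroy Tank1/ddtest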
Pool
Code:
root@truenas[~]# zpool status Tank1
  pool: Tank1
 state: ONLINE
  scan: scrub repaired 0B in 05:47:35 with 0 errors on Sat Jun  4 02:24:38 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tank1                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/33050965-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/3152ba21-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/3270afe1-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/32a01d03-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/336a5d2f-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/3660e7b0-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/35024d99-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/349cc1c0-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/3132514a-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
            gptid/35beb2a8-df6d-11ec-af81-d7fd7bbbde8b  ONLINE       0     0     0
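If it helps, I can also post the per-vdev space usage, since my understanding is that ZFS weights new writes toward whichever vdev has the most free space:
Code:
zpool list -v Tank1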