Hi folks!
I have built a new NAS, based on TrueNAS Scale. First some specs:
- MB: ASUS Pro WS W680-Ace IPMI
- CPU: Intel Core i7-13700 (E-Cores deactivated)
- RAM: 4x Kingston Server Premier DIMM 32GB, DDR5-4800, CL40-39-39, ECC (-> 128 GB RAM)
- HBA: Broadcom HBA 9405W-16i, PCIe 3.1 x16
- OS disks: 2x Micron 7400 MAX - 3DWPD Mixed Use 800GB
- Storage disks: 6x Samsung Enterprise SSD PM1643 7.68TB, SAS 12Gb/s (formatted to 4Kn)
Code:
  pool: storage1
 state: ONLINE
  scan: scrub repaired 0B in 10:39:25 with 0 errors on Tue Aug 29 11:39:27 2023
config:

        NAME                                      STATE     READ WRITE CKSUM
        storage1                                  ONLINE       0     0     0
          raidz2-0                                ONLINE       0     0     0
            6a9ce000-78bc-418c-a457-1bf89a91239f  ONLINE       0     0     0
            93ed437a-1c35-47bf-8c20-783bd0ecc546  ONLINE       0     0     0
            f1a88f56-5bee-4d1e-a488-18e36ed56d06  ONLINE       0     0     0
            ba4a92c4-17ee-4e93-8f61-c357dd574081  ONLINE       0     0     0
            3619ede7-4622-42c4-8f66-a47b60bb7b40  ONLINE       0     0     0
            f74a3e48-3389-47a4-b751-64168ed7416f  ONLINE       0     0     0
Code:
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
storage1                                  41.9T  12.2T  29.7T        -         -     0%    29%  1.00x    ONLINE  /mnt
  raidz2-0                                41.9T  12.2T  29.7T        -         -     0%  29.2%      -    ONLINE
    6a9ce000-78bc-418c-a457-1bf89a91239f  6.98T      -      -        -         -      -      -      -    ONLINE
    93ed437a-1c35-47bf-8c20-783bd0ecc546  6.98T      -      -        -         -      -      -      -    ONLINE
    f1a88f56-5bee-4d1e-a488-18e36ed56d06  6.98T      -      -        -         -      -      -      -    ONLINE
    ba4a92c4-17ee-4e93-8f61-c357dd574081  6.98T      -      -        -         -      -      -      -    ONLINE
    3619ede7-4622-42c4-8f66-a47b60bb7b40  6.98T      -      -        -         -      -      -      -    ONLINE
    f74a3e48-3389-47a4-b751-64168ed7416f  6.98T      -      -        -         -      -      -      -    ONLINE
Record Size: 1M
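For reference, the dataset properties were applied roughly like this (a sketch; `storage1/test` is a placeholder dataset name, not necessarily the one used here):

```shell
# Placeholder dataset name; substitute your own.
# 1M record size for large sequential files:
sudo zfs set recordsize=1M storage1/test
# Disable the ARC for this dataset so benchmarks hit the disks:
sudo zfs set primarycache=none storage1/test
# Confirm the settings took effect:
zfs get recordsize,primarycache storage1/test
```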
Write speeds are okay:
Code:
admin@tilikum:$ fio --filename=test --ioengine=posixaio --rw=write --bs=1M --numjobs=1 --iodepth=16 --group_reporting --name=write_test --filesize=50G --direct=1 --runtime=120
write_test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=16
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [W(1)][97.5%][w=1199MiB/s][w=1199 IOPS][eta 00m:01s]
write_test: (groupid=0, jobs=1): err= 0: pid=237538: Tue Aug 29 14:42:24 2023
  write: IOPS=1324, BW=1325MiB/s (1389MB/s)(50.0GiB/38650msec); 0 zone resets
    slat (usec): min=9, max=1164, avg=38.69, stdev=17.82
    clat (usec): min=944, max=174486, avg=11984.55, stdev=10424.66
     lat (usec): min=973, max=174536, avg=12023.24, stdev=10430.19
    clat percentiles (msec):
     |  1.00th=[    3],  5.00th=[    4], 10.00th=[    7], 20.00th=[    7],
     | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[   10],
     | 70.00th=[   12], 80.00th=[   15], 90.00th=[   22], 95.00th=[   35],
     | 99.00th=[   47], 99.50th=[   53], 99.90th=[  116], 99.95th=[  136],
     | 99.99th=[  171]
   bw (  MiB/s): min=  140, max= 5128, per=99.81%, avg=1322.18, stdev=827.38, samples=77
   iops        : min=  140, max= 5128, avg=1322.18, stdev=827.38, samples=77
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.36%, 4=5.31%, 10=58.03%, 20=25.13%, 50=10.58%
  lat (msec)   : 100=0.44%, 250=0.15%
  cpu          : usr=5.57%, sys=0.14%, ctx=25945, majf=0, minf=44
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=50.1%, 16=49.9%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=95.9%, 8=4.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,51200,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=1325MiB/s (1389MB/s), 1325MiB/s-1325MiB/s (1389MB/s-1389MB/s), io=50.0GiB (53.7GB), run=38650-38650msec
But read speeds...
Code:
admin@tilikum:$ fio --filename=test --ioengine=posixaio --rw=read --bs=1M --numjobs=1 --iodepth=16 --group_reporting --name=write_test --filesize=50G --direct=1 --runtime=120
write_test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=16
fio-3.25
Starting 1 process
Jobs: 1 (f=1): [R(1)][100.0%][r=176MiB/s][r=176 IOPS][eta 00m:00s]
write_test: (groupid=0, jobs=1): err= 0: pid=250436: Tue Aug 29 14:47:11 2023
  read: IOPS=220, BW=221MiB/s (232MB/s)(25.9GiB/120081msec)
    slat (nsec): min=58, max=288905, avg=859.40, stdev=2056.97
    clat (msec): min=15, max=165, avg=72.43, stdev=30.34
     lat (msec): min=15, max=165, avg=72.43, stdev=30.34
    clat percentiles (msec):
     |  1.00th=[   17],  5.00th=[   17], 10.00th=[   17], 20.00th=[   21],
     | 30.00th=[   84], 40.00th=[   86], 50.00th=[   87], 60.00th=[   88],
     | 70.00th=[   88], 80.00th=[   89], 90.00th=[   91], 95.00th=[  107],
     | 99.00th=[  113], 99.50th=[  114], 99.90th=[  132], 99.95th=[  138],
     | 99.99th=[  155]
   bw (  KiB/s): min=159744, max=978944, per=100.00%, avg=226151.73, stdev=166843.54, samples=240
   iops        : min=  156, max=  956, avg=220.85, stdev=162.93, samples=240
  lat (msec)   : 20=19.56%, 50=3.13%, 100=71.14%, 250=6.17%
  cpu          : usr=0.13%, sys=0.08%, ctx=13288, majf=0, minf=43
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=50.0%, 16=50.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=95.8%, 8=0.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=26521,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
   READ: bw=221MiB/s (232MB/s), 221MiB/s-221MiB/s (232MB/s-232MB/s), io=25.9GiB (27.8GB), run=120081-120081msec
Code:
admin@tilikum:~$ sudo zpool iostat -ylv storage1 10
                                            capacity     operations     bandwidth     total_wait     disk_wait    syncq_wait   asyncq_wait   scrub   trim
pool                                      alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.72K      0   180M      0    1ms      -    1ms      -  872ns      -      -      -      -      -
  raidz2-0                                12.3T  29.6T  1.72K      0   180M      0    1ms      -    1ms      -  872ns      -      -      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    372      0  44.8M      0    2ms      -    2ms      -  829ns      -      -      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -    372      0  44.8M      0    2ms      -    2ms      -  996ns      -      -      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    332      0  44.7M      0    2ms      -    2ms      -  808ns      -      -      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    332      0  44.6M      0    2ms      -    2ms      -  873ns      -      -      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    156      0   626K      0  124us      -  123us      -  904ns      -      -      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    195      0   784K      0  269us      -  269us      -  804ns      -      -      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.74K      0   182M      0    1ms      -    1ms      -  885ns      -      -      -      -      -
  raidz2-0                                12.3T  29.6T  1.74K      0   182M      0    1ms      -    1ms      -  885ns      -      -      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    355      0  45.1M      0    2ms      -    2ms      -  834ns      -      -      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -    355      0  45.1M      0    2ms      -    2ms      -    1us      -      -      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    355      0  45.1M      0    2ms      -    2ms      -  836ns      -      -      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    355      0  45.1M      0    2ms      -    2ms      -  875ns      -      -      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    177      0   711K      0  112us      -  112us      -  930ns      -      -      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    177      0   711K      0  221us      -  221us      -  805ns      -      -      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.85K      0   194M      0  943us      -  943us      -  869ns      -      -      -      -      -
  raidz2-0                                12.3T  29.6T  1.85K      0   194M      0  943us      -  943us      -  869ns      -      -      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    545      0  48.8M      0    1ms      -    1ms      -  842ns      -      -      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -    392      0  10.6M      0  502us      -  501us      -    1us      -      -      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -     60      0  9.27M      0    2ms      -    2ms      -  803ns      -      -      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    213      0  47.6M      0    1ms      -    1ms      -  831ns      -      -      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    177      0  38.4M      0  685us      -  685us      -  869ns      -      -      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    508      0  39.7M      0  285us      -  285us      -  807ns      -      -      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.80K      0   188M      0    1ms      -    1ms      -  873ns      -      -      -      -      -
  raidz2-0                                12.3T  29.6T  1.80K      0   188M      0    1ms      -    1ms      -  873ns      -      -      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    552      0  47.4M      0  328us      -  328us      -  833ns      -      -      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -    547      0  46.2M      0  291us      -  291us      -  955ns      -      -      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -      0      0      0      0      -      -      -      -      -      -      -      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -      4      0  1.20M      0    1ms      -    1ms      -  800ns      -      -      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    183      0  46.0M      0    4ms      -    4ms      -  876ns      -      -      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    551      0  47.4M      0    1ms      -    1ms      -  830ns      -      -      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  4.24K      0   445M      0  480us      -  477us      -  560ns      -      -      -      -      -
  raidz2-0                                12.3T  29.6T  4.24K      0   445M      0  480us      -  477us      -  560ns      -      -      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    597      0  30.0M      0  195us      -  195us      -  613ns      -      -      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -    919      0   111M      0  328us      -  325us      -  634ns      -      -      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    706      0  82.0M      0  297us      -  294us      -  469ns      -      -      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    706      0  82.0M      0  299us      -  297us      -  478ns      -      -      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    818      0   110M      0  874us      -  870us      -  552ns      -      -      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    597      0  30.0M      0  888us      -  887us      -  607ns      -      -      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
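One more data point that could help: a multi-job read variant (a sketch only, with placeholder paths; `--offset_increment` staggers the jobs across the file) would show whether a single posixaio job at iodepth=16 is the limit, rather than the pool itself:

```shell
# Sketch: 4 parallel sequential readers over the same 50G test file,
# each starting 12G apart so they cover different regions.
fio --filename=test --ioengine=posixaio --rw=read --bs=1M \
    --numjobs=4 --iodepth=16 --offset_increment=12G \
    --group_reporting --name=read_test_mj --filesize=50G \
    --direct=1 --runtime=120
```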
Real-world copy (dataset on storage1 -> dataset on the NVMe pool; sequential, one big file):
Code:
admin@tilikum:~$ sudo zpool iostat -ylv storage1 10
                                            capacity     operations     bandwidth     total_wait     disk_wait    syncq_wait   asyncq_wait   scrub   trim
pool                                      alloc   free   read  write   read  write   read  write   read  write   read  write   read  write   wait   wait
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.26K      0   321M      0  102ms      -    5ms      -  224ms      -   71ms      -      -      -
  raidz2-0                                12.3T  29.6T  1.26K      0   321M      0  102ms      -    5ms      -  224ms      -   71ms      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    176      0  44.0M      0  167ms      -   10ms      -  218ms      -   95ms      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -    274      0  68.4M      0  199ms      -    8ms      -  216ms      -  136ms      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    147      0  36.7M      0   64ms      -    5ms      -      -      -   51ms      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    147      0  36.7M      0   51ms      -    5ms      -      -      -   51ms      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    320      0  79.9M      0   46ms      -    3ms      -  201ms      -   42ms      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    222      0  55.5M      0   68ms      -    3ms      -  301ms      -   42ms      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T    960      0   239M      0  237ms      -   12ms      -  283ms      -  194ms      -      -      -
  raidz2-0                                12.3T  29.6T    960      0   239M      0  237ms      -   12ms      -  283ms      -  194ms      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -      0      0  2.80K      0  196us      -  196us      -      -      -    1us      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -      0      0  2.80K      0  196us      -  196us      -      -      -    2us      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    237      0  59.3M      0  220ms      -   12ms      -  301ms      -  188ms      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    238      0  59.5M      0  188ms      -   12ms      -  335ms      -  188ms      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    241      0  60.2M      0  203ms      -   12ms      -  230ms      -  201ms      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    241      0  60.3M      0  337ms      -   12ms      -  301ms      -  201ms      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T    962      0   240M      0  252ms      -   12ms      -  295ms      -  201ms      -      -      -
  raidz2-0                                12.3T  29.6T    962      0   240M      0  252ms      -   12ms      -  295ms      -  201ms      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -      0      0  1.60K      0    6ms      -    6ms      -    1us      -    1us      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -      0      0  1.20K      0  196us      -  196us      -      -      -    2us      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    240      0  60.0M      0  343ms      -   12ms      -  320ms      -  201ms      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    240      0  59.9M      0  207ms      -   12ms      -  308ms      -  201ms      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    240      0  60.0M      0  201ms      -   12ms      -  316ms      -  201ms      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    240      0  60.0M      0  256ms      -   12ms      -  255ms      -  201ms      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T    977      0   244M      0  170ms      -   10ms      -  311ms      -  122ms      -      -      -
  raidz2-0                                12.3T  29.6T    977      0   244M      0  170ms      -   10ms      -  311ms      -  122ms      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -     82      0  20.5M      0    7ms      -    7ms      -      -      -   77us      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -     82      0  20.5M      0    8ms      -    8ms      -      -      -   91us      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    240      0  59.9M      0  328ms      -   12ms      -  301ms      -  201ms      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    163      0  40.8M      0  230ms      -   11ms      -  316ms      -  143ms      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    163      0  40.8M      0  146ms      -   11ms      -  352ms      -  143ms      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    245      0  61.4M      0  102ms      -    7ms      -      -      -   98ms      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.50K      0   384M      0   59ms      -    4ms      -  279ms      -   33ms      -      -      -
  raidz2-0                                12.3T  29.6T  1.50K      0   384M      0   59ms      -    4ms      -  279ms      -   33ms      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -     93      0  23.3M      0    1ms      -    1ms      -      -      -   10us      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -     93      0  23.3M      0    1ms      -    1ms      -      -      -    9us      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    389      0  97.1M      0   56ms      -    4ms      -  281ms      -   30ms      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    290      0  72.3M      0   15ms      -    5ms      -  402ms      -   12ms      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    290      0  72.2M      0   13ms      -    4ms      -  402ms      -   11ms      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    382      0  95.5M      0  157ms      -    6ms      -  216ms      -   86ms      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
storage1                                  12.3T  29.6T  1.20K      0   306M      0   73ms      -    5ms      -  213ms      -   41ms      -      -      -
  raidz2-0                                12.3T  29.6T  1.20K      0   306M      0   73ms      -    5ms      -  213ms      -   41ms      -      -      -
    6a9ce000-78bc-418c-a457-1bf89a91239f      -      -    172      0  43.2M      0  159ms      -    6ms      -  241ms      -   95ms      -      -      -
    93ed437a-1c35-47bf-8c20-783bd0ecc546      -      -     97      0  24.4M      0   15ms      -    2ms      -      -      -   13ms      -      -      -
    f1a88f56-5bee-4d1e-a488-18e36ed56d06      -      -    226      0  56.4M      0   16ms      -    4ms      -  301ms      -    8ms      -      -      -
    ba4a92c4-17ee-4e93-8f61-c357dd574081      -      -    226      0  56.4M      0   16ms      -    4ms      -  402ms      -   11ms      -      -      -
    3619ede7-4622-42c4-8f66-a47b60bb7b40      -      -    210      0  52.4M      0    4ms      -    3ms      -      -      -  387us      -      -      -
    f74a3e48-3389-47a4-b751-64168ed7416f      -      -    291      0  72.9M      0  181ms      -    6ms      -  150ms      -   99ms      -      -      -
----------------------------------------  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----  -----
All tests were done on a dataset with "primarycache=none".
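For completeness, the cache settings can be double-checked per dataset (a sketch; `storage1/test` stands in for the actual dataset name):

```shell
# Verify that reads really bypass the ARC for the benchmark dataset:
zfs get primarycache,secondarycache,recordsize storage1/test
# Watch pool-wide ARC stats while a read test runs
# (hit counts should stay flat if the cache is bypassed):
arcstat 5
```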
For a pool with 6 disks, where each disk has these specs...
- Sequential Read: 2,100 MB/s
- Sequential Write: 2,000 MB/s
- Random Read (QD = 64): 400K IOPS
- Random Write (QD = 64): 70K IOPS
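A rough back-of-envelope (my own arithmetic, assuming streaming raidz2 reads scale with the four data disks and ignoring all overheads) shows how far off the fio result is:

```python
# Assumption: a 6-wide raidz2 leaves 4 data disks for sequential reads.
per_disk_seq_read = 2100      # MB/s, vendor sequential-read spec per disk
data_disks = 6 - 2            # raidz2 parity costs 2 of the 6 disks
theoretical = per_disk_seq_read * data_disks   # optimistic ceiling
observed = 232                # MB/s, from the fio read run above
print(f"ceiling ~{theoretical} MB/s vs observed {observed} MB/s "
      f"(~{theoretical / observed:.0f}x gap)")
```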
Any hints and suggestions?