I have a 6x2x3TB mirrored zpool (on FreeNAS 11.2U2) that can't seem to get higher than ~600MiB/s on sequential reads. The individual disks are capable of at least 150MiB/s each, and I'm trying to reach 1GiB/s sequential reads, which this setup should be more than capable of. During a scrub, ZFS will read from all the disks at near max speed; it just doesn't when doing normal sequential reads.
The pool is on an R710 (2x E5620 @ 2.40GHz, 60GB RAM) with an LSI 9200-8e connected to a Lenovo SA120 with SATA drives, in case that matters. ashift is 12 for all vdevs, and I've tested 8K, 128K, and 1M recordsizes; 128K seems to do best, but there isn't much difference between them.
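For reference, this is roughly how I've been switching recordsize on the test dataset between runs (the test file has to be rewritten after each change, since recordsize only applies to newly written blocks):
Code:
# recordsize only affects blocks written after the change
$ sudo zfs set recordsize=128K tank0/perf-test
$ sudo zfs get recordsize tank0/perf-test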
Does anyone have an idea what could be limiting me here? It feels suspiciously like I'm running into some global tunable limit, but I have no idea which one it could be.
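If it helps anyone narrow it down, I can post values for the sysctls that look relevant to sequential reads, mostly the prefetch and per-vdev read queue tunables (names as on FreeBSD 11.x; they may differ slightly between versions):
Code:
$ sysctl vfs.zfs.prefetch_disable
$ sysctl vfs.zfs.zfetch.max_distance
$ sysctl vfs.zfs.vdev.sync_read_min_active vfs.zfs.vdev.sync_read_max_active
$ sysctl vfs.zfs.vdev.async_read_min_active vfs.zfs.vdev.async_read_max_active
$ sysctl vfs.zfs.vdev.aggregation_limit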
example raw disk read (~180MB/s):
Code:
$ sudo dd if=/dev/da2p2 of=/dev/null bs=128k count=80k
81920+0 records in
81920+0 records out
10737418240 bytes transferred in 58.288367 secs (184212028 bytes/sec)
example zvol read (~510MB/s):
Code:
$ sudo dd if=/dev/zvol/tank0/perf-test/test-zvol of=/dev/null bs=128k count=80k
81920+0 records in
81920+0 records out
10737418240 bytes transferred in 20.745362 secs (517581627 bytes/sec)
example file read (~600MB/s):
Code:
$ sudo dd if=/mnt/tank0/perf-test/test.file of=/dev/null bs=128k count=80k
81920+0 records in
81920+0 records out
10737418240 bytes transferred in 17.677490 secs (607406268 bytes/sec)
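I can also rerun this with two readers in parallel to rule out a single dd stream being the limit; something like the below, where test2.file would be a second, separately written 10G file:
Code:
# hypothetical follow-up test: two concurrent readers on separate files
$ sudo dd if=/mnt/tank0/perf-test/test.file of=/dev/null bs=128k count=40k &
$ sudo dd if=/mnt/tank0/perf-test/test2.file of=/dev/null bs=128k count=40k &
$ wait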
zpool layout:
Code:
$ zpool list -v tank0
NAME                                            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank0                                          16.3T  2.56T  13.8T        -         -     0%    15%  1.00x  ONLINE  /mnt
  mirror                                       2.72T   445G  2.28T        -         -     0%    15%
    gptid/e4e7e6b6-4a53-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
    gptid/05e02b8c-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
  mirror                                       2.72T   446G  2.28T        -         -     0%    16%
    gptid/0b979830-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
    gptid/0fb474a8-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
  mirror                                       2.72T   448G  2.28T        -         -     0%    16%
    gptid/13c3954d-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
    gptid/17febc9f-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
  mirror                                       2.72T   451G  2.28T        -         -     0%    16%
    gptid/1c04408e-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
    gptid/20648952-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
  mirror                                       2.72T   407G  2.32T        -         -     0%    14%
    gptid/249ad9d2-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
    gptid/2fdfb369-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
  mirror                                       2.72T   420G  2.31T        -         -     0%    15%
    gptid/3fd5c7be-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
    gptid/477aed6d-4a54-11e9-83d1-842b2b066720      -      -      -        -         -      -      -
log                                                -      -      -        -         -      -
  mirror                                       9.50G   180K  9.50G        -         -     0%     0%
    gptid/68b6b008-fbd1-11e6-aa96-782bcb779bf8      -      -      -        -         -      -      -
    gptid/68605033-fbd1-11e6-aa96-782bcb779bf8      -      -      -        -         -      -      -
zpool iostat during reads:
Code:
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank0       2.56T  13.8T  33.4K      0   531M      0
iostat during reads:
Code:
device     r/s   w/s     kr/s   kw/s  ms/r  ms/w  ms/o  ms/t  qlen  %b
da2       2028     0  59182.1    0.0     0     0     0     0     0  29
da3       1903     0  61723.3    0.0     0     0     0     0     0  28
da4       1636     0  44180.6    0.0     0     0     0     0     0  25
da5       1726     0  44053.0    0.0     0     0     0     0     0  15
da6       1913     0  57406.3    0.0     0     0     0     0     2  17
da7       1542     0  44386.6    0.0     0     0     0     0     2  18
da8       1869     0  49341.3    0.0     0     0     0     0     0  21
da9       1766     0  53491.6    0.0     0     0     0     0     0  32
da10      1905     0  60173.1    0.0     0     0     0     0     0  28
da11      1064     0  45770.0    0.0     2     0     0     2     0  60
da12      1223     0  46780.6    0.0     0     0     0     0     3  33
da13      1716     0  51019.1    0.0     0     0     0     0     0  26
iostat during raw simultaneous dd from all drives:
Code:
device     r/s   w/s      kr/s   kw/s  ms/r  ms/w  ms/o  ms/t  qlen   %b
da2       1125     0  144088.9    0.0     0     0     0     0     1   86
da3       1125     0  144088.9    0.0     0     0     0     0     1   89
da4       1500     0  192118.5    0.0     0     0     0     0     1  105
da5       1313     0  168103.7    0.0     0     0     0     0     1  100
da6       1313     0  168103.7    0.0     0     0     0     0     1   99
da7       1313     0  168103.7    0.0     0     0     0     0     1  100
da8       1313     0  168103.7    0.0     0     0     0     0     1   91
da9       1313     0  168103.7    0.0     0     0     0     0     1   95
da10      1500     0  192118.5    0.0     0     0     0     0     1  109
da11       938     0  120074.1    0.0     1     0     0     1     0   99
da12      1125     0  144088.9    0.0     0     0     0     0     1  105
da13      1125     0  144088.9    0.0     0     0     0     0     1   87
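(For reference, the "raw simultaneous dd" above was roughly one dd per pool disk running in parallel; a sketch of the loop, assuming sh and the same p2 partitions as the single-disk example:)
Code:
# one raw reader per pool disk, all at once
for d in da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13; do
    sudo dd if=/dev/${d}p2 of=/dev/null bs=128k count=40k &
done
wait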
CPU utilization/load during tests:
Code:
Avg:  0.5%  sy:  8.3%  ni:  0.0%  hi:  0.2%  si:  0.0%  wa:  0.0%
Load average: 2.36 1.58 1.13