Donny Davis
Contributor
Joined: Jul 31, 2015
Messages: 139
So in my latest FreeNAS endeavor I am trying to hit a million IOPS.
My machine is as follows
Dell Poweredge T620
2X Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
192GB of DDR3 memory (ECC)
4 Intel DC P3600 1.6TB NVME Drives in a 4 way stripe
So far I am a long way off from 1M IOPS. However, under Linux with mdadm I can hit the number I am looking for, so I am pretty sure it's not the gear.
I am sure FreeNAS and ZFS are quite capable and that I simply have something misconfigured.
It could be my testing method, my configuration, or both. Any pointers on how to push this to the max would be much appreciated.
So far I am at about 100K IOPS using a 4k random-write test with fio:
Code:
fio --filename=test --direct=1 --rw=randrw --randrepeat=0 --rwmixread=0 --iodepth=16 --numjobs=32 --runtime=60 --group_reporting --name=4ktest --size=4G --bs=4k

4ktest: (groupid=0, jobs=32): err= 0: pid=11832: Mon Dec 30 19:45:12 2019
  write: IOPS=102k, BW=397MiB/s (416MB/s)(23.2GiB/60010msec)
    clat (usec): min=13, max=81419, avg=310.83, stdev=620.32
     lat (usec): min=13, max=81420, avg=311.27, stdev=620.54
    clat percentiles (usec):
     |  1.00th=[   37],  5.00th=[   52], 10.00th=[   68], 20.00th=[   90],
     | 30.00th=[  102], 40.00th=[  116], 50.00th=[  137], 60.00th=[  165],
     | 70.00th=[  223], 80.00th=[  371], 90.00th=[  758], 95.00th=[ 1172],
     | 99.00th=[ 2311], 99.50th=[ 2966], 99.90th=[ 5932], 99.95th=[ 9110],
     | 99.99th=[20317]
   bw (  KiB/s): min= 2525, max=30773, per=3.13%, avg=12708.65, stdev=5502.21, samples=3831
   iops        : min=  631, max= 7693, avg=3176.88, stdev=1375.55, samples=3831
  lat (usec)   : 20=0.01%, 50=4.35%, 100=24.07%, 250=44.31%, 500=11.43%
  lat (usec)   : 750=5.63%, 1000=3.67%
  lat (msec)   : 2=5.13%, 4=1.18%, 10=0.18%, 20=0.03%, 50=0.01%
  lat (msec)   : 100=0.01%
  cpu          : usr=1.69%, sys=25.15%, ctx=9471830, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,6094225,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1
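A quick back-of-envelope check (assuming the 4 KiB block size from the command above) shows the reported bandwidth is consistent with the reported IOPS:

```shell
# ~102,000 write IOPS at 4 KiB per IO, converted to MiB/s:
# (102000 IOPS * 4 KiB) / 1024 KiB-per-MiB
echo $((102000 * 4 / 1024))    # prints 398, close to the reported 397 MiB/s
```

So at this block size the stripe is nowhere near bandwidth-limited; the bottleneck is per-IO overhead somewhere in the stack.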
Just for giggles I bumped up the block size so I could see some throughput numbers:
write: IOPS=15.9k, BW=7931MiB/s (8316MB/s)
I guess I can live with 8 GB/s.
Code:
4ktest: (groupid=0, jobs=32): err= 0: pid=12389: Mon Dec 30 19:49:32 2019
  write: IOPS=15.9k, BW=7931MiB/s (8316MB/s)(128GiB/16527msec)
    clat (usec): min=107, max=76720, avg=1901.41, stdev=2120.80
     lat (usec): min=114, max=76824, avg=1993.77, stdev=2127.88
    clat percentiles (usec):
     |  1.00th=[  343],  5.00th=[  668], 10.00th=[  775], 20.00th=[  930],
     | 30.00th=[ 1074], 40.00th=[ 1205], 50.00th=[ 1336], 60.00th=[ 1500],
     | 70.00th=[ 1713], 80.00th=[ 2212], 90.00th=[ 3523], 95.00th=[ 5080],
     | 99.00th=[10421], 99.50th=[14091], 99.90th=[25035], 99.95th=[30278],
     | 99.99th=[42730]
   bw (  KiB/s): min=131334, max=453632, per=3.14%, avg=255016.27, stdev=72851.62, samples=1024
   iops        : min=  256, max=  886, avg=497.70, stdev=142.32, samples=1024
  lat (usec)   : 250=0.67%, 500=1.19%, 750=6.69%, 1000=16.64%
  lat (msec)   : 2=51.99%, 4=14.81%, 10=6.92%, 20=0.89%, 50=0.21%
  lat (msec)   : 100=0.01%
  cpu          : usr=5.57%, sys=45.56%, ctx=1422776, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1
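The block size for this run isn't shown in the output above, but it can be recovered from the issued-write count: 262144 writes moving 128 GiB works out to 512 KiB per IO, so the run was presumably done with --bs=512k (an inference, not stated in the original):

```shell
# 128 GiB total written, expressed in KiB, divided by 262144 issued writes:
echo $((128 * 1024 * 1024 / 262144))    # prints 512, i.e. 512 KiB per IO
```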