TrueNAS Scale not using full 10Gb speed of Intel NIC

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
Hello,

I've recently updated to TrueNAS Scale (v22.12.0) and switched to an Intel Ethernet 10G 2P X540-t adapter. However, it doesn't seem that TrueNAS is using the full bandwidth. I understand those speeds are theoretical, especially with a gigabit network backend, but it appears I'm only getting megabit speeds.

I've noticed it particularly with SMB when transferring files between my 2020 Mac Mini and the server. In most cases, even a file as small as 2GB can take almost 2 minutes to transfer, or it may even disconnect the SMB share from my Mac altogether. The problem was especially noticeable when I tried to move my 200GB photo library to PhotoPrism: the estimated time kept counting up, and only 20GB to 30GB would transfer before the SMB share disconnected from my Mac.

What makes me believe it's throttled to megabit speeds is that files appear to move at about 0.01GB per second. Maybe this slow speed is also what causes the server to disconnect. Anyway, just a thought.
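To put rough numbers on that (decimal units, quick awk arithmetic):

```shell
# 2GB taking ~2 minutes, expressed as a line rate:
awk 'BEGIN { printf "2GB in 120s: %.3f Gbit/s\n", 2 * 8 / 120 }'
# 0.01GB per second, expressed as a line rate:
awk 'BEGIN { printf "0.01GB/s:    %.2f Gbit/s\n", 0.01 * 8 }'
```

Either way, that's closer to Fast Ethernet territory than to 10GbE.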

So, is there any way to confirm TrueNAS Scale is pulling the right drivers, and if not, how can I update them? Intel does have these drivers readily available. Alternatively, could there be a problem with the SMB share, and if so, what can I do to optimize it?
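For what it's worth, here's the sort of check I mean, sketched with a placeholder interface name (list the real names first with `ip link`):

```shell
# Placeholder interface name; substitute one reported by `ip link`.
IFACE=enp2s0f0
# Which kernel module claimed each Ethernet device (the X540 should show ixgbe):
command -v lspci >/dev/null && lspci -nnk | grep -i -A3 ethernet || true
# Then, per port:
#   ethtool -i "$IFACE"               # driver name, version, firmware
#   ethtool "$IFACE" | grep -i speed  # negotiated link rate (want 10000Mb/s)
```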

Any suggestions would be greatly appreciated!

Additional notes: previously, this was not a problem when running TrueNAS Core, although I did have to edit a tunable for TrueNAS Core to see the Realtek NIC built into the motherboard. Once I updated to the latest version of TrueNAS Core, that fix broke. That's one of the reasons I figured I'd go ahead and hop on the TrueNAS Scale train. And by switching to an Intel NIC, I was hoping not to run into these problems, but here I am again.


Hardware:

Local NAS:
  • TrueNAS Version: TrueNAS-SCALE-22.12.0
  • Motherboard: MSI Z490-A Pro
  • Memory: 4x PNY - XLR8 8GB, 3000MHz DDR4 DIMM (Total 32GB)
  • CPU: Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz
  • NIC: Intel Ethernet 10G 2P X540-t
  • Pool Configuration:
    • "MAINpool": 2 VDEVs, each with 2x WD Red Plus 4TB (Total: 16TB raw, 8TB usable)
    • "Backup-Drive_Local": 1 VDEV with attached WD - easystore 8TB External USB 3.0 Hard Drive (Used for backup only)
 
Last edited:

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
IIRC, SMB with Mac devices uses sync writes, so that could be (at least part of) the issue.
Please post your full hardware specs as per forum rules.
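One way to check the sync-writes angle (dataset name below is only an example):

```shell
# Inspect the ZFS sync property on the shared dataset. sync=standard honors
# the client's sync requests; sync=always treats every write as synchronous,
# which is slow on HDDs without a SLOG. Dataset name is an example only.
DATASET=MAINpool/share
command -v zfs >/dev/null && zfs get sync "$DATASET" || true
```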
 

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
Thank you for your responses!

@Daisuke How do I retrieve the fio?

@Davvo I apologize for omitting my hardware specs. I've added them to the original post.
 
Last edited:

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
The lspci output shows that the system does see the 10Gb Intel controller, and it appears to be pulling the correct drivers.
Code:
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
02:00.1 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)


But I can't seem to get iperf3 or fio to work. iperf3 is giving me "iperf3: error - unable to send control message: Bad file descriptor". Any ideas?
 
Last edited:

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
How do I retrieve the fio?

In my test example, /Volumes/default/opt is the SMB share mount on my Mac Pro and /mnt/default/opt/backup is the Scale pool dataset. The --blocksize parameter should match the recordsize setting of the pool dataset where the SMB share is mounted. Make sure you read the Pools and Datasets and SMB Shares sections from this thread to understand the importance of this setting for your pool datasets, as well as how to configure your SMB share properly.
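To read that value straight off the dataset (dataset path is illustrative):

```shell
# fio's --blocksize should match this property; dataset path is illustrative.
DS=default/opt
command -v zfs >/dev/null && zfs get recordsize "$DS" || true
```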



First, test the pool read performance directly on your Scale server, my results on raidz2 default pool (12x HGST Ultrastar He8 Helium HUH728080ALE601 8TB disks):
Code:
$ pwd
/mnt/default/opt/backup
$ fio --name=seq-read --rw=read --ioengine=posixaio \
    --blocksize=128k --iodepth=64 --runtime=300 --size=4g \
    --direct=1 --invalidate=1 --time_based --numjobs="$(nproc)"
...
Run status group 0 (all jobs):
   READ: bw=12.5GiB/s (13.4GB/s), 1036MiB/s-1082MiB/s (1087MB/s-1135MB/s), io=3748GiB (4024GB), run=300002-300007msec

Install the required testing tools on your Mac:
Code:
$ brew install ioping fio

Next, test the actual data transfer speed through your SMB share mount; my results on a 1G network, with transfer speed capped at 117MB/s:
Code:
$ pwd
/Volumes/default/opt/backup
$ fio --name=seq-read --rw=read --ioengine=posixaio \
    --blocksize=128k --iodepth=64 --runtime=300 --size=4g \
    --direct=1 --invalidate=1 --time_based --numjobs="$(sysctl -n hw.physicalcpu)"
...
Run status group 0 (all jobs):
   READ: bw=112MiB/s (117MB/s), 1327KiB/s-19.6MiB/s (1359kB/s-20.5MB/s), io=32.8GiB (35.2GB), run=300026-300098msec

Finally, test the disk I/O latency from your Mac to the Scale server with ioping; my results (the warmup request is always ignored, for accuracy):
Code:
$ ioping -c 5 .
4 KiB <<< . (smbfs //user@uranus._smb._tcp.local/default 51.8 TiB): request=1 time=847 us (warmup)
4 KiB <<< . (smbfs //user@uranus._smb._tcp.local/default 51.8 TiB): request=2 time=1.10 ms
4 KiB <<< . (smbfs //user@uranus._smb._tcp.local/default 51.8 TiB): request=3 time=1.27 ms
4 KiB <<< . (smbfs //user@uranus._smb._tcp.local/default 51.8 TiB): request=4 time=1.23 ms
4 KiB <<< . (smbfs //user@uranus._smb._tcp.local/default 51.8 TiB): request=5 time=1.36 ms

--- . (smbfs //user@uranus._smb._tcp.local/default 51.8 TiB) ioping statistics ---
4 requests completed in 4.96 ms, 16 KiB read, 807 iops, 3.15 MiB/s
generated 5 requests in 4.01 s, 20 KiB, 1 iops, 4.99 KiB/s
min/avg/max/mdev = 1.10 ms / 1.24 ms / 1.36 ms / 94.2 us
 
Last edited:

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
@Daisuke Thank you for the helpful directions. Below are the results:

Server —> Server
Code:
Jobs: 12 (f=12): [R(12)][65.3%][r=188MiB/s][r=1502 IOPS][eta 01m:44s] 


Mac —> Server
Code:
READ: bw=110MiB/s (116MB/s), 4813KiB/s-19.3MiB/s (4929kB/s-20.2MB/s), io=32.3GiB (34.7GB), run=300029-300095msec


Disk I/O Latency
Code:
--- . (smbfs //rshepher@truenas._smb._tcp.local/Test 5.12 TiB) ioping statistics ---
4 requests completed in 3.35 ms, 16 KiB read, 1.20 k iops, 4.67 MiB/s
generated 5 requests in 4.00 s, 20 KiB, 1 iops, 4.99 KiB/s
min/avg/max/mdev = 801 us / 836.5 us / 870 us / 26.2 us



Also, I wasn’t able to find the option to specify the record size shown in your screenshots.
 
Last edited:

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
You have your answer: your disk speed is slow as molasses. You cannot get 1G transfer speeds if your disks are providing 200M. Also, you should format your [CODE]code[/CODE] like everyone else on these forums; it's hard to read what you posted.

The recordsize is part of the advanced settings. If you don’t find a setting, look at what else is available on that page. Please make an effort to search instead of asking trivial questions you should already know the answer to. Normally, in cases like that, you will just be ignored instead of being given an answer.
 
Last edited:

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
@Daisuke
WOW! Other than adjusting the record size, are there any other recommendations for optimizing disk speed, especially on TrueNAS Scale? I just upgraded from TrueNAS Core, where I wasn’t having any speed issues, and I even just replaced two drives. So, what else could be the bottleneck? I also understand that TrueNAS Scale is new and a little slower, but this seems a bit extreme.

Thanks!
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
You can open a support ticket and provide the info. Once you troubleshoot with iXsystems, you can decide if you want to roll back to Core or apply a suggested fix. Edit: I talked to @anodos, and Jira tickets are for OS-related bugs, which is not the case for you.
I just upgraded from TrueNAS Core
Do a Scale clean install instead.
 
Last edited:

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
Doing a clean install doesn’t sound like a bad idea if I can reimport my pools and rebuild the apps I just finished configuring. Right now, I think I’m going to start by filing a ticket with iXsystems and see what they recommend. Thanks for all the help!
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
Right now, I think I’m going to start by filing a ticket with iXsystems and see what they recommend.
I talked to @anodos, and Jira tickets are for OS-related bugs, which is not the case for you. I'm going to edit that into my previous post.
if I can reimport my pools
Yes, you can. I was able to re-import the pools without any issues after I performed a Scale clean install. I was already on an upgraded Scale Bluefin 22.12.0, but I found several upgrade issues that do not occur with a clean install. I hope your install disk is an SSD; make sure you create the 16G swap partition, if asked.
 
Last edited:

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
@Daisuke, I'm happy to report that I have successfully reinstalled and reconfigured TrueNAS Scale. So far, it seems the issue is fixed. However, the SMB share is still unstable when transferring large files. For instance, when I try to transfer my 200+GB photo library to the server, the SMB share keeps disconnecting, and I end up having to reconnect, eject, reconnect again, and restart the transfer. In the meantime, I've split the catalog up to transfer around 20GB at a time, which seems to work but isn't preferable.

That being said, I also ran the benchmarks again; for reference, below are those results:

Server —> Server
Code:
Jobs: 12 (f=12): [R(12)][22.7%][r=188MiB/s][r=1507 IOPS][eta 03m:52s]


Mac —> Server
Code:
READ: bw=92.7MiB/s (97.2MB/s), 2124KiB/s-16.2MiB/s (2175kB/s-17.0MB/s), io=27.2GiB (29.2GB), run=300029-300101msec


Disk I/O Latency
Code:
--- . (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.10 TiB) ioping statistics ---
4 requests completed in 3.70 ms, 16 KiB read, 1.08 k iops, 4.22 MiB/s
generated 5 requests in 4.00 s, 20 KiB, 1 iops, 5.00 KiB/s
min/avg/max/mdev = 715 us / 925.2 us / 1.38 ms / 266.6 us
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
Your server test does not show the correct data; this is my result on the server:
Code:
$ pwd
/mnt/default/opt/backup
$ fio --name=seq-read --rw=read --ioengine=posixaio \
    --blocksize=128k --iodepth=64 --runtime=300 --size=4g \
    --direct=1 --invalidate=1 --time_based --numjobs="$(nproc)"
...
Run status group 0 (all jobs):
   READ: bw=12.5GiB/s (13.4GB/s), 1036MiB/s-1082MiB/s (1087MB/s-1135MB/s), io=3748GiB (4024GB), run=300002-300007msec

What's the directory where you ran the test on the server, and what is your dataset recordsize? I asked you to post your results using the above format, and you again posted incomplete info. What router and switch do you use?
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Could this be an issue with the Realtek NIC(s)?
Both machines use Realtek.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
There have been some general statements from iXsystems in the past that, from a performance perspective, Scale is not yet where Core is. So this may be such a case.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
OK, so what I'm seeing here...

Intel 10Gbit NIC on the NAS.

Slow transfer confirmed at least once from a client to be around 1Gbit speed.

Efforts need to focus first on proving that we're actually linking and communicating (or capable of it) between one or more clients and the NAS at 10Gbits.

Sounds to me like the iperf3 efforts need to be brought back...
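For example, assuming 192.168.1.10 stands in for the NAS address:

```shell
# Raw network throughput test, independent of disks and SMB.
NAS_IP=192.168.1.10   # placeholder; use your server's actual address
# On the NAS shell:
#   iperf3 -s
# On the Mac (`brew install iperf3`):
#   iperf3 -c "$NAS_IP" -P 4 -t 30
# A clean 10GbE path should report multiple Gbit/s; ~940 Mbit/s means a
# 1GbE link or switch port sits somewhere in between.
```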
 

Ralphshep

Dabbler
Joined
Apr 28, 2020
Messages
45
Hello,

Over the past few days, I've reinstalled TrueNAS Scale, manually reconfigured all of my settings, increased the recordsize to 1M, and enabled "Apple SMB 2/3 Protocol Extensions." All of these efforts have increased the speed of my SMB link tremendously. Below are the results from another round of fio and ioping benchmarks.

Even so, I'm still finding the connection unstable. Often, when transferring large batches of files, I'm stopped by error 42, which appears to happen because the share disconnects from macOS. However, when I attempt to reconnect, Finder shows the drive as still mounted, and it still shows up in the Volumes folder when navigating the file system in Terminal. Trying to open the folder in Finder gives me an error that the server can't be reached, and connecting via its IP or hostname in the "Connect to Server" window gives the same error. At the same time, the eject button next to the share doesn't appear to do anything either. To bypass this, I either have to wait (typically overnight) for everything to time out or restart my computer, both of which are major inconveniences. Any thoughts on what could be causing this? Thanks!

Code:
root@truenas[/mnt/MAINpool/PhotoLibrary]# fio --name=seq-read --rw=read --ioengine=posixaio \
    --blocksize=128k --iodepth=64 --runtime=300 --size=4g \
    --direct=1 --invalidate=1 --time_based --numjobs="$(nproc)"
seq-read: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=64
...
fio-3.25
Starting 12 processes
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
seq-read: Laying out IO file (1 file / 4096MiB)
Jobs: 12 (f=12): [R(12)][100.0%][r=232MiB/s][r=1856 IOPS][eta 00m:00s]
seq-read: (groupid=0, jobs=1): err= 0: pid=1521691: Sun Jan 8 18:13:41 2023
read: IOPS=146, BW=18.3MiB/s (19.2MB/s)(5496MiB/300169msec)
slat (nsec): min=72, max=128869, avg=193.06, stdev=689.02
clat (usec): min=1700, max=3043.3k, avg=436911.43, stdev=541126.47
lat (usec): min=1700, max=3043.3k, avg=436911.62, stdev=541126.48
clat percentiles (usec):
| 1.00th=[ 1745], 5.00th=[ 1827], 10.00th=[ 1909],
| 20.00th=[ 10028], 30.00th=[ 63701], 40.00th=[ 131597],
| 50.00th=[ 227541], 60.00th=[ 367002], 70.00th=[ 534774],
| 80.00th=[ 759170], 90.00th=[1199571], 95.00th=[1635779],
| 99.00th=[2298479], 99.50th=[2466251], 99.90th=[3036677],
| 99.95th=[3036677], 99.99th=[3036677]
bw ( KiB/s): min=16384, max=147456, per=13.03%, avg=33253.12, stdev=24857.12, samples=338
iops : min= 128, max= 1152, avg=259.79, stdev=194.20, samples=338
lat (msec) : 2=13.80%, 4=5.70%, 10=0.46%, 20=2.60%, 50=5.68%
lat (msec) : 100=6.70%, 250=16.89%, 500=16.59%, 750=11.50%, 1000=5.97%
lat (msec) : 2000=11.64%, >=2000=2.47%
cpu : usr=0.01%, sys=0.00%, ctx=2722, majf=6, minf=52
IO depths : 1=0.1%, 2=0.1%, 4=1.1%, 8=12.5%, 16=25.0%, 32=52.9%, >=64=8.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=43968,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521692: Sun Jan 8 18:13:41 2023
read: IOPS=142, BW=17.8MiB/s (18.6MB/s)(5336MiB/300359msec)
slat (nsec): min=71, max=104374, avg=186.50, stdev=574.50
clat (usec): min=1364, max=3257.8k, avg=450296.80, stdev=512479.37
lat (usec): min=1364, max=3257.8k, avg=450296.98, stdev=512479.38
clat percentiles (usec):
| 1.00th=[ 1745], 5.00th=[ 1844], 10.00th=[ 1926],
| 20.00th=[ 48497], 30.00th=[ 109577], 40.00th=[ 177210],
| 50.00th=[ 250610], 60.00th=[ 383779], 70.00th=[ 566232],
| 80.00th=[ 817890], 90.00th=[1149240], 95.00th=[1501561],
| 99.00th=[2197816], 99.50th=[2734687], 99.90th=[3271558],
| 99.95th=[3271558], 99.99th=[3271558]
bw ( KiB/s): min=16384, max=212992, per=11.95%, avg=30480.09, stdev=22827.64, samples=358
iops : min= 128, max= 1664, avg=238.12, stdev=178.34, samples=358
lat (msec) : 2=11.25%, 4=2.39%, 10=1.50%, 20=2.10%, 50=2.85%
lat (msec) : 100=7.95%, 250=21.44%, 500=18.14%, 750=9.90%, 1000=8.85%
lat (msec) : 2000=12.14%, >=2000=1.50%
cpu : usr=0.01%, sys=0.00%, ctx=2559, majf=2, minf=50
IO depths : 1=0.1%, 2=0.1%, 4=1.2%, 8=12.5%, 16=25.0%, 32=53.0%, >=64=8.3%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=42688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521693: Sun Jan 8 18:13:41 2023
read: IOPS=108, BW=13.6MiB/s (14.3MB/s)(4088MiB/300173msec)
slat (nsec): min=73, max=107438, avg=181.44, stdev=720.59
clat (usec): min=1322, max=3953.3k, avg=587406.22, stdev=672173.48
lat (usec): min=1322, max=3953.3k, avg=587406.40, stdev=672173.48
clat percentiles (usec):
| 1.00th=[ 1680], 5.00th=[ 1827], 10.00th=[ 1942],
| 20.00th=[ 44303], 30.00th=[ 124257], 40.00th=[ 244319],
| 50.00th=[ 367002], 60.00th=[ 530580], 70.00th=[ 742392],
| 80.00th=[ 994051], 90.00th=[1501561], 95.00th=[2055209],
| 99.00th=[2902459], 99.50th=[3405775], 99.90th=[3942646],
| 99.95th=[3942646], 99.99th=[3942646]
bw ( KiB/s): min=16384, max=163840, per=10.56%, avg=26955.05, stdev=21192.15, samples=310
iops : min= 128, max= 1280, avg=210.58, stdev=165.56, samples=310
lat (msec) : 2=11.12%, 4=3.56%, 10=0.98%, 20=1.17%, 50=4.50%
lat (msec) : 100=6.07%, 250=12.72%, 500=19.18%, 750=10.96%, 1000=9.98%
lat (msec) : 2000=14.68%, >=2000=5.09%
cpu : usr=0.01%, sys=0.00%, ctx=2188, majf=1, minf=54
IO depths : 1=0.1%, 2=0.1%, 4=0.9%, 8=12.5%, 16=25.0%, 32=52.7%, >=64=8.9%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.6%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=32704,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521694: Sun Jan 8 18:13:41 2023
read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(6698MiB/300405msec)
slat (nsec): min=72, max=517443, avg=216.98, stdev=2328.93
clat (usec): min=442, max=2181.9k, avg=358773.06, stdev=384287.00
lat (usec): min=442, max=2181.9k, avg=358773.28, stdev=384287.00
clat percentiles (usec):
| 1.00th=[ 1745], 5.00th=[ 1827], 10.00th=[ 1909],
| 20.00th=[ 16188], 30.00th=[ 90702], 40.00th=[ 168821],
| 50.00th=[ 254804], 60.00th=[ 346031], 70.00th=[ 467665],
| 80.00th=[ 599786], 90.00th=[ 893387], 95.00th=[1132463],
| 99.00th=[1719665], 99.50th=[1887437], 99.90th=[2197816],
| 99.95th=[2197816], 99.99th=[2197816]
bw ( KiB/s): min= 1792, max=163840, per=11.57%, avg=29529.71, stdev=26422.34, samples=464
iops : min= 14, max= 1280, avg=230.70, stdev=206.42, samples=464
lat (usec) : 500=0.01%
lat (msec) : 2=13.15%, 4=5.15%, 10=0.81%, 20=1.42%, 50=4.24%
lat (msec) : 100=6.74%, 250=18.02%, 500=22.93%, 750=13.85%, 1000=5.85%
lat (msec) : 2000=7.35%, >=2000=0.48%
cpu : usr=0.02%, sys=0.00%, ctx=3264, majf=2, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=6.7%, 16=22.6%, 32=61.4%, >=64=8.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.2%, 8=1.3%, 16=0.1%, 32=0.1%, 64=1.4%, >=64=0.0%
issued rwts: total=53584,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521695: Sun Jan 8 18:13:41 2023
read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(7776MiB/300147msec)
slat (nsec): min=72, max=106747, avg=195.29, stdev=509.55
clat (usec): min=1700, max=1933.4k, avg=308777.12, stdev=314491.13
lat (usec): min=1700, max=1933.4k, avg=308777.31, stdev=314491.13
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1926], 10.00th=[ 5211],
| 20.00th=[ 56361], 30.00th=[ 102237], 40.00th=[ 158335],
| 50.00th=[ 212861], 60.00th=[ 274727], 70.00th=[ 367002],
| 80.00th=[ 522191], 90.00th=[ 750781], 95.00th=[ 977273],
| 99.00th=[1367344], 99.50th=[1568670], 99.90th=[1937769],
| 99.95th=[1937769], 99.99th=[1937769]
bw ( KiB/s): min=16384, max=212992, per=13.10%, avg=33423.36, stdev=24732.34, samples=476
iops : min= 128, max= 1664, avg=261.11, stdev=193.22, samples=476
lat (msec) : 2=6.83%, 4=3.15%, 10=1.44%, 20=1.34%, 50=5.97%
lat (msec) : 100=10.60%, 250=27.06%, 500=22.84%, 750=10.91%, 1000=5.35%
lat (msec) : 2000=4.53%
cpu : usr=0.02%, sys=0.00%, ctx=3676, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=1.2%, 8=12.5%, 16=25.0%, 32=53.1%, >=64=8.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=62208,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521696: Sun Jan 8 18:13:41 2023
read: IOPS=185, BW=23.2MiB/s (24.3MB/s)(6960MiB/300336msec)
slat (nsec): min=75, max=190210, avg=218.30, stdev=881.38
clat (usec): min=689, max=2831.4k, avg=345160.08, stdev=427359.08
lat (usec): min=690, max=2831.4k, avg=345160.30, stdev=427359.09
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1844], 10.00th=[ 1926],
| 20.00th=[ 27919], 30.00th=[ 77071], 40.00th=[ 127402],
| 50.00th=[ 187696], 60.00th=[ 274727], 70.00th=[ 387974],
| 80.00th=[ 574620], 90.00th=[ 910164], 95.00th=[1216349],
| 99.00th=[1988101], 99.50th=[2298479], 99.90th=[2701132],
| 99.95th=[2801796], 99.99th=[2835350]
bw ( KiB/s): min= 5632, max=163840, per=12.46%, avg=31781.08, stdev=26717.60, samples=448
iops : min= 44, max= 1280, avg=248.29, stdev=208.73, samples=448
lat (usec) : 750=0.01%
lat (msec) : 2=12.15%, 4=3.10%, 10=1.82%, 20=1.57%, 50=5.87%
lat (msec) : 100=10.67%, 250=22.57%, 500=18.05%, 750=9.60%, 1000=6.30%
lat (msec) : 2000=7.34%, >=2000=0.94%
cpu : usr=0.02%, sys=0.00%, ctx=3211, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=4.7%, 16=14.8%, 32=71.5%, >=64=8.6%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.6%, 8=0.1%, 16=0.0%, 32=0.9%, 64=1.4%, >=64=0.0%
issued rwts: total=55680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521697: Sun Jan 8 18:13:41 2023
read: IOPS=105, BW=13.2MiB/s (13.9MB/s)(3968MiB/300070msec)
slat (nsec): min=71, max=205667, avg=192.86, stdev=1179.69
clat (usec): min=1716, max=3657.3k, avg=604962.11, stdev=715788.06
lat (usec): min=1716, max=3657.3k, avg=604962.30, stdev=715788.07
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 1876], 10.00th=[ 1958],
| 20.00th=[ 41681], 30.00th=[ 124257], 40.00th=[ 212861],
| 50.00th=[ 346031], 60.00th=[ 471860], 70.00th=[ 734004],
| 80.00th=[1069548], 90.00th=[1585447], 95.00th=[2298479],
| 99.00th=[3238003], 99.50th=[3338666], 99.90th=[3640656],
| 99.95th=[3640656], 99.99th=[3640656]
bw ( KiB/s): min=16384, max=196608, per=11.04%, avg=28160.22, stdev=23851.02, samples=288
iops : min= 128, max= 1536, avg=220.00, stdev=186.34, samples=288
lat (msec) : 2=11.03%, 4=3.49%, 10=0.20%, 20=0.60%, 50=5.65%
lat (msec) : 100=6.05%, 250=15.52%, 500=18.15%, 750=10.69%, 1000=7.46%
lat (msec) : 2000=14.72%, >=2000=6.45%
cpu : usr=0.01%, sys=0.00%, ctx=1984, majf=8, minf=46
IO depths : 1=0.1%, 2=0.1%, 4=1.0%, 8=12.5%, 16=25.0%, 32=53.0%, >=64=8.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=31744,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521698: Sun Jan 8 18:13:41 2023
read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(6456MiB/300515msec)
slat (nsec): min=67, max=110741, avg=179.93, stdev=560.99
clat (usec): min=1745, max=2368.1k, avg=372370.14, stdev=353608.69
lat (usec): min=1745, max=2368.1k, avg=372370.32, stdev=353608.69
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 1958], 10.00th=[ 27657],
| 20.00th=[ 91751], 30.00th=[ 152044], 40.00th=[ 208667],
| 50.00th=[ 270533], 60.00th=[ 346031], 70.00th=[ 446694],
| 80.00th=[ 608175], 90.00th=[ 851444], 95.00th=[1098908],
| 99.00th=[1551893], 99.50th=[1769997], 99.90th=[2365588],
| 99.95th=[2365588], 99.99th=[2365588]
bw ( KiB/s): min=16384, max=163840, per=11.74%, avg=29945.49, stdev=20620.43, samples=441
iops : min= 128, max= 1280, avg=233.94, stdev=161.10, samples=441
lat (msec) : 2=5.66%, 4=1.53%, 10=0.74%, 20=0.87%, 50=4.83%
lat (msec) : 100=7.19%, 250=26.27%, 500=27.26%, 750=12.39%, 1000=6.32%
lat (msec) : 2000=6.57%, >=2000=0.37%
cpu : usr=0.01%, sys=0.00%, ctx=3227, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=1.2%, 8=12.5%, 16=25.0%, 32=52.8%, >=64=8.5%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=51648,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521700: Sun Jan 8 18:13:41 2023
read: IOPS=191, BW=23.9MiB/s (25.0MB/s)(7176MiB/300406msec)
slat (nsec): min=72, max=91945, avg=192.07, stdev=516.01
clat (usec): min=1345, max=2019.8k, avg=334885.23, stdev=327138.08
lat (usec): min=1346, max=2019.8k, avg=334885.42, stdev=327138.08
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1942], 10.00th=[ 18220],
| 20.00th=[ 73925], 30.00th=[ 125305], 40.00th=[ 177210],
| 50.00th=[ 233833], 60.00th=[ 304088], 70.00th=[ 392168],
| 80.00th=[ 557843], 90.00th=[ 775947], 95.00th=[1010828],
| 99.00th=[1468007], 99.50th=[1686111], 99.90th=[2021655],
| 99.95th=[2021655], 99.99th=[2021655]
bw ( KiB/s): min=16384, max=147456, per=12.40%, avg=31638.91, stdev=20429.08, samples=464
iops : min= 128, max= 1152, avg=247.18, stdev=159.60, samples=464
lat (msec) : 2=5.73%, 4=2.41%, 10=0.56%, 20=1.45%, 50=4.46%
lat (msec) : 100=10.37%, 250=26.98%, 500=23.97%, 750=13.15%, 1000=5.57%
lat (msec) : 2000=5.25%, >=2000=0.11%
cpu : usr=0.02%, sys=0.00%, ctx=3724, majf=1, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=0.8%, 8=12.5%, 16=25.0%, 32=52.9%, >=64=8.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=57408,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521701: Sun Jan 8 18:13:41 2023
read: IOPS=192, BW=24.0MiB/s (25.2MB/s)(7216MiB/300078msec)
slat (nsec): min=73, max=97429, avg=183.19, stdev=481.36
clat (usec): min=1721, max=2104.0k, avg=332665.27, stdev=299257.51
lat (usec): min=1721, max=2104.0k, avg=332665.45, stdev=299257.52
clat percentiles (usec):
| 1.00th=[ 1795], 5.00th=[ 3458], 10.00th=[ 49546],
| 20.00th=[ 109577], 30.00th=[ 147850], 40.00th=[ 198181],
| 50.00th=[ 246416], 60.00th=[ 295699], 70.00th=[ 383779],
| 80.00th=[ 517997], 90.00th=[ 759170], 95.00th=[ 968885],
| 99.00th=[1333789], 99.50th=[1417675], 99.90th=[2105541],
| 99.95th=[2105541], 99.99th=[2105541]
bw ( KiB/s): min=16384, max=98304, per=11.98%, avg=30563.92, stdev=16989.22, samples=483
iops : min= 128, max= 768, avg=238.77, stdev=132.73, samples=483
lat (msec) : 2=3.75%, 4=1.35%, 10=1.11%, 20=0.44%, 50=3.44%
lat (msec) : 100=8.54%, 250=31.71%, 500=28.27%, 750=10.87%, 1000=6.10%
lat (msec) : 2000=4.32%, >=2000=0.11%
cpu : usr=0.02%, sys=0.00%, ctx=3671, majf=2, minf=49
IO depths : 1=0.1%, 2=0.1%, 4=1.0%, 8=12.5%, 16=25.0%, 32=52.9%, >=64=8.6%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=57728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521702: Sun Jan 8 18:13:41 2023
read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(7272MiB/300044msec)
slat (nsec): min=72, max=120810, avg=186.03, stdev=553.36
clat (usec): min=1359, max=2476.5k, avg=330065.12, stdev=319080.96
lat (usec): min=1359, max=2476.5k, avg=330065.30, stdev=319080.96
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 2040], 10.00th=[ 27132],
| 20.00th=[ 71828], 30.00th=[ 120062], 40.00th=[ 173016],
| 50.00th=[ 238027], 60.00th=[ 304088], 70.00th=[ 400557],
| 80.00th=[ 549454], 90.00th=[ 784335], 95.00th=[ 985662],
| 99.00th=[1317012], 99.50th=[1451230], 99.90th=[2466251],
| 99.95th=[2466251], 99.99th=[2466251]
bw ( KiB/s): min=16384, max=278528, per=12.22%, avg=31188.40, stdev=25593.84, samples=477
iops : min= 128, max= 2176, avg=243.66, stdev=199.95, samples=477
lat (msec) : 2=4.72%, 4=1.00%, 10=1.32%, 20=1.21%, 50=7.92%
lat (msec) : 100=10.45%, 250=25.85%, 500=23.98%, 750=11.99%, 1000=6.71%
lat (msec) : 2000=4.62%, >=2000=0.22%
cpu : usr=0.01%, sys=0.00%, ctx=3583, majf=4, minf=47
IO depths : 1=0.1%, 2=0.1%, 4=1.0%, 8=12.5%, 16=25.0%, 32=53.1%, >=64=8.4%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=58176,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1521703: Sun Jan 8 18:13:41 2023
read: IOPS=171, BW=21.5MiB/s (22.5MB/s)(6440MiB/300126msec)
slat (nsec): min=70, max=95766, avg=178.40, stdev=481.79
clat (usec): min=1519, max=2327.3k, avg=372811.39, stdev=381336.48
lat (usec): min=1519, max=2327.3k, avg=372811.56, stdev=381336.48
clat percentiles (usec):
| 1.00th=[ 1762], 5.00th=[ 1926], 10.00th=[ 39060],
| 20.00th=[ 88605], 30.00th=[ 131597], 40.00th=[ 189793],
| 50.00th=[ 250610], 60.00th=[ 325059], 70.00th=[ 434111],
| 80.00th=[ 591397], 90.00th=[ 910164], 95.00th=[1149240],
| 99.00th=[1769997], 99.50th=[1853883], 99.90th=[2332034],
| 99.95th=[2332034], 99.99th=[2332034]
bw ( KiB/s): min=16384, max=147456, per=11.73%, avg=29939.15, stdev=19327.55, samples=440
iops : min= 128, max= 1152, avg=233.89, stdev=150.99, samples=440
lat (msec) : 2=5.76%, 4=1.19%, 10=0.50%, 20=1.12%, 50=3.73%
lat (msec) : 100=11.18%, 250=26.46%, 500=25.22%, 750=11.43%, 1000=5.71%
lat (msec) : 2000=7.20%, >=2000=0.50%
cpu : usr=0.01%, sys=0.00%, ctx=3164, majf=2, minf=51
IO depths : 1=0.1%, 2=0.1%, 4=1.3%, 8=12.5%, 16=25.0%, 32=52.8%, >=64=8.4%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.4%, >=64=0.0%
issued rwts: total=51520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=249MiB/s (261MB/s), 13.2MiB/s-25.9MiB/s (13.9MB/s-27.2MB/s), io=73.1GiB (78.5GB), run=300044-300515msec


Code:
ralph@Ralph-MacMini Photo Library % pwd
/Volumes/Photo Library
ralph@Ralph-MacMini Photo Library % fio --name=seq-read --rw=read --ioengine=posixaio \
--blocksize=128k --iodepth=64 --runtime=300 --size=4g \
--direct=1 --invalidate=1 --time_based --numjobs="$(sysctl -n hw.physicalcpu)"
seq-read: (g=0): rw=read, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=posixaio, iodepth=64
...
fio-3.33
Starting 8 processes
Jobs: 8 (f=8): [R(8)][100.0%][r=112MiB/s][r=898 IOPS][eta 00m:00s]
seq-read: (groupid=0, jobs=1): err= 0: pid=1125: Sun Jan 8 21:20:00 2023
read: IOPS=153, BW=19.1MiB/s (20.1MB/s)(5741MiB/300037msec)
slat (nsec): min=0, max=1731.0k, avg=2252.35, stdev=8858.95
clat (msec): min=62, max=1374, avg=104.51, stdev=39.86
lat (msec): min=62, max=1374, avg=104.51, stdev=39.86
clat percentiles (msec):
| 1.00th=[ 74], 5.00th=[ 81], 10.00th=[ 87], 20.00th=[ 96],
| 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 102], 80.00th=[ 102], 90.00th=[ 103], 95.00th=[ 180],
| 99.00th=[ 184], 99.50th=[ 226], 99.90th=[ 642], 99.95th=[ 701],
| 99.99th=[ 1368]
bw ( KiB/s): min= 4079, max=24526, per=17.66%, avg=19740.51, stdev=2230.52, samples=591
iops : min= 31, max= 191, avg=153.76, stdev=17.46, samples=591
lat (msec) : 100=39.45%, 250=60.13%, 500=0.24%, 750=0.14%, 2000=0.03%
cpu : usr=0.19%, sys=0.25%, ctx=46521, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=47.1%, 16=52.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.1%, 8=1.3%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=45928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1126: Sun Jan 8 21:20:00 2023
read: IOPS=99, BW=12.4MiB/s (13.0MB/s)(3714MiB/300099msec)
slat (nsec): min=0, max=183000, avg=2653.37, stdev=2468.95
clat (msec): min=65, max=1556, avg=100.98, stdev=36.41
lat (msec): min=65, max=1556, avg=100.99, stdev=36.41
clat percentiles (msec):
| 1.00th=[ 67], 5.00th=[ 72], 10.00th=[ 87], 20.00th=[ 99],
| 30.00th=[ 100], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 101], 80.00th=[ 101], 90.00th=[ 102], 95.00th=[ 107],
| 99.00th=[ 190], 99.50th=[ 226], 99.90th=[ 642], 99.95th=[ 659],
| 99.99th=[ 1401]
bw ( KiB/s): min= 2031, max=15104, per=11.40%, avg=12746.54, stdev=1534.24, samples=593
iops : min= 15, max= 118, avg=99.15, stdev=11.95, samples=593
lat (msec) : 100=44.09%, 250=55.51%, 500=0.23%, 750=0.13%, 1000=0.01%
lat (msec) : 2000=0.03%
cpu : usr=0.13%, sys=0.17%, ctx=30187, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=29715,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1127: Sun Jan 8 21:20:00 2023
read: IOPS=10, BW=1296KiB/s (1327kB/s)(380MiB/300002msec)
slat (nsec): min=1000, max=113000, avg=3626.40, stdev=3269.23
clat (msec): min=62, max=1355, avg=98.64, stdev=32.94
lat (msec): min=62, max=1355, avg=98.64, stdev=32.94
clat percentiles (msec):
| 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 85], 20.00th=[ 97],
| 30.00th=[ 100], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 101], 80.00th=[ 101], 90.00th=[ 101], 95.00th=[ 101],
| 99.00th=[ 148], 99.50th=[ 224], 99.90th=[ 542], 99.95th=[ 659],
| 99.99th=[ 1351]
bw ( KiB/s): min= 254, max= 1539, per=1.17%, avg=1305.31, stdev=158.69, samples=591
iops : min= 1, max= 12, avg= 9.35, stdev= 1.27, samples=591
lat (msec) : 100=41.18%, 250=58.49%, 500=0.20%, 750=0.10%, 2000=0.03%
cpu : usr=0.07%, sys=0.07%, ctx=3109, majf=0, minf=9
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=3038,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1128: Sun Jan 8 21:20:00 2023
read: IOPS=79, BW=9.99MiB/s (10.5MB/s)(2996MiB/300020msec)
slat (nsec): min=0, max=444000, avg=2844.48, stdev=4974.39
clat (msec): min=62, max=1363, avg=98.80, stdev=34.33
lat (msec): min=62, max=1363, avg=98.80, stdev=34.33
clat percentiles (msec):
| 1.00th=[ 66], 5.00th=[ 74], 10.00th=[ 85], 20.00th=[ 97],
| 30.00th=[ 100], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 101], 80.00th=[ 101], 90.00th=[ 101], 95.00th=[ 101],
| 99.00th=[ 148], 99.50th=[ 226], 99.90th=[ 634], 99.95th=[ 659],
| 99.99th=[ 1368]
bw ( KiB/s): min= 255, max=12312, per=9.20%, avg=10287.95, stdev=1307.73, samples=592
iops : min= 1, max= 96, avg=79.94, stdev=10.25, samples=592
lat (msec) : 100=41.76%, 250=57.80%, 500=0.27%, 750=0.13%, 2000=0.03%
cpu : usr=0.11%, sys=0.14%, ctx=24362, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=10.2%, 8=89.7%, 16=0.1%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=23971,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1129: Sun Jan 8 21:20:00 2023
read: IOPS=153, BW=19.1MiB/s (20.1MB/s)(5741MiB/300054msec)
slat (nsec): min=0, max=329000, avg=2318.51, stdev=3598.47
clat (msec): min=71, max=1097, avg=104.52, stdev=37.54
lat (msec): min=71, max=1097, avg=104.52, stdev=37.54
clat percentiles (msec):
| 1.00th=[ 82], 5.00th=[ 84], 10.00th=[ 91], 20.00th=[ 95],
| 30.00th=[ 99], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 102], 80.00th=[ 102], 90.00th=[ 103], 95.00th=[ 174],
| 99.00th=[ 182], 99.50th=[ 236], 99.90th=[ 659], 99.95th=[ 743],
| 99.99th=[ 1099]
bw ( KiB/s): min= 3832, max=24271, per=17.66%, avg=19741.23, stdev=2201.16, samples=591
iops : min= 29, max= 189, avg=153.76, stdev=17.24, samples=591
lat (msec) : 100=41.93%, 250=57.62%, 500=0.24%, 750=0.17%, 1000=0.01%
lat (msec) : 2000=0.03%
cpu : usr=0.19%, sys=0.25%, ctx=46510, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=47.1%, 16=52.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.1%, 8=2.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=45929,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1130: Sun Jan 8 21:20:00 2023
read: IOPS=153, BW=19.1MiB/s (20.1MB/s)(5741MiB/300091msec)
slat (nsec): min=0, max=352000, avg=2268.45, stdev=3083.97
clat (msec): min=74, max=1299, avg=104.52, stdev=37.33
lat (msec): min=74, max=1299, avg=104.53, stdev=37.33
clat percentiles (msec):
| 1.00th=[ 77], 5.00th=[ 85], 10.00th=[ 92], 20.00th=[ 97],
| 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 102], 80.00th=[ 102], 90.00th=[ 102], 95.00th=[ 161],
| 99.00th=[ 174], 99.50th=[ 234], 99.90th=[ 642], 99.95th=[ 684],
| 99.99th=[ 1301]
bw ( KiB/s): min= 4063, max=24332, per=17.66%, avg=19735.00, stdev=2256.76, samples=592
iops : min= 31, max= 190, avg=153.72, stdev=17.68, samples=592
lat (msec) : 100=36.46%, 250=63.12%, 500=0.22%, 750=0.17%, 2000=0.03%
cpu : usr=0.18%, sys=0.25%, ctx=46521, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=47.1%, 16=52.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=97.1%, 8=0.9%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=45930,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1131: Sun Jan 8 21:20:00 2023
read: IOPS=72, BW=9222KiB/s (9443kB/s)(2702MiB/300010msec)
slat (nsec): min=0, max=129000, avg=2758.18, stdev=1946.17
clat (msec): min=62, max=1363, avg=98.64, stdev=32.82
lat (msec): min=62, max=1363, avg=98.64, stdev=32.82
clat percentiles (msec):
| 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 85], 20.00th=[ 97],
| 30.00th=[ 100], 40.00th=[ 100], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 101], 80.00th=[ 101], 90.00th=[ 101], 95.00th=[ 101],
| 99.00th=[ 148], 99.50th=[ 224], 99.90th=[ 617], 99.95th=[ 659],
| 99.99th=[ 1368]
bw ( KiB/s): min= 1784, max=12263, per=8.31%, avg=9288.77, stdev=1140.22, samples=591
iops : min= 13, max= 95, avg=72.09, stdev= 8.96, samples=591
lat (msec) : 100=41.67%, 250=57.97%, 500=0.22%, 750=0.10%, 2000=0.03%
cpu : usr=0.15%, sys=0.18%, ctx=21936, majf=0, minf=10
IO depths : 1=0.1%, 2=0.1%, 4=87.1%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=21615,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
seq-read: (groupid=0, jobs=1): err= 0: pid=1132: Sun Jan 8 21:20:00 2023
read: IOPS=153, BW=19.1MiB/s (20.1MB/s)(5741MiB/300071msec)
slat (nsec): min=0, max=1189.0k, avg=2307.96, stdev=7384.70
clat (msec): min=79, max=991, avg=104.52, stdev=36.63
lat (msec): min=79, max=991, avg=104.53, stdev=36.63
clat percentiles (msec):
| 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 95],
| 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 101], 60.00th=[ 101],
| 70.00th=[ 102], 80.00th=[ 102], 90.00th=[ 102], 95.00th=[ 167],
| 99.00th=[ 174], 99.50th=[ 234], 99.90th=[ 659], 99.95th=[ 885],
| 99.99th=[ 986]
bw ( KiB/s): min= 4079, max=24478, per=17.66%, avg=19738.34, stdev=2265.81, samples=591
iops : min= 31, max= 191, avg=153.75, stdev=17.76, samples=591
lat (msec) : 100=38.65%, 250=60.93%, 500=0.22%, 750=0.14%, 1000=0.07%
cpu : usr=0.18%, sys=0.25%, ctx=46518, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=47.1%, 16=52.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=96.1%, 8=3.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=45928,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
READ: bw=109MiB/s (114MB/s), 1296KiB/s-19.1MiB/s (1327kB/s-20.1MB/s), io=32.0GiB (34.3GB), run=300002-300099msec
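For anyone comparing numbers: the 109 MiB/s aggregate above sits almost exactly at the 1 GbE line rate, not at megabit speeds, and nowhere near 10 GbE. A quick back-of-the-envelope check (Python, purely illustrative; the 109 MiB/s figure is taken from the run summary above):

```python
def line_rate_mib_s(gbit: float) -> float:
    """Convert a nominal link speed in Gbit/s to MiB/s of raw payload."""
    return gbit * 1e9 / 8 / 2**20

observed = 109  # MiB/s, from "Run status group 0" above

for name, gbit in [("1 GbE", 1.0), ("10 GbE", 10.0)]:
    rate = line_rate_mib_s(gbit)
    print(f"{name}: {rate:.0f} MiB/s line rate -> {observed / rate:.0%} utilized")
```

At roughly 91% of gigabit wire speed, it looks like something in the path (switch port, cable, or negotiated link speed) is running at 1 Gb/s rather than 10 Gb/s.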

ralph@Ralph-MacMini Photo Library % ioping -c 5 /Volumes/'Photo Library'
4 KiB <<< /Volumes/Photo Library (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.09 TiB): request=1 time=516 us (warmup)
4 KiB <<< /Volumes/Photo Library (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.09 TiB): request=2 time=569 us
4 KiB <<< /Volumes/Photo Library (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.09 TiB): request=3 time=594 us
4 KiB <<< /Volumes/Photo Library (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.09 TiB): request=4 time=884 us
4 KiB <<< /Volumes/Photo Library (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.09 TiB): request=5 time=847 us

--- /Volumes/Photo Library (smbfs //rshepher@truenas._smb._tcp.local/Photo%20Library 5.09 TiB) ioping statistics ---
4 requests completed in 2.89 ms, 16 KiB read, 1.38 k iops, 5.40 MiB/s
generated 5 requests in 4.01 s, 20 KiB, 1 iops, 4.99 KiB/s
min/avg/max/mdev = 569 us / 723.5 us / 884 us / 142.9 us
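As a sanity check on the ioping figures (illustrative sketch, using the 723.5 us average from the statistics above): with a single 4 KiB request in flight at a time, round-trip latency alone caps throughput at roughly the 5.40 MiB/s ioping reports, so this number reflects latency rather than a bandwidth limit:

```python
# Single outstanding 4 KiB request, average round-trip 723.5 us (from ioping above).
req_mib = 4 / 1024          # request size in MiB
avg_latency_s = 723.5e-6    # average round-trip latency in seconds
max_mib_s = req_mib / avg_latency_s
print(f"{max_mib_s:.2f} MiB/s")  # ~5.40 MiB/s, matching ioping's reported rate
```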
 

mikepj

Cadet
Joined
May 23, 2023
Messages
2
@Ralphshep Did you ever discover what was causing this issue? I'm seeing the same thing while connecting to TrueNAS Scale from a Mac. The only difference is that ejecting and re-mounting the SMB share fixes it for me (I don't need to reboot).
 