Read Performance question

curly

Cadet
Joined
May 13, 2020
Messages
7
Hi All,
I recently bought a TrueNAS Mini XL+ unit for 2K-resolution video editing:
8 cores 2.2GHz Intel Atom CPU
64G ECC memory

And added:
8 x WD Red Pro 14TB NAS disks
1 pool striped across 2 vdevs, each vdev consisting of 4 disks in RAID-Z1; LZ4 compression, 128K record size
1 x 512G SSD as cache (L2ARC), 1 x 512G SSD as log (SLOG)

Switch: Netgear XS708T 8 ports 10G switch

Test Client Side:
MSI Stealth laptop, Windows 11 Pro
11th Gen Intel(R) Core(TM) i7-11375H @ 3.30GHz
64G Memory
ATTO Thunderbolt 3 to Dual 10G adapter

All ports, cards, and the switch are set to MTU 9000 (jumbo frames).
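To double-check that jumbo frames work end to end, a don't-fragment ping from the Windows client should go through with an 8972-byte payload (9000 minus 28 bytes of IP/ICMP headers):

ping -f -l 8972 192.168.1.140

If that fragments or times out, some hop in the path is not passing MTU 9000.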

iperf test result:
Connecting to host 192.168.1.140, port 5201
[ 4] local 192.168.1.151 port 53533 connected to 192.168.1.140 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 1.08 GBytes 9.32 Gbits/sec
[ 4] 1.00-2.00 sec 1.08 GBytes 9.29 Gbits/sec
[ 4] 2.00-3.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 3.00-4.00 sec 1.09 GBytes 9.37 Gbits/sec
[ 4] 4.00-5.00 sec 1.08 GBytes 9.24 Gbits/sec
[ 4] 5.00-6.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 6.00-7.00 sec 1.09 GBytes 9.36 Gbits/sec
[ 4] 7.00-8.00 sec 1.09 GBytes 9.38 Gbits/sec
[ 4] 8.00-9.00 sec 1.08 GBytes 9.32 Gbits/sec
[ 4] 9.00-10.00 sec 1.07 GBytes 9.23 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 10.9 GBytes 9.33 Gbits/sec sender
[ 4] 0.00-10.00 sec 10.9 GBytes 9.33 Gbits/sec receiver

The data looks good to me.
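(For reference, 9.33 Gbit/s / 8 ≈ 1.17 GB/s, so that is roughly the ceiling any single transfer over this link can reach.)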

dd test:
Write
dd if=/dev/zero of=/mnt/POOL1/sambademo/speed/test.dd bs=4M count=10000
10000+0 records in
10000+0 records out
41943040000 bytes transferred in 25.610992 secs (1637696832 bytes/sec)
Read
dd of=/dev/zero if=/mnt/POOL1/sambademo/speed/test.dd bs=4M count=10000
10000+0 records in
10000+0 records out
41943040000 bytes transferred in 14.354601 secs (2921923151 bytes/sec)

This also seems to be OK.
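One caveat on the dd numbers: with LZ4 enabled, a file written from /dev/zero compresses down to almost nothing, so this test largely measures ARC and CPU rather than the disks. A rough check with incompressible data (same path as above, file name made up) would be something like:

dd if=/dev/random of=/mnt/POOL1/sambademo/speed/rand.dd bs=4M count=10000

though /dev/random itself can become the bottleneck, in which case reading back a pre-generated random file is the more honest test.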

With the real application, though, I only get 8xx MB/s write and 4xx MB/s read with HD/2K video streams.

I also ran the frametest command on the client machine; here are the results:
WRITE
frametest -w 2k -n 1000 -t 4 x:/speed/
Test parameters: -w12512 -n1000 -t4
Test duration: 15 secs
Frames transferred: 990 (12096.563 MB)
Fastest frame: 15.683 ms (779.09 MB/s)
Slowest frame: 145.292 ms (84.10 MB/s)

Averaged details:
Open I/O Frame Data rate Frame rate
Last 1s: 8.205 ms 31.42 ms 14.73 ms 829.63 MB/s 67.9 fps
5s: 7.392 ms 31.76 ms 14.78 ms 826.43 MB/s 67.6 fps
30s: 7.338 ms 33.07 ms 15.35 ms 796.16 MB/s 65.2 fps
Overall: 7.338 ms 33.07 ms 15.35 ms 796.16 MB/s 65.2 fps

Histogram of frame completion times:
50% |
|
|
| *
| *
| *
| ***
| ***
| * *****
| *****************
+|----|-----|----|----|-----|----|----|-----|----|----|-----|----|
ms <0.1 .2 .5 1 2 5 10 20 50 100 200 500 >1s


Overall frame rate .... 64.97 fps (832435187 bytes/s)

Average file time ...... 60.103 ms
Shortest file time ..... 15.683 ms
Longest file time ...... 145.292 ms

Average create time .... 7.309 ms
Shortest create time ... 1.324 ms
Longest create time .... 69.035 ms

Average write time ..... 33.2 ms
Shortest write time .... 13.0 ms
Longest write time ..... 116.1 ms

Average close time .... 19.625 ms
Shortest close time ... 0.520 ms
Longest close time .... 91.696 ms

READ
frametest -r 2k -n 1000 -t 4 x:/speed/
Test parameters: -r -z12512 -n1000 -t4
Test duration: 31 secs
Frames transferred: 972 (11876.625 MB)
Fastest frame: 37.853 ms (322.80 MB/s)
Slowest frame: 216.701 ms (56.39 MB/s)
Averaged details:
Open I/O Frame Data rate Frame rate
Last 1s: 23.739 ms 90.88 ms 30.73 ms 397.67 MB/s 32.5 fps
5s: 20.864 ms 91.90 ms 31.71 ms 385.37 MB/s 31.5 fps
30s: 20.073 ms 93.30 ms 32.23 ms 379.13 MB/s 31.0 fps
Overall: 20.007 ms 93.90 ms 32.40 ms 377.12 MB/s 30.9 fps

Histogram of frame completion times:
50% |
|
|
|
|
| *
| ***
| ****
| ******
| * **********
+|----|-----|----|----|-----|----|----|-----|----|----|-----|----|
ms <0.1 .2 .5 1 2 5 10 20 50 100 200 500 >1s


Overall frame rate .... 30.91 fps (395990652 bytes/s)

Average file time ...... 113.839 ms
Shortest file time ..... 37.853 ms
Longest file time ...... 216.701 ms

Average open time ...... 20.069 ms
Shortest open time ..... 7.049 ms
Longest open time ...... 91.108 ms

Average read time ...... 93.7 ms
Shortest read time ..... 29.4 ms
Longest read time ...... 185.7 ms

Average close time .... 0.032 ms
Shortest close time ... 0.015 ms
Longest close time .... 0.144 ms

I would like to push the reads faster, maybe up to 600-800 MB/s. Does anyone have experience or suggestions on how to make the read speed faster?

Thanks,

Curly
 
Joined
Jul 3, 2015
Messages
926
Reads come directly out of ARC (RAM) first if possible, then L2ARC (if present), then disk. So the first question is what you are actually hitting: RAM speed (unlikely, if you can't saturate 10Gb), L2ARC (possible, as you only have one SSD; more info on that drive would be good), or disk, and your pool layout is not really optimised for read speed. If you want performance from your pool, then 4 vdevs of 2-disk mirrors would be the way to go; your pool would then most likely read faster than your SSD, in which case you could dump it. For info, your SLOG is possibly a waste of time: either you aren't using it, or you are and it's slowing your writes down.
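For illustration, the striped-mirror layout would look something like this from the shell (device names are placeholders; on TrueNAS you would normally build it in the GUI pool manager):

zpool create POOL1 mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7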
 

curly

Cadet
Joined
May 13, 2020
Messages
7
Thanks Johnny. I tried removing the SSD cache and log disks, then tested read and write again. The read speed is faster and can reach 5xx MB/s, but it seems that TrueNAS CORE only utilized a few megabytes of memory and I still have 5x G free.
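(As I understand it, the ARC doesn't show up as ordinary used memory; something like sysctl kstat.zfs.misc.arcstats.size from the TrueNAS shell should report what the cache actually holds, in bytes.)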

Would you suggest 2 x SSDs as cache disks rather than relying on the 64G of RAM?

Thanks

Curly
 

curly

Cadet
Joined
May 13, 2020
Messages
7
I managed to rebuild the pool with 4 vdevs (2-disk mirrors) and without any cache disk; read/write is now 8xx MB/s and 9xx MB/s, which reaches my target. Thanks Johnny for the suggestion. :)
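(For anyone following along, zpool status POOL1 should now list the four mirror vdevs.)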

Curly
 
Joined
Jul 3, 2015
Messages
926
Nice to hear
 
Joined
Jun 15, 2022
Messages
674
As a note, a 4K block size throughout may help during video scrubbing, depending on your workload. ZFS is copy-on-write, so as the disks fill this can greatly reduce overhead for certain workloads.

In my experience video is compressed before being stored, so if you check the compression ratio and it shows no savings (1.00x), you can turn off compression and reduce overhead.

Does your network support 15k Jumbo Frames?
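If it helps, both suggestions can be checked or applied from the shell; something like the following, using the dataset name from the earlier tests (yours may differ):

zfs get compressratio POOL1/sambademo
zfs set compression=off POOL1/sambademo
zfs set recordsize=4K POOL1/sambademo

Note that recordsize and compression changes only affect newly written files.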
 

curly

Cadet
Joined
May 13, 2020
Messages
7
As a note, a 4K block size throughout may help during video scrubbing, depending on your workload. ZFS is copy-on-write, so as the disks fill this can greatly reduce overhead for certain workloads.

In my experience video is compressed before being stored, so if you check the compression ratio and it shows no savings (1.00x), you can turn off compression and reduce overhead.

Does your network support 15k Jumbo Frames?
Thanks for your suggestion. I will try changing the compression setting on my uncompressed DPX file project.

My network only supports jumbo frames, not super jumbo frames. :(

Thanks

Curly
 