SOLVED Speed drops off with SMB over 10Gb.

crkinard

Explorer
Joined
Oct 24, 2019
Messages
80
Yay! Another "I am using 10Gb and my speed is slow!" post.

NAS
OS: TrueNAS-SCALE-22.02-RC.1-2
Motherboard: Supermicro X10SRL-F
CPU: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
NIC: Intel X550-T2
RAM: 128GiB
Storage: 1 main pool, 7 HDDs in a single RAIDZ2 vdev.

10FT CAT8
No switch. Direct connection between the NICs.

Workstation
OS: Windows 11
Motherboard: ROG STRIX Z590-I GAMING WIFI
CPU: i9 - 11900K
NIC: Sonnet Solo 10G (Thunderbolt 3 adapter in Thunderbolt 4 port.)
RAM: 32GiB
Storage: Samsung 980, Samsung 960, 2x Samsung 850 in RAID0

Every transfer starts out at 800-900MB/s, then within 5-10 seconds falls to 250-350MB/s for the rest of the transfer.
CrystalDiskMark results always come out about the same, but I know those are misleading, since it reports the fastest speed reached at any point during the test.

With limited knowledge in this field, I am at a loss as to why the speed drops off.

Code:
Connecting to host 192.168.5.11, port 5201
[  4] local 192.168.5.80 port 51982 connected to 192.168.5.11 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   752 MBytes  6.30 Gbits/sec
[  4]   1.00-2.00   sec   723 MBytes  6.06 Gbits/sec
[  4]   2.00-3.00   sec   727 MBytes  6.10 Gbits/sec
[  4]   3.00-4.00   sec   720 MBytes  6.04 Gbits/sec
[  4]   4.00-5.00   sec   728 MBytes  6.10 Gbits/sec
[  4]   5.00-6.00   sec   756 MBytes  6.34 Gbits/sec
[  4]   6.00-7.00   sec   810 MBytes  6.80 Gbits/sec
[  4]   7.00-8.00   sec   735 MBytes  6.17 Gbits/sec
[  4]   8.00-9.00   sec   734 MBytes  6.16 Gbits/sec
[  4]   9.00-10.00  sec   730 MBytes  6.13 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  7.24 GBytes  6.22 Gbits/sec                  sender
[  4]   0.00-10.00  sec  7.24 GBytes  6.22 Gbits/sec                  receiver

iperf Done.
 

Attachments

  • Screenshot 2021-11-30 014755.png (19.2 KB)
  • Screenshot 2021-11-30 020811.png (15.9 KB)
Last edited:

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
If you test by copying files, your throughput is limited by the number of outstanding I/Os (queue depth). It's a copy limitation.
Early on it's OK, because writes are acknowledged once they are in RAM; but after you hit the RAM buffering limit, the system has to wait for I/Os to be committed to the drives.

Test with fio and you can see write and read bandwidth with many I/Os in flight (queue depth = 32). It's less latency sensitive.
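For example, a minimal sketch run in a shell on the NAS itself (the dataset path /mnt/tank/fio-test and the file size are placeholders to adapt to your pool):

Code:
# Sequential write with 32 I/Os in flight; pick a size large enough
# that ARC caching doesn't dominate the result.
mkdir -p /mnt/tank/fio-test
fio --name=seqwrite --directory=/mnt/tank/fio-test --rw=write \
    --bs=1M --ioengine=posixaio --iodepth=32 --size=16G --end_fsync=1
# Repeat with --rw=read for the read side.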
 

crkinard

Explorer
Joined
Oct 24, 2019
Messages
80
So might there be an easy solution to fix this?
More RAM? An SSD SLOG?

Ahh, hmm. After doing some digging, it seems I cannot add a SLOG in TrueNAS SCALE... yet.
 
Last edited:

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
There is an easy but not cheap solution: replace your disks with SSDs. What you are seeing is your disks limiting the transfer, relative to what the wire allows (see your iperf results).

I would assume that you have an empty SATA port left on the board. Get a cheap SATA SSD, make a single-drive vdev and a new(!) pool out of it, and re-test. DANGER: Do not add that new vdev to your existing pool.
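Roughly like this, as a sketch (sdX is a placeholder for the new SSD; on TrueNAS you would normally do this through the web UI, the raw commands just illustrate the idea):

Code:
# Identify the new SSD first, e.g. with `lsblk`, then build a
# throwaway single-disk pool from it for testing only.
zpool create testpool sdX    # NOT `zpool add`, which would graft it onto the existing pool
zpool status testpool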

SLOG would not help because your test is a serial transfer, where IOPS are low.

And please make sure to read the existing material (see "Recommended readings" in my signature).
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
What is the reason that you can't add a SLOG? (Not that I think it solves the problem.)

First of all, confirm that independent read and write performance is OK.

Then confirm that the copy is slow, and hence that the theory is correct.

There are several Windows copy tools that provide more parallelism, e.g. Robocopy.
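For instance (a sketch; the source path, share path, and thread count are placeholders, not taken from this thread):

Code:
:: Multithreaded copy with 16 threads instead of Explorer's single stream
robocopy C:\source \\192.168.5.11\share\dest /E /MT:16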

The best tool depends on what you are trying to do. Explain the goal and someone here might have experience.
 

crkinard

Explorer
Joined
Oct 24, 2019
Messages
80
Well, I found the "issue".

Let's just say that if you want speed, have more than one vdev in a pool.
 

serfnoob

Dabbler
Joined
Jan 4, 2022
Messages
23
Well, I found the "issue".

Let's just say that if you want speed, have more than one vdev in a pool.
I'm struggling to figure out the best performance too. What did you end up doing, and what speeds do you get now?
 

dffvb

Dabbler
Joined
Apr 14, 2021
Messages
42
Let's just say that if you want speed, have more than one vdev in a pool.

Hm, with having full 10Gb speed with no vdev in the pool with CORE, I find it hard to believe this is generally true.
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Hm, with having full 10Gb speed with no vdev in the pool with CORE [...]
And how exactly do you build a pool without a vdev? You might be confused about terminology here.
  • pools are built from one or more vdevs
  • vdevs are built from one or more HDDs/SSDs
  • vdevs have a particular level of redundancy, e.g. single disk, 2-way mirror, RAIDZ2, ...
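In command terms, a sketch with placeholder disk names:

Code:
# one pool ("tank") built from two vdevs, each a 2-way mirror of two disks
zpool create tank mirror sda sdb mirror sdc sdd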
 

dffvb

Dabbler
Joined
Apr 14, 2021
Messages
42
You are certainly right, I mixed up the terminology - mistook vdev for L2ARC - still getting familiar with it :smile:
 

Korvenwin

Cadet
Joined
Mar 4, 2022
Messages
8
Well, I found the "issue".

Let's just say that if you want speed, have more than one vdev in a pool.
For best performance, is it a better idea to set up multiple small vdevs rather than one big vdev in a pool?
I'm planning a new server. I own 10x 4TB hard disks. Which is better:
  1. 1 pool with 1 RAIDZ2 vdev of 10 disks
  2. 1 pool with 2 RAIDZ1 vdevs of 5 disks each
Which option do you think is better for the data? Boot and VM/Docker will be on a separate SSD pool.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Define "better". Two vdevs will give you twice the IOPS of one vdev. But if you set up the two 5-disk vdevs as RAIDZ1, you have way lower data safety compared to a single RAIDZ2. Two disks in the same vdev fail --> your data is toast.

I would use two RAIDZ2 vdevs.
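As a sketch with placeholder disk names, the 10 disks would then be laid out like this:

Code:
# one pool, two 5-wide RAIDZ2 vdevs (3 data + 2 parity disks each)
zpool create tank \
  raidz2 sda sdb sdc sdd sde \
  raidz2 sdf sdg sdh sdi sdj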
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
You are certainly right, I mixed up the terminology - mistook vdev for L2ARC - still getting familiar with it :smile:
BTW - you don't have enough ARC (memory) to be running L2ARC
 