SMB Slow Read Speeds

sazrocks

Cadet
Joined
Oct 12, 2021
Messages
5
Hi, I'm new here, so sorry if I'm in the wrong section.

I have a fresh install of TrueNAS-SCALE-21.08-BETA.2, and I'm having some trouble with read speeds over SMB. I have a 1Gbps network, and I can write large files (Linux ISOs) at around 80-100MB/s, which is acceptable. However, when I try to read those files back to any of my various Windows clients, I get stuck at 8-10MB/s, which is extremely slow. I ran some dd speed tests locally on the server, and the array reads the same file at about 250MB/s. I'm a noob to TrueNAS, so any suggestions as to where I'm going wrong or what I can do to fix this would be appreciated.

Thanks.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
First, you have to explain everything... otherwise, people can't see what the mistake was.
What protocol are you using?
What is your pool layout?
How did you test the performance? (what client and software)
What is the hardware?

BTW, dd is a terrible way to test performance over a network. It has its own bottleneck.
 

sazrocks

Cadet
Joined
Oct 12, 2021
Messages
5
First, you have to explain everything... otherwise, people can't see what the mistake was.
Sorry about that, I could have been more descriptive.
What protocol are you using?
Sorry, not entirely sure what you mean by this. I'm using SMB over a 1Gbps twisted pair network with Windows 10 clients.
What is your pool layout?
I'm a bit new to this, so hopefully I get this right:
Single vdev: 4x 1TB 10k SAS disks in RAIDZ1
How did you test the performance? (what client and software)
Several ways:
  1. To verify the network, I ran an iperf3 server on TrueNAS using the shell, then ran the iperf3 client on my Windows 10 machine against the TrueNAS server. I was able to hit ~950Mbps in each direction (rough commands are sketched after this list).
  2. To verify the speed of the array, I used the time command and dd on the TrueNAS server to read a large file (an Ubuntu install ISO, over 2GB) from the array into /dev/null. The command I used was similar to "time dd if=/path/to/bigfile of=/dev/null bs=8k". The read speed result was over 250MB/s.
  3. For the SMB test, I used a Windows 10 client connected to the network via Ethernet. First I copied the large ISO from the client's local SSD to the TrueNAS SMB share; that write ran at 80-100MB/s. Next, I tried to copy the same ISO from the network share back to the Windows 10 client's SSD, and at that point I only got transfer rates in the realm of 8-10MB/s. I tried a second Windows 10 client, also with an SSD, and got the same 8-10MB/s read speeds when copying the file off the share.
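For reference, the network test was roughly along these lines (the server IP is just a placeholder, and the exact flags are from memory):

    # on the TrueNAS shell (server side)
    iperf3 -s

    # on the Windows 10 client (replace 192.168.1.10 with the TrueNAS server's address)
    iperf3 -c 192.168.1.10        # client -> server
    iperf3 -c 192.168.1.10 -R     # server -> client (reverse mode)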
BTW, dd is a terrible way to test performance over a network. It has its own bottleneck.
I agree. To be clear, I did not run dd across the network; I opened the shell on the TrueNAS SCALE server and ran the dd command directly there.

Just a bit more info about the hardware I'm using:
It's a PowerEdge R320 with a Xeon E5-2470 and 32GB of RAM. I'm using the built-in PERC H710 mini mono as my controller (NOT flashed to IT firmware). Each disk is configured as the sole disk in its own virtual disk, which is then used in TrueNAS for the RAIDZ1 array. The disks are 1TB 10k SAS drives, with the exception of the TrueNAS install drive, which is a 512GB SATA drive.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Why is the H710 not in IT mode? Using a RAID controller, even in JBOD or an equivalent mode, is not a good idea and is likely to result in strange behavior. ZFS expects to manage the disks - and you are not letting it.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
  1. For the SMB test, I used a Windows 10 client connected to the network via Ethernet. First I copied the large ISO from the client's local SSD to the TrueNAS SMB share; that write ran at 80-100MB/s. Next, I tried to copy the same ISO from the network share back to the Windows 10 client's SSD, and at that point I only got transfer rates in the realm of 8-10MB/s. I tried a second Windows 10 client, also with an SSD, and got the same 8-10MB/s read speeds when copying the file off the share.

I agree. To be clear, I did not run dd across the network; I opened the shell on the TrueNAS SCALE server and ran the dd command directly there.

Hmmm... a copy is not quite the same as a read, which is why dd is not a good test of read performance.
When you read, the file just goes from the NAS into RAM; there is no write latency to the SSD.
When you copy, you have to wait for the SSD. Many copy routines also read only one block at a time and then wait for it to be safely written to the other device, which slows the whole copy down.

So, I'd recommend you find a way to do a pure "read performance" test... like fio.
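For example, something along these lines on the TrueNAS shell, pointed at an existing large file (the path is just a placeholder; adjust bs/iodepth to taste):

    fio --name=seqread --rw=read --bs=1M --ioengine=libaio --iodepth=8 --size=2G --filename=/mnt/tank/share/bigfile.iso

Bear in mind that a repeat run may be served largely from ARC, so the first pass is the more interesting number.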
 

sazrocks

Cadet
Joined
Oct 12, 2021
Messages
5
Sorry for the late reply; got busy and didn't have time to play with new things anymore.
Why is the H710 not in IT mode? Using a RAID controller, even in JBOD or an equivalent mode, is not a good idea and is likely to result in strange behavior. ZFS expects to manage the disks - and you are not letting it.
I updated to 22.02-RC.1-2 and flashed my PERC to IT mode; I'm still seeing the same poor read performance over SMB.
Hmmm... a copy is not quite the same as a read, which is why dd is not a good test of read performance.
When you read, the file just goes from the NAS into RAM; there is no write latency to the SSD.
When you copy, you have to wait for the SSD. Many copy routines also read only one block at a time and then wait for it to be safely written to the other device, which slows the whole copy down.

So, I'd recommend you find a way to do a pure "read performance" test... like fio.
I ran fio locally on the TrueNAS box and got ~500MiB/s reads on the ISO I'm using for speed testing.
I ran the same fio command remotely on my Windows client against the same ISO file in the share and got ~8MiB/s.
I really think there's something weird going on with SMB here that's unique to the TrueNAS box; my unRAID box easily pushes 120MiB/s (saturating my 1Gbps link) to the same client.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
*110MiB/s
For some reason it seems that I cannot edit my posts...

It's best to document your fio commands and your pool layout.
Do you have both read performance issues and write performance issues?
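For example, "zpool status" on the TrueNAS shell will show the exact pool layout, and a write test could look something like this (placeholder path; it creates a 2GB scratch file on the dataset):

    zpool status

    fio --name=seqwrite --rw=write --bs=1M --ioengine=libaio --iodepth=8 --size=2G --filename=/mnt/tank/share/fio-test.dat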
 