Hi,
I am a new user to TrueNAS SCALE. I've just built a new NAS with TrueNAS, and I also have one Ubuntu VM running on this system. The VM is connected to the host via a bridge and has an SMB share mounted. I'm a little puzzled by the SMB3 performance and wonder whether it's possible to make it faster.
Whenever I read or write a test file via the SMB share mounted inside the VM using fio with a 4k block size, it's fairly slow at about 70 MiB/s. Using a 1M block size I get about 1050 MiB/s for writing and 2335 MiB/s for reading. The write speed is pretty much the limit of the hardware, I guess, so that's all good.
When I run the same tests on the TrueNAS machine itself against the same dataset, I get about 1050 MiB/s for writing regardless of 4k or 1M block size, and 2500 MiB/s for reading. Running it a second time I get 10 GiB/s, so I guess that's the ARC.
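For what it's worth, the gap at 4k would line up with per-request overhead, assuming (and this is just my assumption, not measured) that each fio block becomes one SMB request at queue depth 1:

```shell
# Assumption: one SMB request per fio block, queue depth 1.
# 70 MiB/s at 4 KiB per request:
echo $((70 * 1024 / 4))       # → 17920 requests/s needed at 4k
# 1050 MiB/s at 1 MiB per request:
echo $((1050 * 1024 / 1024))  # → 1050 requests/s needed at 1m
```

So the 4k case needs more than 15x the request rate of the 1M case just to match its throughput, which would explain why the small-block numbers fall off a cliff.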
Why is 4k performance via SMB so slow? How does block size work when reading and writing files over SMB3 between the host and the VM? Is this always a synchronous operation? Should I be using NFS instead?
The share is mounted like so:
Code:
//192.168.178.5/datasetnameremoved /mnt/datasetnameremoved cifs username=removed,password=removed,vers=3.0,uid=3000,gid=950 0 0
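One thing I haven't tried yet is forcing a newer SMB dialect and larger I/O sizes on the mount. A sketch of the same fstab entry with those options added (the rsize/wsize values are guesses on my part, not something I've benchmarked):

Code:
//192.168.178.5/datasetnameremoved /mnt/datasetnameremoved cifs username=removed,password=removed,vers=3.1.1,rsize=4194304,wsize=4194304,uid=3000,gid=950 0 0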
Hardware and Pool
This share is on an 8-disk Seagate Exos RAIDZ2. The dataset I am using has a recordsize of 128K. In hindsight I probably should have set it to 1M from the start, since this dataset houses video files between 1 GB and 100 GB in size. Good or bad idea?
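For reference, changing the recordsize later is a single command, though as far as I know it only applies to newly written blocks, so existing files keep their 128K records until they are rewritten (dataset name below is the redacted placeholder from above):

Code:
# Only affects data written after the change; existing files keep
# their 128K records until copied or rewritten.
zfs set recordsize=1M tank/datasetnameremoved
zfs get recordsize tank/datasetnameremoved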
Anyway...
Code:
  pool: tank
 state: ONLINE
config:

        NAME                                        STATE     READ WRITE CKSUM
        tank                                        ONLINE       0     0     0
          raidz2-0                                  ONLINE       0     0     0
            050a7136-3221-4a30-9aba-8f2fb88b9d82    ONLINE       0     0     0
            38eca059-186f-449d-82a6-0a5d5c9a434b    ONLINE       0     0     0
            8ed51906-4aef-4d9f-a8b1-45851aa374b2    ONLINE       0     0     0
            ee2b4a04-783f-4c4e-ab8f-50d3824f85d3    ONLINE       0     0     0
            aa0cb315-378a-4352-add2-6f240b0d5b1f    ONLINE       0     0     0
            9038b382-311a-4d64-9932-60656017b516    ONLINE       0     0     0
            bf7566bd-d036-4b3a-8a3b-e419d67c11a4    ONLINE       0     0     0
            b5e1ac34-62f3-49fe-aae0-507610ae950e    ONLINE       0     0     0

# zpool list
NAME        SIZE   ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
boot-pool   912G   64.4G   848G        -         -     0%     7%  1.00x  ONLINE  -
tank        116T   22.3T  94.1T        -         -     0%    19%  1.00x  ONLINE  /mnt
EDIT: rsyncing a file to that share writes at about 400 MiB/s, which seems slow, since fio reaches 1000+ MiB/s using 1M blocks and locally I can write at that speed even with 4k blocks.