Hi Everyone,
A bit more input from my side. I tried out the following (changed) settings in smb.conf, all without any major change in performance:
SO_SNDBUF=8192 SO_RCVBUF=8192
IPTOS_LOWDELAY
TCP_NODELAY
read_raw=no write_raw=no
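For reference, this is roughly how those settings look in smb.conf (a sketch, not my exact file; note that in smb.conf syntax the raw options are spelled with spaces, "read raw" and "write raw"):

```ini
[global]
    # Socket tuning attempts -- none of these changed throughput noticeably
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=8192 SO_RCVBUF=8192
    # Disable raw SMB1 read/write variants
    read raw = no
    write raw = no
```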
I also tried varying the block size in the dd command against the SMB disk (file size about 1.3 GB), all without any major change:
bs=512, count=2560k
bs=2k, count=640k
bs=8k, count=160k
bs=32k, count=40k
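For anyone wanting to reproduce the sweep, it can be scripted along these lines (a sketch: the target path is a placeholder for the mounted SMB disk, and the default total size here is scaled down from the ~1.3 GB used in the actual tests):

```shell
#!/bin/sh
# Block-size sweep with a constant total transfer size.
# TARGET and TOTAL are assumptions: point TARGET at the SMB mount and
# raise TOTAL to ~1342177280 bytes (~1.3 GB, as above) for a real run.
TARGET=${TARGET:-/tmp/ddsweep.bin}
TOTAL=${TOTAL:-16777216}              # bytes; small 16 MiB demo default

for BS in 512 2048 8192 32768; do     # matches bs=512, 2k, 8k, 32k above
    COUNT=$((TOTAL / BS))
    echo "bs=$BS count=$COUNT"
    # dd prints its summary (bytes, seconds, rate) on the last stderr line
    dd if=/dev/zero of="$TARGET" bs="$BS" count="$COUNT" 2>&1 | tail -n 1
done
```

Each pass writes the same total number of bytes, so only the block size varies between runs.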
Not what I expected. I then measured the block size effect on raw performance (dd on the FreeNAS server directly to/from disk, 1.3 GB file, read performance shown):
8k - 64k: about 1.4 GB/s
4k: 1.1 GB/s
2k: 850 MB/s
1k: 570 MB/s
512: 349 MB/s
The RAID array has a 256 GB SSD cache! Highest write performance is about 260 MB/s.
The only clear effect I could find was the choice of SMB client:
- Debian client on the same ESXi server as FreeNAS (virtual network only), using dd from the mounted SMB disk: 27 MB/s read performance
- Same Debian client using smbclient get from FreeNAS: 80 MB/s
- Win XP client on the same ESXi server as FreeNAS, using the Intel NAS Performance Toolkit: 27 MB/s
- Debian client on another (old) ESXi server (physical 1 Gb network) using dd: 18 MB/s
- OS X client on a physical 1 Gb network using dd: 37 MB/s
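Worth noting: the two Debian numbers come from two different client implementations. dd against a mounted share exercises the kernel CIFS client, while smbclient is Samba's own userspace client with its own buffering. A sketch of the two read paths (server name, share name, and file name are placeholders, not my actual setup):

```shell
#!/bin/sh
# Hypothetical names -- substitute your own server, share, and test file.
SERVER=freenas
SHARE=tank
FILE=testfile.bin

# Path 1: kernel CIFS mount, then dd (the path that gave 27 MB/s here)
mount -t cifs "//$SERVER/$SHARE" /mnt/smb -o guest
dd if="/mnt/smb/$FILE" of=/dev/null bs=64k

# Path 2: Samba's userspace client, which prints its own transfer rate
# (the path that gave 80 MB/s on the same machine)
smbclient "//$SERVER/$SHARE" -N -c "get $FILE /dev/null"
```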
Still trying to understand why.
/Mats
Thanks for the input! I love forums where people are actually helpful to each other.