10GbE: maximizing performance

Status
Not open for further replies.

scurrier

Patron
Joined
Jan 2, 2014
Messages
297
Check your local performance on the NAS.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
Getting decent performance out of 10GbE requires some tuning. After checking to make sure your local disk I/O can deliver the speeds you're looking for, you'll need to look at your network, too.
e.g.:
https://forums.freenas.org/index.php?threads/slow-smb-write-speeds-on-10gb.51021/
https://forums.freenas.org/index.php?threads/10-gig-networking-primer.25749/page-7#post-272868

Also, you might want to consider that adding a SLOG just adds another endpoint for the sync write process. When sync writes happen, the theory is that the SLOG is faster than your vdev, so once the data is written to the SLOG, ZFS can return success to the upstream app. In the window between the SLOG write completing and the vdev write completing, if the write to the vdev fails, the SLOG is used to recover the write transaction. When the direct vdev write is complete, the data in the SLOG is discarded. If your SMB configuration does async writes, a SLOG will make no difference.
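A quick way to see whether a SLOG would even be in the write path is to check the dataset's sync property first. A hedged sketch (pool/dataset and device names here are placeholders, not from the thread):

```shell
# Check how the dataset handles sync requests:
#   standard = honor the application's sync/async requests
#   always   = force every write through the sync path (SLOG helps most here)
#   disabled = never sync (a SLOG will make no difference)
zfs get sync tank/media

# If sync writes are actually happening, a fast SSD can be attached
# as a dedicated log (SLOG) vdev; ada3 is a placeholder device:
zpool add tank log ada3
```

Only the `sync=standard`/`sync=always` cases send writes through the SLOG; with async SMB writes it sits idle, as noted above.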

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

ZFS does what amounts to write caching in transaction groups. They can be tuned, but generally you don't want or need to do that: there are no guardrails, and it's just as easy to hurt performance as to improve it. Plus, if you assume ZFS uses 8-10 GB of your RAM to absorb incoming writes, the system would have about 30 seconds of buffer at 325 MB/s. If your file size is 6 GB, you should be able to absorb an entire file write in RAM at a higher transfer rate, so it's likely there is more tuning to be done somewhere.
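The buffer arithmetic above can be checked directly, using the low end (8 GiB) of the assumed RAM range:

```shell
# How many seconds of burst can RAM hide, per the paragraph above?
ram_bytes=$((8 * 1024 * 1024 * 1024))   # assumed 8 GiB absorbed in RAM
rate_bytes=$((325 * 1024 * 1024))       # incoming writes at 325 MiB/s
echo "$ram_bytes $rate_bytes" | awk '{printf "%.1f seconds of buffer\n", $1/$2}'
# -> 25.2 seconds of buffer (roughly 31 s at the 10 GiB end of the range)
```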
 

scurrier

Patron
Joined
Jan 2, 2014
Messages
297
The classic local performance test is dd with a large bs parameter, like bs=1M.
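For reference, that test usually looks something like the following (the path here is a placeholder; point it at the pool you want to measure, and use a file much larger than RAM on real hardware). One caveat worth knowing: /dev/zero is highly compressible, so on a dataset with compression enabled the write number can look inflated.

```shell
# Sequential write/read test with dd; TESTFILE is a placeholder path
# (/tmp used here only so the commands run anywhere).
TESTFILE=/tmp/ddtest
dd if=/dev/zero of="$TESTFILE" bs=1M count=64   # write test (use e.g. count=20000 on real hardware)
dd if="$TESTFILE" of=/dev/null bs=1M            # read test
rm "$TESTFILE"
```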
 

pjr_atl

Dabbler
Joined
Jan 4, 2013
Messages
24
Apologies for the delayed response; I was traveling on business.
Code:

root@freenas:~ # dd if=/dev/zero of=/mnt/zfs1/media/ddtest bs=1024k count=20000
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 75.688894 secs (277075259 bytes/sec)
root@freenas:~ # dd of=/dev/null if=/mnt/zfs1/media/ddtest bs=1024k count=20000
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 51.221231 secs (409430224 bytes/sec)
root@freenas:~ #



So as I read this, my best possible write speed is ~277 MB/s and read speed is ~409 MB/s.
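Those figures line up with the raw numbers dd printed (bytes transferred divided by seconds, in decimal MB/s):

```shell
# Sanity-check the dd output above: bytes / seconds, converted to MB/s.
awk 'BEGIN {
  printf "write: %.1f MB/s\n", 20971520000 / 75.688894 / 1e6
  printf "read:  %.1f MB/s\n", 20971520000 / 51.221231 / 1e6
}'
# -> write: 277.1 MB/s
#    read:  409.4 MB/s
```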

CrystalDiskMark reports 427 MB/s read and 331 MB/s write.

Conclusion: it's running as fast as it can :) If I want more, I need more/faster disks and a faster processor.

This is running on an HP MicroServer N40L; I am very impressed at how this old box is holding up...
 

scurrier

Patron
Joined
Jan 2, 2014
Messages
297
Yep, it's going about as fast over the network as can be expected considering the local performance.

Regarding your OP assumption that writes would use RAM: that may be happening, but only until the RAM fills up. After that, the transfer proceeds at disk-subsystem speed. If your benchmarking tool uses a large file, the RAM speed at the beginning of the transfer will be diluted over the rest of the transfer.
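To illustrate that dilution with made-up numbers: suppose the first 8 GB of a 20 GB transfer lands in RAM at an assumed wire speed of ~1000 MB/s, and the remaining 12 GB goes at the ~277 MB/s the disks can sustain. The blended rate sits between the two, pulled toward disk speed:

```shell
# Blended transfer rate: 8 GB at 1000 MB/s, then 12 GB at 277 MB/s.
awk 'BEGIN {
  t = 8000/1000 + 12000/277        # seconds spent in each phase
  printf "effective: %.0f MB/s\n", 20000 / t
}'
# -> effective: 390 MB/s
```

The bigger the file relative to RAM, the closer the effective number gets to raw disk speed.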
 