My First NFS Shares!

Status
Not open for further replies.

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
So I've set up a couple of NFS shares to test with on my ESXi sandbox. I've configured a pair of datasets to share. One dataset is on a striped set of mirrors with eight 2TB 7200RPM drives. The other is a dataset on a P3605 1.6TB NVMe SSD. I have the ESXi box connected to the FreeNAS box with a DAC and an Intel 10Gb X520. I added a new VMDK to one of my VMs and ran a few baselines to see how things looked. Here are the results:
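For anyone who wants to picture the layout, the mirror pool was put together more or less like this (the pool name and da* device names below are placeholders, not my actual IDs):

    # Four 2-way mirrors striped together into one pool
    zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7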

The hard drive config:

NFSHardDrives.png


The NVMe:
NFSNVMe.png


The hard drive configuration looks good on the reads. Honestly, better than I expected. The writes, however, look glacial. I'm not sure if this is a sync issue or, more likely, a Brian (me) doesn't-know-what-he's-doing issue. I'll be adding a SLOG and L2ARC to this setup as well, but I wanted a baseline before I started to get too fancy. ;)

The NVMe configuration looked disappointing. I would have expected much faster reads. I don't know that I expected it to saturate the 10Gb link, but it surely has the capability of doing so. The writes are much better than the hard drive configuration, but still very slow for what the drive is. I'm starting to research settings and tweaking, but I was hoping for some feedback before I dig too deep in the wrong places. Thanks in advance!
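For what it's worth, checking the sync property on the datasets is on my list; something like this (dataset names are just examples, not my exact ones):

    # Show whether each dataset honors sync requests (standard), forces them (always), or ignores them (disabled)
    zfs get sync tank/nfs-hdd tank/nfs-nvme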

Oh, and here are my specs:
Processor(s): (2) Intel Xeon E5-2670 @ 2.6 GHz
Motherboard: Supermicro X9DR7-LN4F-JBOD
Memory: 256 GB - (16) Samsung 16 GB ECC Registered DDR3 @ 1600 MHz
Chassis: Supermicro CSE-846TQ-R900B
HBA: (2) Supermicro AOC-S2308L-L8e
NVMe: Intel P3600 1.6TB NVMe SSD
Solid State Storage: (2) Intel S3700 200GB SSD (Not in use yet)
Hard Drive Storage: (9) HGST Ultrastar 7K3000 2TB Hard Drives
Network Adapter: (2) Intel X520-DA2 Dual Port 10 Gbps Network Adapters

And here's the overall lab environment. I'm working with HyperionFN (the FreeNAS box) and HyperionESXi2 (the sandbox running ESXi 6.0).
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
ESXi NFS mounts send sync writes, which is why your tests look the way they do. If you want to test the raw pool speed, you can disable sync writes and re-run. A proper SLOG device will of course help with the writes, and a production ESXi NFS mount should always use sync writes.
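For a quick test it's a one-property change; just make sure you put it back afterwards (dataset name below is an example, point it at your NFS dataset):

    # TESTING ONLY: ignore sync requests so writes go at async pool speed
    zfs set sync=disabled tank/nfs-hdd

    # Re-run the benchmark, then restore the default behavior
    zfs set sync=standard tank/nfs-hdd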
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
The fact that your reads are virtually identical for both pools suggests to me that something other than your drives is the bottleneck.

You can clearly see the increased write performance on the SSD pool, which is consistent with what that drive should deliver. A SLOG will definitely help you with the HDD pool, and I imagine it might help with the SSD pool. By disabling sync writes for NFS, you can test your write performance without the ZIL, which should show your peak write throughput.
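Before chasing pool tuning, it's also worth ruling the network in or out with a quick iperf run between the two boxes, assuming iperf is available on both ends (hostname below is an example):

    # On the FreeNAS box: start a listener
    iperf -s

    # From a VM on the ESXi box: push traffic for 30 seconds
    iperf -c hyperionfn -t 30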
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
I figured that would be the case. I was just hoping for better performance. We'll see how iSCSI fares! ;) And I'll put an S3700 in there as my SLOG and see how that affects things. I do agree that something is the bottleneck, I just can't imagine what that would be. The system is loaded... I guess I could get a pair of 40GbE adapters instead of the 10Gb ones, but I don't think they are as easy to just use a DAC with. Everything else on both systems should be lightning fast.
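For the SLOG, the plan is something along these lines (pool name and device are placeholders until I figure out which S3700 goes where):

    # Attach one S3700 as a dedicated log device to the hard drive pool
    zpool add tank log da9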
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
briandm81 said:
I figured that would be the case. I was just hoping for better performance. We'll see how iSCSI fares! ;) And I'll put an S3700 in there as my SLOG and see how that affects things. I do agree that something is the bottleneck, I just can't imagine what that would be. The system is loaded... I guess I could get a pair of 40GbE adapters instead of the 10Gb ones, but I don't think they are as easy to just use a DAC with. Everything else on both systems should be lightning fast.
I can tell you that without tuning, the X520 10GbE adapters perform exactly like what you're getting, ~650MB/s. You'll need to tune the network stack to get more out of the X520s.
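The usual starting point is the TCP buffer tunables, something like the following (set them as sysctls/tunables in the FreeNAS GUI; the values are common 10GbE starting points, not gospel):

    # Allow bigger socket buffers so one TCP stream can fill a 10Gb link
    kern.ipc.maxsockbuf=16777216
    net.inet.tcp.sendbuf_max=16777216
    net.inet.tcp.recvbuf_max=16777216

    # Let the auto-tuned buffers grow in larger steps
    net.inet.tcp.sendbuf_inc=524288
    net.inet.tcp.recvbuf_inc=524288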
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
Mlovelace said:
I can tell you that without tuning, the X520 10GbE adapters perform exactly like what you're getting, ~650MB/s. You'll need to tune the network stack to get more out of the X520s.

Great info! Any links to tuning an X520 on FreeNAS 9.10? Or any settings I can quickly adjust?
 