iSCSI 4K Performance Slow


briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
I am in the process of doing some benchmarking for a database using several different storage mechanisms. One of those mechanisms is iSCSI, which I am providing from my FreeNAS server. I am running a fresh installation of FreeNAS 11.0 on the following hardware:

Processor(s): (2) Intel Xeon E5-2670 @ 2.6 GHz
Motherboard: Supermicro X9DR7-LNF4-JBOD
Memory: 256 GB - (16) Samsung 16 GB ECC Registered DDR3 @ 1600 MHz
Chassis: Supermicro CSE-846TQ-R900B
Chassis: Supermicro CSE-847E16-RJBOD1
HBA: (1) Supermicro AOC-2308-l8e
HBA: (1) LSI 9200-8e
NVMe: Intel P3600 1.6TB NVMe SSD
Solid State Storage: (2) Intel S3700 200GB SSD
Hard Drive Storage: (9) HGST Ultrastar 7K3000 2TB Hard Drives
Hard Drive Storage: (17) HGST 3TB 7K4000 Hard Drives
Network Adapter: (2) Intel X520-DA2 Dual Port 10 Gbps Network Adapters

I have created a zvol directly on the NVMe drive and have this server connected to my physical Windows server via a copper DAC (3 meters). I have tried with and without jumbo frames, but the performance doesn't seem to improve. 4K performance is just bad:
[Screenshots: CrystalDiskMark and Anvil results for the P3605 zvol over iSCSI - Physical-iSCSI-IntelP3605-CDM.png, Physical-iSCSI-IntelP3605-Anvil.png]


Here is the same type of drive in the same physical box that connects to FreeNAS:
[Screenshots: CrystalDiskMark and Anvil results for the same model of drive attached locally in the Windows box - Physical-IntelP3605-CDM.png, Physical-IntelP3605-Anvil.png]


Am I missing something super simple? It seems like I should have plenty of horsepower to make this lightning fast. I really need QD1 to be way faster to get decent performance on my DB.

Thanks in advance!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Presumably, your benchmarks aren't actually filling up the queues.

You really have to decouple the storage benchmarks from the network benchmarks as a first step to identifying bottlenecks.
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
I'm not sure I understand. If it's the same benchmark on both, wouldn't it have similar behavior? This was just what I had to post. My database is around 75% slower, which is my bigger issue. When I watch the NIC in Windows, it never goes above 500 Mbit/s during database operations; that's roughly 5% of the network's capacity. The drive itself isn't being used for anything else at all, so I'm having trouble identifying where the bottleneck might be. Any tips on figuring that out?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm not sure I understand. If it's the same benchmark on both, wouldn't it have similar behavior?
Not on a completely different operating system. Benchmark the storage, benchmark the network and then we can have a better idea of what's going on.
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
Fair enough. Pardon me being so new, but what's the best simple way to benchmark the drive in FreeNAS?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You'll want to use dd, most likely.
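Something along these lines is a reasonable first pass at raw sequential write throughput (the pool path and file size are just placeholders for whatever you use):

dd if=/dev/zero of=/mnt/yourpool/ddtest.tmp bs=1M count=10000

bs=1M keeps the per-call overhead low, so you're mostly measuring the storage rather than dd itself.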
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
Tried this:
dd if=/dev/zero of=/mnt/zNVMe/ddfile1.tmp bs=4k count=2000000
2000000+0 records in
2000000+0 records out
8192000000 bytes transferred in 17.183151 secs (476746095 bytes/sec)

If I did it correctly, it is doing a 4k random test that completed at a rate of about 477 MB/s. This seems awfully high... but that's what I see. That's a write test, obviously, which is all I care about for now. Thoughts? What's the best way to benchmark the network side? Did I do the dd command correctly?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You have to disable compression - zeros compress really well. As for the network, you want iperf.
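The usual pattern is roughly this (the address below is a placeholder for your FreeNAS box's IP):

On the FreeNAS side:
iperf -s

On the Windows side:
iperf -c 192.168.1.10 -t 30

That gives you a number for the 10 Gbps link by itself, with the disks out of the picture.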
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
zNVMe has compression disabled. Is there something else I need to turn off?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If I did it correctly, it is doing a 4k random test
No, and now that I think about it, you'd need something other than dd for the random portion of the tests. That makes the numbers look a bit low, but I'm not familiar with what is typical of the P3600.
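fio is the usual suggestion for that; it's available as a FreeBSD package. A minimal sketch of a 4K QD1 random-write run (the file path and size are placeholders, and writes will still be absorbed by ZFS's in-memory caching, so treat the result as an upper bound):

fio --name=randwrite4k --filename=/mnt/zNVMe/fio.test --rw=randwrite --bs=4k --iodepth=1 --size=4g --ioengine=posixaio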
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
Still working on iperf... it connects, but nothing ever happens. Troubleshooting now.

Any thoughts on a dd alternative that will give me random performance numbers on my P3605?
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
Not great on iperf...
[Screenshot: iperf results - iperf.png]


But that shouldn't kill IOPS performance, right? Moving on to IOPS on the P3605...
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
It is difficult to achieve high numbers in a 4K QD1 test, since they depend more on latency than throughput, and moving the SSD from direct attach to another box behind many extra layers of protocols is not a way to reduce latency. Throwing more hardware at the problem may help with throughput or parallel load, but it won't do much for a single-request-at-a-time workload. I'm afraid there may be no easy answer. My own iSCSI tests with the FreeBSD iSCSI initiator show latency of about 136us at 512B blocks, which may be somewhat better than the 320-370us at 4KB blocks you show here, but still not as low as I would like.
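To put rough numbers on it: at queue depth 1, a round-trip latency of about 330us caps you at roughly 1 s / 330 us ≈ 3,000 IOPS, which at 4KB per request is only around 12 MB/s, no matter how fast the SSD or the 10 Gbps link is.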
 

briandm81

Dabbler
Joined
Jun 8, 2016
Messages
33
That definitely makes a lot of sense. I'm comparing performance to an EMC Symmetrix box, and it appears that I'm outpacing it with my little home server setup. So that makes me feel good, at least. :)
 