iSCSI Performance FreeNAS 9.3 ESXi 6


wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Hello,

My setup is in the sig. I am using iSCSI with MPIO and can easily saturate my 1GbE network. My disks only get to 40-50 MB/s. Is this normal? Should I see more?

X9SRL with Xeon E5-2609
32GB memory
6x Ultrastar 3TB (three mirror vdevs) with SanDisk 128GB L2ARC

Performance seems great for VMs, but I want to know if I can push more out of it.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Not much memory for an L2ARC and iSCSI.
You do not have any signature right now, and you do not tell us how you measure "the speed".
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Your limiting factor is the network interfaces... the drives are only going to move data as fast as those interfaces can send it out to the client. You probably have some headroom on the drive performance, but you'd need to go to 10GbE networking to see it from a remote system.

However, you're not looking at the whole picture. Raw streaming bandwidth is fairly easy... with a VM-heavy workload, read and write IOPS are at least as important as raw bandwidth, if not more so. With only three vdevs to stripe across, you're pretty limited there.

As pointed out, you aren't running much memory. Keep in mind that too much L2ARC with not enough system memory can make things worse, not better.
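As a rough illustration of that limit (back-of-the-envelope only, and assuming a typical 7200 RPM spindle manages on the order of 100-150 random IOPS, which is an assumption rather than a measurement of these Ultrastars): random write IOPS scale with the number of vdevs, so three mirrors land somewhere around 300-450, while random reads can be spread across both disks of each mirror for a bit more. One way to watch what the vdevs are actually doing under the VM load:

Code:
# Per-vdev read/write operations per second, refreshed every second
# (pool name taken from the zpool output later in this thread)
zpool iostat -v SATAPool1 1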
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The 2609 is a contemptible CPU (that means "it SUCKS!!"). 32GB is probably too small in terms of memory; block access protocols need LOTS of RAM to make good choices of what to evict to L2ARC.

You'd probably see some improvement with an E5-1620 and 64GB (or more) of RAM.
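One quick way to sanity-check whether RAM is the squeeze (just a sketch, using the stock FreeBSD ZFS sysctls as they show up on a 9.3 box):

Code:
# Current and maximum ARC size, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
# Cumulative ARC hits and misses since boot; a poor hit ratio under a
# steady VM workload suggests the ARC is being starved for memory
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

The Reporting section of the FreeNAS GUI should show the same ARC size and hit-ratio history graphically.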
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Aw man, I just bought that CPU! Oh well.

Good news is I have 1TB of DDR3 ECC sitting in my computer room, so I will throw more at it soon.

Also, the server is about to be directly connected to both of my ESXi hosts with 10GbE.

Anything else I need to check/confirm in FreeNAS to crank the most performance out of this box? I use this for VM workloads, so I would like to design it with that in mind.

Also, I made another thread: I see my L2ARC drive being used, but not my SLOG.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Make sure you keep your pool utilization low. Fragmentation kills performance; giving ZFS lots of free space to play with reduces the speed at which fragmentation occurs. Periodically moving VMs off of and then back onto the storage will also reduce fragmentation.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Hey good info, thank you!
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Is there a way to check fragmentation?
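For what it's worth (assuming a stock FreeNAS 9.3 shell), the usual place to look is the FRAG column that zpool list reports:

Code:
# FRAG is free-space fragmentation, CAP is how full the pool is
zpool list
# Or just the single property, using the pool name shown below
zpool get fragmentation SATAPool1

Note that FRAG describes how chopped up the remaining free space is, not the data already written to disk.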
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
I am at 20% FRAG right now.
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
Code:
NAME       SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
SATAPool1  8.16T  2.15T  6.01T  -         20%   26%  1.00x  ONLINE  /mnt
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Try not to fill up the disk unnecessarily and you'll find your performance is better.

[Attached graph: Screen-Shot-2013-02-25-at-10.45.36-AM3.png, showing write performance falling off as pool utilization increases]
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
That graph is horrible. Any chance they are working on a fix for this issue? That could be a potential show stopper for clients.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Yeah, I'd love some more detail on that graph as well. Even at 10% utilization, you're showing 6MB/sec. Were you, by chance, using RLL or MFM drives for this analysis? :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
wreedps said:
That graph is horrible. Any chance they are working on a fix for this issue? That could be a potential show stopper for clients.

There is no "fix" except to throw more hardware at it. That is the problem with copy-on-write file systems.

tvsjr said:
Yeah, I'd love some more detail on that graph as well. Even at 10% utilization, you're showing 6MB/sec. Were you, by chance, using RLL or MFM drives for this analysis? :)

The y-axis numbers don't matter. What is important is how much performance drops, percentage-wise, as your zpool gets fuller.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
wreedps said:
That graph is horrible. Any chance they are working on a fix for this issue? That could be a potential show stopper for clients.

No, there isn't a "fix" for this, other than to go SSD. Once you start making hard drives seek, you are very much married to the mechanical speed of the drives, and the more you seek, the worse it gets.

ZFS is actually very good about this because it works hard to allocate blocks contiguously, so even if you're writing to random blocks in random files, the transaction group will tend to get written as contiguous blocks (i.e. no seeks) as long as contiguous space exists on the disk. The problem is that as you decimate the availability of contiguous space, performance invariably has to fall off. Once you get to the point where a transaction group cannot be written contiguously, you see falloff.

But if it helps make you feel better, do consider that if you're writing random blocks on a non-ZFS filesystem like UFS or NTFS, you *START OUT* at a low speed because you're seeking all over the place to write those blocks "in place." And it never gets better. So this is more an example of how ZFS can make certain use models really shine performance-wise compared to conventional models.

The standard ZFS mitigation techniques to cope with fragmentation involve throwing hardware at the problem. To reduce the problem of slow read performance, we throw lots of ARC and L2ARC at the issue. This only helps the stuff that's read frequently enough that it comes to reside in {,L2}ARC, of course. To reduce the problem of slow write performance, we throw lots of extra disk space at it, so that ZFS can maintain larger contiguous free spaces. This sometimes helps read performance as well, but it depends on what the write access patterns for the written data are.

But fundamentally you hit disk drive seeks as a limiting performance factor at some point.
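To keep an eye on the two numbers all of this revolves around, something like the following works (a minimal sketch, using the pool name from earlier in the thread):

Code:
# Free-space fragmentation, fill level, and remaining space for the pool
zpool get fragmentation,capacity,free SATAPool1

Watching those climb as VMs accumulate makes the fall-off described above easy to correlate with real numbers.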
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
tvsjr said:
Yeah, I'd love some more detail on that graph as well. Even at 10% utilization, you're showing 6MB/sec. Were you, by chance, using RLL or MFM drives for this analysis? :)

What in the bloody heck are you talking about, sir? 6MB per second is fscking AWESOME for random writes. Try it sometime. Get out a standard hard disk. Even on the fastest, it probably peaks around 200 transactions per second; at 4KB per transaction, that works out to about 820 KBytes/sec. More likely you'll see ~400-500 KBytes/sec. Screw this. Let's just try it.

Code:
# iozone -f /dev/da17 -r 4k -s 1g -i 2 -a
        Iozone: Performance Test of File I/O
                Version $Revision: 3.420 $
                Compiled for 64 bit mode.
                Build: freebsd

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa.

        Run began: Wed Nov 25 11:42:26 2015

        Record Size 4 KB
        File size set to 1048576 KB
        Auto Mode
        Command line used: iozone -f /dev/da17 -r 4k -s 1g -i 2 -a
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
         1048576       4                                       463    1015

iozone test complete.



Anyways, the point here is that ZFS is awesome magic if you give it resources. That performance shown out at 80-90% on the graph? That's basically just average hard drive random write performance out there. That would be flat across the graph for a traditional filesystem. But no, ZFS gifts us with speed... lots of speed... with a relatively empty pool.
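To tie the raw iozone figures above back to the IOPS discussion, a quick conversion (4KB records against a bare drive):

Code:
# random read : 463 KB/s  / 4 KB per op ≈ 116 IOPS
# random write: 1015 KB/s / 4 KB per op ≈ 254 IOPS
# compare with the ~200 transactions/sec estimate above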
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
jgreco said:
What in the bloody heck are you talking about, sir? 6MB per second is fscking AWESOME for random writes. Try it sometime. Get out a standard hard disk. Even on the fastest, it probably peaks around 200 transactions per second; at 4KB per transaction, that works out to about 820 KBytes/sec. More likely you'll see ~400-500 KBytes/sec. Screw this. Let's just try it.

Chill out :) I was interpreting "steady state" as being a streaming/bulk write, not random write. Obviously, this is great performance for random writes to a single drive!
 

wreedps

Patron
Joined
Jul 22, 2015
Messages
225
jgreco said:
The 2609 is a contemptible CPU (that means "it SUCKS!!"). 32GB is probably too small in terms of memory; block access protocols need LOTS of RAM to make good choices of what to evict to L2ARC.

You'd probably see some improvement with an E5-1620 and 64GB (or more) of RAM.

I just ordered an E5-1620, thanks!
 