FreeNAS write speed dropping by 50% on Performance test

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, if you're sharing an SSD for SLOG and L2ARC, that works out to being a real bad idea.

Conspicuously missing from this discussion is any mention of what sort of workload you are designing this to handle. If you have a legitimate need for a SLOG device, dedicate one of your SSDs to the task. Use the manufacturer's tool to reset it to factory defaults, clearing all the data, then make a small partition for SLOG. This can potentially help the wear leveling algorithms in the SSD understand your usage model and maintain a larger pool of free pages, which translates to more joy. The other SSD will still help out your pool, but the pool has enough vdevs in it that it should be pretty fast.
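Something like this is what I mean, as a rough sketch from the shell; it assumes the freshly reset SSD shows up as da1 and the pool is named tank, so adjust both for your system:

# small GPT partition on the dedicated SLOG SSD, rest of the drive left unallocated
gpart create -s gpt da1
gpart add -t freebsd-zfs -s 16G -l slog0 da1
# attach it to the pool as a dedicated log device
zpool add tank log gpt/slog0

Leaving most of the SSD unpartitioned is what gives the wear leveling that larger pool of free pages to work with.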
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm going to remove most of this since I'm just rehashing what cyberjock and jgreco said already.

But one thing I didn't see (yet) is that you should probably mirror both your L2ARC and SLOG. If you're at the point of needing this kind of performance, you want it to be consistent. Edit III: Mirroring L2ARC means a disk failure doesn't cause any impact to read performance against it, but you lose half of your read cache capacity. In most cases the better option is striping for more cache and simply dealing with a failed drive if it happens, but if you're in a situation where losing half of your L2ARC would be a significant impact, you may want to consider it. But that might mean your system as a whole is undersized or not sized properly ...

(Edit II: The Revenge) But with that said, properly doing that means four SSDs - two mirrored for L2ARC, two mirrored for SLOG.

My quick suggestion is that since you're in a testbed world still, drop the L2ARC from the pool entirely and check again.
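From the shell that's quick to do, assuming the pool is named tank and the cache device carries the GPT label cache0 (check zpool status for the actual names on your system):

zpool status tank              # note the device listed under "cache"
zpool remove tank gpt/cache0   # cache devices can be dropped on the fly
zpool status tank              # confirm it's gone, then rerun the benchmark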
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'd recommend the ZIL be mirrored and the L2ARCs striped. Of course, if you stripe them, then you double their capacity (and their combined speed), but you also need even more RAM. The sweet spot for ZFS, where performance can potentially explode, is 96-128GB of RAM. That's where you can start doing lots of L2ARC and still have enough ARC to have a very fast pool. The question, though, is what is appropriate for you. I can't really answer that, as there are so many variables. I've worked with companies that went with 256GB of RAM and needed more, and I've had others that had 96GB of RAM and were very happy. It's very much a personalized thing.
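As a sketch of what that layout looks like from the command line (the pool name and GPT labels are placeholders, not your actual devices):

# mirrored SLOG: losing one log device doesn't put in-flight sync writes at risk
zpool add tank log mirror gpt/slog0 gpt/slog1
# striped L2ARC: cache devices are just listed, so capacity and throughput add up
zpool add tank cache gpt/cache0 gpt/cache1
zpool status tank   # verify the new log and cache vdevs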
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Also, based on some hasty math in my head, you can't even use all 138GB of disk space for your L2ARC because of how little RAM you have. There's a reason why I tell people not to even consider L2ARCs until they hit 64GB of RAM. Also, the manual says not to consider an L2ARC until you've maxed out your RAM (within reason).

The typical sweet spot for L2ARC is 5:1 or maybe 4:1. At 48GB of RAM that would mean 240GB of L2ARC.

I'm going to disagree with your 64GB logic. I understand how you came to it, and as a rule of thumb it might help people understand that their little 8GB box cannot simply have a 1TB L2ARC SSD thrown on it. But even at 32GB, a system could be sufficiently large and stressed to make good use of a 60 or 90GB L2ARC SSD. On the other hand, it also helps to understand that the L2ARC is really not all that helpful unless your pool is sufficiently stressed that you need to squeeze some more IOPS out.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The 64GB logic just came from experimenting with various businesses as we've tried to figure out what the minimum was for some of them. It's a nasty mess when you have just 32GB of RAM total and your L2ARC is going to dedicate 10+GB of RAM to its index. If your pool has a need for speed, then you tend to need a good combination of ARC and L2ARC. For those kinds of people the working set is usually large enough that the ARC is much too stressed and the L2ARC can't perform fast enough to offset the lost performance. That's why I always tell people to have 64GB of RAM before starting with L2ARC. Your L2ARC can be a bit bigger (they always go and buy a 200GB+ L2ARC anyway), and they'll usually want/need the bigger ARC.
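Rough back-of-the-envelope math on that index, assuming ~180 bytes of ARC per cached record (the figure usually quoted for this generation of ZFS) and a 4KiB average record size for a VM-style workload; both numbers are assumptions, not measurements:

# RAM used by the L2ARC index ≈ (L2ARC size / avg record size) * bytes per header
# 240GB of L2ARC, 4KiB average records, ~180 bytes of ARC per record:
echo $(( 240 * 1024 * 1024 * 1024 / 4096 * 180 / 1024 / 1024 / 1024 ))   # ~10 (GB)

That's how a 240GB cache, once full, can quietly eat 10GB of a 32GB system's RAM.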
 

Chris_Zon

Dabbler
Joined
Mar 3, 2014
Messages
21
Also, I forgot to answer the question about what the storage would be used for. It's being used for VMware; right now we are running SCSI storage for VMware, but we were testing to see if an NFS-mounted store with ZFS/L2ARC would be a good alternative. If you happen to have suggestions in that area, they would be welcome. For now I'll be testing out what was pointed out before, and I might add in some more RAM and switch the log and cache around.
 

Chris_Zon

Dabbler
Joined
Mar 3, 2014
Messages
21
Also, suggestions for tunables and sysctls would be welcome, or let me know if I should just leave everything at the defaults. Right now I've changed a couple of sysctls and tunables, but I'm not sure whether they're affecting my system positively or negatively.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Also, suggestions for tunables and sysctls would be welcome, or let me know if I should just leave everything at the defaults. Right now I've changed a couple of sysctls and tunables, but I'm not sure whether they're affecting my system positively or negatively.

Post them, but offhand I'm betting you'll be better off removing them all. The underlying cause here is most likely sharing L2ARC and SLOG on the same device, so tunables might actually aggravate the problem once you resolve that.

I understand how you came to it, and as a rule of thumb it might help people understand that their little 8GB box cannot simply have a 1TB L2ARC SSD thrown on it.

Aww, you mean I should send this 1.2TB ioDrive2 back? Meanie.
 

Chris_Zon

Dabbler
Joined
Mar 3, 2014
Messages
21
Post them, but offhand I'm betting you'll be better off removing them all. The underlying cause here is most likely sharing L2ARC and SLOG on the same device, so tunables might actually aggravate the problem once you resolve that.



Aww, you mean I should send this 1.2TB ioDrive2 back? Meanie.


I'm working on testing the setup without tunables and sysctls now; setting tunables takes a while, though, so I'll continue this tomorrow.
 

Chris_Zon

Dabbler
Joined
Mar 3, 2014
Messages
21
I'm working on testing the setup again. I removed all the sysctls and tunables and set up one of the SSDs as log and one as cache, but I'm still getting that 50% drop issue. Any ideas?

http://imgur.com/a/rwyGS

Did some more testing, and it seems to drop consistently after just under 3GB in the cache:
11.1 -> 13.5 = 2.4GB
13.9 -> 16.5 = 2.6GB
16.9 -> 19.6 = 2.7GB
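For what it's worth, I'll also try watching the individual vdevs while the test runs to see whether it's the log device or the data disks that stall when the speed drops (assuming the pool is named tank):

zpool iostat -v tank 1   # per-vdev bandwidth every second during the benchmark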
 

Chris_Zon

Dabbler
Joined
Mar 3, 2014
Messages
21
My SCSI performance seems to be stuck at around 80MB/s. Anyone got any ideas to boost those last few MB/s to have it perform on the same level as NFS?

Edit: Some tweaking got it up to 85-90MB/s.
 