ZIL and L2ARC on same disk

L

Guest
OK, so you guys have forced me to triple-check my facts. I have actually architected partitioned SSDs for dozens of Fortune 100 customers with no complaints and only great joy. The thing is that none of them do streaming, and I understand that this community is heavily invested in streaming workloads. All of my work was done on Solaris, and has been done since 2004, a year before ZFS was introduced. I will test, but everything I have seen so far suggests that the ZFS inside FreeNAS is almost identical to ZFS elsewhere. I will also test to make sure the underlying FreeBSD isn't bloated, but in the opinion of some of my experts, FreeBSD actually runs a little leaner than Solaris.

So, here is a quote from a blog post by Brendan Gregg: https://blogs.oracle.com/brendan/entry/test

"What's bad about the L2ARC?



  • It was designed to either improve performance or do nothing, so there isn't anything that should be bad. To explain what I mean by do nothing - if you use the L2ARC for a streaming or sequential workload, then the L2ARC will mostly ignore it and not cache it. This is because the default L2ARC settings assume you are using current SSD devices, where caching random read workloads is most favourable; with future SSDs (or other storage technology), we can use the L2ARC for streaming workloads as well."
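For reference, the "default L2ARC settings" he mentions surface as sysctls on FreeBSD/FreeNAS. A rough sketch for inspecting them (tunable names as shipped on FreeBSD; defaults may differ on your build):

Code:
# Show the L2ARC feed tunables
sysctl vfs.zfs.l2arc_noprefetch   # 1 = don't cache prefetched (streaming) reads in L2ARC
sysctl vfs.zfs.l2arc_write_max    # max bytes fed to L2ARC per feed interval (~8MB default)
sysctl vfs.zfs.l2arc_write_boost  # extra feed rate allowed while the ARC is still warming up

The first of these is why a streaming workload gets "mostly ignored": prefetched buffers are excluded from the L2ARC feed by default.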



 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
So, here is a quote from a blog post by Brendan Gregg
Context. Note also that the test machine in the blog post had 128GB of RAM and 44 HDDs. This is a far cry from the OP's 8GB of RAM. Adding a large L2ARC device will just increase pressure on an ARC that is already undersized for the proposed workload.
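To put a rough number on that pressure: every record cached in L2ARC needs a header kept in ARC. A back-of-envelope sketch, assuming roughly 200 bytes of header per record (the exact figure varies by ZFS version) and 8KB average blocks for a random-read workload:

Code:
# ARC memory (in MB) consumed by headers for a fully populated 240GB L2ARC:
# 240GB worth of 8KB records, ~200 bytes of ARC header each
echo $(( 240 * 1024 * 1024 / 8 * 200 / 1024 / 1024 ))   # prints 6000

Under those assumptions, indexing a full 240GB L2ARC would want about 6GB of ARC for headers alone, which is most of an 8GB machine.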
 

wintermute000

Explorer
Joined
Aug 8, 2014
Messages
83
Not to burst your bubble, but 600Mbit/s is in the realm of 75MB/s, which is kinda poor for an SSD.

You'd kinda think that SSDs would be awesome as pool components, but that turns out to be not quite true. The problem of sync writes still occurs: if ESXi pushes out a block, it has to be committed, and with RAIDZ or even mirrors that involves multiple device writes. If non-redundancy is fine, then a single SSD is a good compromise in some ways. You're not running sync=always, I'm guessing...?
I'm aware; I need to do more tuning or try a device extent. No, I turned off sync. It's definitely not CPU- or RAM-bound: I get full gigabit via CIFS to RAIDZ2.
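For anyone retracing this later, checking and flipping the sync policy is quick. A sketch, with tank/vmstore standing in for the zvol backing the iSCSI extent (placeholder name):

Code:
# Show the current sync policy on the zvol
zfs get sync tank/vmstore

# Commit every write before acknowledging it (safe, slower)
zfs set sync=always tank/vmstore

# Revert to honoring only the flushes the initiator actually requests
zfs set sync=standard tank/vmstore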
 

wintermute000

Explorer
Joined
Aug 8, 2014
Messages
83
I'm aware of that, and I don't value my iSCSI data; it's lab VMs for playing around with ESXi. However, I got some info from this forum that sync is irrelevant to zvols, so I turned it back on. I confirmed with benchmarks that it does not make a difference.

My 'production' data is on a synced, scrubbed RAIDZ2.

BTW, here are my iSCSI figures. I actually ran a benchmark in a Windows VM. I can live with this for a lab setup, given it hits the limits of single-NIC gigabit :)

[attached image: iSCSI benchmark screenshot]
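One way to sanity-check the claim that sync doesn't matter for zvol traffic is to watch whether anything actually lands on the pool's log device during a benchmark. A sketch (pool name is a placeholder):

Code:
# Per-device activity, refreshed every second while the benchmark runs;
# if sync writes are being honored, a dedicated log device will show writes
zpool iostat -v tank 1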
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It was designed to either improve performance or do nothing, so there isn't anything that should be bad. To explain what I mean by do nothing - if you use the L2ARC for a streaming or sequential workload, then the L2ARC will mostly ignore it and not cache it. This is because the default L2ARC settings assume you are using current SSD devices, where caching random read workloads is most favourable; with future SSDs (or other storage technology), we can use the L2ARC for streaming workloads as well.

Yeah, what anodos said. That's spot on.

BIG, BIG difference between the example you are giving and what this forum regularly sees (and what this thread is about). Only a fraction of 1% of users here will even have 32GB of RAM.

8GB of RAM with an L2ARC on FreeNAS has been shown, time and time again, to hurt performance. While people can argue until they are blue in the face that "it was designed to either improve performance or do nothing," I can tell you with certainty that it does NOT work that way on FreeNAS. As we've seen in dozens and dozens of threads from just this year, that is just not the case.

The problem is that virtually nobody here can afford to build their system with hardware that is roughly appropriate for the workload. People drop 8GB in a FreeNAS box and then freak out when they can't get 1GB/sec throughput on their 50TB, 14-disk pool. Unfortunately, we don't perform miracles with FreeNAS (the miracles plugin costs extra).

One thing you'll learn, Linda, is that far more people will want to do it wrong (almost always in the name of saving money or saving watts, which they usually end up getting wrong anyway) and stab themselves in the eye before they realize they should have just followed our recommendations from the start. If you look at slide 15 of my noob presentation, I mention specifically that L2ARCs can and will hurt performance. If you don't have the RAM resources to support an L2ARC, you *will* hurt performance. FreeBSD may be different with 8GB of RAM; I don't know, and I'm not going to argue about how FreeBSD behaves. But in FreeNAS-land, adding an L2ARC when you don't already have a respectable amount of RAM will actually hurt performance.

The problem is that people without ZFS and/or FreeNAS experience read stuff like you just quoted, immediately equate it to "it can't hurt me then," and then look at their soon-to-be-FreeNAS setup. They decide they'll go with 8GB of RAM, and instead of buying another stick of RAM they decide that a 240GB SSD would give them 240GB of cache, far more than 8GB of RAM. Since 8GB < 240GB, clearly it's the smarter way to go, right? So they drop their money and are woefully disappointed when it doesn't work like they think. The reality is that they shouldn't be eyeballing a 240GB L2ARC until they have something like 50GB of RAM, and probably only 0.5% of users in this forum have hardware that will even support >32GB of RAM.
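If you've already made this mistake, the kernel stats will tell on you. A sketch for FreeBSD/FreeNAS (sysctl names as shipped; exact fields vary by version):

Code:
# How big is the ARC allowed to get, and how big is it right now?
sysctl kstat.zfs.misc.arcstats.c_max kstat.zfs.misc.arcstats.size

# How much ARC is burned on L2ARC headers, and is the L2ARC even hitting?
sysctl kstat.zfs.misc.arcstats.l2_hdr_size
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses

A big l2_hdr_size next to a small ARC, or hits dwarfed by misses, is the signature of an L2ARC that is hurting rather than helping.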

You also have things to consider that you will not see anywhere else in FreeBSD-land. For example, FreeNAS allocates 1-2GB of RAM for RAM disks, it has to run services as an appliance-like setup, and it has "decided" on many settings for you via the "slimmed-down" WebGUI. Even FreeBSD itself recommends a 1GB RAM minimum for ZFS but strongly urges more.

Then you read stuff like the ZFS Evil Tuning Guide, which says you can use ZFS on FreeBSD with 1GB of RAM, but then goes on to say that 64-bit is recommended and 32-bit is only supported with sufficient tuning. That all but implies that a 4GB minimum is the real place to start. People read that kind of stuff, say to themselves "well, gee, FreeNAS doesn't need 8GB of RAM," and then we see broken pools. I hate poorly written documentation like the Evil Tuning Guide that says one thing but implies a whole different ballgame. Is 1GB workable, or do you really need >4GB? There's a 4-fold-plus spread in RAM requirements depending on how you want to argue it.
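For what it's worth, the knob those guides are dancing around is the ARC size cap, set as a boot-time tunable. A minimal sketch for FreeBSD's /boot/loader.conf (on FreeNAS, set this through the GUI tunables rather than editing the file; the value here is illustrative, not a recommendation):

Code:
# /boot/loader.conf - cap how much RAM the ARC may consume
vfs.zfs.arc_max="4G"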

Much of FreeBSD knowledge applies to FreeNAS, but not in the way many expect, and it is definitely detrimental when people look at FreeBSD's system requirements, then FreeNAS's requirements, and conclude that the FreeNAS documentation must be wrong. Similar code, but a totally different design: they share much, but not all. If you don't know the difference, you can quickly find yourself taking bad advice and giving bad ideas that backfire on you. Most of our recommendations come from the experience of seeing what does and doesn't work for various users.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have actually architected partitioned SSDs for dozens of Fortune 100 customers with no complaints and only great joy.

What's your SSD of choice for this, and why, Linda Kateley?

Note to everyone else: please let her answer without lots of help.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Yes, please let Linda answer. Should be fun!
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Yes, please let Linda answer. Should be fun!
Anyone that uses "to architect" as a verb should be allowed to answer.
 

PetrZ

Dabbler
Joined
Feb 23, 2018
Messages
20
Hi, I am joining this old topic to ask for advice as well.
System: 2x 6-core Xeon, 48GB RAM, 8x 6TB drives in RAIDZ2, hardware-encrypted (AES-NI).
Bonded 2x 1GbE for users (NFS/Samba), 2x 10GbE for two VM hypervisors (iSCSI).
Is it worth adding a PCI-E M.2 SSD for L2ARC/ZIL? Some VMs will run SQL DBs.
I was thinking of giving 1-2GB to the ZIL and the rest of a 32GB or 64GB drive to L2ARC.
Some of the FreeNAS RAM may also be used for jails/VMs on the same host.
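For reference, the mechanics of splitting one device between SLOG and L2ARC look roughly like this on FreeBSD ("tank" and "nvd0" are placeholder names; the SLOG size is illustrative, see the sizing math in the reply below):

Code:
# Partition the NVMe device: a small SLOG slice, the rest for L2ARC
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16g -l slog nvd0   # SLOG; size to ~5s of inbound traffic
gpart add -t freebsd-zfs -l l2arc nvd0         # remainder as cache

# Attach both partitions to the pool by GPT label
zpool add tank log gpt/slog
zpool add tank cache gpt/l2arc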
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Holy necro-post, Batman!

You want to run SQL databases in VMs on a RAIDZ2 array? That's going to be horrible. You need an array of striped mirrors at a minimum, or SSDs. If you aren't running SSDs, a SLOG is required, sized for 5 seconds of transfer (20Gbps * 1 byte/8 bits * 5 seconds = 12.5GB). And you don't have enough memory to be considering L2ARC... max out the memory in the box, then consider L2ARC.
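For concreteness, a striped-mirror layout for those eight drives with a dedicated log device looks roughly like this (pool and device names are placeholders):

Code:
# Four mirrored pairs striped together, plus a dedicated SLOG
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  log gpt/slog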
 