
Some insights into SLOG/ZIL with ZFS on FreeNAS

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#61
9.2 has allegedly fixed some issues, but I have no idea what actually changed. One of these days I've got to set up a repository and build environment.
 
Joined
Nov 30, 2013
Messages
26
Thanks
2
#62
9.2 has allegedly fixed some issues, but I have no idea what actually changed.
Sorry for asking again, but does the following make sense to you?
600MB/s transfer -> 3GB txg size -> 24GB system memory -> 12GB SSD for ZIL/SLOG device

600MB/s transfer rate (4 x HDD, each able to write 150MB/s)
↓
3GB txg size (600MB/s x 5 sec, since by default a txg flushes every 5 sec)
↓
24GB system memory (3GB x 8, since by default a txg may use up to 1/8 of system memory)
↓
12GB partitioned SSD (leave the rest of the SSD unallocated) for the ZIL/SLOG device (the maximum useful size of a log device is approximately 1/2 the size of physical memory, because that is the maximum amount of potential in-play data that can be stored)
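The arithmetic in that chain can be sketched as a small helper (a back-of-the-envelope sizing aid, not an official formula; it assumes the default 5-second txg flush interval and the default 1/8-of-RAM txg cap discussed above):

```python
# Back-of-the-envelope SLOG sizing, following the chain above.
# Assumes the default 5-second txg flush interval and the default
# cap of 1/8 of system memory per txg; all numbers are illustrative.

def slog_sizing(transfer_mb_s, txg_interval_s=5):
    txg_gb = transfer_mb_s * txg_interval_s / 1024  # data accumulated per txg
    ram_gb = txg_gb * 8        # a txg may use up to 1/8 of system memory
    slog_gb = ram_gb / 2       # ~1/2 of RAM is the max in-play data
    return txg_gb, ram_gb, slog_gb

# 4 HDDs x 150 MB/s = 600 MB/s, as in the example above
txg, ram, slog = slog_sizing(600)
print(f"txg ~{txg:.1f}GB, RAM ~{ram:.1f}GB, SLOG ~{slog:.1f}GB")
```

Which lands close to the rounded 3GB / 24GB / 12GB figures quoted above.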
 
Joined
Dec 7, 2013
Messages
95
Thanks
5
#63
In the end, which part of the system actually makes the decision if a write is a sync write or not?

Also (noob question alert ^^) Does the L2ARC also have something to do with all this, or is that something completely separate?
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
11,656
Thanks
2,946
#64
In the end, which part of the system actually makes the decision if a write is a sync write or not?
The filesystem consumer typically decides whether to request a write as sync. In the case of ZFS writing filesystem metadata, it acts as both consumer and provider. ZFS also allows the admin to force all writes to be async or sync.

The filesystem consumer on a UNIX system might be a sophisticated database application that requests certain writes as sync in order to guarantee consistency of data, but on a NAS it is typically just passing along requests made via the sharing protocol. So if you have an ESXi NFS client, it is flagging all its writes as sync at the ESXi host, then the NAS NFS server passes that on to the kernel when it makes the write request.
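As a concrete illustration of "the consumer decides": on a UNIX system an application requests synchronous semantics itself, either by opening the file with O_SYNC or by calling fsync() at a commit point. A minimal sketch (file names are made up for the example):

```python
# A filesystem consumer deciding that its writes must be synchronous.
# O_SYNC asks the kernel not to return from write() until the data is
# on stable storage -- conceptually the same request an NFS server
# passes along when an ESXi client flags its writes as sync.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "journal.log")

# Sync path: every write() blocks until the data is committed.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
os.write(fd, b"transaction 1\n")
os.close(fd)

# Async path: write() may return before the data hits disk; the
# consumer can still force a commit point with an explicit fsync().
fd = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd, b"transaction 2\n")
os.fsync(fd)  # explicit sync request at a barrier the app chooses
os.close(fd)
```

Either way, it is the application (or the sharing protocol on its behalf), not ZFS, that marks the write as sync.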

Also (noob question alert ^^) Does the L2ARC also have something to do with all this, or is that something completely separate?
It doesn't have much to do with this. ZFS can cache metadata in the ARC for rapid reading and has some tunables to control that, so one might see a fuzzy sort of metadata acceleration mechanism, but they're really totally separate things.
 
Joined
Apr 25, 2014
Messages
39
Thanks
5
#66
I read the post several times, together with one of my colleagues. We wanted to fully understand the concept of an SLOG, since we really care about our data :)

We ended up at this blog that tried to explain it with some pictures:
https://pthree.org/2013/04/19/zfs-administration-appendix-a-visualizing-the-zfs-intent-log

It confirmed the way we read the thread. I thought it could help other people better understand how an SLOG works and why it is not a ZIL ;)

Thanks for all the posts, @jgreco!
 
Joined
Mar 25, 2012
Messages
19,154
Thanks
1,850
#67
That is a very good blog. It's a bit more complex than the blog makes it out to be, but it's definitely a good starter for a noob that wants to learn.
 

pbucher

FreeNAS Experienced
Joined
Oct 15, 2012
Messages
180
Thanks
21
#69
Oh, and I found this analysis of 'good' SSDs when the power fails:
http://lkcl.net/reports/ssd_analysis.html

Too bad he didn't have time to test the Crucial M500, since it seems to be the best choice at the moment (the 120GB version only costs about €60)
The Crucial M500 is a nice drive, but it's not one made for this type of usage. It's best to get an SSD designed for the data center, not a desktop-class SSD. The Intel S3500 mentioned in the link above is a good drive and works very well as an SLOG; I'm using them in 2 servers at the moment. Another interesting budget choice is the Seagate 600 Pro (using that in 2 other servers); the 100GB model is pretty cheap (make sure it's the Pro version, which is built for the data center), performs better than the S3500, and costs about the same. If you have the budget, the Intel S3700 is the top of the SSD food chain for things like this, but they start to get expensive and in general would just outlast an S3500. Beyond the S3700 you move into expensive PCIe flash cards or SAS/SATA devices built specifically to be SLOG devices (STEC makes some of these).
 

pbucher

FreeNAS Experienced
Joined
Oct 15, 2012
Messages
180
Thanks
21
#71
Wouldn't an Intel DC S3700 be good for a slog? It has an onboard capacitor and the 200 gig version has 3.65 Petabytes of write endurance.....
It would be a great SLOG, and possibly overkill depending on your needs. There is no one-size-fits-all: you need to know your needs and workload and design something that fulfills them. Some workloads have no use for an SLOG; others need some sort of enterprise PCIe flash card (think Fusion-io). Most, though, fall between these two extremes.
 

xhoy

Newbie
Joined
Apr 25, 2014
Messages
39
Thanks
5
#72
Wouldn't an Intel DC S3700 be good for a slog? It has an onboard capacitor and the 200 gig version has 3.65 Petabytes of write endurance.....
It could be, but it depends on your needs. We are still waiting for our Seagate 600 Pro SSD.

We didn't choose the S3700 because of price and write speeds. BUT the M500 is a really bad choice, since its write speeds are VERY low... (we only got 2-3MB/s, versus 80MB/s with an Intel 330 60GB SSD), while on paper at least the speeds should be similar to the S3500/S3700 at 100MB/s...

Turns out the real world is more different than we thought....

@betta21, if I were you I would pick a random, cheap, somewhere-on-the-shelf SSD and start by testing whether it works for you. We use ESXi with NFS and our writes went from 5-6MB/s to 80+MB/s. It just depends on your needs :)
 
Joined
Jul 2, 2014
Messages
1
Thanks
0
#73
Can anyone verify whether it's possible to configure an SLOG with 4 SSD drives in a RAID10 config? I'd like to get both the RAID1 protection and the RAID0 benefit of improved write IOPS.
 

xhoy

Newbie
Joined
Apr 25, 2014
Messages
39
Thanks
5
#74
I don't know if you can add an SLOG device in RAID10... the FreeNAS interface doesn't have that option!

And for the record: the performance of the M500 and the Seagate 600 Pro SSD (100GB) is just as bad...
We now have an Intel DC S3700 (100GB); write speeds are around 100MB/s (but we haven't tested it fully, so maybe the disk could do more :)

SO if you need an SLOG, just buy an S3700!
 
Joined
Mar 25, 2012
Messages
19,154
Thanks
1,850
#75
The SLOG is separate from your zpool's data vdev configuration.

Yeah, it seems that the Intel SSDs are "the" SSD to go with. ;)
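For reference, from the command line ZFS does accept mirrored log devices, and more than one mirror set can be added so log writes stripe across them. A sketch with hypothetical pool and device names (FreeNAS users should normally stay in the GUI; this is just to show the layout is possible):

```shell
# Hypothetical pool "tank" and devices da1..da4; adjust for your system.
# A mirrored SLOG protects in-flight sync writes against one SSD dying.
zpool add tank log mirror da1 da2

# Two mirrored pairs: ZFS stripes log writes across both mirrors,
# giving the RAID10-style layout asked about above.
zpool add tank log mirror da1 da2 mirror da3 da4

# The log vdevs show up under a "logs" section:
zpool status tank
```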
 

Rilo Ravestein

FreeNAS Experienced
Joined
Mar 6, 2014
Messages
685
Thanks
59
#76

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,099
Thanks
3,846
#77
Joined
May 12, 2015
Messages
5
Thanks
0
#78
Talking about the ZeusRAM used with ZFS: my VMware cluster (5 dual-socket blades) is connected to two independent but identical datastores. The settings on both storages are the same and the hardware is the same; unfortunately, on one the ZeusRAM died, while on the other it is still working. When I storage-vMotion a VM from one datastore to the other and time the jobs (processing of scientific data), the difference is more than noticeable. The small (8GB) ZeusRAM makes quite some difference... but the performance comes at a price; they are quite expensive.
 
Joined
Jun 23, 2015
Messages
26
Thanks
0
#80
It seems NFS and ZFS don't work together as well as CIFS and ZFS do. Running VMware ESXi on ZFS without breaking the bank is not possible.

I just need a way to do backups for my Linux box and get decent speed (not trying to saturate the network; maybe 40-100MB/s).
sync=disabled seems to get good results for many people, but it is risky. Anything over 10-15MB/s would work just fine for me.

I guess I just want to know if NFS is even worth it, or if NFS is so bad that an rsync/ssh-based backup might actually be better.
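For what it's worth, the sync tradeoff mentioned above is a per-dataset ZFS property, so it can be confined to a backup dataset instead of the whole pool. A sketch with hypothetical pool/dataset names (disabling sync means a crash or power loss can drop the last few seconds of acknowledged writes):

```shell
# Hypothetical names: pool "tank", dataset "backups".
# sync=disabled turns NFS sync writes into async writes: fast,
# but a crash can lose up to ~5s of already-acknowledged data.
zfs set sync=disabled tank/backups

# Revert to honoring the client's sync requests (the default):
zfs set sync=standard tank/backups

# Check the current setting:
zfs get sync tank/backups
```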
 