Sorry for asking again, but does the following make sense to you?
600MB/s transfer -> 3GB txg size -> 24GB system memory -> 12GB SSD for ZIL/SLOG device
600MB/s transfer rate (4 x HDDs, each able to write 150MB/s)
3GB txg size (600MB/s x 5 sec, since by default a txg flushes its data every 5 seconds)
24GB system memory (3GB x 8, since by default a txg may use up to 1/8 of the system's memory)
12GB partitioned SSD, leaving the rest of the SSD unallocated, for the ZIL/SLOG device (the maximum useful size of a log device is approximately 1/2 the size of physical memory, because that is the maximum amount of potential in-play data that can be stored)
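In case it helps, here's a quick Python sketch of that chain. The 5-second flush interval and the 1/8-of-RAM cap are the defaults the sizing above assumes; both are tunables and vary between ZFS versions.

```python
# Sanity-check of the SLOG sizing chain above. Assumes the default
# txg flush interval (5 s) and the default cap of 1/8 of system
# memory per txg; both are tunables.

HDD_WRITE_MBPS = 150       # per-disk write speed
NUM_HDDS = 4

transfer_mbps = HDD_WRITE_MBPS * NUM_HDDS   # 600 MB/s
txg_size_gb = transfer_mbps * 5 / 1000      # ~3 GB buffered per 5 s txg
system_ram_gb = txg_size_gb * 8             # txg capped at 1/8 of RAM
slog_size_gb = system_ram_gb / 2            # max in-play data ~ 1/2 of RAM

print(f"transfer rate : {transfer_mbps} MB/s")   # 600 MB/s
print(f"txg size      : {txg_size_gb:g} GB")     # 3 GB
print(f"system memory : {system_ram_gb:g} GB")   # 24 GB
print(f"SLOG partition: {slog_size_gb:g} GB")    # 12 GB
```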
The filesystem consumer typically decides whether to request a write as sync. In the case of ZFS writing filesystem metadata, it acts as both consumer and provider. ZFS also allows the admin to force all writes to be async or sync.
The filesystem consumer on a UNIX system might be a sophisticated database application that requests certain writes as sync in order to guarantee consistency of data, but on a NAS it is typically just passing along requests made via the sharing protocol. So if you have an ESXi NFS client, it flags all its writes as sync on the ESXi host, and the NAS NFS server then passes that flag on to the kernel when it makes the write request.
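To make "the consumer decides" concrete, here's a minimal Python sketch of that choice from an application's point of view (the file names are just examples). fsync()/O_SYNC is what turns an ordinary buffered write into the kind of stable write an NFS client such as ESXi requests:

```python
import os

# An ordinary buffered (async) write: the kernel may cache it and
# flush it later, so it is fast but can be lost on a power failure.
with open("scratch.dat", "wb") as f:     # example file name
    f.write(b"some data")

# A sync write: the consumer explicitly asks that the data be on
# stable storage before the call returns. On a ZFS dataset this is
# what goes through the ZIL (and hence the SLOG, if one exists).
fd = os.open("journal.dat", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
try:
    os.write(fd, b"must survive a crash")
    # os.fsync(fd) is the equivalent request on a normally opened file.
finally:
    os.close(fd)
```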
Also (noob question alert ^^) Does the L2ARC also have something to do with all this, or is that something completely separate?
It doesn't have much to do with this. ZFS can cache metadata in the ARC for rapid reading and has some tunables to control that, so one might see a fuzzy sort of metadata acceleration mechanism, but they're really totally separate things.
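For what it's worth, the tunables mentioned are the per-dataset primarycache and secondarycache properties (values: all, none, metadata), which control what the ARC and the L2ARC respectively are allowed to cache. A quick sketch, with a made-up dataset name:

```python
import subprocess

# primarycache controls the ARC, secondarycache the L2ARC;
# each can be "all", "none", or "metadata".
DATASET = "tank/vmstore"   # made-up dataset name

# Show the current settings:
subprocess.run(["zfs", "get", "primarycache,secondarycache", DATASET], check=True)

# Example: let the L2ARC hold only metadata for this dataset:
subprocess.run(["zfs", "set", "secondarycache=metadata", DATASET], check=True)
```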
The Crucial M500 is a nice drive, but it's not one made for this type of usage. It's best to get an SSD designed for the data center, not a desktop-class SSD. The Intel S3500 mentioned in the above link is a good drive and works very well as an SLOG; I'm using them in 2 servers at the moment. Another interesting budget choice is the Seagate 600 Pro (using that in 2 other servers); the 100GB is pretty cheap (make sure it's the Pro version, as those are built for the data center), and it performs better than the S3500 while costing about the same. If you have the budget, the Intel S3700 is the top of the SSD food chain for things like this, but they are starting to get expensive and in general would just outlast an S3500. After the S3700 you move into expensive PCIe flash cards or SAS/SATA devices built specifically to be SLOG devices (STEC makes some of these).
It would be a great SLOG, and possibly overkill depending on your needs. There is no one-size-fits-all; you need to know your needs and workload and design something that fulfills them. Some workloads have no use for an SLOG, others need some sort of enterprise PCIe flash card (think Fusion-io); most, though, fall between these two extremes.
It could be, but it depends on your needs. We are still waiting for our Samsung 600 Pro SSD.
We didn't choose the S3700 because of its price and write speeds. BUT the M500 is a really bad choice, since its write speeds are VERY low... (we only got 2-3MB/s, versus 80MB/s with an Intel 330 60GB SSD), while on paper at least the speeds should be similar to the S3500/S3700 at ~100MB/s...
Turns out the real world is more different than we thought...
@betta21, if I were you I would pick a random, cheap, off-the-shelf SSD and start by trying whether it works for you. We use ESX with NFS and our writes went from 5-6MB/s to 80+MB/s. It just depends on your needs.
I don't know if you can add an SLOG device in RAID10... the FreeNAS interface doesn't have that option!
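From the command line, ZFS itself does support redundant log devices: you can add a mirrored log vdev, and multiple log mirrors stripe across each other, which is effectively RAID10. A sketch with made-up pool and device names, in case the GUI won't do it:

```python
import subprocess

# Attach a mirrored log vdev to an existing pool. Pool and device
# names are made up; substitute your own.
subprocess.run(
    ["zpool", "add", "tank", "log", "mirror", "/dev/ada1p1", "/dev/ada2p1"],
    check=True,
)

# Adding a second "log mirror ..." group stripes sync writes across
# both mirrors, which is effectively the RAID10 layout asked about.
```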
And for the record: the performance of the M500 and the Samsung 600 Pro SSD (100GB) is just as bad...
We now have an Intel DC S3700 (100GB); write speeds are around 100MB/s (but we haven't tested that fully, so maybe the disk could do more).
Talking about the ZeusRAM used with ZFS: my VMware cluster (5 dual-socket blades) is connected to two independent but identical datastores; the settings on both storages are the same and the hardware is the same. Unfortunately, on one the ZeusRAM died, while on the other it is still working. When I Storage vMotion a VM from one datastore to the other and time the jobs (processing of scientific data), the difference is more than noticeable. The small (8GB) ZeusRAM makes quite some difference... but the performance comes at a price; they are quite expensive.
It seems NFS and ZFS don't work together as well as CIFS and ZFS do. Running VMware ESXi on ZFS without breaking the bank is not possible.
I just need a way to do backups for my Linux box and get decent speed (not trying to saturate the network; maybe 40-100MB/s).
sync=disabled seems to get good results for many people but is risky (see the sketch after this post). Anything over 10-15MB/s would work just fine for me.
I guess I just want to know whether NFS is even worth it, or whether NFS is so bad that an rsync/ssh-based backup might actually be better.
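If you do end up testing sync=disabled on a backup-only dataset, it's a one-line property change (the dataset name below is made up); just remember the risk is losing the last few seconds of acknowledged writes on a crash or power loss:

```python
import subprocess

# Per-dataset sync policy: "standard" (default), "always", or "disabled".
# With "disabled", sync writes are acknowledged immediately, so up to a
# few seconds of acknowledged data can vanish on a crash or power loss.
DATASET = "tank/backups"   # made-up dataset name

subprocess.run(["zfs", "set", "sync=disabled", DATASET], check=True)

# Revert when you want normal sync semantics back:
# subprocess.run(["zfs", "set", "sync=standard", DATASET], check=True)
```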