mattlach
Patron
- Joined
- Oct 14, 2012
- Messages
- 280
To ZIL or not to ZIL
Hey all! I know this topic keeps coming up, but everyone's usage scenario is a little bit different, so I'd appreciate your comments on my implementation.
Q1
I'm currently running my FreeNAS install on an ESXi server (see sig for details). Using a large dd test (to avoid distorted results from caching), streaming from /dev/zero to disk and back from disk to /dev/null, the 8-disk volume reads at about 520MB/s and writes at about 485MB/s. It currently consists of 4x4TB WD Reds and 4x3TB WD Greens, which I am swapping out one by one as I can afford new Reds.
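For reference, the dd test I'm describing looks roughly like this (path and sizes here are illustrative and much smaller than what I actually ran; point TESTFILE at a file on the volume and use a size well beyond RAM to defeat the ARC):

```shell
# Illustrative sequential throughput test. TESTFILE should live on the pool;
# /tmp/ddtest.bin is just a placeholder so this runs anywhere.
TESTFILE=${TESTFILE:-/tmp/ddtest.bin}

# Write test: stream zeros to disk. Note zeros are highly compressible,
# so with compression enabled this skews results upward.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64

# Read test: stream the file back to /dev/null.
dd if="$TESTFILE" of=/dev/null bs=1M
```

dd reports elapsed time and bytes/sec on stderr when it finishes; that's where the ~520/485MB/s numbers come from.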
I don't work with defined data sets very often, and I don't run VM images or intense databases from it. Instead my usage will involve accessing random files across the volume, so I don't think an L2ARC is the way to go.
Activity on the volume will look something like this:
- Home file server: 3-4 computers, copying files, saving downloads, playing MP3s, playing movies, etc.
- MythTV backend recording as many as 6 HD streams (~18Mbit/s each) at the same time and playing them back to clients
- Symform P2P cloud backup. Initially I plan on contributing ~5TiB to pay for my 2.4TiB of backups
So, with all of these going on at the same time there will be a decent amount of simultaneous load.
While my benchmarks say I can write 485MB/s, that benchmark isn't necessarily very useful, for a number of reasons: writing all 0's is highly compressible, which skews the numbers upward, and with multiple files being written at the same time, seek times will drastically reduce throughput from its sequential max.
What worries me the most are the MythTV recordings, as they are recorded off the cable in real time and can't sustain as much of a delay as, say, a backup. Video playback also needs to keep up.
So this type of workload looks like it would make good use of an SSD ZIL, allowing the volume to group writes for better speeds when the volume is less busy. The question is, is my workload heavy enough to warrant a ZIL, or would I be wasting my money? What do you guys think?
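One way I could measure this rather than guess: a separate log device only accelerates synchronous writes, so watching ZIL activity under real load should show whether there's anything for an SSD to absorb. FreeNAS ships the zilstat script for this (run as root on the FreeNAS box; this is a sketch, not something I've run yet):

```shell
# Print ZIL activity once per second; sustained nonzero ops/bytes
# mean the workload is issuing sync writes that a SLOG could absorb.
zilstat 1
```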
Q2
As a follow-up, if I do go with a ZIL, I understand it has an impact on RAM use. Once I am done swapping out my drives for Reds, my system should have eight 4TB drives, resulting in a total usable space of 24TB, which is ~21.8TiB. My server maxes out at 32GB RAM (at least until 16GB modules become available) and I have a few other VMs on it, leaving the max I can afford to give FreeNAS at 25GB, so I am already getting close to the recommended 1GB of RAM per 1TiB of storage. How much should I expect my RAM needs to increase by adding a ZIL?
Q3
How large an SSD, and what type, should I use? Old guides and forum posts all seem to recommend SLC drives, but SSDs have changed a lot in the last few years. MLC write endurance has improved a lot, and these days even enterprise users often run MLC drives. My understanding is that, more or less, current MLC is the new SLC, and current TLC is the new MLC.
Based on this, I was leaning toward getting something like a 128GB Samsung 840 Pro for my ZIL. How does this sound?
Q4
Most guides recommend mirroring the ZIL. I understand this used to be more of an issue than it is today: in the past, ZIL failure could result in data loss, whereas it no longer does. These days the motivation seems to be avoiding performance degradation if a drive in the ZIL fails. If I can live with an hour of degraded performance while I run down to MicroCenter and buy another SSD, is there really any reason to mirror it?
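As I understand it, log devices can be added and removed with ordinary zpool commands, so starting unmirrored and converting to a mirror later should be possible (pool and device names below are illustrative):

```shell
# Add a single SSD as a separate log device to pool "tank".
zpool add tank log ada1

# Or add a mirrored pair instead:
#   zpool add tank log mirror ada1 ada2

# A log device can also be removed from the pool again:
#   zpool remove tank ada1
```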
Anyway, I apologize for my long-winded post on what must be a tired subject at this point, but it would help to clarify all this with the latest information out there rather than rely on outdated info.
Thanks,
Matt