bad performance with mirrored SSDs as SLOG

Status
Not open for further replies.

Thomymaster

Contributor
Joined
Apr 26, 2013
Messages
142
Yes, I know that, but:

"25MB/s is phenomenally good if it's random 4K write at low queue depth, but abysmal if it's sequential read."

I just wanted to know if the writes generated to the SLOG are 4K random or sequential writes...

I know about having plenty of RAM in my FreeNAS, but I need to use a SLOG for data security in case the unit crashes or the power goes down, for example.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
When you run benchmarks, you're running them against the pool as a whole, not specifically the SLOG. Correct me if I'm wrong, but in sync mode, because ZFS wants the confirmation that each write went to stable storage, you're basically running a queue depth of one whenever you write to a ZIL, be it RAM, SSD, or magnetic. Latency becomes a major factor in how quickly the data can hit the disk since you're waiting for that "confirmation" every time. SSD can confirm a lot faster than magnetic disk, but it still takes time. That's why stuff like the ZeusRAM exists - because while magnetic storage takes milliseconds and SSD takes microseconds, RAM takes nanoseconds.
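To make the queue-depth-of-one point concrete, here is a minimal sketch (not ZFS's actual ZIL code path; the function name and parameters are illustrative) that measures what sync writes at QD1 look like: each 4K write waits for an fsync "confirmation" before the next one starts, so per-write latency dominates throughput.

```python
import os
import tempfile
import time

def sync_write_latency(path, block=4096, count=100):
    """Measure throughput of 4K sync writes at queue depth 1 (MB/s)."""
    buf = os.urandom(block)  # incompressible payload, like benchmark data
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, buf)
            os.fsync(fd)  # wait for "stable storage" confirmation each time
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return (count * block) / elapsed / 1e6  # MB/s at QD1

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
print(f"QD1 sync 4K write throughput: {sync_write_latency(path):.1f} MB/s")
os.unlink(path)
```

Run against a file on the device in question, the same write pattern lands in the tens of MB/s on most SATA SSDs, which is why SLOG-oriented devices compete on latency rather than on rated sequential bandwidth.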

About the "25MB/s" number - which benchmark is generating this, and what parameters are you using to run it?
 

Thomymaster

Contributor
Joined
Apr 26, 2013
Messages
142
I ran CrystalDiskMark in one of my virtual machines (which reside on the ZFS pool) and then looked at gstat. The result was that on every write test, the SLOG was written to at a maximum of 25 MB/s, and this was visible in the CrystalDiskMark results.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I ran CrystalDiskMark in one of my virtual machines (which reside on the ZFS pool) and then looked at gstat. The result was that on every write test, the SLOG was written to at a maximum of 25 MB/s, and this was visible in the CrystalDiskMark results.

This is probably just the limit of your SLOG device. You're fighting multiple factors here:

- Smaller SSDs are slower than bigger ones. Yours is a 64 GB drive, the slowest mainstream size.
- Benchmark programs tend to generate random (incompressible) data. SandForce-based drives (like your ADATA) perform much worse than their rated speed on incompressible data.
- Performance stats for SSDs are generally listed as "sequential" and "random IOPS at queue depth 32", and neither of these represents the load they get from acting as an SLOG.
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
To provide some comparison numbers, here are CDM results from inside a Windows VM mounted via NFS.

Intel 520/120GB (MB/s, read / write):
Seq      327.2   106.4
512K     310.0   101.5
4K       9.548   9.098
4K QD32  141.6   9.126

That's why they said 25 MB/s is good for 4K but bad for sequential writes. ;)
 