zpool Encrypted Pool auto Mount / Import Issues (due to missing SLOG)


HoneyBadger (Administrator, Moderator, iXsystems)
I agree with your assertion that the delta can never be chased down to 0, but I would expect the impact of such a fast SLOG to have made the variance much smaller than it was.

Optane is certainly fast, but it's not RAM-fast. Even with the fastest solid-state media out there, there's still a bit of latency added by having to do the write operations at all, as well as the hop through the NVMe driver; sync=disabled basically just gets to exit out of the whole function.
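
For reference, the sync behaviour being compared here is controlled per dataset with the standard ZFS property (the tank/mydata dataset name below is just illustrative):

    # check the current setting
    zfs get sync tank/mydata

    # default: honour sync requests via the ZIL (and SLOG, if present)
    zfs set sync=standard tank/mydata

    # acknowledge sync writes immediately without logging them; fine for
    # benchmarking comparisons, but risks losing the last few seconds of
    # "committed" data on power loss
    zfs set sync=disabled tank/mydata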

  • I would 100% buy that answer, and in fact it is the same one I would offer (less articulately); however,
  • that second write occurs every 5 seconds, when the ZIL is flushed and the blocks are written to the pool. So unless you run the dd write and then, within 5 seconds and prior to the final on-disk ZIL flush, hit the pool with the dd read (sketched below), I would not think that the disks working harder during the write cycle (on-disk ZIL vs. SLOG) would have any impact on the beginning of the read cycle or the subsequent speeds.
  • Thus my suggestion that it is likely a methodology error. But I've seen this every time I've tested, and I find it curious. There was no other disk activity other than the benchmarking.

Bit of a flawed assumption there: ZFS doesn't only write to the pool every 5 seconds. Much like recordsize, the "5 second" default of vfs.zfs.txg.timeout is a maximum, and a transaction group can be pushed to disk (much) earlier if there's sufficient dirty data (guess what, that's tunable too, as vfs.zfs.dirty_data_sync), so your pool is a lot more active than you probably think.
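
On FreeBSD/FreeNAS those tunables can be read (and, carefully, changed) from the shell with sysctl; exact names vary a bit between ZFS versions, but as referenced above:

    # maximum seconds a transaction group stays open before being synced
    sysctl vfs.zfs.txg.timeout

    # amount of dirty data (bytes) that forces an early txg sync
    sysctl vfs.zfs.dirty_data_sync

    # example: change the txg timeout at runtime (does not persist across reboot)
    sysctl vfs.zfs.txg.timeout=5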

  • [Not to correct you, but in case someone references this later:] I believe the last command (while implied) is technically sysctl kstat.zfs.misc.arcstats.size.
No harm done or implied. Technically you don't need the -a, as that means "pull all values", but I'm usually interested in several of them, so I'll just do a -a and pipe it through grep.
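
Concretely, either of these will get you the number (plus whatever else you grep for):

    # query the single value directly
    sysctl kstat.zfs.misc.arcstats.size

    # or dump everything and filter for the ARC counters of interest
    sysctl -a | grep arcstats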

You can also use the arcstat.py script to get this and lots of other ARC-related detail.
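
For example, running it with an interval gives a rolling view of ARC size and hit rates (available fields and flags vary a little between versions):

    # one summary line per second, ten samples
    arcstat.py 1 10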

(And hello, Search Engine User of the Future. Hopefully this thread has been helpful!)
 