L2ARC not detected, skipping section
So as you posted, you accidentally created a second pool (L2ARC-01) rather than adding it as a vdev of type cache to your main pool (WolfTruePool). You'll want to export and destroy the second pool (L2ARC-01), then extend/Add Vdevs on the main pool (WolfTruePool) and attach the device as type cache; a rough CLI equivalent is sketched below.
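If you'd rather do it from the shell than the GUI, the rough equivalent looks like this (a sketch only - /dev/da5 is a placeholder for whatever device name your L2ARC SSD actually shows up as, and on TrueNAS the GUI is generally the preferred way to do pool surgery):
Code:
zpool destroy L2ARC-01                  # removes the accidental pool (make sure nothing lives on it first)
zpool add WolfTruePool cache /dev/da5   # re-attaches the SSD to the main pool as an L2ARC (cache) vdev
zpool status WolfTruePool               # the SSD should now appear under a "cache" section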
As for your zfs set secondarycache command - it's correct; it just doesn't produce any output if successful. It's also case-sensitive as shown below, so you'll need to match whatever your pool and dataset/volume names are.
Code:
root@badger:~ # zfs get secondarycache PerfVolume/testzvol
NAME                 PROPERTY        VALUE   SOURCE
PerfVolume/testzvol  secondarycache  all     default
root@badger:~ # zfs set secondarycache=metadata perfvolume/testzvol
cannot open 'perfvolume/testzvol': dataset does not exist
root@badger:~ # zfs set secondarycache=metadata PerfVolume/testzvol
root@badger:~ # zfs get secondarycache PerfVolume/testzvol
NAME                 PROPERTY        VALUE     SOURCE
PerfVolume/testzvol  secondarycache  metadata  local
The last zfs get verifies that your command stuck.
From your arc_summary output I can see a couple things:
You're almost exclusively hitting the metadata:
Code:
Cache hits by data type:
        Demand data:                  < 0.1 %     2.0M
        Demand prefetch data:         < 0.1 %    10.5k
        Demand metadata:               99.9 %     8.3G
        Demand prefetch metadata:       0.1 %     5.2M
And it's not being pinned there correctly:
Code:
ARC size (current):                   92.3 %   53.1 GiB
Metadata cache size (hard limit):     75.0 %   43.1 GiB
Metadata cache size (current):        10.7 %    4.6 GiB
This is something that I've seen before, and it has to do with ZFS preferring to evict filesystem metadata over data.
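If you want to see what ZFS is currently being told to keep, you can query the relevant sysctls from the shell (names as on TrueNAS 12+/OpenZFS 2.x - older releases spell them vfs.zfs.arc_meta_limit / vfs.zfs.arc_meta_min instead):
Code:
sysctl vfs.zfs.arc.meta_limit   # hard cap on metadata in the ARC (the 75% line above)
sysctl vfs.zfs.arc.meta_min     # floor below which metadata won't be evicted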
From the value we cooked up earlier from your zpool status -D output - 15441616288 bytes, or roughly 14.4GiB - I'd expect to see around that much metadata sitting in RAM. So let's force the issue with another tunable:
Name:
vfs.zfs.arc.meta_min
Type:
sysctl
Value:
17179869184
That tells ZFS not to evict metadata unless there's more than 16GiB of it in your RAM. Note that this means up to roughly 12GiB less data cached in RAM, but I think your priority here is the write speeds over the reads.
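Adding it under System > Tunables makes it persist across reboots; to apply it right away you can also set it from the shell (a sketch - 17179869184 is simply 16 * 1024^3):
Code:
sysctl vfs.zfs.arc.meta_min=17179869184   # 16GiB floor for metadata in the ARC
sysctl vfs.zfs.arc.meta_min               # read it back to confirm it took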
Important note: This is only going to accelerate the reads of the deduplication table. If you perform an action that requires a whole lot of metadata/dedup table writes, such as a massive delete, or destroying snapshots/datasets, then you'll still have to pay that random I/O cost. Ultimately, it's up to you to determine if the perils of deduplication are worth the ~2x space gain.
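If you ever want to sanity-check whether that trade-off is still paying off, the pool's actual dedup ratio is a one-liner:
Code:
zpool get dedupratio WolfTruePool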