Metadata VDev/Special Device Pool Mirrors? Triple Mirrors?

JerRatt

Dabbler
Joined
May 17, 2022
Messages
14
Using the latest TrueNAS Core, creating a Metadata VDev doesn't seem to offer an option to repeat/duplicate the layout with the remaining disks to set up additional mirrors. For reference, my primary Data VDev is 14 x 3.2TB Kioxia NVMe SSDs in 7 mirrors (for extreme transfer and I/O speeds), and I have 6 x 800GB Kioxias that I wanted to set up as a Metadata VDev for that pool, adding them as either three 2-way mirrors or two 3-way mirrors to triple up on the redundancy for the Metadata VDev.

Is this possible, or just limited by the UI?
 
Joined
Oct 22, 2019
Messages
3,641
Before any of that, may I ask "why?"

If your data vdev(s) are comprised of fast NVMe drives, what benefit will you realistically achieve from using similar-speed drives for special metadata vdev(s)?

(This also takes into account the additional risk of introducing special vdevs.)
 

JerRatt

Dabbler
Joined
May 17, 2022
Messages
14
Before any of that, may I ask "why?"

If your data vdev(s) are comprised of fast NVMe drives, what benefit will you realistically achieve from using similar-speed drives for special metadata vdev(s)?

(This also takes into account the additional risk of introducing special vdevs.)

I was thinking that splitting up the roles, with the bulk data on the data vdev while the special vdev works in tandem on the metadata tasks (and some small-file offload), might be of some help for overall performance? This server will be dishing out file sharing over 100Gb connections: quite a few million files, thousands of folders and subfolders, about 12TB of files in total. We want it fast not just for throughput but for listing folder contents as well (some folders have thousands of files in their root alone).

I was just able to get this added through the shell using this command: zpool add DATA special mirror nvd2 nvd3 mirror nvd4 nvd5 mirror nvd6 nvd7. Trying to add it as two triple mirrors instead (zpool add DATA special mirror nvd2 nvd3 nvd4 mirror nvd5 nvd6 nvd7) threw an error about needing to match the 2-way mirror configuration of the pool's data vdevs.
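For anyone hitting the same wall, a sketch with my device names: per the zpool-add man page, -f forces the add even with a conflicting replication level, so the 3-way layout should be possible from the shell (use it deliberately, since an accidental add to the wrong place is hard to undo).

# Accepted as-is: three 2-way special mirrors match the pool's 2-way data mirrors
zpool add DATA special mirror nvd2 nvd3 mirror nvd4 nvd5 mirror nvd6 nvd7

# Two 3-way mirrors trip the mismatched-replication-level check; -f overrides it
zpool add -f DATA special mirror nvd2 nvd3 nvd4 mirror nvd5 nvd6 nvd7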

I also have Intel P5800s mirrored for a SLOG, a hot spare, and 512GB of RAM. I haven't set up any sort of L2ARC; I was trying to just throw system RAM at it instead. Would the drives from the metadata setup be better used for an L2ARC, or maybe a dedup vdev (I wasn't really planning on needing deduplication)?
 
Joined
Oct 22, 2019
Messages
3,641
I was thinking that splitting up the roles, with the bulk data on the data vdev while the special vdev works in tandem on the metadata tasks (and some small-file offload), might be of some help for overall performance?
Like parallel I/O? Maybe in theory that greatly boosts performance, but I would have to defer to someone who has put it into practice and found that it's actually worth it. (As opposed to simply expanding the pool's capacity with those extra devices, rather than narrowing their scope to metadata/special vdevs and taking on the added complexity and risk of more vdevs just because you have extra devices at your disposal.)

Increasing RAM (especially for Core) will de facto boost performance, regardless of whether it's userdata or metadata. Do you (or your users) notice any slowdowns?
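If you want to check whether ARC is already absorbing the metadata workload before adding anything, something along these lines should work from the Core shell (arc_summary ships with OpenZFS, and the sysctl names are FreeBSD's; exact names may vary by version):

# Rough ARC health check on TrueNAS Core (FreeBSD)
arc_summary | head -n 40    # ARC size, hit ratio, MFU/MRU breakdown
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses    # raw counters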

Regarding metadata, see this thread for a caveat and a solution, plus check this out for what's coming with OpenZFS 2.2.



Would the drives for the metadata setup be better used for a L2ARC or maybe a Dedup vdev (wasn't really planning on needing deduplication).
If you have to ask yourself whether or not using dedup will suit you, then it means you shouldn't use dedup. :wink:
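If you're curious anyway, zdb can simulate dedup against the existing data without enabling anything on the pool (it's a read-only analysis, but it can chew RAM and time on a large pool):

# Build a simulated dedup table for pool DATA and print the estimated dedup ratio
zdb -S DATA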



Keep in mind that unlike a SLOG or L2ARC, a special vdev is a crucial component of a pool. You can't simply "try it out" and then later decide to casually remove it.
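For completeness: on an all-mirror pool like this one, zpool remove can evacuate a special vdev, but it's not casual; it permanently leaves an indirect mapping in the pool, and it fails outright if any top-level vdev is raidz or the ashift values differ. The vdev name below is hypothetical; check zpool status for the real one.

# Evacuating a special vdev (only possible when all top-level vdevs are mirrors/disks)
zpool status DATA            # find the special vdev's name, e.g. mirror-7
zpool remove DATA mirror-7   # hypothetical name; triggers evacuation and remapping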



Maybe someone can chime in on how realistically beneficial it is to dedicate a set of NVMe drives to a special vdev when the pool itself is comprised solely of NVMe drives.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
For reference, my primary Data VDev is a 14 x 3.2TB Kioxia NVMe SSD in 7 mirrors (for extreme transfer and I/O speeds), and I have 6 x 800GB Kioxia's that I wanted to setup as a Metadata for the Data VDev and was trying to either add them into a Metadata VDev as either a pool of 3 mirrors or even 2 mirrors to triple up on the redundancy for the Metadata VDev.

Is this possible, or just limited by the UI?
I have yet to see multiple metadata VDEVs in a single pool on this forum, and as far as I understand, the metadata VDEV is, as the name implies, a single VDEV; don't take my words as absolute, though, maybe someone with more hands-on experience can dismiss my impression.
It's also strongly suggested to match the redundancy between a pool's vdevs in order not to create a point of failure.

For such a beast and usage I would suggest a metadata-only L2ARC instead; if necessary it can even be made persistent between reboots.
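Roughly, the setup I mean, assuming one of the spare 800GB drives shows up as nvd8 (hypothetical name), and that the sysctl below is the FreeBSD/Core spelling of OpenZFS's l2arc_rebuild_enabled tunable:

# Add an L2ARC device and restrict it to caching metadata only
zpool add DATA cache nvd8
zfs set secondarycache=metadata DATA

# Keep L2ARC contents across reboots (already the default in recent OpenZFS)
sysctl vfs.zfs.l2arc.rebuild_enabled=1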
All in all, it always comes down to how much RAM you plan to have in your system.
 
Last edited:

piersdd

Cadet
Joined
Nov 20, 2021
Messages
8
I would also ensure extreme CPU performance; in my experience building fast filers, the bottleneck is typically Samba and/or NFS.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Yup, especially with such an NVMe-based system: the CPU will likely be your bottleneck.

I guess he might be going dual-CPU, given the number of PCIe lanes required.
 
Last edited: