Workstation with 5x HDD and 4x PCIe NVMe cards? Best storage config?

Proteus7

Cadet
Joined
Jul 4, 2023
Messages
2
Coming from the Windows world (S2D, Azure Stack), I have a ThinkStation P620 (Threadripper Pro, so plenty of CPU) with 5x 8TB SATA HDDs, up to 4x Intel 3.2TB PCIe NVMe cards, and an A4000 GPU.

Windows, vSAN, etc. all do storage tiering (hot I/O blocks shifted to NVMe), but I've been having a hard time figuring out how I'd implement this in TrueNAS. I'm trying to figure out the best overall config that will still give me reliability. Parity is fine, but if I had the option for separate parity and mirrored pools (for even better performance), that would be great. The general use case will be Plex, an SMB3 file server, potentially iSCSI, Docker/K8s, and of course backing up the family laptops. Essentially everything the old Windows Home Server did, and a bit more.

Thanks!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
ZFS does not do some of the storage tiering that other OSes or file systems do. ZFS will do RAM caching of reads (aka ARC), NVMe/SSD caching of reads (aka L2ARC), and RAM caching of asynchronous writes.

But ZFS does not shift hot blocks to separate devices, like NVMe, for writes.
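If you want to see how well that caching is working on a live system, OpenZFS ships a couple of read-only reporting tools you can run from a shell. This is just an illustrative sketch; "tank" is a placeholder pool name, not anything from your setup:

    # Summarise ARC / L2ARC sizes and hit rates:
    arc_summary

    # Per-vDev I/O statistics for a pool, refreshed every 5 seconds:
    zpool iostat -v tank 5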

ZFS is really different from anything that came before it. Though there are some later file systems with some of the features of ZFS (HAMMER1/2, BTRFS), they are basically available only on a single OS.


It is advisable to read up on ZFS & TrueNAS to make sure they meet your needs, and to figure out how you can set up something that is suitable for both your use case and long-term stability.

I don't have a list of recommended reading, but you can start by glancing at the documentation and the Resources here in the forum (the top of every forum page has a "Resources" link).
 

Proteus7

Cadet
Joined
Jul 4, 2023
Messages
2
So I have a data vDev with the 5x 8TB drives in RAID-Z1. Now the question is what to do with the 4x PCIe 3.2TB NVMe drives. I see that the Metadata, Log, and Cache roles all seem to require separate vDevs. Is there any way to put these on the same set of drives? Does it even make sense to do so?
Basically I'm trying to figure out the optimal way to leverage the 12TB+ of raw NVMe storage I have and use it to accelerate I/O across the board.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
A Cache or Log vDev is something you can remove later if it turns out not to be useful. And the Log (aka SLOG) is only useful for synchronous writes, like those used with iSCSI or NFS. Plus, there are specific hardware suggestions for Cache or Log devices, for both size and endurance.

Adding a Metadata special vDev is somewhat of a one-way street. Once you add one, it cannot be removed from the pool. (I think mirrored pools might be different, but yours is a RAID-Z1, so that does not apply.) A Metadata special vDev should have the same redundancy level as a data vDev, because loss of a Metadata vDev means loss of your entire pool. So, at least 2 NVMe devices in your case.

You can also use some or all of the NVMe devices for another ZFS pool, either as a fast storage dumping ground or for VMs / apps.
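For illustration only, here is roughly what those options look like at the command line. The pool and device names are placeholders, and on TrueNAS you would normally make these changes through the web UI rather than the shell:

    # Cache (L2ARC) and Log (SLOG) vDevs can be removed again later if
    # they turn out not to help:
    zpool add tank cache /dev/nvme0n1
    zpool remove tank /dev/nvme0n1

    # A Metadata special vDev should be redundant, e.g. a mirror of two
    # NVMe devices; on a RAID-Z pool it cannot be removed once added.
    # (zpool may warn about a mismatched replication level and want -f.)
    zpool add tank special mirror /dev/nvme1n1 /dev/nvme2n1

    # Or keep the NVMe devices out of the HDD pool entirely and build a
    # separate fast pool for VMs / apps:
    zpool create fast mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1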


Now, to your question of using Metadata, Log, and Cache on the same physical device: no, TrueNAS does not support that. In theory you can do it manually, but unless you really know what you are doing, you could have data loss in your future.
 

DigitalMinimalist

Contributor
Joined
Jul 24, 2022
Messages
162
Adding a Metadata special vDev is somewhat of a one-way street. Once you add one, it cannot be removed from the pool. (I think mirrored pools might be different, but yours is a RAID-Z1, so that does not apply.) A Metadata special vDev should have the same redundancy level as a data vDev, because loss of a Metadata vDev means loss of your entire pool. So, at least 2 NVMe devices in your case.

Is there a way to increase the file size threshold for the special vDev? 3.2TB (mirrored) would only make sense if it can be filled to around 50%.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If I understand your question, it's covered in the zfsprops manual page. This applies at the dataset level, so you have to choose which ZFS datasets in a pool with a Metadata special vDev should store their small files on that special vDev.
special_small_blocks=size

    This value represents the threshold block size for including small file blocks into the special allocation class. Blocks smaller than or equal to this value will be assigned to the special allocation class while greater blocks will be assigned to the regular class. Valid values are zero or a power of two from 512B up to 1M. The default size is 0 which means no small file blocks will be allocated in the special class.

    Before setting this property, a special class vdev must be added to the pool. See zpoolconcepts(7) for more details on the special allocation class.

Now, one thing that is not clear unless you know ZFS better is that once a Metadata special vDev is full, it will "spill" over to the regular vDevs. You would use something like zpool list -v to see when that is occurring. So if you dump tons of small files into a pool with a Metadata special vDev, at some point it may no longer be available for Metadata, as the regular small files will take up the space.

Further, ZFS does NOT move data once it has been written. So a Metadata special vDev is NOT a write cache for small files or Metadata. It is the preferred location for Metadata and small files, until it is full. Then ZFS will use the normal data vDevs as a fallback. (Or you can add more Metadata special vDevs, or replace the current Metadata special vDev devices with larger ones, one at a time...)
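To make that concrete, the property is set per dataset and the fill level of the special vDev can be watched with zpool list -v. A hypothetical example, where "tank/media" is just a placeholder dataset name:

    # Send blocks of 64K and smaller on this dataset to the special vDev:
    zfs set special_small_blocks=64K tank/media
    zfs get special_small_blocks,recordsize tank/media

    # Per-vDev capacity, including the special vDev, to watch for spill-over:
    zpool list -v tank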
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
As @Arwen says - you need to manually tune your special_small_blocks size to suit (see the quick check below the list):
  1. The available disk space in the SvDev
  2. The file size distribution in the appropriate datasets
  3. The record size on the dataset (if special_small_blocks >= recordsize, then all files go to the SvDev, which is not ideal)
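A rough way to eyeball points 2 and 3 from a shell; the path and dataset names are placeholders, not anything from this thread:

    # Count how many files in a dataset would fall under a 64K threshold:
    find /mnt/tank/media -type f -size -64k | wc -l   # small files
    find /mnt/tank/media -type f | wc -l              # all files

    # Check the dataset's recordsize so special_small_blocks stays below it:
    zfs get recordsize tank/media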
 