Storage configuration suggestions needed

Backslash

Cadet
Joined
Aug 8, 2021
Messages
4
Hi all,
I am currently building (or about to start building) my first TrueNAS server and am looking for suggestions and best practices for ZFS configuration, as this will be my first foray into the ZFS world.

I currently use Unraid and have used mergerfs and SnapRAID in the past, so I'm used to the whole JBOD-with-a-parity-disk approach. Moving to ZFS, I know I need to rethink how my storage will behave and look.

My use case for this TrueNAS SCALE server is primarily media streaming and storage.

Hardware-wise, I'm going for a new Core i5-11500 CPU, Mobo to match, 16GB RAM, and an LSI SAS controller. I have about 8x 3TB SAS disks and one 2TB MLC SSD.

I'm looking for maximum storage capacity and am not too concerned about data loss, as I have a decent 3-2-1 strategy for my important data (photos) - mainly offsite backups to cloud providers, external SSD storage, and remote-site backups.

My main question here: given that the majority of this server's workload will be media streaming and hosting about 10 containers, how should I configure my storage pool(s)? Do I need a SLOG for writes? Do I need L2ARC for read performance? Should I have multiple vdevs - and if so, how should they be configured?

Again, this is my first foray into ZFS and I have a lot to learn so looking for some best practices guidance for my use case.

Thanks!
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
the majority of this server's workload will be media streaming and hosting about 10 containers
Do I need a SLOG for writes?
No. A SLOG only helps synchronous writes (NFS, iSCSI, databases and the like); media streaming and SMB copies are asynchronous writes, so a SLOG would just sit there doing nothing.

do I need L2ARC for read performance?
With 16GB RAM, no - the L2ARC's index itself eats into ARC memory, so it would likely hurt more than help. And probably not even with more RAM either.
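If you later want to sanity-check whether the ARC is actually under pressure before revisiting L2ARC, the hit ratio reported by arc_summary is usually enough to decide (read-only, safe to run from the SCALE shell; the exact labels in the output vary a bit between versions):

Code:
arc_summary | grep -iA 5 "ARC size"    # current ARC size vs. target
arc_summary | grep -i "hit ratio"      # a hit ratio in the high 90s means L2ARC won't buy you much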

Should I have multiple vdevs
No, a Single RAIDZ2 VDEV will be good for your requirements.

Use the SSD to run your docker apps and the RAIDZ2 pool for media (maybe also a cheap SSD for boot - a small Kingston is fine and can usually be found for $20-40).
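Just to illustrate the layout (on TrueNAS you'd build this from the Storage GUI rather than the shell, and the pool names and sdX device names here are placeholders, not your actual disks), the equivalent pools would look roughly like this:

Code:
# 8-wide RAIDZ2 vdev for media - survives any two disk failures
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh

# separate single-disk pool on the 2TB SSD for apps
zpool create apps sdi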

You can set up a replication task from the SSD to the RAIDZ2 pool for the apps if you're worried about losing any config with the single SSD.
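Under the hood that replication is just a periodic snapshot plus zfs send/receive, something along these lines (pool and dataset names are made up for the example; the Replication Task in the GUI handles the scheduling and incremental sends for you):

Code:
zfs snapshot -r apps/appdata@nightly-2021-08-08
zfs send -R apps/appdata@nightly-2021-08-08 | zfs receive -F tank/appdata-backup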


Core i5-11500 CPU, Mobo to match, 16GB RAM
I'm not sure that this combo will support ECC RAM... the forum would usually counsel you to use ECC, so consider a mix that will support it if your data integrity matters to you.
 

Backslash

Cadet
Joined
Aug 8, 2021
Messages
4
Thank you for replying and for the advice. As I said, I am having to think differently when it comes to ZFS vs what I have used in the past. I work in enterprise HCI (Nutanix, Azure Stack HCI, vSAN, etc.), so I have a grasp of the enterprise technologies and how the hyperconverged players deal with software-defined storage, and ZFS is just slightly different again.

Use the SSD to run your docker apps
Can I mount a single disk separately or does it need to be in a pool on its own?

When it comes to expanding pools (and I know this is probably a bigger conversation), do I need to add the same amount I started with? Say, for example, I use RAIDZ1 for my media storage pool and it has 6 disks in it - if I want to expand that pool, do I need to add another vdev of the same size?

Similarly, if I want to upgrade by utilising bigger disks, what is that process like?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Can I mount a single disk separately or does it need to be in a pool on its own?
I won't specifically recommend it, but there are a few different ways to have a Data Pool on the Boot Drive.

This method has been used by many, so I guess it would be the "best" option if you go down that road.

Personally, I would use a separate boot drive and host the system dataset and SWAP there, but nothing more.

EDIT: I re-read your question and maybe I misunderstood the first time around... yes, you can absolutely have a pool with one VDEV of one disk as type "stripe". Clearly there's no redundancy there unless you set copies=2 or more, so you'll get integrity checks but won't be able to fix any inconsistency that's found unless there's another copy somewhere (replication to the RAIDZ pool I recommended, for example, or copies=2 to keep additional copies on the same SSD, but in different blocks - clearly at the performance penalty of needing twice the writes).
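If you do go the copies=2 route, it's a single dataset property; just keep in mind it only applies to data written after the property is set (placeholder pool/dataset names below):

Code:
zfs set copies=2 apps/appdata    # store two copies of every block, on the same SSD
zfs get copies apps/appdata      # confirm the setting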

When it comes to expanding pools (and I know this is probably a bigger conversation), do I need to add the same amount I started with? Say, for example, I use RAIDZ1 for my media storage pool and it has 6 disks in it - if I want to expand that pool, do I need to add another vdev of the same size?
Same size? Not really a requirement. Same type of RAIDZ? Absolutely recommended. Same width of VDEV? Also highly recommended.

You can "mix and match" VDEVs (of any or all types) in a pool (the GUI is pretty good at stopping you from doing that by accident now, but you can force or go to CLI and do whatever you want... and have the consequences of that), but doing so will have a potentially powerful impact on your pool performance and data safety.

To recap, you should probably add another 6 disks (or wait for RAIDZ expansion to arrive... probably a year or so away... at which point you can add to the width of your VDEV, one disk at a time).
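For reference, adding a matching VDEV is a one-liner at the CLI - the GUI's extend-pool flow is the supported path on TrueNAS, and the pool name and devices below are placeholders for your hypothetical 6-disk RAIDZ1 example:

Code:
# add a second 6-wide RAIDZ1 vdev, matching the width and type of the first
zpool add mediapool raidz1 sdg sdh sdi sdj sdk sdl

zpool add will refuse (without -f) if the new VDEV's redundancy doesn't match what's already in the pool, which is exactly the mistake you want it to catch.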

RAIDZ1 isn't recommended for drives larger than 2TB for a bunch of reasons, but be aware that in a pool of 2 RAIDZ1 VDEVs, you're at risk of total pool loss if a second disk fails in the same VDEV (and that risk is increased by the heavy load a RAIDZ resilver puts on the remaining disks).

Similarly, if I want to upgrade by utilising bigger disks, what is that process like?
That depends on whether you have spare slots... I'll assume not.

Offline disk 1, physically replace it with a larger disk, replace it in the GUI, wait for the resilver... rinse and repeat for disks 2 through X until you're all done. The full capacity becomes available once the last disk finishes resilvering.
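At the shell, that loop looks roughly like this for each disk (placeholder pool/device names; the GUI's Offline and Replace buttons drive the same operations, and autoexpand controls whether the extra space shows up automatically at the end):

Code:
zpool set autoexpand=on tank    # let the pool grow once every disk has been replaced
zpool offline tank sda          # take the old disk out of service
# ...physically swap the drive...
zpool replace tank sda sdx      # resilver onto the new, larger disk
zpool status tank               # watch resilver progress before starting the next one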
 

Backslash

Cadet
Joined
Aug 8, 2021
Messages
4
Thank you very much @sretalla. That is incredibly helpful information.
I do have a couple of smaller (256GB) SSDs that I'll be using for boot, which will free up my 2TB SSD for appdata in its own pool.
Noted about RAIDZ1 and larger-than-2TB disks; I'll stick to RAIDZ2.

(or wait for RAIDZ expansion to arrive.
Oh, that is coming? That's fantastic - that'll bring some more flexibility to ZFS! Right now, I see that as the limiting factor in moving from something like Unraid to TrueNAS. Don't get me wrong, I completely understand the benefits of ZFS being more reliable, resilient, and flexible in other ways - but the ability to expand the width of a VDEV will be an incredible feature.

Regarding appdata - is it best practice to create a dataset for each container? I noticed that when I create a container, it won't auto-create subfolders (like it can with docker/Unraid), and there is no way to create subfolders from the GUI. I assume that can only be done via the CLI?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Is it best practice to create a dataset for each container?
It depends on how you want to run things, but remembering that the snapshot boundary is the dataset and that you may want to snapshot each app separately - yes, in that case.

You may also find that some things, like recordsize, would benefit from being different between apps, which separate datasets would also allow.

If you don't care about any of that, then a subdirectory structure (created in CLI or via SFTP apps like FileZilla) would be fine.
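As a rough sketch of what per-app datasets look like (names below are placeholders; a database-backed app usually prefers a small recordsize, while bulk media is happy at the default or larger):

Code:
# one dataset per app so each can be snapshotted and tuned on its own
zfs create -p apps/appdata/plex
zfs create -p apps/appdata/nextcloud-db

# tune recordsize per workload (default is 128K)
zfs set recordsize=1M  apps/appdata/plex
zfs set recordsize=16K apps/appdata/nextcloud-db

# snapshot one app independently of the others
zfs snapshot apps/appdata/nextcloud-db@pre-upgrade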
 
Last edited:

Ixian

Patron
Joined
May 11, 2015
Messages
218
Sretalla, that guide (https://www.truenas.com/community/t...-of-larger-ssds-for-boot-pool-and-data.81409/ ) is for the FreeBSD-based TrueNAS, not SCALE - there are several differences with SCALE that a less experienced user might not know how to adjust for.

A user on Reddit wrote up a guide specific to SCALE for doing the same thing (essentially, take a mirrored pair of SSDs and create a small boot pool for the OS and a larger data pool for apps or whatever): https://www.reddit.com/r/truenas/comments/lgf75w/scalehowto_split_ssd_during_installation/

It works great. It's not supported by iXsystems, of course, but it's also not something that would necessarily be broken by updates/changes, as it's pretty straightforward.

I ran with a mirrored and split boot/data pool for over a year on an Ubuntu server w/OpenZFS; it was a far bigger PITA to set up (the TrueNAS SCALE installer script doesn't get enough credit), but once it was, it not only worked day to day but also easily survived a drive failure (as expected).
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
True. I obviously wasn't paying attention when I gave that one, as I do know about one in this forum for SCALE:

I refer to both in the thread where I propose a not very well tested alternative to those (which I don't recommend here): https://www.truenas.com/community/t...l-in-one-truenas-thats-easy-to-install.93325/
 