SOLVED: VDev architecture, please advise.

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
First post.
Hi folks.
I have Xingma up and running, but want to try TrueNAS.
Currently I have a Dell R730 (not XD) with the 8x LFF backplane configuration.
Let's leave the current pool configuration aside; I want to talk about a new addition.
What I have and want to utilize:
2x Optane NVMe (58GB)
2x Samsung PM953 M.2 NVMe (960GB)
2x PCIe-to-NVMe cards (2x NVMe each, no PLX)
Each card will hold one Optane and one PM953
1x PCIe x16 to 4x U.2 NVMe card (not M.2)
2x U.2 Intel DC P4510 (1TB, new)
2x U.2 Intel DC P3500 (800GB, lightly used)

HBA with 12G external SAS ports
Disk shelf, 25x SFF (EMC2 SAE, 6G SAS)
25x 600GB SFF 10K SAS HDDs

I want to configure a new pool to serve a torrent client, a download folder, and maybe storage for Nextcloud. Everything will be backed up every night. I want to keep one of the HDDs as a designated hot spare. Connection between workstation and server: 2x 10G.

Can you suggest an architecture for the vDevs, and how to properly utilize the SSDs (SLOG/Metadata/L2ARC)?
TIA.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You have too much of a mish-mash of drives, and not enough clearly defined needs, for some of us to comment.

For example, while it may seem nice to have 25 x 600GB SAS 10K HDDs, unless you plan on striped Mirror vDevs, they could be replaced with just 2 x 8TB HDDs. With 2 x 8TB HDDs you get easier management and probably much lower power usage.

Basically you should be reading some of the Resources here in the forums. See the top of any forum page for a catalog of Resources. This one in particular might be helpful:

Further, you say that you have Xingma up but don't give us a clue about your skill set with ZFS. There are some gotchas with ZFS if you make the pool wrong for your use case.

In general, if you have to ask about SLOG, you don't need it. (A SLOG vDev can be added or removed live, as needed.)
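
For illustration, adding and later removing a SLOG looks something like this on the CLI; the pool name "tank" and the device paths are placeholders, and TrueNAS would normally manage this through the web UI:

    zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
    # ...and later, remove it by the vDev name shown in "zpool status":
    zpool remove tank mirror-1

Either way the data vDevs are untouched, which is why it is safe to experiment with a SLOG after the pool exists.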

For an L2ARC / Cache device, you generally need to have 32-64GB of memory (or MORE) before adding one. Even then, it is suggested not to exceed 5-10 times RAM size. (L2ARC / Cache requires RAM for its index / directory information...)
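
A Cache device is just as easy to add and drop later (same placeholder caveats as above):

    zpool add tank cache /dev/nvme2n1
    zpool remove tank /dev/nvme2n1    # cache devices can be removed at any time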

And for Special / Metadata vDevs, you REALLY need to understand your redundancy level and what a Special / Metadata vDev does. Loss of a Special / Metadata vDev is loss of the entire pool. Thus, any Special / Metadata vDev should have the same amount of redundancy as the main data vDevs.
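
If you do go that route, a mirrored Special vDev would be added something like this (placeholders again). Note that, unlike SLOG and Cache, a Special vDev generally cannot be removed from a pool that contains RAID-Z data vDevs:

    zpool add tank special mirror /dev/nvme3n1 /dev/nvme4n1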

You list "Connection between workstation and server: 2x 10G", except that won't work as you might think. You'd still only get 10Gbps performance unless you are using another protocol like iSCSI. You would get fail-over, though that's not really useful in this context.
 

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
You have too much of a mish-mash of drives, and not enough clearly defined needs, for some of us to comment.
Thanks.


For example, while it may seem nice to have 25 x 600GB SAS 10K HDDs, unless you plan on striped Mirror vDevs, they could be replaced with just 2 x 8TB HDDs. With 2 x 8TB HDDs you get easier management and probably much lower power usage.
I understand that completely. But I already have the 25 HDDs in the shelf and need to find a use for them; if something happens with this pool, it changes nothing.


Further, you say that you have Xingma up but don't give us a clue about your skill set with ZFS. There are some gotchas with ZFS if you make the pool wrong for your use case.
A basic understanding of vDevs and Pools, nothing more than copy/pasting a couple of commands on the CLI.


plan on striped Mirror vDevs
That was my original thought. Easy, fast, simple, more or less redundant, with an online spare.

And for Special / Metadata vDevs, you REALLY need to understand your redundancy level and what a Special / Metadata vDev does. Loss of a Special / Metadata vDev is loss of the entire pool. Thus, any Special / Metadata vDev should have the same amount of redundancy as the main data vDevs
Yes, I understand. It will be a mirror of the new Optanes. I'm sure the Optanes will outlast the HDDs. :smile:
You list "Connection between workstation and server: 2x 10G", except that won't work as you might think. You'd still only get 10Gbps performance unless you are using another protocol like iSCSI. You would get fail-over, though that's not really useful in this context.
I mean I have 1x 10G for SMB, 1x 10G for iSCSI, and 2x 1G for management and other needs.

Thanks for the answer, and sorry for my English; it's not my first language.

This pool of 25 HDDs and a few SSDs is for education purposes first. Actually, my question looks like: what fun, fast, and more or less reliable thing can I do with 25 HDDs, a few SSDs, and a couple of evenings? If it also serves my torrent needs, nice.
 

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
Basically you should be reading some of the Resources here in the forums. See the top of any forum page for a catalog of Resources. This one in particular might be helpful:
Yes, that looks like what I'm looking for.
Thanks.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Then this might be okay:

3 vDevs of 8 x 600GB SAS disks, each in RAID-Z2
1 x 600GB SAS disk as a hot SPARE

Having 3 vDevs improves IOPS, and using 8-disk-wide RAID-Z2 gives more storage than Mirrors. However, using 2 x Optane with RAID-Z2 is not recommended: RAID-Z2 survives two disk failures while a 2-way Optane mirror survives only one, so 3 x Optane are suggested.
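
As a rough CLI sketch of that layout (bash brace expansion; the pool name and sdX device names are placeholders, and a real pool should use stable /dev/disk/by-id names or the TrueNAS UI):

    # 3 x 8-wide RAID-Z2 plus one hot spare (25 disks total)
    zpool create tank \
        raidz2 sd{a..h} \
        raidz2 sd{i..p} \
        raidz2 sd{q..x} \
        spare sdy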


In regard to 1 x 10Gbps for iSCSI: if you are actually going to be using iSCSI, then go back to Mirror pairs for 22 to 24 of the 600GB SAS disks, using the remaining 1 to 3 as SPAREs.
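
The equivalent mirror-pair sketch, again with placeholder names, using 22 disks in 11 mirror pairs and 3 hot spares:

    zpool create tank \
        mirror sda sdb  mirror sdc sdd  mirror sde sdf \
        mirror sdg sdh  mirror sdi sdj  mirror sdk sdl \
        mirror sdm sdn  mirror sdo sdp  mirror sdq sdr \
        mirror sds sdt  mirror sdu sdv \
        spare sdw sdx sdy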

You can do some network tricks with iSCSI that are not easy with SMB. But, it depends on the consumer of iSCSI and what your network topology is.
 

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
Having 3 vDevs improves IOPS
Maybe 4 vDevs in RAID-Z1? Because the metadata mirror can only tolerate one SSD failure anyway?


using 2 x Optane with RAID-Z2 is not recommended
I will never kill the Optanes by using them for such a light load... and even if I do, we still learn something and lose nothing.
if you are actually going to be using iSCSI
Not really. :smile: Just 2 users with really light use.
You can do some network tricks with iSCSI
iSCSI is still "terra incognita" for me. I've set it up and used it a few times just to try it. One time I set it up to boot a small server without any HDD.

Mostly I use SMB in a Windows neighborhood with a few smart devices like a smart TV.
 

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
then go back to Mirror pairs for 22 to 24 of the 600GB SAS disks, using the remaining 1 to 3 as SPAREs
I'm just not sure I can actually use the speed/IOPS of mirrored HDDs in a shelf connected over 6G SAS.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The end choice is yours. While we can describe alternatives, you have to live with the result.

Using 4 groups of RAID-Z1 might work just fine. Especially if you have a hot SPARE drive.
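
Sketched the same way as before, with placeholder device names, that would be:

    # 4 x 6-wide RAID-Z1 plus one hot spare (25 disks total)
    zpool create tank \
        raidz1 sd{a..f} \
        raidz1 sd{g..l} \
        raidz1 sd{m..r} \
        raidz1 sd{s..x} \
        spare sdy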


Don't forget to read up on TrueNAS SCALE, especially about setting up ZFS scrubs, ZFS snapshots and SMART tests.
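
SCALE schedules all three from the web UI, but the underlying operations look roughly like this (pool, snapshot, and device names are examples):

    zpool scrub tank                          # start a scrub
    zfs snapshot -r tank@nightly-2023-11-20   # recursive snapshot of the pool
    smartctl -t short /dev/sda                # short SMART self-test on one disk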
 

Nixoid

Dabbler
Joined
Nov 20, 2023
Messages
13
The end choice is yours. While we can describe alternatives, you have to live with the result.
Thank you. This conversation helped me better understand my needs and the ways to realize them.
Using 4 groups of RAID-Z1 might work just fine. Especially if you have a hot SPARE drive.
My initial thought.
Don't forget to read up on TrueNAS SCALE, especially about setting up ZFS scrubs, ZFS snapshots and SMART tests.
Thanks, I will.
 