Storage for 60 disks

leksand

Dabbler
Joined
Jul 31, 2023
Messages
24
Please help with the question:

If there is not enough memory, then:
  • will there be consequences due to the large size of the created zpool?
  • or only when it is filled with more data than the RAM can handle?
 

leksand

Dabbler
Joined
Jul 31, 2023
Messages
24
As a result, we decided to make a single pool. The calculators turned out to be off about the available space; the good ones are able to take file system losses into account and understand that a 10 TB disk is actually not 10 TB, but less.

1) Please recommend an online calculator that shows the actual available volume, can take into account the number of vdev groups in the zpool, and reports the volume in TB, not TiB.

2) Will creating special vdevs from the array disks give anything? And what if we add SSDs in the future? (there are 6 SAS/NVMe slots in the platform)
 

Attachments

  • zfs pool 6x9raidz2.png (48.7 KB)

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
6x 9 disks in RAIDZ2 and 6 hot spare disks
That's a pretty solid setup... might have done a 6x10 in RAIDZ3 without hotspares, but that depends on your approach.
Question: for future recovery (for example, if the motherboard burns out), is it worth splitting the zpool into 3 pieces, each of 2 vdevs of 9 disks in RAIDZ2?
Not really. Your POOL is composed of 6 VDEVs in RAIDZ2, each made of 9 drives: as long as no more than 2 drives per VDEV die, your data stays available. Splitting them into separate pools just means the other pools survive if one is lost (since each pool is independent of the others).
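To put rough numbers on that layout, here is a back-of-envelope sketch in Python; it assumes the 10 TB drives mentioned earlier in the thread and ignores ZFS metadata and padding overhead, so it is only an estimate, not what the pool will actually report.

```python
# Rough capacity and fault tolerance for 6x 9-disk RAIDZ2 + 6 hot spares.
# Assumption: 10 TB (decimal) drives; ZFS overhead is ignored.

DISK_TB = 10          # nominal drive size in decimal TB
VDEVS = 6             # RAIDZ2 vdevs in the pool
DISKS_PER_VDEV = 9
PARITY = 2            # RAIDZ2 = 2 parity drives per vdev
SPARES = 6            # hot spares, outside the usable capacity

data_disks = VDEVS * (DISKS_PER_VDEV - PARITY)          # 6 * 7 = 42
raw_tb = (VDEVS * DISKS_PER_VDEV + SPARES) * DISK_TB    # 60 drives total
usable_tb = data_disks * DISK_TB                        # before ZFS overhead

print(f"raw capacity (incl. spares): {raw_tb} TB")      # 600 TB
print(f"usable, pre-overhead: {usable_tb} TB")          # 420 TB
print(f"tolerates up to {PARITY} failed drives per vdev")
```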
If there is not enough memory, then:​
  • will there be consequences due to the large size of the created zpool?​
  • or only when it is filled with more data than the RAM can handle?
Baseline: you have enough memory to run TrueNAS; the usual general advice is one GB of RAM for every TB of data, but that's loose guidance. You have to see for yourself by trial and error. I'd say that 64 GB will slow your pool's performance, but whether that is really an issue depends on your use case... for example, on a 1 Gbps network it likely won't be. Do note, however, that SCALE basically halves the memory usable for ARC (thanks, Linux).

The ARC (in RAM) is dynamically managed, so there won't be any critical issues from that point of view.

As a stopgap you could use an L2ARC device (like a 250 GB or 500 GB NVMe drive) while you grow the ARC (RAM) over time.
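For a rough sense of what the "1 GB per TB" rule of thumb and the SCALE memory note above mean in practice, here is a tiny sketch; the data size and installed RAM below are assumptions for illustration, not your actual numbers.

```python
# Rough check of the "1 GB of RAM per TB of data" rule of thumb.
# All numbers are illustrative assumptions.

data_tb = 400            # assumed amount of data actually on the pool
installed_ram_gb = 64    # assumed installed RAM

rule_of_thumb_gb = data_tb * 1        # loose guidance, not a hard requirement
scale_arc_gb = installed_ram_gb / 2   # SCALE caps ARC at roughly half the RAM

print(f"rule of thumb for {data_tb} TB of data: ~{rule_of_thumb_gb} GB RAM")
print(f"ARC available on SCALE with {installed_ram_gb} GB RAM: ~{scale_arc_gb:.0f} GB")
```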
1) Please recommend an online calculator that shows the actual available volume, can take into account the number of vdev groups in the zpool, and reports the volume in TB, not TiB.
https://wintelguy.com/zfs-calc.pl lets you see both the TiB and the TB values. Generally, this level of precision is what to expect from these kinds of calculators. Do also note that, to avoid abysmal performance, you need to leave at least 10% of the total pool empty (the usual recommendation is to use the pool up to 80% of its capacity, but you can stretch that to 90% if you need to; don't go over 90%).
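If you want to sanity-check a calculator yourself, the TB/TiB conversion and the 80%/90% marks are easy to reproduce; the 42 data disks below are an assumption based on the 6x9 RAIDZ2 layout discussed above, and ZFS overhead is not modelled.

```python
# Why a "10 TB" drive looks smaller in ZFS: vendors use decimal TB,
# while ZFS tools report binary TiB. Also marks the 80% / 90% fill points.

TB = 10**12
TiB = 2**40

nominal_tb = 10
drive_tib = nominal_tb * TB / TiB     # ~9.09 TiB per "10 TB" drive

data_disks = 42                       # assumption: 6x9 RAIDZ2 -> 42 data disks
pool_tib = data_disks * drive_tib     # pre-overhead usable space

print(f"one drive: {drive_tib:.2f} TiB")
print(f"pool, pre-overhead: {pool_tib:.1f} TiB")
print(f"80% mark: {pool_tib * 0.8:.1f} TiB, 90% mark: {pool_tib * 0.9:.1f} TiB")
```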
2) Will creating special vdevs from the array disks give anything? And what if we add SSDs in the future? (there are 6 SAS/NVMe slots in the platform)
SATA SSDs or NVMe drives (the latter are better from a latency standpoint) can greatly improve the performance of a pool made of HDDs. Please read the following documentation.
https://www.truenas.com/docs/core/coretutorials/storage/pools/fusionpool/

Moving the system dataset out of the HDD pool might improve the performance of the system a bit.

Note: since this is for a backup server (if I haven't misread something), you might not need the additional RAM, the L2ARC, or the special VDEVs: if you could tell us more about the network this system will be connected to and the kind of performance you expect from it, we could give you a better estimate.
 
Last edited:
Joined
Jun 15, 2022
Messages
674
@leksand : I wouldn't necessarily worry about the memory situation until you use it all; at that point it's probably time to upgrade. ("Speed is not needed.")

The advice given to you by the guys here is great. Get the system up and running and test it out, with a plan to rebuild/re-configure it a few times as needed. Keep thorough notes along the way and a re-install/re-config goes quickly.

Someone once said: ZFS eats RAM like a fat man at an all-you-can-eat fried chicken buffet.
You'll see for yourself how and when that happens; if you need more RAM you'll know it... but as the guys here basically implied, keep an eye on usage so you know if/when you need more.
 

leksand

Dabbler
Joined
Jul 31, 2023
Messages
24
That's a pretty solid setup... might have done a 6x10 in RAIDZ3 without hotspares, but that depends on your approach.​
With 6x10 in RAIDZ3 I lose more than 11 TB of usable space according to the calculator.

https://wintelguy.com/zfs-calc.pl lets you see both the TiB and the TB values. Generally, this level of precision is what to expect from these kinds of calculators. Do also note that, to avoid abysmal performance, you need to leave at least 10% of the total pool empty (the usual recommendation is to use the pool up to 80% of its capacity, but you can stretch that to 90% if you need to; don't go over 90%).
I use it, but the results differ from the actual ones. I tried both entering the nominal disk size and specifying how much of each disk the OS can use. For regular RAID there are calculators where you enter the nominal size (for example, 10 TB) and they show how much of each disk the system can use and the actual size of the array, which matches what you actually get.

SATA SSDs or NVMe drives (the latter are better from a latency standpoint) can greatly improve the performance of a pool made of HDDs. Please read the following documentation.
https://www.truenas.com/docs/core/coretutorials/storage/pools/fusionpool/
The compute nodes have NVMe SSDs; where speed is needed, there is enough of it.

Moving the system dataset out of the HDD pool might improve the performance of the system a bit.​
What do you mean by the system dataset? The system itself or something else?
TrueNAS is installed on a mirror of two SATA SSDs (Intel D3-S4620, 480 GB).


Note: since this is for a backup server (if I haven't misread something), you might not need the additional RAM, the L2ARC, or the special VDEVs: if you could tell us more about the network this system will be connected to and the kind of performance you expect from it, we could give you a better estimate.
This is not a backup server, but a server for storing the results of calculations. There are no speed requirements and no particular load.
The network is 1 Gbps now; in the future it will be 10 Gbps.
 

leksand

Dabbler
Joined
Jul 31, 2023
Messages
24
Do also note that, to avoid abysmal performance, you need to leave at least 10% of the total pool empty (the usual recommendation is to use the pool up to 80% of its capacity, but you can stretch that to 90% if you need to; don't go over 90%).
Does TrueNAS understand this? Will it try to prevent using more than 90% of the pool, or should I limit this manually through quotas?

Is it possible to forbid using more than 80 or 90% of the pool's space in the pool settings, or only through quotas?

P.S.: I'm just discovering ZFS. I'm also studying articles about its use, features and settings, as well as best practices for using ZFS for NAS and virtualization; for example, Proxmox pushes everyone toward ZFS as the main filesystem.

P.P.S.: if there are links about best practices and real-world cases of use in complex tasks and configurations, please share them.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
With 6x10 in RAIDZ3 I lose more than 11 TB of usable space according to the calculator.
Well, with 6x9 in RAIDZ2 and 6 hot spares you "lose" 60 TB just counting the spares.
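A rough comparison of the two layouts with 10 TB drives, as a sketch; these are pre-overhead numbers, so they won't match the calculator exactly (the ~11 TB difference you saw comes from RAIDZ allocation/padding overhead, which this ignores).

```python
# Comparing the two layouts in 60 bays with 10 TB drives; ZFS overhead ignored.

DISK_TB = 10

# Layout A: 6x 9-disk RAIDZ2 + 6 hot spares
a_usable = 6 * (9 - 2) * DISK_TB      # 42 data disks -> 420 TB
a_idle = (6 * 2 + 6) * DISK_TB        # parity + spares -> 180 TB

# Layout B: 6x 10-disk RAIDZ3, no spares
b_usable = 6 * (10 - 3) * DISK_TB     # 42 data disks -> 420 TB
b_idle = 6 * 3 * DISK_TB              # parity -> 180 TB

print(f"A (6x9 RAIDZ2 + 6 spares): {a_usable} TB usable, {a_idle} TB parity/spares")
print(f"B (6x10 RAIDZ3):           {b_usable} TB usable, {b_idle} TB parity")
```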

The compute nodes have NVMe SSDs; where speed is needed, there is enough of it.
I don't understand the question... if you want to use L2ARC, having low latency is key to not undermining performance.

What do you mean by the system dataset? The system itself or something else?
TrueNAS is installed on a mirror of two SATA SSDs (Intel D3-S4620, 480 GB).
Something else, see the system dataset documentation.

This is not a backup server, but a server for storing the results of calculations. There are no speed requirements and no particular load.
The network is 1 Gbps now; in the future it will be 10 Gbps.
Suggested reads then:

Does TrueNAS understand this? Will it try to prevent using more than 90% of the pool, or should I limit this manually through quotas?
Is it possible to forbid using more than 80 or 90% of the pool's space in the pool settings, or only through quotas?
TrueNAS will warn you when you use 80% of a pool's space, but it won't stop you from going past that.
I believe you can do so with group/user quotas.
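If you prefer a hard cap on the whole pool rather than per-user quotas, a quota on the pool's top-level dataset should do it. Below is a small sketch that just computes the 80% value and prints the corresponding 'zfs set quota=' command; the pool name and size are made-up placeholders, take the real usable size from 'zfs list'.

```python
# Compute an 80% cap and print the matching 'zfs set quota=' command.
# Pool name and size are placeholder assumptions.

pool = "tank"          # hypothetical pool name
usable_tib = 380.0     # hypothetical usable pool size in TiB (see 'zfs list')
cap = 0.80             # stop filling the pool at 80%

quota_tib = usable_tib * cap
print(f"zfs set quota={quota_tib:.0f}T {pool}")   # quota on the top-level dataset
```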

[...] for example, Proxmox pushes everyone toward ZFS as the main filesystem.
Not sure what you mean here, but suggested reading:

P.P.S.: if there are links on the topic of best practices and real cases of application in complex tasks and configurations, please send
You can find more resources in my signature here on the forum, or go directly to the resources section.
 
Last edited: