Need advice setting up separate pools or server build in general. Best practice.

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
Hello Folks.


I need some advice, as my Jedi skills are somewhat lacking and the force is weak. I have searched the forums and help pages for the past 3 weeks and cannot get a specific answer to my particular question, so I am forced to register and ask a question that has more than likely been asked a million times by now; I just can't find it anywhere.

I have read the tutorials and have a grasp of what is going on and how to go about setting this up, but I am coming unstuck somewhat and just need some clarity. What I wish to know is whether this is the correct approach, and whether anyone can give pointers.

I just need a place to keep my files, with a direct link between my PC and my NAS over a static IP 10GbE connection.

So here it goes.

I realise hard drives are called vdevs; in turn these are allocated to pools, then a dataset, then a share.

My question is this: I wish to build a server using the below configuration.

Ultimately I wish to grow my server to 24 x Seagate 18TB Exos HDDs. For now I will only buy or use 8 hard drives to start my build, with a Z2 configuration; suits me, sir.

Intel Core i7-12700K 3.60GHz (Alder Lake) Socket LGA1700 Processor

Asus ROG Strix Z690-E Gaming WIFI - Intel Z690 LGA 1700 DDR5 ATX Motherboard

MEMORY DDR5

POWER SUPPLY 750 watt

LSI 9211-8i P20 IT Mode

And a Lenovo 16 Port Mini SAS SATA Expander

Intel X540-T1 10GbE 1 Port PCI


The problem, you see, is that I cannot afford to buy 24 hard drives on day one of my build and configure my NAS. I have been reading that, for best practice, you should not create anything over an 11-wide hard drive array, yet I need enough room to grow my media library over the years to come without running out of capacity or having to go to the expense of another server build, as I would like to keep it all in the one PC case.


Can I create the below?

3 separate pools, each of 8 x Seagate Exos hard drives in a RAIDZ2 configuration with two discs allocated to parity: one for media, one for music, and one for miscellaneous documents.

If it is not possible to configure my server build that way so I can grow it over the coming years, is it possible to create separate pools of storage, separate from one another? I take it they don't interfere with each other. And if this is all wrong, then what is the better way to do it? I would hate to have too many hard drives in the one pool, as rebuild times will be crippling and that is not recommended. I just don't know enough to be confident in myself to say yes, go ahead, as it is a lot of money to throw at this project, so I want to know I am 100% right before going forward.


Thank you to all who reply.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Some correction to your terminology: A VDEV is a group of disks which make up part of a pool. It can be as small as a single disk.

If I understood your narrative above, you intend to create a RAIDZ2 pool from your current planned increment of 8x disks. This will work for your initial VDEV. For later expansion to your case's maximum capacity of 24x disks, you can either add 2 more 8-disk VDEVs to end up with a 3-way stripe of RAIDZ2 VDEVs, or create 2x separate RAIDZ2 pools from the additional banks of 8 disks.

Within a pool, you can create datasets for different purposes. In your case, you propose one for media, one for music, and one for documents. If you plan to run Plex or Emby, it may make more sense to drop the music dataset, and lump your music in your media dataset.

However, your power supply is probably too small to support your final set of disks. You may also want to forgo the Strix gaming motherboard, and go with a workstation or server-class motherboard, which will support ECC RAM, and have better stability. TrueNAS really doesn't play well with gaming motherboards and their automatic overclocking.
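
For orientation, here is roughly what that layout looks like at the ZFS command level; the device names and the pool name ("tank") are only examples, and on TrueNAS you would build all of this through the Storage GUI rather than the shell:

Code:
    # Initial pool: one 8-disk RAIDZ2 VDEV
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # Later expansion: stripe in a second and then a third 8-disk RAIDZ2 VDEV
    zpool add tank raidz2 da8  da9  da10 da11 da12 da13 da14 da15
    zpool add tank raidz2 da16 da17 da18 da19 da20 da21 da22 da23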
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Maybe this will help:

Why would you want to separate your data?

1. I want different snapshot frequency or retention for one or other of them... datasets in one pool can do that, so can separate pools.

2. I have completely different use-cases for my data, such as block storage/VMs (sync writes) on one and SMB file sharing on the other... best handled with separate pools due to the need for mirrors and RAIDZ in the different cases. This seems not to apply for you.

3. I have a different profile of files, such as very large video files and very small photo files and music files... datasets in one pool can do that, so can separate pools. (recordsize is a setting per dataset).

4. I don't want any performance impact from use case A on use case B... not strictly possible while sharing the same server, but separate pools would be the answer here.

5. I want some of the data encrypted and some not... datasets in one pool can do that, so can separate pools. (a few complications around directory tree structure, but should be clear how to work around that).

6. I want to reduce power consumption in my server and I have some data that needs to be accessible all the time and some that doesn't get accessed frequently... separate pools is the answer here (although I am not aligned with the idea of spinning down drives, it can be done)
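
As a rough sketch of points 1, 3 and 5 above: these behaviours are simply per-dataset ZFS properties (the pool and dataset names here are only examples):

Code:
    # Point 3: recordsize is set per dataset -- large records for big video files
    zfs set recordsize=1M tank/media
    zfs set recordsize=128K tank/documents

    # Point 5: encryption is chosen per dataset, at creation time
    zfs create -o encryption=on -o keyformat=passphrase tank/private

    # Point 1: snapshots are taken (and scheduled/retained) per dataset;
    # in TrueNAS this is a periodic snapshot task, or manually:
    zfs snapshot tank/documents@before-reorg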
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
In addition to the excellent advice in the above two posts, a NAS generally does not require the latest and greatest CPU, and Alder Lake is too new to be fully supported by the TrueNAS kernel (both CORE and SCALE). There is also some mismatch between the old 9200-series HBA and a PCIe 4.0 platform.
You'd be better served by an older platform. If it's just serving all kinds of files over SMB, a Supermicro X11SC_ motherboard with an ECC-capable Core i3-9100(F) shall do; put the savings into a much bigger PSU.
 

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
In addition to the excellent advice in the above two posts, a NAS generally does not require the latest and greatest CPU, and Alder Lake is too new to be fully supported by the TrueNAS kernel (both CORE and SCALE). There is also some mismatch between the old 9200-series HBA and a PCIe 4.0 platform.
You'd be better served by an older platform. If it's just serving all kinds of files over SMB, a Supermicro X11SC_ motherboard with an ECC-capable Core i3-9100(F) shall do; put the savings into a much bigger PSU.
Hello. I will rethink my hardware purchasing decisions and spend more time on choosing better hardware; I will really have to knuckle down and work through this. Thank you for the valued reply.
 

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
Maybe this will help:

Why would you want to separate your data?

1. I want different snapshot frequency or retention for one or other of them... datasets in one pool can do that, so can separate pools.

2. I have completely different use-cases for my data, such as block storage/VMs (sync writes) on one and SMB file sharing on the other... best handled with separate pools due to the need for mirrors and RAIDZ in the different cases. This seems not to apply for you.

3. I have a different profile of files, such as very large video files and very small photo files and music files... datasets in one pool can do that, so can separate pools. (recordsize is a setting per dataset).

4. I don't want any performance impact from use case A on use case B... not strictly possible while sharing the same server, but separate pools would be the answer here.

5. I want some of the data encrypted and some not... datasets in one pool can do that, so can separate pools. (a few complications around directory tree structure, but should be clear how to work around that).

6. I want to reduce power consumption in my server and I have some data that needs to be accessible all the time and some that doesn't get accessed frequently... separate pools is the answer here (although I am not aligned with the idea of spinning down drives, it can be done)


You see, I really do not know; I have only watched YouTube videos and am feeling my way, also reading the manual, if you can call it that. It is very complicated and very intricate, so I have to have a grasp in my head of where I will end up before I start. I suppose I am splitting my pools to keep the initial disc count below 11 hard drives per pool, but obviously, by your statement, it does not work like that, so all good advice. I myself would prefer to have the whole lot of discs in the one pool, with all available space as one big hard drive. So, if I can do that, can you elaborate further? I am slightly confused by your reply and just need a little pointer.



Thank you for the valued reply.
 

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
Some correction to your terminology: A VDEV is a group of disks which make up part of a pool. It can be as small as a single disk.

If I understood your narrative above, you intend to create a RAIDZ2 pool from your current planned increment of 8x disks. This will work for your initial VDEV. For later expansion to your case's maximum capacity of 24x disks, you can either add 2 more 8-disk VDEVs to end up with a 3-way stripe of RAIDZ2 VDEVs. Alternatively, you can create 2x separate RAIDZ2 pools from each bank of 8-disk VDEVs.

Within a pool, you can create datasets for different purposes. In your case, you propose one for media, one for music, and one for documents. If you plan to run Plex or Emby, it may make more sense to drop the music dataset, and lump your music in your media dataset.

However, your power supply is probably too small to support your final set of disks. You may also want to forgo the Strix gaming motherboard, and go with a workstation or server-class motherboard, which will support ECC RAM, and have better stability. TrueNAS really doesn't play well with gaming motherboards and their automatic overclocking.


I take it I can add the additional 2 x 8-disc sets as a three-way stripe; will that halve the capacity of the available space? I know rebuild times will be super-fast doing it that way, and it is something I will have to study more before progressing; this is all a steep learning curve. But needs must, and I prefer to build my own server rather than buy a QNAP or a Synology.

Thank you for the valued reply. I really appreciate the replies sure helps me going forward that this is indeed possible.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, you can add "additional 2 x 8 disc set as a three-way stripe", into a single pool. Each "set" in ZFS is called a vDev, (aka Virtual Device). A ZFS pool can contain dozens of vDevs, of various types.

Rebuilding after disk failure is by vDev. Meaning, if you lost 1 disk in an 8-disk vDev and have 2 other 8-disk vDevs, only the vDev with the failed disk is hit hard. The other vDevs will see some higher-level metadata updates, but the bulk of the regular data rebuild is restricted to the vDev with the failed disk.


This also brings up the point that redundancy is by vDev. Occasionally we see people make a mistake in their ZFS pools. They have a nice configuration, like 8 disks in a RAID-Z2 vDev. That vDev can sustain 2 completely failed disks before data loss. But, some people have mistakenly added another disk, thinking they are growing their 8 disk RAID-Z2 into a 9 disk RAID-Z2. That feature is coming, but not here yet.

So what they really did was add a single-disk vDev without any redundancy. This is problematic to remove or correct. (ZFS is not perfect :smile:.) And loss of that single-disk vDev is loss of the ENTIRE ZFS pool. Nasty. I think newer safeguards in the GUI are in place now, but people may be able to work around them.
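
To make that failure mode concrete, here is a sketch of the commands involved (hypothetical pool and disk names; the GUI is doing the equivalent of this underneath):

Code:
    # The mistake: this adds da8 as a brand-new single-disk vDev with no redundancy.
    # It does NOT grow the existing 8-disk RAID-Z2. ZFS warns about the mismatched
    # replication level; -f overrides that warning.
    zpool add tank da8

    # Growing the pool safely means adding a whole new redundant vDev:
    zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15

    # -n is a dry run: it prints the resulting layout without changing anything.
    zpool add -n tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15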
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
You see, I really do not know; I have only watched YouTube videos and am feeling my way, also reading the manual, if you can call it that. It is very complicated and very intricate, so I have to have a grasp in my head of where I will end up before I start.
Why anyone would try to learn by first watching random videos rather than reading official documentation is beyond me… :rolleyes: It must be a generational thing.
Bottom line: ZFS has many sophisticated features, which possibly sound very cool but are probably not needed for your use case—and could well be counter-indicated and/or downright dangerous if implemented in a wrong way.
Keep it simple: disk->vdev->pool, and no fancy extra (for now).
https://www.truenas.com/community/threads/introduction-to-zfs.75119/ (download button top right)
also
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I myself would prefer to have the whole lot of discs in the one pool, with all available space as one big hard drive. So, if I can do that, can you elaborate further? I am slightly confused by your reply and just need a little pointer.
I do that in my 24-bay system... my pool looks like this:

Code:
    NAME                                            STATE     READ WRITE CKSUM
    vol5                                            ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/a949560e-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/ee2e4348-6ba9-11ec-be93-2cfda1c746ec  ONLINE       0     0     0
        gptid/ab07835c-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/abd1592c-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/aca96f0a-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/ad9d274f-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/ae8a54c5-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/af6f00fa-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
      raidz2-1                                      ONLINE       0     0     0
        gptid/bbb71f3c-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/cb28624c-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/db043b6b-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/eaebb1d2-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/fb9a4b23-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/0b7499c5-b45f-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/1bc8611c-b45f-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/2b0d6c86-b45f-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
      raidz2-2                                      ONLINE       0     0     0
        gptid/efdda625-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/eff6b5a9-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/1e98cbba-80f7-11ec-be93-2cfda1c746ec  ONLINE       0     0     0
        gptid/f07fa959-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/f08c46b2-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/8ca1d22e-8150-11ec-be93-2cfda1c746ec  ONLINE       0     0     0
        gptid/d62eee83-ff58-11eb-997e-2cfda1c746ec  ONLINE       0     0     0
        gptid/c59e4474-811e-11ec-be93-2cfda1c746ec  ONLINE       0     0     0


One pool, 3 VDEVs.

I then divide that pool into several datasets for the different purposes I need to serve. All of my needs are suited to RAIDZ, so there's no need to have a separate pool.
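
Creating those datasets is a one-liner each (the names below are just placeholders, not my actual layout):

Code:
    zfs create vol5/media
    zfs create vol5/music
    zfs create vol5/documents
    zfs list -r vol5    # show the datasets and how much of the pool each uses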

I do have a couple of additional pools with SSDs in mirror for jails and app data storage in addition to that large pool.
 
Last edited:

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
Why anyone would try to learn by first watching random videos rather than reading official documentation is beyond me… :rolleyes: It must be a generational thing.
Bottom line: ZFS has many sophisticated features, which possibly sound very cool but are probably not needed for your use case—and could well be counter-indicated and/or downright dangerous if implemented in a wrong way.
Keep it simple: disk->vdev->pool, and no fancy extra (for now).
https://www.truenas.com/community/threads/introduction-to-zfs.75119/ (download button top right)
also
Ha, your reply sure made me laugh. Well, that's the sort of guy I am; I need to see the software and how it all works, and watching people perform a first-time setup kind of preps me for what is to come. You are correct, I will need to read the official documentation before progressing, that is for sure; as you say, it is problematic and could ruin the array if I implement stuff without first understanding how things work. I have not got one clue, so I will do as you say and read the official documentation. I just wanted some pointers to know if this is possible, and from the replies I have received from good people like yourself it is indeed possible, so thank you for replying; you have set me off on a learning curve into the world of TrueNAS. Thank you.
I do that in my 24-bay system... my pool looks like this:

Code:
    NAME                                            STATE     READ WRITE CKSUM
    vol5                                            ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/a949560e-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/ee2e4348-6ba9-11ec-be93-2cfda1c746ec  ONLINE       0     0     0
        gptid/ab07835c-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/abd1592c-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/aca96f0a-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/ad9d274f-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/ae8a54c5-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
        gptid/af6f00fa-3e57-11e8-8158-38d547c91e29  ONLINE       0     0     0
      raidz2-1                                      ONLINE       0     0     0
        gptid/bbb71f3c-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/cb28624c-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/db043b6b-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/eaebb1d2-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/fb9a4b23-b45e-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/0b7499c5-b45f-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/1bc8611c-b45f-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
        gptid/2b0d6c86-b45f-11e9-b3af-2cfda1c746ec  ONLINE       0     0     0
      raidz2-2                                      ONLINE       0     0     0
        gptid/efdda625-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/eff6b5a9-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/1e98cbba-80f7-11ec-be93-2cfda1c746ec  ONLINE       0     0     0
        gptid/f07fa959-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/f08c46b2-a190-11ea-bf5a-2cfda1c746ec  ONLINE       0     0     0
        gptid/8ca1d22e-8150-11ec-be93-2cfda1c746ec  ONLINE       0     0     0
        gptid/d62eee83-ff58-11eb-997e-2cfda1c746ec  ONLINE       0     0     0
        gptid/c59e4474-811e-11ec-be93-2cfda1c746ec  ONLINE       0     0     0


One pool, 3 VDEVs.

I then divide that pool into several datasets for the different purposes I need to serve. All of my needs are suited to RAIDZ, so there's no need to have a separate pool.

I do have a couple of additional pools with SSDs in mirror for jails and app data storage in addition to that large pool.
Gees, that is one massive pool. Are those pools individual, as noted raidz2-0, raidz2-1, raidz2-2? Are your pools striped or just standard RAIDZ2? I'm so jealous it isn't even funny. Respect.

Did you install all those hard drives at once, or did you build that array up over time by adding to it over months? If you don't mind me asking, how did you go about installing all 24 hard drives? That is the end game where I would like to end up eventually, but there is no way my bank balance will stretch to 24 hard drives in one purchase, as I am buying 18TB drives costing £250 a pop; that's £6,000 (roughly $7,300) for my 24 hard drives.
Yes, you can add "additional 2 x 8 disc set as a three-way stripe", into a single pool. Each "set" in ZFS is called a vDev, (aka Virtual Device). A ZFS pool can contain dozens of vDevs, of various types.

Rebuilding after disk failure is by vDev. Meaning, if you lost 1 disk in an 8-disk vDev and have 2 other 8-disk vDevs, only the vDev with the failed disk is hit hard. The other vDevs will see some higher-level metadata updates, but the bulk of the regular data rebuild is restricted to the vDev with the failed disk.


This also brings up the point that redundancy is by vDev. Occasionally we see people make a mistake in their ZFS pools. They have a nice configuration, like 8 disks in a RAID-Z2 vDev. That vDev can sustain 2 completely failed disks before data loss. But, some people have mistakenly added another disk, thinking they are growing their 8 disk RAID-Z2 into a 9 disk RAID-Z2. That feature is coming, but not here yet.

So what they really did was add a single-disk vDev without any redundancy. This is problematic to remove or correct. (ZFS is not perfect :smile:.) And loss of that single-disk vDev is loss of the ENTIRE ZFS pool. Nasty. I think newer safeguards in the GUI are in place now, but people may be able to work around them.
What you have explained is exactly what I am terrified of doing unknowingly, and I quote: "growing their 8 disk RAID-Z2 into a 9 disk RAID-Z2". Yes, I was aware of that through reading the how-to pages; someone had already done just that, I read the whole thing, and then I thought, how on earth did that happen and why? I suppose that is where you have to read the documentation, but receiving information like this is invaluable and is enhancing my knowledge. I will have to be totally genned up before ever installing the software, creating a pool, or committing data to it, so as not to muddy the waters.

So, if I did create an 8-disc RAID-Z2 array, I take it I can add another 8 hard drives. Do they have to be added all at once, or can I add four hard drives at a time and install the other 4 as and when?


You see, this is where I get lost slightly, so please forgive me and bear with me.


Say I wanted to add another 8 hard drives to create a brand-new pool. My first pool is already created and in use; we will call it Pool 1. Say I now create a new Pool 2: can I add as little as 4 hard drives in a RAIDZ2 configuration and then later on add another 4 hard drives to Pool 2, or must I add all 8 hard drives to Pool 2 at the one time?

If you could reply with the best way to go about that particular task, I would be very grateful.

Finally, do I shut the system down to install the hard drives, or can they be inserted with the system running? I am trying to get it into my head how to tackle this. Your replies are totally invaluable; this is fantastic help, as I understand things better this way than I ever do just reading something, and it is giving me more knowledge so that when I read the literature I will understand it better. Thank you for taking the time to reply; that is terribly good of you and I appreciate you doing so, believe me.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Gees, that is one massive pool. Are those pools individual, as noted raidz2-0, raidz2-1, raidz2-2? Are your pools striped or just standard RAIDZ2? I'm so jealous it isn't even funny. Respect.
Terminology.
That is one pool, 3 vdevs (each in a RAIDZ2 configuration) striped together
The Pool contains all the space (vdev+vdev+vdev), which you then "split up" into datasets
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Gees, that is one massive pool. Are those pools individual, as noted raidz2-0, raidz2-1, raidz2-2? Are your pools striped or just standard RAIDZ2?
Think of vdevs as "logical drives" made of physical drives.
A ZFS pool is always a stripe of logical drives (vdevs). There can be a single vdev, but conceptually a ZFS pool is always a nested RAID (RAID10 with mirrors, RAID50/60 with raidz vdevs).
Redundancy (mirroring or parity array) is achieved within the vdev.

Did you install all those hard drives at once, or did you build that array up over time by adding to it over months?
Either solution is possible.

What you have explained is exactly what I am terrified of doing unknowingly, and I quote: "growing their 8 disk RAID-Z2 into a 9 disk RAID-Z2".
Follow the steps in the GUI:
  1. Create pool.
  2. Add vdev.
  3. Assign drives to vdev, and set vdev type.
  4. Validate vdev.
  5. Optionally, repeat steps 2-4.
  6. Validate pool layout.
One case (building everything at once) is: 1, then 2-4 for the first vdev, 5 (repeat 2-4 for the second vdev), then 6. The other (growing later) is: 1, 2-4, 6 and then, later, 2-4 and 6 again.
It's simple, but you're rightly terrified of making a mistake. Triple check everything before the final validation.

You may practice with your 8 drives before creating the first "real pool". Try 2*(4-wide Z2). Delete. Try 1*(4-wide Z2), validate and then add a second vdev.
It is also possible to run TrueNAS in a VM (on your desktop) with lots of virtual drives, just to play with the GUI and practice.
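
Another zero-risk way to practice the disk/vdev/pool commands (though not the TrueNAS GUI itself) is to use sparse files as stand-in disks on any machine with ZFS; a sketch:

Code:
    # Eight 1 GiB sparse files acting as fake disks -- lab use only, never for real data
    truncate -s 1G /tmp/disk0.img /tmp/disk1.img /tmp/disk2.img /tmp/disk3.img \
                   /tmp/disk4.img /tmp/disk5.img /tmp/disk6.img /tmp/disk7.img

    # Throwaway 8-wide RAIDZ2 pool built on those files
    zpool create practice raidz2 /tmp/disk0.img /tmp/disk1.img /tmp/disk2.img /tmp/disk3.img \
                                 /tmp/disk4.img /tmp/disk5.img /tmp/disk6.img /tmp/disk7.img

    zpool status practice    # inspect the layout
    zpool destroy practice   # tear it down when done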

So, if I did create an 8-disc RAID-Z2 array, I take it I can add another 8 hard drives. Do they have to be added all at once, or can I add four hard drives at a time and install the other 4 as and when?
A raidz# vdev cannot grow (for now, and expansion will come with its own caveats…). So if you go from 8-wide to 8-wide + 4-wide the next steps could be 8-wide + 4-wide + 4-wide or 8-wide + 4-wide + 8-wide but not 8-wide + 8-wide. There is no way to remove the 4-wide vdev, except "backup-delete-restore".

A drawback to adding vdevs as it goes is that ZFS does not rebalance existing data. New data flows to the vdevs according to their free space.
So, if one starts with one vdev and waits until it is 75% full to add a new vdev, the old data remains in old raidz2-0 while new data mostly goes to new raidz2-1; accessing either old or new data remains mostly a single vdev affair and one does not get the performance benefits of the stripe layout.
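
You can see how the data is spread (and how skewed it is) per vdev with, for example (pool name is just an example):

Code:
    zpool list -v tank    # per-vdev SIZE/ALLOC/FREE, not just the pool totals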

Finally, do I shut the system down to install the hard drives, or can they be inserted with the system running?
If the backplane allows hot swap you may hot plug the drives. But it is always cleaner and safer to shut down and cold plug if the server is not "business critical".
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Gees, that is one massive pool. Are those pools individual, as noted raidz2-0, raidz2-1, raidz2-2? Are your pools striped or just standard RAIDZ2? I'm so jealous it isn't even funny. Respect.

Did you install all those hard drives at once, or did you build that array up over time by adding to it over months? If you don't mind me asking, how did you go about installing all 24 hard drives? That is the end game where I would like to end up eventually, but there is no way my bank balance will stretch to 24 hard drives in one purchase, as I am buying 18TB drives costing £250 a pop; that's £6,000 (roughly $7,300) for my 24 hard drives.
That is indeed one pool... you can see the pool name (vol5) at the top... then each of the 3 RAIDZ2 VDEVs of 8 disks each under it.

I originally had 3 separate pools, but I eventually came to the conclusion that I didn't need separate pools (after one of them was filling and the others were mostly empty) and could just consolidate everything into a single pool with some copying and adding the additional VDEVs.

I have also increased the VDEV size of 2 of the VDEVs so far by swapping out all of the disks in the VDEV for larger ones. It's currently at around 180TB in total, about 50% full.
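
For reference, that kind of in-place growth is one disk swap at a time: replace a disk, wait for the resilver, repeat; the VDEV only gains the extra space once every member has been replaced. A rough sketch with placeholder names (in TrueNAS this is the disk "Replace" function in the GUI):

Code:
    zpool set autoexpand=on vol5            # let the pool grow into the new capacity
    zpool replace vol5 OLD-DISK NEW-DISK    # repeat for each disk in the VDEV
    zpool status vol5                       # watch resilver progress between swaps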
 

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
Think of vdevs as "logical drives" made of physical drives.
A ZFS pool is always a stripe of logical drives (vdevs). There can be a single vdev, but conceptually a ZFS pool is always a nested RAID (RAID10 with mirrors, RAID50/60 with raidz vdevs).
Redundancy (mirroring or parity array) is achieved within the vdev.


Either solution is possible.


Follow the steps in the GUI:
  1. Create pool.
  2. Add vdev.
  3. Assign drives to vdev, and set vdev type.
  4. Validate vdev.
  5. Optionally, repeat steps 2-4.
  6. Validate pool layout.
One case (building everything at once) is: 1, then 2-4 for the first vdev, 5 (repeat 2-4 for the second vdev), then 6. The other (growing later) is: 1, 2-4, 6 and then, later, 2-4 and 6 again.
It's simple, but you're rightly terrified of making a mistake. Triple check everything before the final validation.

You may practice with your 8 drives before creating the first "real pool". Try 2*(4-wide Z2). Delete. Try 1*(4-wide Z2), validate and then add a second vdev.
It is also possible to run TrueNAS in a VM (on your desktop) with lots of virtual drives, just to play with the GUI and practice.


A raidz# vdev cannot grow (for now, and expansion will come with its own caveats…). So if you go from 8-wide to 8-wide + 4-wide the next steps could be 8-wide + 4-wide + 4-wide or 8-wide + 4-wide + 8-wide but not 8-wide + 8-wide. There is no way to remove the 4-wide vdev, except "backup-delete-restore".

A drawback to adding vdevs as it goes is that ZFS does not rebalance existing data. New data flows to the vdevs according to their free space.
So, if one starts with one vdev and waits until it is 75% full to add a new vdev, the old data remains in old raidz2-0 while new data mostly goes to new raidz2-1; accessing either old or new data remains mostly a single vdev affair and one does not get the performance benefits of the stripe layout.


If the backplane allows hot swap you may hot plug the drives. But it is always cleaner and safer to shut down and cold plug if the server is not "business critical".
Thank you for replying; I was interested to read your reply. I think you are correct to just go straight in, play about with it, get a feel for how it all works, and take it from there. Fantastic information and great advice, so thank you again. This information is invaluable.
 

harridly

Dabbler
Joined
Jun 27, 2022
Messages
11
That is indeed one pool... you can see the pool name (vol5) at the top... then each of the 3 RAIDZ2 VDEVs of 8 disks each under it.

I originally had 3 separate pools, but I eventually came to the conclusion that I didn't need separate pools (after one of them was filling and the others were mostly empty) and could just consolidate everything into a single pool with some copying and adding the additional VDEVs.

I have also increased the VDEV size of 2 of the VDEVs so far by swapping out all of the disks in the VDEV for larger ones. It's currently at around 180TB in total, about 50% full.
It was interesting to read your post and see how you have gone about doing things. You see, you have me wondering how you went about consolidating the 3 RAIDZ2 pools into one pool, which leads to more questions. Anyway, I will have to figure this all out myself, and the only way to do that is to start playing about with the software to get a feel for it and how it all works. That way I will see for myself, but more than likely I will come unstuck; I am sure I can just pop a question over if I do get stuck.

Thank you again for your time much appreciated.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
how you went about consolidating the 3 RAIDZ2 pools into one pool
Pool1, Pool2 and Pool3 (for simplicity)...

Pool1 getting full-ish

Pool2 and Pool3 not at all full, but some hundreds of GB in them.

Copy the data from Pool2 to Pool1 (yes, getting it closer to being full, but carefully checking that it won't go too close to 100% before starting)

Then (after some checking that the data is where I wanted it in Pool1), destroy Pool2.

Use the disks from Pool2 to add a VDEV to Pool1.

Enjoy the new free space available in the pool.

Repeat the same with Pool3 to Pool1.
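
At the command level, that sequence is roughly the following (pool, dataset and disk names are placeholders; the copy can also be done with a local replication task in the GUI):

Code:
    # Snapshot the data on Pool2 and copy it into Pool1
    zfs snapshot -r Pool2/data@move
    zfs send -R Pool2/data@move | zfs recv -u Pool1/data

    # After verifying the data arrived in Pool1:
    zpool destroy Pool2

    # Reuse the freed disks as an additional RAIDZ2 VDEV in Pool1
    zpool add Pool1 raidz2 da8 da9 da10 da11 da12 da13 da14 da15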
 