Replacing Disks, reconfiguring VDevs and Pools

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
I hope I posted this in the right subforum. If not, moderator(s) please feel free to move it where it makes most sense.

My Personal Server (see Signature) is ready for some upgrades/changes. I'm in the process of burning in/validating some new larger drives (2 days into an expected 8 day duration!) and figured it would be best to sort out the actual changes before the drives are ready. :grin:

Some background

Built my Original Server on FreeNAS. Ran out of room between Media and Personal Files. Built the new Media/Backup Server (see Signature) on TrueNAS Scale and transferred all the media and backup processes to that server. Works well. It also holds the household backups, though I will need to increase capacity for that purpose in the future. (Discussion with my CFO will happen before that change!)
The Original Server was upgraded to TrueNAS Scale and repurposed to be my Personal Server. Pools were renamed, but I messed up on configuring what data was put on which pool. :rolleyes:
Dilemma
The pool structure is not working well: too many files on some pools, too few on others. I could just replace the drives, but I have SMB shares set up, and I really don't want to have to modify the 50 or so background scripts running on my personal computer that reference those shares.
What I want to do
Currently I have 2 pools, T and Y, that are germane to this discussion:
Pool "Y" - 2 VDevs (4 drives), total capacity 14 TB
Pool "T" - 3 VDevs (6 drives), total capacity 22 TB
Hardware-wise, I want to replace Pool "T" with a new Pool "C" - 2 VDevs (4 drives), total capacity 38 TB.
Then I want to move the data and swap the pool names so that the old "Y" data resides on the new "C" hardware and the old "T" data resides on the former "Y" hardware, with each set of data still living under its original pool name so I don't have to change the scripts on my PC.
Proposed process
  1. Add the new drives and configure a new Pool "C" (I have spare ports available on the LSI Broadcom SAS 9300-8i)
  2. Copy (with verification) the data from Pool "Y" to the new Pool "C", preserving owners, dates, etc. (see the rsync sketch below)
  3. Delete the data on Pool "Y"
  4. Copy (with verification) the data from Pool "T" to Pool "Y", preserving owners, dates, etc.
  5. Remove Pool "T" - physically (logically as well?)
  6. Rename Pool "Y" to Pool "T"
  7. Rename Pool "C" to Pool "Y"
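For steps 2 and 4, I'm planning on something like the following; the paths are illustrative, and the second, checksum-based pass is the verification step (a sketch, not a tested procedure):

    # Copy everything, keeping owners, ACLs, xattrs and timestamps
    rsync -aAX /mnt/Y/ /mnt/C/
    # Verify: re-walk both trees with checksums; should list no changes
    rsync -aAXc --dry-run --itemize-changes /mnt/Y/ /mnt/C/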
I know I will need to turn off SMB while doing this.
What I am wondering is whether I will have to delete all the SMB shares before the moves and re-add them after all the pool renames, or whether TrueNAS will handle the change in hardware because the pool names end up the same (even though they physically sit on different hardware).

Have I missed a step?
Is there a better way to handle this?
Looking forward to all constructive comments.
Thanks!
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
I think the root cause of your problem is that you are building multiple pools. Usually, you do not need multiple pools unless you need different pool-level options.

For a personal server, you can put all your mirrors in a single pool and then create datasets: first per role or function, and then per data.

Here, at the root of my pool, I have :
HAStore
LocalStore
iocage

HAStore is replicated to backup servers. To ensure it will fit on the destination, its total size is restricted.
LocalStore is more storage, but that one is not replicated to backup. I accept that I could lose that content in a single incident.
iocage is... for iocage. If I need to back up configs or data, I script it and save the result in HAStore.

HAStore itself is split into datasets like backups; cloud; Library; ...

So I would recommend a complete review of your situation: go with separate datasets instead of separate pools. That way, you will not waste space in one pool that is needed in another.
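As a rough sketch (the pool name "tank" and the 4T quota are examples only, not my real values), the layout is built like this; the quota is what keeps HAStore small enough to always fit on the backup destination:

    # One pool; datasets per role, then per data
    zfs create tank/HAStore
    zfs create tank/HAStore/backups
    zfs create tank/HAStore/cloud
    zfs create tank/HAStore/Library
    zfs create tank/LocalStore
    # Cap HAStore so the replica always fits on the backup server
    zfs set quota=4T tank/HAStore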
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
And as for your scripts, you will now learn that hard coding is never a good way. Instead of hard coding the paths in your scripts, you should define them as variables at the top and configure your paths there. Once configured, you call the variables in the script instead of the hard-coded full paths.

So after re-designing your drives into a single pool, you should also re-design your scripts to avoid hard coding and keep all your paths in a series of easy-to-update variables.
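Something like this at the top of each script (the share names and paths here are made up for the example):

    #!/bin/sh
    # Define the share paths once; after a move, update only these lines
    PERSONAL_SHARE="/mnt/Y/personal"   # illustrative path
    BUSINESS_SHARE="/mnt/T/business"   # illustrative path

    # Everywhere else, reference the variables, never a literal path
    cp -p "$BUSINESS_SHARE/invoices/latest.pdf" "$PERSONAL_SHARE/archive/"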
 

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
Heracles,
Thanks so much for your comments and the example of your own system. I didn't put all of my design notes into my first post as I felt it was getting too long as it was. I appreciate that you took the time to read what I did write and to respond. Let me address the two major points you brought up.

So I would recommend a complete review of your situation: go with separate datasets instead of separate pools.
Thank you for that design consideration. One of my original reasons for going with a few pools, with datasets, was the fact that if I lose a VDev, I lose the whole pool. Even with mirrors, that scenario is not very likely, given the hardware I have chosen. Separating things into a few well-designed pools also allows me to move the pools among servers should the need arise, or in the event that I decide to split one of the servers up. (That is how my media server was actually created: by moving over one pool and then setting up the backups on a separate pool, with the intent of someday moving that to a server at an offsite location.)

I am usually pretty regular about my hardware upgrades, and I find it easier to replace all the drives in a single pool than just half of them and then deal with the lack of leveling across VDevs of different sizes (unless that is a feature of ZFS that was added recently).

While I call this my "Personal Server", I do have both personal files and my consulting/business files on the system. And Physical separation is a big issue in the event people want to review my files on the business side. While datasets let me logically separate the data, once logged on to the server, they physically reside on the same disks. By separating the disks into pools, I can physically disconnect my personal files from the server in the event that is required; for auditing or review purposes, without losing the data. (True, a separate server might make more sense in this case, but that requires additional space for another local server, and that kind of space I am currently out of.)

And as for your scripts, you will now learn that hard coding is never a good way. Instead of hard coding the paths in your scripts, you should define them as variables at the top and configure your paths there. Once configured, you call the variables in the script instead of the hard-coded full paths.
This is a valid response. As someone who has been coding and designing software and systems since before there were such things as "Personal Computers", I agree with you. And I have done so.

However, even with variables, there are still at least 50 scripts running automatically on a weekly basis that might need to be changed. And while they are properly documented (when was the last time you heard of anyone actually doing that?), it is still a lot of changes. Hence, the less I have to do on that end, the better.

All in all, the system design I chose is functional, practical, and fits the criteria I am comfortable with. The "flaw" was in the original split-off of the Media/Backup server and the numerous pool name changes made back and forth while trying to get the SMB configurations/connections to work correctly; hence the wrong data ended up on the wrong pools (disk-size wise).

Sorry for the long explanation, but I wanted to express that I have thought about these things in the past and recently. At this point I'm just trying to document how I can go about updating the drives on my system.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
One of my original reasons for going with a few pools, with datasets, was the fact that if I lose a VDev, I lose the whole pool.
That will always be true.

Also, note that most of the time, whatever makes you lose more than one disk at a time will take them all: fire, flooding, etc.

The proper way to protect yourself against such catastrophic failure is not multiple pools, it is proper backups. No single TrueNAS server, no matter how robust, is ever more than a single point of failure. See my signature about a complete backup plan.

also allows me to move the pools among servers should the need arise
Moving a dataset is just as easy, if not easier: zfs send and zfs receive will do it.
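For example (dataset, snapshot, and host names are illustrative):

    # Snapshot the whole dataset tree, then stream it to another pool/server
    zfs snapshot -r tank/HAStore@move
    zfs send -R tank/HAStore@move | ssh otherserver zfs receive -u backup/HAStore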

I find it easier to replace all the drives in a single pool than just half of them and then deal with the lack of leveling across VDevs of different sizes (unless that is a feature of ZFS that was added recently).
Not recently... ZFS has been able to do that for ages. With mirrors, you replace one disk with a bigger drive and resilver. Once done, you replace the second drive and resilver. After that, the mirror's size increases to the size of your new drives. You can have vDevs of different sizes in a pool without any problem.
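In commands, the sequence is roughly this (pool and device names are examples only; use the disk ids TrueNAS shows you, not /dev/sdX):

    # Let vdevs grow automatically once all their disks are bigger
    zpool set autoexpand=on tank
    # Replace the first half of the mirror; wait for the resilver to finish
    zpool replace tank sda sdc
    # Then the second half; the mirror expands when this resilver completes
    zpool replace tank sdb sdd
    # Watch resilver progress and the new capacity
    zpool status tank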

What is possible but not recommended is to have vDevs of different structure within the same pool, e.g. RaidZ2 and mirrors together. But here you have only mirrors, so no problem.

I do have both personal files and my consulting/business files on the system.
If you have anything professional, proper backups are even more important. They may even be required by legislation, depending on your specific situation.

With the reasons you gave for multiple pools, my recommendation is still to re-design everything around a single pool. If splitting professional and personal data across datasets is not enough, that shows you need to separate them onto different servers, because 2 pools in 1 server are still under the control of a single logical entity. It is far better to plan on moving data logically (zfs send / receive) than physically. Often, when you move hard drives, they fail: they stop, they cool down, they are handled, they endure shocks and vibration... Any one of these can push a drive over the cliff. All of them at once is not a good thing at all.

In all cases, relying on auto-expand (replace a disk with a larger one and resilver; do it across the vDev and voilà) is probably the way to go here. Better still if you can add the larger drive and resilver without removing the smaller one, thanks to a free port to connect the larger drive to.

As of now, I think proper backups should be your top priority. Remember that a backup is not functional until you have successfully restored it to a new system, so I recommend a complete restore test once your backups are in place.

Good luck with that,
 

linus12

Explorer
Joined
Oct 12, 2018
Messages
65
Backups are being performed. Full backups once a month, and incremental backups on a daily basis. They are currently performed onsite and then stored at an offsite location. Testing on the backups occurs about twice a year. Future plans are to perform the backups directly to an offsite location, once that is established.

Again, I appreciate your concern with my current design, but it is working. At this point I am not looking to change the underlying design, only to replace the current disks and then move files.
If I were going to rework the overall architecture, I would have 3 separate servers, with all pools at the Z5 level, in a rack, plus an offsite server for backups. (We can dream about the Ferrari, but we still drive the 10-year-old Chevy because it gets us from point A to point B, and we still have the 12-year-old Ford in the garage in case the Chevy needs to go in for repairs.)

What I am looking for is confirmation of my plan as stated above, or for a better/faster way of doing it.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
or for a better/faster way of doing it.
And I answered that already...
In all cases, relying on auto-expand (replace a disk with a larger one and resilver; do it across the vDev and voilà) is probably the way to go here. Better still if you can add the larger drive and resilver without removing the smaller one, thanks to a free port to connect the larger drive to.

Use auto-expand instead of your procedure: far less handling, more space, easier to achieve, and the safest option, because you can plug in and resilver the new drive before removing the old one (you wrote that you have extra disk slots available).

Renaming things is rarely a good choice. Often, things are mapped to IDs under the hood, and changing the name just creates confusion. By using auto-expand, everything stays in place and you get your extra space. In TrueNAS, disks are identified by an ID written on them, not by their physical spot in the server like /dev/sda or similar.
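You can see those ids with something like this (pool name illustrative):

    # List pool members by their ZFS GUIDs instead of device names
    zpool status -g T
    # On SCALE (Linux), stable disk ids also live here
    ls -l /dev/disk/by-id/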

So pool T is built with disks A and B as the 1st mirror; C and D as the 2nd mirror; E and F as the 3rd mirror.

You plug new drive G and replace A with it.
Once done, you offline and remove A.

You plug new drive H and replace B with it.
Once done, your first mirror will auto-expand to its new capacity.

Do the same for C and D.

You will have a larger pool without the need to touch anything else.

Use the same logic to expand your other pool, or re-add the removed drives as extra vDevs. That will also increase your pool's size, with lower risk and a simpler process.
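Re-adding an old pair as another mirror vdev would look like this (pool and device names illustrative); it is good practice to dry-run a zpool add first:

    # -n is a dry run: show the resulting layout without changing anything
    zpool add -n Y mirror sde sdf
    # If the layout looks right, do it for real
    zpool add Y mirror sde sdf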
 