SOLVED Rearranging pool layout / migration

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Hello all,

I feel like there are enough topics on this already, but if you don't mind taking the time, I'd like to get an opinion on my specific case.

Current Situation

Server:

TrueNAS-SCALE-22.12.4.2
Supermicro X10SRi-F, Xeon 2640v4, 128 GB ECC RAM, Seasonic PX-750 in Fractal Design R5
Data pool: 4*4TB WD RED PLUS (RAIDZ2)
VM pool: 2*500GB SSD (Samsung Evo 850 / Crucial mx500)

+ 2 unused / hotspare 4TB WD RED PLUS

PC:

Win 11
1 TB SSD boot drive
4 TB external drive


Currently there are four WD Red Plus drives in my server, 4 TB each.

@sfatula suggested striped mirrors in another thread / in general my redundancy overhead was questioned, and it got me thinking. For whatever reason I purchased 4 TB WD Red Plus drives for my personal Windows machine two years ago, and since I built the server I don't really need that much storage there. So in total I could use 6 x 4 TB drives.

First I wanted to keep my 4-drive RAIDZ2 layout and create another 2-drive stripe for replication, but that doesn't really provide much benefit. Most of my datasets are cloud-synced as backups, and currently I have a single external 4 TB drive attached to my PC for a total of 3 copies.

Upgrading costs for RAIDZ2 are rather high in terms of storage, but on the other hand 50% redundancy overhead for less fault tolerance with striped mirrors doesn't really strike me as an upgrade either. Although I went 10G, I don't need the IOPS a striped mirror would offer.

So in order to be best prepared for the future I settled on the following option: destroy my current pool, use all 6 drives in a RAIDZ2 config, and buy another external drive / larger internal drive for my PC once 4 TB is no longer enough to serve as a full copy of the server.

Currently 3 TiB of 6.92 TiB are used in our setup. 6 drives would give a whopping 14.5 TiB of usable storage. This should future-proof us enough that I can just start replacing failing drives with larger ones down the road. Caveat: until all drives have been replaced with larger ones (which honestly could be 10 years from now for all I know) I can't use the larger sizes at all. So maybe I'm stuck until I expand my setup to accommodate a whole new pool 5-10 years from now? On the other hand, 14.5 TiB should be plenty; our storage need is mainly increasing with photos / videos at the moment.

But I'm a bit at a loss here on how to migrate:

I will not have the possibility to store my data on a ZFS system during the migration if I use 6 instead of 5 drives. If I use 5 drives for the new pool, the remaining drive could honestly only serve as a hot spare. Not much use for it other than putting it back in my PC and not worrying about a storage upgrade there sometime next year.

1) I cannot migrate the snapshots to the new pool, correct? If I'm sure I don't need an old version of anything, this shouldn't be a problem, I guess.

2) Can I trust a restore from the Windows machine? My plan would be to use Total Commander to synchronize the datasets with the external drive (check the "by content" box) and, upon recopying to the server, check the verify box again.

3) Should I rather download all B2-synced datasets? If I leave out everything that is currently not cloud-synced anyway (I just rely on my two local copies because that data is not personal and thus replaceable), I would still need almost 4 days to download everything. Acceptable? Yes. However, I'd like to avoid it if possible. But I absolutely want to make sure data integrity is there; that's why I upgraded to ECC RAM a few weeks ago.

How would you proceed in my situation?

edit: added picture for clarity
 

Attachments

  • hdd.drawio (1).png (191.5 KB)

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
If you're going to use all 6 in one pool, z2 makes sense esp if you don't need the IOPS and value uptime more, it's entirely up to you what you value most. It's not really "safety", as you need backups anyway. Not sure where you are getting 14.5 TiB from, I think you have different size drives so the 14 TiB has to come from the 4 times the smallest of the drives (z2). Obviously, to ever expand you'll either need 6 larger drives, or, 6 more drives.

I'm not really clear what other drives you have on other systems. How do you intend to back up your system? Multiple methods? If you value your data as much as you indicate, then you should have at least 2 copies; I have 3 or 4 copies myself.

Are you sort of indicating all your data is currently on a Windows system? If so, it has drives too. If that is your good copy of data, why can't you trust it? Your B2 data came from it, right? B2 couldn't possibly be better than the original.

Where are you getting snapshots from? ZFS? Confused.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
If you're going to use all 6 in one pool, z2 makes sense esp if you don't need the IOPS and value uptime more, it's entirely up to you what you value most.
Thanks!
Because my internet is rather slow and I don't want to restore from backups, I value uptime more.

Not sure where you are getting 14.5 TiB from, I think you have different size drives so the 14 TiB has to come from the 4 times the smallest of the drives (z2).
Yes, 14 TiB, sorry. No, all drives are identical.

Obviously, to ever expand you'll either need 6 larger drives, or, 6 more drives.
I would say if we reach the 14 TiB limit, the additional costs would be justified, and I assume that's many years down the road.
I'm not really clear what other drives you have on other systems. How do you intend to backup your system? Multiple methods? If you value your data as much as you indicate, then you should have at least 2 copies, I have 3 or 4 copies myself.
1 copy: truenas
1 copy: external wd drive connected to my windows machine
1 copy: backblaze b2

If that is your good copy of data, why can't you trust it? Your B2 data came from it right? B2 couldn't possibly be better than the original.
Maybe there's a misconception on my side. I thought the whole point of ZFS / ECC RAM is to ensure data integrity. If I let my backups sit on my external Windows drive for a few years, they may be subject to bitrot? Errors during copying wouldn't be corrected due to the lack of ECC? That's why I'd trust the TrueNAS data the most. Of course everything that got corrupted before using TrueNAS is gone, yes.
That's why I asked where to get my data back onto my system, because I will not have any drives available on my server to store the data on while recreating the pool. I would need to get the data back from either the external hard drive or Backblaze.
Basically TrueNAS is my working directory (via SMB), and Windows and B2 are the backups.

Where are you getting snapshots from? Zfs? Confused.
Yes, ZFS snapshots.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
No idea where you are getting 14 TiB from? Is your smallest drive 3 TiB? Confused. If it's 3 TiB, then you'll have < 12 TiB storage in a z2.

When I was speaking of the Windows data, I was responding to what I thought was the question: how to get the data to TrueNAS for the first and only load of it. So I was assuming Windows -> TrueNAS. Is that not the case? I was not speaking of keeping a copy on Windows at all! I thought it was all going to TrueNAS, and then the old copy gets killed. So, yes, no bitrot. But if your current live data is on Windows, that's the best copy; you copy it to TrueNAS, and it's fastest too. That's a one-time thing. Am I misunderstanding? I surely wouldn't want to use a (probably good) copy of the live data which is slower to load. I was not speaking at all of leaving the data to rot on Windows for years; I wouldn't do that! But you want to copy ideally directly from your current live copy of the data. It should also be the source of your B2 copy. If that data is on Windows and it's bad, then the B2 copy isn't any better: if it had bitrot, then your B2 copy should too.

Your hardware is not clear to me at all. Now you are mentioning some sort of ZFS source of data which was not really mentioned in the OP? What is this ZFS data, where is it (what sort of machine), and is that the data you are using to load TrueNAS? Or is some/all of it really on Windows? Maybe you are relying on your original thread, but few would likely read both threads. Please expand on your post so we can all be clear what you are moving, and from what, to TrueNAS. All the whats! If the data currently resides in some unknown ZFS system, yes, you can replicate that data over.

Backups look good.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thanks for answering!

edit: Added picture to first post for clarity.

I'll start from scratch; probably it makes sense only in my head / I have trouble conveying the information in an understandable manner. Probably the history of where the additional drives came from was confusing and didn't really matter here.

Currently there are four WD Red Plus drives in my server, 4 TB each.
I thought this would imply that I'm currently using a machine running TrueNAS.

Current Situation

Server:

TrueNAS-SCALE-22.12.4.2
Supermicro X10SRi-F, Xeon 2640v4, 128 GB ECC RAM, Seasonic PX-750 in Fractal Design R5
Data pool: 4*4TB WD RED PLUS (RAIDZ2)
VM pool: 2*500GB SSD (Samsung Evo 850 / Crucial mx500)

+ 2 unused / hotspare 4TB WD RED PLUS

PC:

Win 11
1 TB SSD boot drive
4 TB external drive


The RAIDZ2 data pool is currently where the live copy of the data lives. I previously had two 4 TB WD Red Plus drives in my PC; one already found its way to the server and is currently attached as a hot spare to the pool. I moved the remaining Red Plus into the server a few days ago when I upgraded my NIC.

Server: 4x4 TB RAIDZ2 + one 4 TB hot spare + one 4 TB unused (formerly in my PC; I had two copies on my PC, internal and external)
PC: 1 external 4 TB drive (copy of server)
Backblaze: gets its data via a cloud sync task from the server

Idea: 6 drive raidz2 / striped mirror
Now I want to put the hot spare and the unused drive to good use. I'm currently leaning towards a new RAIDZ2 over a striped mirror.

So I keep my copies on Backblaze and the external PC drive (freshly copied 3 days ago). But in order to create either a striped mirror or a RAIDZ2 with 6 disks, I need to destroy my current 4x4 TB RAIDZ2 pool.

Now I wondered whether it would be better to get the data back from my Windows PC or from Backblaze. I do not suspect that the Windows copy has suffered from bit rot in 3 days, but I don't know whether there are any bad sectors that would maybe corrupt the data. I subjectively feel that the data on my server should be the safest (and Backblaze should be identical to it).

This is basically the question: from where should I restore the data? I could copy it to the unused drive from my server and recreate a 5-drive pool, but I'd rather use all 6 drives permanently.
I will not have the possibility to store my data on a ZFS system during the migration if I use 6 instead of 5 drives. If I use 5 drives for the new pool, the remaining drive could honestly only serve as a hot spare. Not much use for it other than putting it back in my PC and not worrying about a storage upgrade there sometime next year.

There are some options: copying from Windows -> server, or pulling from the server (maybe scp?). Does that make any difference? Maybe I'm on the wrong track, but in case my pool failed I'd rather restore from Backblaze than from Windows. I don't know how to maintain the Windows copy long term, maybe with occasional synchronization via Total Commander or something like that, anything that can verify the file hash. For Backblaze I'd trust that my data is in the same condition I uploaded it in.

No idea where you are getting 14 TiB from? Is your smallest drive 3 TiB? Confused. If it's 3 TiB, then you'll have < 12 TiB storage in a z2.
Maybe I misunderstood something, but https://wintelguy.com/zfs-calc.pl gives me 14 TiB usable after the migration. The 14.5 was a miscalculation on my end; I didn't use the calculator and just went with 4 * 3.64 TiB = 14.56 TiB.

[Screenshot of the wintelguy.com ZFS capacity calculator result]


But you want to copy ideally directly from your current live copy of the data. It should also be the source of your B2 copy. If that data is on Windows and it's bad, then the B2 copy isn't any better: if it had bitrot, then your B2 copy should too.
The current live copy is on my TrueNAS server. And yes, that's the source of B2.

Windows <-- TrueNAS --> B2

Maybe you are relying on your original thread but few would likely read both threads. Please expand on your post so we can all be clear what you are moving, from what to Truenas?
Sorry, maybe it's the language barrier or maybe it's just not my day for providing precise information. Hopefully this post clears everything up :)
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Much clearer! So they are all 4 TB, which makes more sense. I still don't get why you said 3 TiB / 6.92 TiB, but I'll drop it as not relevant anymore.

So, a hot spare is somewhat worthless if you don't have multiple vdevs. Say you had a 5x4 TB RAIDZ2 with a hot spare: the hot spare comes into play only if there is a failure. So it's better to simply make it a RAIDZ3 really, since then you can suffer an extra failure, though admittedly that's highly unlikely. You're still essentially at 50% wasted space either way. Once you have multiple vdevs, a hot spare makes more sense. So it made sense with mirrors, as it's a spare for whichever vdev has a failure. But not so much in a Z2. I suppose you could just keep it as a cold spare.

I would say, given what you have said, B2 is the best copy (between it and the Windows machine) if you need a copy, exactly for the reasons you suggested. It's not the fastest copy of course. So if #1 is the highest chance of perfect data, I'd load from B2.

But your diagram shows 2 servers; I think that's the planned layout vs. the question section, so I'm pretty sure you only have one. You do have 2 extra drives currently, so you COULD also remove the hot spare from the Z2, giving you 2 drives. You could make a second pool, no redundancy, just a stripe, merely for the purpose of migrating the data. Then replicate the data with ZFS from the existing RAIDZ2 to the new pool. Then set up, say, a RAIDZ2 as you suggested but with two missing drives (so a 5-"drive" pool with 2 parity but 2 missing drives); yes, it can be done. ZFS would at least checksum the data, so you would be quite safe. Then replicate the data from those 2 drives in the temporary pool to the newly set up RAIDZ2, and then replace the missing drives with the 2 drives you used to stage the data. There's a lot to do here that you may or may not be familiar with. That's likely what I would do, as I have backups anyway; however, your likely safest path, if you don't know how to do that, is still B2. Just wanted to provide an option since you mentioned it in the OP; this would be the fastest, but you could easily mess something up too unless you already knew how to do this.
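For reference, a rough command-line sketch of that sequence, using hypothetical pool names (tank, temppool, newtank) and example device names that you would substitute with your own. On TrueNAS you would normally create pools and run replication through the GUI, and encrypted source datasets add complications (raw sends, key handling), so treat this purely as an outline of the steps, not a recipe:

Code:
    # 1) Temporary staging pool from the two spare drives (no redundancy, short-lived)
    zpool create temppool /dev/sda /dev/sde

    # 2) Snapshot everything on the old pool and replicate it to the staging pool
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv temppool/staging

    # 3) Destroy the old pool, then build the new RAIDZ2 with sparse files standing in
    #    for the drives that are still busy holding the staging copy; keep the files
    #    no larger than the real disks so the disks can replace them later
    zpool destroy tank
    truncate -s 3600G /root/placeholder1 /root/placeholder2
    zpool create newtank raidz2 /dev/sdb /dev/sdc /dev/sdf /dev/sdg \
        /root/placeholder1 /root/placeholder2
    zpool offline newtank /root/placeholder1
    zpool offline newtank /root/placeholder2   # pool runs degraded, but still checksums everything

    # 4) Replicate from the staging pool into the degraded RAIDZ2
    zfs send -R temppool/staging@migrate | zfs recv newtank/restored

    # 5) Free the staging drives and let them take the placeholders' slots
    zpool destroy temppool
    zpool set autoexpand=on newtank
    zpool replace newtank /root/placeholder1 /dev/sda
    zpool replace newtank /root/placeholder2 /dev/sde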

Sorry for misunderstanding the whole situation, but it wasn't clear to me.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I still don't get why you said 3Tib / 6.92 Tib but will drop it as not relevant anymore.
Now I get you! I wanted to say: currently the 4-drive RAIDZ2 offers 6.92 TiB of usable space, and we have used up 3 TiB of that.

But your diagram shows 2 servers but I think it's planned vs the question section so pretty sure you only have one.
Yes, only one server; the "second" server was to show the timeline / the moment I am worrying about: destroying my pool. Really not my day today.

You could make a second pool, no redundancy just a stripe merely for the purpose of migrating the data. Then, replicate with zfs the data from existing raidz2 to the new pool. Then, setup say for example a raidz2 as you suggested but with two missing drives (so a 5 "drive" pool with 2 parity but 2 missing drives), yes it can be done. zfs at least would checksum the data so you would be quite safe.
You mean a 6-drive pool? If I go RAIDZ2 I won't need any extra drive, so I'd attach both drives to the new pool.

This is the stuff I was looking for! I didn't know you could do that. I'll see if I can find some documentation on it (and understand it). Would this also be possible for mirrors?
This would actually solve my problem, because I wouldn't need to use my Windows backups [need to look into how to maintain their integrity]. I do not want to restore from them ever, but if I needed to, it's good they're there.

Just wanted to provide an option as you mentioned it as an option in the OP, this would be fastest but you could easily mess something up too unless you already knew how to do this.
I'd probably still ask here once I have made an exact step-by-step plan on how to proceed, in case I missed something. The worst-case scenario is downloading our backups for a week, so that will probably be the route to go for me.

Obviously, to ever expand you'll either need 6 larger drives, or, 6 more drives.
With a 6-drive setup, mirrors could be a thing; if I stick to four drives I only lose redundancy (and gain speed I do not really need), but I am much more flexible for future expansions. I need to look into it, but I'm pretty sure you can swap each vdev for higher-capacity disks individually (disk by disk). It's a rock and a hard place deciding striped mirror vs. RAIDZ2. Resilvering a striped mirror is also much faster. One could get the impression that RAIDZ2 over RAIDZ1 is chosen because of the higher probability of a drive failing during resilvering.
Would you go for a 2x3 striped mirror with no hot spares? With the server upgrade, the upgrade to 10G, and probably upgrading our router to an appliance that can run pfSense, purchasing another HDD is something I'd like to put off for the foreseeable future (maybe wait for a good deal in the next months).
it's entirely up to you what you value most. It's not really "safety", as you need backups anyway.
o_O As I posted in another thread, I probably need to be more relaxed about the redundancy of my pool. I haven't heard back whether we can get fibre next year, but even with our current internet I could restore at least my wife's dataset in 8 hours (plus she also has another copy on her machine, but I didn't mention it because it's just an extra copy of her dataset and wouldn't solve anything here).
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Now I get you! I wanted to say: currently the 4-drive RAIDZ2 offers 6.92 TiB of usable space, and we have used up 3 TiB of that.

Yeah, I thought you had different size drives!

You mean 6 drive pool? If I go raidz2 I won't need any extra drive, so I'd attach both drives to the new pool.

5 or 6, you mentioned 5 so I went with it. You could do 6, but then no spare, and I do think an on-hand spare is important.

This is the stuff I was looking for! I didn't know you could do that. I'll see if I can find some documentation on it (and understand it). Would this also be possible for mirrors?
This would actually solve my problem, because I wouldn't need to use my Windows backups [need to look into how to maintain their integrity]. I do not want to restore from them ever, but if I needed to, it's good they're there.

It's kind of a trick with sparse files, here's a post I just found:


With a 6 drive setup mirrors could be a thing - if I stick to four I only lose redundancy (and gain speed I do not really need). But I am much more flexible in future expansions. I need to look into it, but I'm pretty sure you can swap each vdev for higher capacity disks individually (disk by disk). It's a rock and a hard place deciding striped mirror vs raidz2. Resilvering a striped mirror is also much faster. One could get the impression that raidz2 over raidz1 is chosen because of the higher probability of a drive failing during resilvering.
Would you go for a 2x3 striped mirror with no hot spares? With the server upgrade, upgrade to 10g and probably upgrading our router to an appliance that can you pfsense, purchasing another HDD is something I'd like to put off (maybe wait for a good deal in the next months) for the foreseeable future.

If you did a 2-vdev mirror setup, you could use one of the remaining 2 drives as a hot spare. But you'd have less space too; it may not fit. If you do the 3-vdev mirror then you have no spare, which I would not do. I use mirrors myself, but I have a hot spare; what if I am on vacation when a drive fails? You can indeed swap in larger drives one vdev at a time. That is all up to you! And yes, resilvering is much faster with mirrors. However, the hidden thing is that any error on the remaining drive is not really handled fully, in the sense that a checksum error could lose a file since there is nothing else to rebuild it from, but it tells you which file(s). Probably minor as you have backups. Raidz2 or greater doesn't have that issue. As long as you keep up on scrubs, though, the likelihood would be pretty low. At least weekly scrubs.
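For reference, the disk-by-disk capacity upgrade of a mirror vdev described above would look roughly like this at the command line. Pool and disk names (tank, sda/sde, sdx/sdy) are hypothetical examples; in the SCALE UI the same thing is done with the Replace action on a pool member:

Code:
    zpool set autoexpand=on tank   # let a vdev grow once all of its members are larger
    zpool replace tank sda sdx     # swap the first disk of the mirror vdev for a larger one
    zpool status tank              # wait for the resilver to finish before continuing
    zpool replace tank sde sdy     # then swap its partner
    # once both members of this mirror vdev are the larger size, its capacity expands;
    # the other vdevs can be upgraded later, independently of each other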
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thanks!

@danb35 posted instructions on how to do that; however, if I'm honest, his warning label kinda puts me off. I'm not sure I really want to go this route.

So RAIDZ2 with 6 disks would be possible, maybe over the holidays when no one needs any files -> there's enough time to download from B2.

On the other hand, I checked the ZFS documentation: it says ZFS will try to recover a file in case of a checksum error.
And if I get you correctly, I should be fine as long as the mirror can provide the file. So I would only need to restore when the copy on the mirror is also bad.
Doesn't sound too bad.

I need to figure out how to retrieve individual files with filename encryption, or whether I want to disable that (renaming a folder currently triggers a complete re-upload).

I'll increase the scrub frequency to weekly then (I have it at every 2 weeks right now).

Also, going for mirrors would make the migration easier. I don't want to use the two free drives together in one vdev (because both are older and have a lot more power-offs), but I could remove a disk from the Z2 pool and create the first vdev while the RAIDZ2 runs degraded, and then add the other vdevs after replication.

Then I could go with 2x2 and keep two hot spares. And when I need more space I'll add another vdev.
Possible caveat: when I switch disk capacities during expansion, I will probably need a hot spare for each disk size?
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
There's really nothing to lose and time to gain. If it fails or you mess it up, whatever, you download from B2 and do the slow plan. Nothing lost but a little time. If it works, then you save 3+ days. ZFS will tell you about any file(s) that have errors; there's nothing hidden. Up to you of course. You won't be the first or last to use the technique. Step 1 is to do a scrub just before you start the process, to make sure you are starting in a good state.
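Assuming the pool is called tank, that pre-flight check is just:

Code:
    zpool scrub tank       # re-read and verify every block against its checksum
    zpool status -v tank   # shows scrub progress/results; with -v it also lists the
                           # paths of any files with permanent (unrecoverable) errors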

If you are buying larger disks, yes, you start by replacing the spares, as a spare needs to be big enough to replace any failed drive.

I'll add my own caution; look at it this way: if you go the download-from-B2 route, you are overwriting your known good current data. Now let's say something goes wrong and the data on B2 is not good; things happen, and you never know for sure until you try a restore, of course. Now you've lost your copy of good data. Yes, you still have it on Windows, but that's not your favorite copy either.

Let's say you take the "risk". Now you have an extra chance: you have the conversion chance, you have the B2 chance, and you finally have the Windows method. Surely one of the 3 will work!

I believe the big red warning was only due to people using the technique but not having backups, i.e., reading it all.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I believe the big red warning was only due to people using the technique but not having backups, i.e., reading it all.
This thread suggests there may be long term issues.

Honestly, I'm grateful for the suggestion but would steer away from that route. I get your point about backups (and I have successfully restored smaller test datasets from B2 to verify that everything works), and that could even protect me from said issues in the future.

If you add up all the costs for the server, then it would make more sense to either buy the appropriate number of drives now and do the conversion without tricks, or stay away from it. Why dump 1k into a NAS and risk the pool to save 100 bucks now.

The more I think about how I would be trapped into replacing 6 disks at once, the more striped mirrors come into play again. Especially because I can migrate the right way now.

Sorry, I'm thinking out loud; the decision is not easy for me, as you can see.

In the end no one can decide my layout for me :)
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
You change your mind every post, lol. Do whatever makes you feel safe and secure. For me, I don't buy a thing that thread said, but again, I am not you, so do as you wish. Many people have built ZFS pools using that technique (not necessarily TrueNAS, but that is mostly irrelevant). Hopefully you have enough info now to make that choice. If that choice is to steer away, do that! I am not trying to change your mind about anything. I just don't generally agree with FUD. I participate in a lot of non-TrueNAS ZFS discussion groups.

Do it over the holidays you mentioned; you have until then to make the pool type choice.

For sure having a few extra disks makes the whole thing easier. I hope you can make the best decision for you.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
I slept a few nights on the issue and came to a solution: 3x2 mirror vdevs. I value the ease of expansion more than the extra parity of RAIDZ2. I also decided that rather than having two hot spares I'd invest in another drive and have just one hot spare but another 4 TB mirror to attach.
Unfortunately I will need to get my PCIe SATA extender out again for this; the brand (Delock) was not recommended here, but it ran flawlessly for almost a year before, and I will only attach one of the VM mirror drives or one of the boot pool mirror drives to it, so nothing really critical.

So I still have a few questions on the migration itself.

1) I will gain speed with the striped mirror layout: only for transactions that exceed some percentage of my available RAM, because writes go to RAM first anyway. Although I would say I have a rather large amount of memory, I will still see benefits, right?

2) When I create the new pool, will the data automatically be striped across all vdevs when I add new vdevs later on? I ask because initially I will only have 1 vdev available, and it would hold almost all my data before I can add the other vdevs.

3) I also want to upgrade to Cobia, and it said that you need an empty configuration file anyway. So I thought about a fresh install and then setting up my shares and everything from scratch again. Would you first do the new install and rearrange the disks afterwards, or rearrange first and then reinstall?

4) Not specifically a TrueNAS question: my PC also uses a Seasonic PSU from the Focus series. The PX-750 is specified for a max load of 62 A on 12 V. I'd assume it would be safe to take the 4-connector SATA cables from that PSU rather than using some Molex adapter solution? It has 4 slots for peripherals, so in theory I could power 4x4 = 16 SATA devices if I reuse the cables from there. That would yield 3.8 A per device, which at 12 V is over 45 W. I could also set up staggered spin-up in the BIOS, but numbers-wise I should be safe to power 7 3.5" HDDs and 4 2.5" SSDs from one PSU?

5) Migration procedure

Current layout
  sdb - raidz2 - same manuf. date as sdg
  sdc - raidz2
  sdf - raidz2
  sdg - raidz2 - same manuf. date as sdb
  sde - hot spare - older drive from PC
  sda - unused - older drive from PC
  sdk - unused - new drive

  1. Buy a 7th hard drive (sdk) and burn it in with badblocks
  2. I don't want sdb and sdg in the same vdev
  3. I don't want sde and sda in the same vdev
  4. Detach the hot spare
  5. Detach sdg from the pool; the pool is now degraded but I still have 1 drive of parity
  6. Create mirror vdevs (sdk, sda) and (sde, sdg) and stripe them
  7. Replicate the data from the degraded raidz2 to the new striped mirror
  8. Destroy the raidz2 pool, create a mirror (sdb, sdc)
  9. Add that mirror to the new pool
  10. Add sdf to the pool as a hot spare
Final layout
  sdb - vdev3 - same manuf. date as sdg
  sdc - vdev3
  sdf - hot spare
  sdg - vdev2 - same manuf. date as sdb
  sde - vdev2 - older drive from PC
  sda - vdev1 - older drive from PC
  sdk - vdev1 - new drive

  • I didn't find the option to detach the hot spare from the pool in the web UI - is this a CLI-only option? zpool remove poolname device?
  • Contrary to the documentation, my ZFS Info options read "Extend" and "Offline" instead of "Extend" and "Remove" - I assume choosing Offline will remove the drive from the pool?
  • For replication I set up a task, choose my individual datasets, and check:
    [Screenshot of the replication task options]

    I would change Read Only from REQUIRE to IGNORE - why would I want them read only?
    Currently almost all datasets on my pool are encrypted. After rebooting the server it's fun to unlock them all. I would create a new encrypted dataset on the new pool, replicate all my current datasets under that one, and then check inherit encryption (see the sketch at the end of this post)? I need to update various paths anyway because the new pool will probably need a different name from the current one.
Anything I overlooked? Thanks in advance!
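On the encryption point in the last bullet, a minimal sketch of the CLI equivalent, assuming a hypothetical new pool named newtank. In the SCALE UI this corresponds to creating the dataset with encryption enabled; datasets created or received beneath it can then inherit that encryption, so only one key has to be unlocked after a reboot:

Code:
    # create an encrypted parent dataset; the passphrase is prompted for
    zfs create -o encryption=on -o keyformat=passphrase newtank/encrypted

    # children inherit the parent's encryption root
    zfs create newtank/encrypted/photos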
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
The badblocks run should finish tonight, so I could start replication and migration tomorrow.

4) got solved in the meantime: the cables are compatible according to the Seasonic documentation, although they do not match exactly in naming.

I know this is really small-scale questioning on my side and probably not really encouraged. However, I'd really like to avoid creating a thread asking for help recovering after the migration when I find out I missed a crucial step.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
1. The mirror will see improvements for virtually all IO, especially reads, as they can come from either drive.

2. The data will not be rebalanced when you add another vdev. You'd have to do that yourself if you want that done. There are rebalancing scripts out there on the internet.
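The core idea of those scripts is simply to rewrite each file so its blocks get re-allocated across all vdevs. A stripped-down sketch of that idea, with a hypothetical dataset path; the published scripts add safety checks, handle attributes and hardlinks, and warn that snapshots keep the old blocks allocated:

Code:
    #!/bin/bash
    # rewrite every file in place so ZFS re-allocates its blocks across all vdevs
    cd /mnt/tank/somedataset || exit 1
    find . -type f -print0 | while IFS= read -r -d '' f; do
        cp -a "$f" "$f.rebalance.tmp" && mv "$f.rebalance.tmp" "$f"
    done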

3. I think I'd just install Cobia fresh first. Not sure it matters much. I like fresh installs; they give you a chance to make things better.

First step (but I think you have it) is to back up before doing any data moves or detaching anything.

I don't know what the options are for Cobia; you can always do it from the command line. The doc likely doesn't match because it has likely been updated for Cobia and it's different there.
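If you do those two steps from the shell, the commands would be roughly the following, with a hypothetical pool name (tank) and the device names from your table; note that zpool status may list partition IDs rather than sdX names, so use whatever identifiers it shows:

Code:
    zpool remove tank sde     # a hot spare can be removed from the pool at any time
    zpool offline tank sdg    # takes a raidz2 member offline; the pool keeps running
                              # degraded (one parity drive left) and the disk can then
                              # be wiped and reused elsewhere
    zpool status tank         # confirm the layout before destroying anything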

Why would you want them read only? When you are doing replication for backup purposes, you wouldn't want someone to change something on a true backup machine. But you would definitely not want that option for your purposes.

You might just file copy things if you don't want the same data in the same place, maybe reorganize things if that makes any sense.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thanks for chiming in again! Really appreciated!

2. The data will not be rebalanced when you add another vdev. You'd have to do that yourself if you want that done. There are rebalancing scripts out there on the internet.
Thanks! I found some scripts. I'll probably come back to this later; it seems like it can be done anytime. The only way I could create the pool with all 3 vdevs from the start is if I put all the data on the drive that will serve as a hot spare later on. But that would mean no redundancy.
It's 1.5-2 TiB of data where I'd really appreciate integrity. Most of the rest is not even in my B2 backup anyway, and I could restore it from my Windows copy. I'd assume replicating 1.5-2 TiB should be possible without the drive failing on me.
Otherwise I could at least create 2 vdevs and have my data stored with parity in the meantime.

3. I think I'd just install Cobia fresh first. Not sure it matters much. I like fresh installs, give you a chance to make things better.
Thanks! I'll do some reading tonight though; I gathered that some features (like the SMB aux parameters) got removed, for which I would want to have a solution prior to reinstalling.

First step (but I think you have it) is backup before doing any data moves, detaching anything.
Yes! I'll check that all B2 datasets are synced and compare my offline copy against the server with Total Commander before I detach anything.

Why would you want them read only? When you are doing replication for backup purposes, you wouldn't want someone to change something on a true backup machine. But you would definitely not want that option for your purposes.
Required was the default selection; I figured as much that I do not want / need that.

You might just file copy things if you don't want the same data in the same place, maybe reorganize things if that makes any sense.
I spent the last months organizing my data ;) I'm as good as done with it. But I think I get your point. I'll test how it plays with the filename encryption / re-uploading to B2. If there are no problems, I'll check out one of the scripts you mentioned or just move some files around manually.

Thanks for holding my hand here; I feel rather confident that the migration will succeed.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
The only way I could create the pool with all 3 vdevs from the start is if I put all the data on the drive that will serve as a hot spare later on. But that would mean no redundancy.
I'm not entirely sure I assess the following risk correctly:

There will be checksum information stored even if I use a single disk as intermediate storage. So I would definitely learn during the next scrub if any of my data went corrupt, and I would need to pull the corrupt files from backup?

a) I'm extremely unlucky and the drive fails completely -> I need to pull a complete backup; I estimate a timeframe of 6-8 h for replication during which there must not be a drive failure.
b) ZFS will tell me on the single-drive volume that _some_ files are corrupted -> I need to find the affected files, even though I use filename encryption on my B2 and need to restore single files.
c) Everything goes well and I don't need to do anything.

I would take the risk and store my data on a single drive then, and save myself the rebalancing later on, if I'm not fundamentally wrong here.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
ZFS will tell you which files it cannot recover. I believe you have it correct otherwise. I've been traveling, so I'm skimming through right now, but I think so.
 

chuck32

Guru
Joined
Jan 14, 2023
Messages
623
Thank you very much for all your help!

I decided to create a mirror for intermediate storage; I was still a bit confused about snapshots and have some reading to do. I'll do some further reading and if I'm still unclear I'll ask in a separate thread.

The only thing I forgot was that when I run the replication task manually, no current snapshot is created. But I made sure no one had written any new data since the last automatic snapshot task, and after I checked two datasets for completeness I assumed all the data was there.
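For next time, taking a quick recursive snapshot by hand just before the run would close that gap, assuming the task is set up to replicate a snapshot with that name or naming schema (pool name hypothetical):

Code:
    zfs snapshot -r tank@pre-migration   # one recursive snapshot of every dataset, so the
                                         # replication run ships the very latest state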

I did a fresh install of Cobia after the migration.

So again, thank you very much for all your help! Really appreciated!
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
Did you reload the existing configuration from a configuration backup, or just manually configure it all again?
 