Fixing user error on volume creation

Status
Not open for further replies.

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Hey guys,
Here's the background on my situation. I didn't RTFM and assumed (incorrectly) I could add a device to a ZFS 1 Vdev. Since that's not the case, I now have the following setup.

One ZFS 1 vdev of 3 x 3 TB drives

One 3 TB spare disk.

This happened because I didn't have enough SATA ports for 5 disks during my initial transfer. Is it possible to do the following?

  1. Move my data to the single 3 TB spare disk
  2. Create a ZFS 1 vdev that expects 4 devices, with only 3 active (the previous 3 x 3 TB drives)
  3. Copy my data from the single 3 TB drive to the 4-device volume using the same technique as before
  4. Wipe the single 3 TB disk and add it to the 4-device vdev as a replacement, which will give me back single-disk-failure redundancy
I know I'm exposed to potential data loss during this process, so I'll back up to an external source as well beforehand. Unfortunately my hardware isn't ideal, with only 4 SATA ports, but I'm on a budget and it's what I have to work with.

Thanks for the help guys.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
First of all, nomenclature: The RAID5-like thing ZFS has is RAIDZ1.

The big problem I see is that you can't just remove the single drive. If you have another single drive around, you can copy stuff over to it and then copy it back to a new pool.

Are you suggesting you want to use a degraded-by-nature RAIDZ1 vdev, copy stuff over to it and then resilver a drive? That's the kind of thing I can't recommend in good conscience.
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
Hi Ericloewe,
That's exactly what I'm suggesting. To be clear, the external storage I'd be moving the data to is just a bunch of striped USB drives, so in practice that's no less risky than a degraded RAIDZ1 system. That's why I'd like to have the data in two places temporarily: on the single internal drive and on the striped USB drives. It's not a perfect setup, but it should limit my exposure to data loss. If that's not really possible, I can put it in only one place, the external USB drives, and then recreate the vdev with 4 devices. That increases my exposure to data loss, since I'll only have one copy, and on higher-risk storage at that.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
To pull this off so that you'd have a completely legitimate FreeNAS pool when you are done might be tricky. Plus striped usb drives, Z1, and things like that make me cringe a little. If you could practice in a vm... you could likely pull this off.

Easiest by far would be to buy an extra disk. Build the 4-disk pool, shut down, and physically pull the 4th disk to make it degraded. Then plug in the existing data and transfer to the degraded pool. When you are done, replace the removed disk and let it resilver. That way you know you have a FreeNAS-friendly pool built in the GUI.
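For the curious, the degraded state and the resilver are also easy to sanity-check from the shell. The pool name "tank" below is made up, and in FreeNAS the actual replace is best done from the GUI so the gptid labels stay consistent:

Code:
# after the 4th disk is pulled, the pool should report DEGRADED with one member missing
zpool status tank

# once the disk is back in and the replace has been started (ideally from the GUI),
# watch the resilver progress
zpool status -v tank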

The hard way:
Start with this thread https://forums.freenas.org/index.ph...dz-3-of-4-drives-i-e-to-allow-migration.7748/.
Copy the data over from your source.
Build a replica FreeNAS Drive. https://forums.freenas.org/index.ph...ol-to-change-existing-stripe-to-mirror.26326/
Replace the faked degraded drive using the proper gptids.
Export the pool. Auto-import.
If you did everything perfectly, you will have a pool identical to one built from the GUI.

Truth is, that's a little crazy, and no one in their right mind would tell you to do it if you had a choice. But if your data is safe on a disconnected drive, and you don't mind the work and a chance to learn... why not?

Full disclaimers in effect. You are an adult. You could wipe your data. You could screw this up and it won't work. You could screw this up and lose your data later. It is not for the faint of heart. I've never done it. I would just buy the extra drive if I wanted simple. I would also do it in a heartbeat for fun. ;) Yes I am that messed up.

Good luck.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
So I did a couple of these just for fun. It is definitely a pain to be super kosher with partitions but not impossible. Gotta build em all right. :)

I couldn't get the file-backed md device to work nicely; zpool create complained about the device being full. I didn't try very hard to find an answer and just used a memory-backed device instead: 'mdconfig -a -t malloc -s 3072g -u 0' as per http://www.freebsddiary.org/zfs-resizing.php. It's a sparse allocation, so it takes almost no space in memory. Works fine, but you don't want to screw around writing to it and panic the system.
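For anyone following along, the rough shape of the trick is sketched below. Pool and device names are placeholders, and a properly FreeNAS-friendly build partitions the disks and references gptids exactly as the threads linked earlier describe, so treat this as the idea rather than the finished recipe:

Code:
# sparse 3 TB memory-backed device to stand in for the missing 4th disk
mdconfig -a -t malloc -s 3072g -u 0

# build the 4-wide RAIDZ1 from three real disks plus the fake member
zpool create tank raidz1 ada0 ada1 ada2 md0

# immediately offline the fake member so nothing ever gets written to it,
# then tear the memory device down
zpool offline tank md0
mdconfig -d -u 0

# copy the data in while the pool runs degraded, and later replace the
# missing member with the real 4th disk to start the resilver
zpool replace tank md0 ada3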

With your backup in place and the data disconnected, there is no risk to anything important; you can screw around until it's perfect. I also saw a middleware bug importing pools on 9.3 that I didn't troubleshoot. I'm assuming my very old 9.3 install was screwy and that it will work on a fresh, clean, recent one; 9.2.1.9 worked perfectly. Do some testing while you can't lose anything. If I get a chance I'll test 9.3 again and throw a console log up.

Kind of a fun little project.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So far I've seen a few people try the faked-redundancy thing with memory devices. Three or four of them had everything work fine until they went to replace a disk; then the pool threw up a hairball and that was the end of the pool (and the user's data). Don't ask why, I don't care why either. I just know from watching users try the pre-degraded array with memory devices that something isn't quite right. Can't say I'm too terribly surprised, since there's no doubt stuff going on that is totally untested in FreeNAS when using md.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
A less hacky, but still dangerous option is to create a regular RAIDZ1 pool, remove one drive, wipe it, connect the drive with the data and finally resilver the drive you removed.
Beats RAM drives, I guess.
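In shell terms that's roughly the sketch below, with made-up pool and device names, and with the caveat that FreeNAS would normally drive the pool creation and the final replace from the GUI:

Code:
# take one member out of the freshly built 4-disk RAIDZ1
zpool offline tank gptid/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee

# wipe that disk so it can be pulled and its SATA port reused for the data drive
gpart destroy -F ada3

# ...connect the drive holding the data, copy everything into the degraded pool,
# then swap the wiped disk back in and resilver it...
zpool replace tank gptid/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee ada3
zpool status -v tank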
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
No doubt this is going to be untested on a significant scale. I don't really think causality of loss could fairly be attributed to using md, a file, memory, or any other source to back a device within a vdev. ZFS is designed from the ground up to use different storage types, so you either trust ZFS to work as designed or you don't. I have no doubt people could screw up replacing a device, or have things go sideways there; they do that all the time through the GUI as well. There is no significant sample size to place blame and/or no way to rule out user error.

All one can really do is give FreeNAS the pool it expects to see. It doesn't matter if the devices were created by a call from notifier.py or a call from the CLI if the EXACT same commands are issued. The nice thing about open source is we don't have to guess. I had no trouble replacing devices, nor any issues dealing with the pool from the GUI. It is simply a bunch of extra steps to do it the long way to match what FreeNAS wants to see.

I wouldn't hesitate to trust my data to a pool made this way. I would also trust FreeNAS to interact with it gracefully, as it sees exactly what it expects. If we have executed correctly, this is no different than exporting a GUI made pool and auto-importing it. I kind of view it this way, if I asked a developer at iX to build me a pool from the command line, they could get it done. If I asked them to build me a pool using different storage types, they could get it done. If I asked them to ensure it would import nicely and function with the GUI, they could get it done. They wouldn't have concerns that some user screwed up a pool and lost data. They would expect that their code and commands would function as designed and intended.
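One concrete way to check that the end result matches what the GUI would have built (the commands are stock FreeBSD/ZFS ones, the pool name is a placeholder):

Code:
# the members should show up as gptid-labelled partitions, not raw devices
zpool status tank
glabel status

# export from the CLI, then use the GUI's auto-import; if it comes back clean,
# FreeNAS is seeing exactly the pool layout it expects
zpool export tank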

OP. You can obviously screw this up. I've said so a dozen times. Cyber has even seen it happen. I do believe there are people that could pull this off successfully.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I tend to take the same opinion as you, mjws00. My concern is that some stuff in FreeNAS is custom, specific to FreeNAS. For example, Warden (the program that handles jails) is unlike the other wardens out there for FreeBSD, PC-BSD, etc. It's well known to not be identical, we've had quite a few complaints about that, and the answer is "use it as designed or don't use FreeNAS".

I have no idea what may (or may not) be customized with the memory devices, ZFS, etc. There's a BOATLOAD of mechanisms at work that make ZFS work properly. It's entirely possible that this is a dangerous game because of those differences; I'm just hypothesizing, or I'd provide solid evidence. But considering the customization of FreeNAS, the fact that it is billed more as an appliance than an OS, and the fact that others have had problems, this is not only untested but the devs would probably poop in their panties if they knew people were doing this. It's a dangerous game, no matter the reason, and *I* would never endorse this or try to do this with any data that I would *ever* care about. Pools just suddenly going bad is the "worst case scenario" for a file server, and that's exactly what we've seen.

I'd argue that if you are so desperate to do things that you're going to bend FreeNAS to do what you want despite the fact that it's not covered in the manual, you should probably go back to Windows or Linux. The user that will do those kinds of things from the CLI cannot possibly be experienced or adequately prepared for the consequences since FreeNAS is a custom OS of its own.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I don't make judgments about a user's experience or what they may be adequately prepared for. I'll give them fair warning, and then the opportunity to learn something if they choose. Some folks are clueless, others just need a nudge in the right direction to accomplish something awesome. In this case the guy has his data protected, he's shown aptitude to jump into an issue and learn something, and seems intelligent and polite. The primary constraint is budget. So I'm happy to show him a little of what is possible, especially when he's already started researching it. It's not like this is new. We're borrowing from devs and guys that dig deep. But it seems many of the guys that used to go deep on FreeNAS have left.

I get bored with "rtfm", and "click here or gtfo". ;) Sometimes one should jump deep into the code and inhale deeply.

Play safe. For all those concerned, Cyberjock always gives solid safe advice. I respect his position immensely. I kinda like to stretch and see people grow. That may involve mistakes and/or pain.
 

Todd Nine

Dabbler
Joined
Nov 16, 2013
Messages
37
So, just to put some closure on the thread, I played it safe and performed the following.

SSH into the console.

Code:
# Take the initial snapshot and begin transferring it
zfs snapshot -r internal@migration

nohup /bin/sh -c "zfs send -R -v internal@migration | zfs recv -v -F external" >& /mnt/internal/move.log &

# Wait for completion
less /mnt/internal/move.log

# Take a new incremental snapshot and only send the delta, which is much faster
zfs snapshot -r internal@migration2

nohup /bin/sh -c "zfs send -R -v -i migration internal@migration2 | zfs recv -v -F -d -u external" >& /mnt/internal/move2.log &

# Wait for completion
less /mnt/internal/move2.log



Go to the UI, and detach and destroy the "internal" volume.

Shut down the system, install the 4th disk, boot, then create the new volume with all 4 disks.


SSH back in and perform the following:

Code:
# Transfer my migration2 snapshot back to the internal volume

nohup /bin/sh -c "zfs send -R -v external@migration2 | zfs recv -v -F -d -u internal" >& /mnt/external/moveback.log &

# Delete the snapshots we don't need anymore after the transfer

zfs list -t snapshot -o name | grep migration2 | xargs -n 1 zfs destroy -d
zfs list -t snapshot -o name | grep migration | xargs -n 1 zfs destroy -d
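For anyone repeating this, it's worth confirming the copy actually landed before detaching the old volume or destroying any snapshots. Something along these lines, using the same dataset names as above:

Code:
# before destroying the source pool, list the received datasets and snapshots on the destination
zfs list -r external
zfs list -t snapshot -r external | grep migration

# and again after copying back, before cleaning up the snapshots
zfs list -r internal
zpool status internal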
 