Screwed up disk add... best way to fix

Status
Not open for further replies.

TravisT

Patron
Joined
May 29, 2011
Messages
297
I messed up when I added two new hard drives to my iSCSI target. I wanted to set up a ZFS RAID 10 array: I already had two disks striped in the box, and I added the two new drives (all four are 1TB) but accidentally added them as stripe members, which gave me a four-disk stripe instead of two mirrored pairs striped together. I also have one spare 2TB drive that is not part of any pool.
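In CLI terms, this is roughly what happened (device names below are placeholders, not my actual disks):

Code:
# What I effectively did: extend the existing two-disk stripe to four disks (no redundancy)
zpool add raptor da2 da3

# What would have given RAID 10: attach each new disk as a mirror of an existing one
zpool attach raptor da0 da2
zpool attach raptor da1 da3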

Any suggestions on how to fix this without screwing this up more?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, the only way to fix your situation is to destroy the zpool and create a new one...
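In rough terms, once your data is safely somewhere else the rebuild itself is short. A sketch with placeholder device names (in FreeNAS you'd normally do this through the GUI volume manager):

Code:
# Destroy the mis-built pool, then recreate it as two mirrored pairs (RAID 10)
zpool destroy raptor
zpool create raptor mirror da0 da1 mirror da2 da3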
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi TravisT,

Sorry to say but backup & recreate is the only way out of that pickle.

-Will
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
That's what I was afraid of. It made sense in my head when I added the disks; a confirmation screen showing the resulting layout before committing would have been nice. Oh well, live and learn.

What is the best method to back up that data before destroying/recreating? Should I use my spare 2TB disk, create a single-disk ZFS volume, and copy from the CLI? Then copy back? rsync? I have servers running on the iSCSI LUN, so something with minimal downtime would be ideal. I can shut the servers down if necessary, though.

I haven't dealt with snapshots up to this point, so I'm a little hesitant to do so in this case (although it would be a good learning experience).

Suggestions?
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Oh, and to further complicate matters, the volume I need to destroy and recreate contains 8 zvols that are shared via iSCSI. I'm not sure a plain copy or rsync is even an option because of that.
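They show up with zfs list, but not as regular files on the pool, which is why I don't think cp/rsync will see them:

Code:
# zvols are block devices, not files, so they won't show up to cp/rsync
zfs list -t volume -r raptor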
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
What is the best method to backup that data before destroying/recreating?
You need to be careful with iSCSI. Make sure you successfully mount the backup pools before destroying the original.

I can shut the servers down if necessary though.
Don't see how you can avoid it.
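As for verifying the backups, something along these lines before you destroy anything (pool name is just an example):

Code:
# Sanity checks: backup pool is healthy and the copies are actually there
zpool status backup
zfs list -r -t all backup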
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Any pointers on how to do this? I'm out of my league here.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Any pointers on how to do this? I'm out of my league here.
Not really. You have so many options that the right approach depends on where you intend to move your data, how much you plan to move, and other factors. I'd give the manual a read-through and keep a particular eye out for things you may want to use.
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
I've been searching the web since last night, and I'm not really finding a good walk-through of how to do this. The closest I found was this. I started following along, but I got stuck at the step of replicating the data from one dataset to another.

The first plan that comes to mind is to copy the data from the striped zpool over to the single 2TB disk I just installed in the server. Once the data is there (and verified), destroy the 4-disk pool, rebuild it as a striped mirror, move the data back over to that zpool, verify it again, and then wipe the 2TB disk so it's free for future use. Problem is, I don't even know how to copy the data over, much less get all the iSCSI working again. Is this doable?
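My rough guess at the command sequence, pieced together from what I've read so far (pool, zvol, and device names below are just for illustration, and I have no idea yet if this is right):

Code:
# 1. Snapshot each zvol and copy it to a pool on the spare 2TB disk
zfs snapshot raptor/somezvol@backup
zfs send raptor/somezvol@backup | zfs receive backup/somezvol

# 2. Destroy the 4-disk stripe and recreate it as two mirrored pairs
zpool destroy raptor
zpool create raptor mirror da0 da1 mirror da2 da3

# 3. Copy each zvol back, then repoint the iSCSI extents
zfs snapshot backup/somezvol@restore
zfs send backup/somezvol@restore | zfs receive raptor/somezvol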

I have dug through the user guide and can't find anything that goes into real detail about RESTORING a snapshot. I only saw where it covers creating a snapshot and replicating it to another FreeNAS box.
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi TravisT,

I don't know what you are running the VMs under... I run ESXi, so this is how I would do it.

I'd get a second drive that I could install into the ESXi box and move the VMs from the iSCSI devices to that disk through the ESXi datastore browser. It will be slow, but once it's done you can import the VMs back into ESXi and verify they work... and even run them while you are rebuilding the filer.

Once you have the filer back up off its knees you can then just move the VMs back.

-Will
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
I run ESXi as well.

That seemed like a good option, but I can't figure out how to browse an iSCSI target to get the files over. Instead of installing a physical disk, I was thinking I could just move the VMs over to an NFS share on the FreeNAS box backed by the spare drive, copy everything over to that, and rebuild my zpool.

I had one machine that was already off, so I tried removing its LUN from the VM so I could add it under the Storage tab as a datastore. It showed up, but it doesn't look like I can "mount" it without reformatting the zvol. I'm tempted to go through with it and see what happens, since that particular machine isn't of great importance, but it doesn't look like it will work.

Any ideas on how to browse a zvol through ESXi's vSphere client?
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Still haven't made any significant progress on this. I think my biggest problem is figuring out how to move a zvol from one zpool to another (and then back). I can't figure out how to do it through the GUI or the CLI; I'm probably searching the wrong terms.

It seems that using snapshots to recreate the zvols on another zpool is the right approach, but I can't figure out how to do that (using zfs send/recv, I think). Since this is data I don't want to lose (not critical, but it would still take time to recreate), I'd rather not experiment during these transfers.

Can anyone help me before I mess this up more?
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Ok, I stumbled across this writeup that may be helpful.

I started by grabbing my least important zvol and creating a snapshot.

Code:
zfs snapshot raptor/media01.disk1@1


Then I used zfs send/receive to copy that snapshot over to another zpool named "Temp" that I had previously created on the spare disk.

Code:
zfs send raptor/media01.disk1@1 | zfs receive Temp/media01.disk1.b


It is currently copying, but when it's done I think I can just add an iSCSI device extent, point it to the new zvol (media01.disk1.b), and update the associated targets to point to the new extent.
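Before touching the extents I'll probably verify that the copy finished cleanly, something like:

Code:
# Confirm the copied zvol and its snapshot exist on the Temp pool
zfs list -t volume -r Temp
zfs list -t snapshot -r Temp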
 

TravisT

Patron
Joined
May 29, 2011
Messages
297
Update:

Thought things would work, and after rebooting a couple of times it seemed like they did. As an easy test, I added a file to the desktop of one of my machines while it was pointed at the Temp zpool, then shut down and reverted back to the raptor zpool. The file created on the Temp pool was still on the desktop. Not sure where to go from here, but at least I've recreated the data on a different pool; that counts for something.

Advice on how to properly migrate these iSCSI targets over would be appreciated. It seems that the zfs send worked like it should.
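My rough plan for the trip back, once the 4-disk pool is rebuilt as mirrors (just a sketch, I haven't run any of this yet):

Code:
# After rebuilding raptor as two mirrored pairs, copy the zvol back
zfs snapshot Temp/media01.disk1.b@restore
zfs send Temp/media01.disk1.b@restore | zfs receive raptor/media01.disk1
# ...then repoint the iSCSI device extent at raptor/media01.disk1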
 