Replacing Stripe member with RAIDZ member?


Seattle

Cadet
Joined
Oct 16, 2012
Messages
8
I don't know how this happened, but somehow I ended up with a volume that consists of 3x 1TB drives in RAIDZ1 and 1x 500GB drive in the same volume as a stripe member.
I would like to replace the 500GB and add another 1TB. Normally, if this was all in the RAIDZ pool, I could just pull the 500GB, put in the 1TB, add the drive, and let it rebuild. When I pulled the 500GB, though, the entire volume went down and was unusable. I suspect that the 500GB, because it is a different size, was added as a stripe instead of a RAIDZ member.
My question is: can I somehow take the stripe member out of the array by moving the data on it to the RAIDZ pool, even though it's all in the same volume, and then add the new 1TB drive?
It looks like this:

Volume:
DATA >
raidz1 >
ada1p2
ada3p2
ada4p2
stripe >
ada0p2

Of course I don't want the stripe at all and would like to eliminate it and add one new drive to the raidz1.
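(For reference, the "normal" RAIDZ1 member replacement described above would look roughly like this from the CLI. This is a sketch only; the gptid and device names are placeholders, not values from this system:)

zpool offline Data gptid/<old-member>
# power down, physically swap the 500GB for the 1TB, boot, then:
zpool replace Data gptid/<old-member> ada4p2
zpool status Data    # watch the resilver complete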

Thanks ahead of time.
 

Seattle

Cadet
Joined
Oct 16, 2012
Messages
8
I'll add this output from the CLI:

[root@freenas] ~# zpool status Data
  pool: Data
 state: ONLINE
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz1                                        ONLINE       0     0     0
            gptid/b3197980-083e-11e2-84aa-0019d155623d  ONLINE       0     0     0
            gptid/b3b2f576-083e-11e2-84aa-0019d155623d  ONLINE       0     0     0
            gptid/b41a12a4-083e-11e2-84aa-0019d155623d  ONLINE       0     0     0
          gptid/2b6639a2-083f-11e2-84aa-0019d155623d    ONLINE       0     0     0

errors: No known data errors
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If I understand what you wrote correctly, you tried to add 500GB of storage to your pool. Read my guide (see my sig for the link). You can never remove that drive from your zpool. Also, if your small 500GB drive fails, you will lose everything in your zpool. If you look at slide 21 you will see a visual representation of what you did, and why it's a very bad thing to do.

The only way to fix your mistake is to erase the zpool and start over. That said, there are ways to copy everything from the 500GB to a larger drive and then do a scrub to fix everything (someone else will have to post the exact commands you'd need, if it will even work for a stripe). But keep in mind you will never have failure protection for the 4th disk until you delete the zpool and recreate it.
 

Seattle

Cadet
Joined
Oct 16, 2012
Messages
8
Yes. That's exactly what happened, by accident of course. And yes, I did notice that the zpool will not come ONLINE while the stripe member is missing.
Instead of having to completely back up all this data and recreate the zpool, I would like to know if there is a way to transfer the data that is on the stripe member to the extra 1TB drive (I now have it connected using a spare data port), then remove the 500GB, add the 1TB to the RAIDZ, and scrub to fix the volume. If I am left with a 1TB stripe member, I'll have to find a way to back up and rebuild the zpool (which I would rather not do, of course).
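(A sketch of that plan, assuming zpool replace accepts a member of a single-disk stripe vdev the same way it accepts a RAIDZ member, which nobody in this thread confirms. It also assumes the 083f gptid from the status output above is the stripe member, and that the spare 1TB shows up as ada5:)

zpool replace Data gptid/2b6639a2-083f-11e2-84aa-0019d155623d ada5
zpool status Data    # wait for the resilver to finish
zpool scrub Data     # then verify the pool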
Thanks for your help.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Honestly, I'd focus a lot less on trying to replace the 500GB with a 1TB and a lot more on rebuilding the zpool into a true RAID-Z1 configuration. You are playing with fire right now, as your stripe drive is a single point of failure.

I know that in a RAIDZx configuration you could shut down your server, remove the drive, and image it onto a larger drive on another machine running FreeBSD from the command line (it could take several hours to copy), then plug the new drive back into your FreeNAS server and boot up. But I'm not sure if a striped drive will auto-expand to fill the larger drive. I've only seen one or two people do it (usually because of other emergency hard drive failures) and never with the intention of expanding the array. They made the exact same mistake you did and tried to add a single drive. My guess is that the few people who could give you the command line to do the transition may choose not to post, because you are already in a very bad way in terms of data reliability. If you make a mistake on the command line and somehow destroy the data on your 500GB, you'd lose everything in your zpool.
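(For reference, the offline imaging approach described above would be roughly the following under FreeBSD. This is a hedged sketch: the device names are placeholders, and it glosses over the fact that dd'ing a GPT disk onto a larger disk leaves the backup partition table mislocated, which gpart recover can fix:)

# on a second FreeBSD box with both disks attached (names hypothetical):
dd if=/dev/ada0 of=/dev/ada1 bs=1m conv=noerror,sync
gpart recover ada1
# back on FreeNAS, growth into the extra space is not automatic;
# something like the following is needed:
zpool set autoexpand=on Data
zpool online -e Data gptid/2b6639a2-083f-11e2-84aa-0019d155623d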

Please don't take this the wrong way, but we see the same errors over and over, and eventually the experienced people just start tuning out those errors because there are plenty of forum posts to read if you choose to search. I'm not sure how easy searching would be, but based on the mistake you made, I'm not sure how much help you're going to get. Your mistake is one that was occurring more than once a week earlier this year. Most people stopped posting with any assistance because it got old always telling people they're SOL. (Hint: that's why I created the guide, to hopefully save the forum from unnecessary repetitive posting and save some people from losing their precious data.)


Edit: I just saw another post of someone else desperately trying to save their data. See http://forums.freenas.org/showthread.php?9280-FreeNAS-8-3-Will-Not-retain-ZFS-Pool/page2 to see where it is identified that he made the same mistake you did :(
 

Seattle

Cadet
Joined
Oct 16, 2012
Messages
8
Yeah. It was a big mistake, and I didn't even catch it until I wanted to replace the smaller drive with a larger one. That's when I had to dig a little deeper and figure it out.
The strange thing is that I used the GUI for every step, and I was able to reproduce the error exactly on a test machine. I added a new RAIDZ1 with 3 HDDs, then went back and added a single drive, choosing to expand the same pool, and, bam, it added the drive as a stripe instead of a RAIDZ1 member. Is this a glitch in the GUI? I know that you can add the new drive at the CLI no problem.
Anyway, long story short, I ended up doing what you said and blew away the pool and recreated it. Oddly enough, I learned a pretty neat pair of commands that I had never used: zfs send and zfs recv. I created a new pool on a new 4TB hard drive called Backup, then just did a quick "zfs send Data@now | zfs recv -dF Backup". Worked perfectly. After I created a new, correct RAIDZ1 pool, I did the opposite. Looks like everything is working great. I couldn't get the zfs send/recv commands to work without the -d and -F switches, but that's for another day.
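(Reconstructing that workflow as a sketch: zfs send operates on snapshots, so a snapshot step comes first. The -R flag is an assumption here, since the quoted one-liner doesn't show it; it makes the send recursive over child datasets, while -d makes recv strip the source pool name and -F forces a rollback of the target:)

zfs snapshot -r Data@now
zfs send -R Data@now | zfs recv -dF Backup
# destroy Data, recreate it as a proper 4-disk RAIDZ1, then reverse:
zfs snapshot -r Backup@restore
zfs send -R Backup@restore | zfs recv -dF Data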
Anyway... thanks for your help on this. You pushed me in the right direction.
 

JaimieV

Guru
Joined
Oct 12, 2012
Messages
742
Seattle said: "I added a new RAIDZ1 with 3 HDDs, then went back and added a single drive, choosing to expand the same pool, and, bam, it added the drive as a stripe instead of a RAIDZ1 member. Is this a glitch in the GUI? I know that you can add the new drive at the CLI no problem."

No, you can't. You can't expand a RAIDZ1 into a larger RAIDZ1 by adding a new drive to it; ZFS doesn't support that (more's the pity!). Read Noobsauce80's guide again.
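(This is easy to see at the CLI: plain ZFS refuses a single-disk addition to a raidz pool unless forced. Roughly, with a placeholder device name, and the error text from memory rather than verbatim:)

zpool add Data ada5
# invalid vdev specification
# use '-f' to override the following errors:
# mismatched replication level: pool uses raidz and new vdev is disk

What the GUI evidently did is the forced equivalent, zpool add -f, which stripes the disk in permanently.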
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Seattle said: "Is this a glitch in the GUI? I know that you can add the new drive at the CLI no problem."

It is not a glitch in the GUI. It is a mistake noobies make when they don't have a thorough knowledge of what they are doing. FreeBSD/FreeNAS is unforgiving of admins who don't have the knowledge to do what they think they are doing. This is by design, because it is expected that admins understand what they are doing. Windows tries to protect users from themselves, and we can see how that is working out for them. FreeBSD is the opposite end of the spectrum: an OS of "know what you are doing or pay the price." That's why Unix guys can make great money.
 

Seattle

Cadet
Joined
Oct 16, 2012
Messages
8
Well... just to retort: I don't consider myself a "noobie" when it comes to Unix/Linux/Mac/Windows, but keeping all the different technologies straight in my head is sometimes challenging. I'm not saying I'm an expert by any means (that's why I posted here in the first place).
I realized I was mixing up LVM with zpools. I had forgotten that with zpools you can't add a single drive to an existing RAIDZ vdev. I didn't remember until JaimieV mentioned it above, and then it all came back.
Hindsight being 20/20, I would have planned for expansion and built the pool out of small RAIDZ1 vdevs of 500GB drives, nested inside one zpool. I did some testing with this before. Basically, you create a "Drobo" or "unRAID" type of scheme yourself, and it lets you expand in increments of the smallest set you added at the beginning, as sketched below.
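(A sketch of that layout with hypothetical disk names; a pool built from small RAIDZ1 vdevs grows one whole vdev at a time:)

# start with one 3-disk RAIDZ1 vdev:
zpool create Data raidz1 da0 da1 da2
# later, grow the pool by striping in a second RAIDZ1 vdev:
zpool add Data raidz1 da3 da4 da5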
Thanks again for all your help, guys. I'm actually testing this for use as an Xsan replacement for a media company with 60TB + 120TB. They are currently using Nexsan and ActiveStorage arrays, but the Xsan controllers are still running on old G5s with Xsan 2.0. ZFS with nested pools works a lot like Xsan at the core level: LUNs being like vdevs, etc.
Thanks again. :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I wasn't saying you were a noobie, just that it was a mistake that noobies make. Hopefully I didn't offend you with my comment. We all make that dumb "noobie" mistake from time to time ;)
 