1 Dead drive, how do I recover the data on the 3 good drives?

Status
Not open for further replies.

Gunslinger

Cadet
Joined
Sep 24, 2014
Messages
4
I'm running FreeNAS 9.1.1 with four 1 TB drives in a single ZFS volume. I believe it's a RAID 0 setup, because I have all 4 TB available. Now one of the drives is essentially kaput; it's the last of the four, by chance. I can't boot into FreeNAS now because during boot, after it tries to mount the local file systems, I get scrolling ATA errors, ending with these:

(ada3:ata5:0:0:0): READ_DMA. ACB: c8 00 90 02 40 40 00 00 00 00 10 00
(ada3:ata5:0:0:0): CAM Status: ATA Status Error
(ada3:ata5:0:0:0): ATA Status: 71 (DRDY DF SERV ERR) error: 0 (ABRT )
(ada3:ata5:0:0:0): RES: 71 04 9d 00 32 00 00 00 00 04 00
(ada3:ata5:0:0:0): Error 5, Retries exhausted

It was a different ATA Status earlier in the scroll, but it's difficult to capture. Regardless of that, when I just remove the drive, I can boot into FreeNAS, and see the three disks, but there is no zpool available. At the moment, I just want to be able to recover whatever data is on the three disks and copy it to a 3TB drive I have for this purpose. How can I do that?

If I go buy a 1 TB drive and replace the bad one with that, will I be able to boot into FreeNAS and see my zpool and copy my data off? Or do I need to do something else to recover the data?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Short answer: Your data is gone.

Data is striped, so you're missing 1/4 of the data.

I'm not sure why you expected anything else, especially since you realize it was a RAID0-ish setup (striped vdevs of one drive each).
 

Gunslinger

Cadet
Joined
Sep 24, 2014
Messages
4
Short answer: Your data is gone.

Data is striped, so you're missing 1/4 of the data.

I'm not sure why you expected anything else, especially since you realize it was a RAID0-ish setup (striped vdevs of one drive each).
I expect to be able to get 3/4 of the data, though. And I'm not sure it's in a RAID array at all, so it may all be stored on the 3 good drives, since I only had about 2.5 TB of data on the system. But I need to be able to see the zpool to see what's there to recover.

There's no reason for a system to be designed in such a way that losing 1/4 of the data means you can't recover any of it. I'd call that a major design flaw.
 

Gunslinger

Cadet
Joined
Sep 24, 2014
Messages
4
Short answer: Your data is gone.

Data is striped, so you're missing 1/4 of the data.

I'm not sure why you expected anything else, especially since you realize it was a RAID0-ish setup (striped vdevs of one drive each).

The closest thing I could find to what I want is this article:
http://dcprom0.blogspot.com/2013/09/freenas-replacing-failed-disk.html

However, when I disconnect the bad drive from the system and boot into FreeNAS, there is no zpool available. I don't know why the person who wrote the article doesn't have that problem.
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Have you ever read the FreeNAS documentation and forum stickies?
RAID0, or better said STRIPED disks, is a completely insane configuration for data you care about, because if a single disk has a problem, you lose the complete pool, no matter if the other disks are OK. Note that this happens with every RAID0, not only with ZFS and FreeNAS.
You could try changing the SATA cables or putting the disks in different hardware, to see if it's a mobo/connector/cable problem, but if it's the disk, your data is gone. Hope you can restore your data from backups...
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I expect to be able to get 3/4 of the data, though. And I'm not sure it's in a RAID array at all, so it may all be stored on the 3 good drives, since I only had about 2.5 TB of data on the system.
Unfortunately, your expectations are not consistent with reality. With a striped array (whether in ZFS or any other filesystem or RAID controller), data is written across all the disks so it can be read simultaneously from all the disks. Great for performance, but sucks for data protection. There is a valid use case for arrays of this sort, but it isn't to store important data. This is not a design flaw in FreeNAS; it was a poor choice on your part to set up an array that wouldn't protect your data.

If you had all four disks together in a single volume, and that volume had about 4x the capacity of a single disk, they were striped. If you want to confirm this, once you've booted with the problematic disk detached, go into the shell and type "zpool import." Post the results here.
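The point about striped data being written across all the disks can be shown with a toy model. This is a simplified sketch, not ZFS internals: the 128 KiB stripe unit, the function names, and the in-memory "disks" are all illustrative assumptions.

```python
# Toy model of a 4-disk stripe (RAID0-style); NOT real ZFS internals.
# The 128 KiB stripe unit is an assumption for illustration.
STRIPE = 128 * 1024

def stripe_write(data, disks):
    """Spread a file's blocks round-robin across all disks."""
    layout = []  # (disk_index, block_index) for each chunk, in file order
    for i in range(0, len(data), STRIPE):
        d = (i // STRIPE) % len(disks)
        disks[d].append(data[i:i + STRIPE])
        layout.append((d, len(disks[d]) - 1))
    return layout

def stripe_read(layout, disks):
    """Reassemble a file; fails if any disk holding a chunk is dead."""
    chunks = []
    for d, b in layout:
        if disks[d] is None:
            raise IOError(f"disk {d} is dead; file is unrecoverable")
        chunks.append(disks[d][b])
    return b"".join(chunks)

disks = [[], [], [], []]                            # four healthy drives
layout = stripe_write(b"x" * (1024 * 1024), disks)  # one 1 MiB file

disks[3] = None                                     # the fourth drive dies
try:
    stripe_read(layout, disks)
except IOError as e:
    # 3/4 of the raw blocks still exist, but any file with even one
    # chunk on the dead disk cannot be reassembled.
    print(e)
```

With a 1 MiB file on four disks, six of its eight chunks survive the failure, yet the file still can't be read back: that's why "3/4 of the blocks" doesn't translate into "3/4 of the files."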
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
However, when I disconnect the bad drive from the system and boot into FreeNAS, there is no zpool available. I don't know why the person who wrote the article doesn't have that problem.
The person who wrote that article has a redundant pool (mirrors, in his case). That means he sacrificed some data capacity to protect his data in the event of a disk failure. When you've set up your pool that way, the disk replacement procedure (also spelled out in the manual, click-by-click) works well. I know; I just replaced a failing disk last week on my server. You, however, appear to have chosen to set up a pool with no redundancy.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
New SATA cable? If it's truly a bad drive and striped, I don't think there will be much hope.

Sent from my SGH-I257M using Tapatalk 2
 

Gunslinger

Cadet
Joined
Sep 24, 2014
Messages
4
Unfortunately, your expectations are not consistent with reality. With a striped array (whether in ZFS or any other filesystem or RAID controller), data is written across all the disks so it can be read simultaneously from all the disks. Great for performance, but sucks for data protection. There is a valid use case for arrays of this sort, but it isn't to store important data. This is not a design flaw in FreeNAS; it was a poor choice on your part to set up an array that wouldn't protect your data.

If you had all four disks together in a single volume, and that volume had about 4x the capacity of a single disk, they were striped. If you want to confirm this, once you've booted with the problematic disk detached, go into the shell and type "zpool import." Post the results here.
Then the design flaw is forcing that configuration to be a RAID0 array. I just wanted to see the 4 disks as one logical disk; I don't need the performance or the striping. FreeNAS should be able to do that.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Then the design flaw is forcing that configuration to be a RAID0 array. I just wanted to see the 4 disks as one logical disk; I don't need the performance or the striping. FreeNAS should be able to do that.

Let me be honest: instead of blaming FreeNAS (which does have its flaws), if you'd read the manual, you'd know just how ignorant that statement of yours is. The configuration is not forced to be anything (other than impossible configurations). You could've easily made two mirrored pairs, a RAIDZ2, or a four-way mirror.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The "design flaw," if you choose to call it such, is in the fact that FreeNAS does not support (because ZFS does not support) a JBOD configuration. There was nothing in the GUI or the manual to suggest to you that it did support such a configuration, nor that you were creating such a configuration when you set up your storage. This appears to have been a pure assumption on your part.

There's also no reason at all to assume that a JBOD configuration would have resulted in any of your data being recoverable. I'm not wanting to pile on here, and I sympathize with your losing your data (it's happened to me more times than I can count--though not yet with FreeNAS), but you're blaming FreeNAS for doing exactly what you told it to do, and for your not having read the documentation (which is among the best I've seen for a Free software product, and better than a whole lot of paid products). You chose to create a disk configuration which provided no protection for your data, which was a poor decision. Based on your apparent frustration, it seems your data was important. You put your important data on a storage device without reading the documentation explaining how it works--this, again, was a poor decision.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So here's your problem.. the disks are striped. All 4 disks must be functioning and not have corruption that crashes the system for your pool to mount. Break a disk or have corruption and you can kiss *all* of your data goodbye. No, you can't get 3/4 of it back because 1/4 of the disks are gone. 1/4 of your file system is actually missing right now, so there is no mounting it at all without *all* disks. There are also no recovery tools for ZFS, so your options are basically make the bad drive start working, be ready to pay 5 figures to get the data back at some professional company like OnTrack, or kiss the data good bye.

This is why I have very clear warnings in my noobie guide not to do exactly what you are doing. I repeat it multiple times to make it 10000% clear NOT to do what you did. Sorry for your loss and good luck.
 

pjc

Contributor
Joined
Aug 26, 2014
Messages
187
I'm sorry about your data. That sucks.

The way to do what I think you were trying to do is to create a dataset on each individual drive, creating four mountpoints between which you manually divide your storage. That would really be true of any filesystem: ZFS can't both magically distribute your data among those disks AND survive the failure of one of those disks. Instead, it does the former by smearing your data across all of them. That's why redundancy is so critical for stripes.
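The dataset-per-drive layout described above can be sketched the same way. This is illustrative Python, not ZFS code; the `place_file` helper and the in-memory "datasets" are hypothetical.

```python
# Sketch of the "dataset per drive" alternative: each file lives whole
# on exactly one disk, so a dead disk only takes the files stored on it.
# place_file is a hypothetical helper, not a real FreeNAS/ZFS API.

def place_file(name, data, disks):
    """Put the whole file on the emptiest disk (simple manual balancing)."""
    target = min(disks, key=lambda d: sum(len(v) for v in d.values()))
    target[name] = data

disks = [{} for _ in range(4)]        # four independent "datasets"
for n in range(8):
    place_file(f"file{n}", b"x" * 1024, disks)

disks[3] = None                       # one drive dies
survivors = [name for d in disks if d is not None for name in d]
print(sorted(survivors))              # the other disks' files stay intact
```

The trade-off is exactly as described: you get per-disk failure isolation, but you must divide your storage between four mountpoints yourself, and nothing balances it for you.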

The possible improvement I can imagine for FreeNAS is a red warning message in the ZFS Volume Manager when you set up a stripe without any redundancy. Something like, "Warning: A stripe configuration without redundancy will lose the data on ALL of the drives if even ONE of the drives fails. RAIDZ1 or better is highly recommended."

By making volume creation so easy, it becomes very easy to shoot yourself in the foot. And, since not everybody RTFM, the warning might go a long way toward helping newbies.

And of course, now would be a good time for us all to repeat the mantra: "RAID is not a backup. RAID is not a backup. RAID is not a backup..."
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If I remember correctly, it already warns you for striped configurations.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Here are some options:

You could try ddrescue on another system, copying the bad drive to another drive, and see if it will copy at all. Or you could send the drive away for data recovery; once you get it back, connect it up to the pool and see if you get lucky.
 

pjc

Contributor
Joined
Aug 26, 2014
Messages
187
If I remember correctly, it already warns you for striped configurations.
I wasn't positive, so I didn't want to reply until I checked:

It doesn't provide any warning as of 9.2.1.7. Your only clue is that the "optimal" notation disappears. So I think I'll submit a feature request for that.

@rs225: Good call on ddrescue. That might let Gunslinger save some of the data.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Then the design flaw is forcing that configuration to be a RAID0 array. I just wanted to see the 4 disks as one logical disk; I don't need the performance or the striping. FreeNAS should be able to do that.

Four disks as one pool... RAID0 has no redundancy; it's striping only. It's all in the documentation.

Sent from my SGH-I257M using Tapatalk 2
 