Volume won't mount =[


j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
I apologize ahead of time for my ignorance (I am new to this stuff), and I hope this isn't a repeat question, but I tried searching for answers and had no luck =[

Anywho, long story short: I have a Promise Technology SAN connected via Fibre Channel to a FreeNAS box, which hosts NFS shares for some XenServer hosts. The SAN is separated into one array of SAS drives and two arrays of SATA drives. I had FreeNAS running for quite some time with one ZFS volume per array (SAS0, SATA0, SATA1). I haven't been able to back anything up yet (my co-location goes online tomorrow, go figure).

Now for the problem. The controller in the SAN shut down randomly, so the FreeNAS box could no longer talk to the drives. I got the SAN back up and running, but I had to reboot the FreeNAS box, and when it came back up, two of the arrays (SAS0 and SATA0) worked fine, but the third came up as "WARNING: The Volume SATA1 (ZFS) status is UNKNOWN". I checked the SAN itself: none of the drives failed, and the array is still intact and listed as "OK", so it's not hardware.

Here's what I did:

- Rebooted FreeNAS, no luck

- Checked the volumes and it listed SATA1 with errors (Error getting available space, Error getting total space)

- I detached the volume hoping that I could auto-import it back in, but it didn't show up in the list.

- If I "Import Volume", select SATA1 as the name, select "multipath/disk1" (which is different from the rest? the others aren't multipath?) I still get "Errors getting available space..." etc.

- I found a suggestion to run "gmultipath destroy disk1". That removes the multipath, and when I then run the auto-import, SATA1 shows up in the list, but once the import finishes it isn't listed as a volume. In the spirit of hope I rebooted, but disk1 went back to being a multipath and the volume still wasn't showing up. (Some non-destructive checks are sketched below.)
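For reference, these are the kinds of non-destructive checks that show where ZFS actually sees its labels before anything is destroyed (a sketch, assuming the stock FreeNAS 9.2 shell; device names match this thread):

Code:
# which raw disks back each multipath device, and their states
gmultipath status
# map gptid/... labels to their underlying devices
glabel status
# scan for importable pools without modifying anything
zpool import
# dump the four ZFS labels (if any) visible through the multipath provider
zdb -l /dev/multipath/disk1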


I hope this info is helpful; again, I apologize if I'm missing anything. I'm running version 9.2.1.2. I would really like to get this volume attached again without losing the data.

=[
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
I tried the following this morning, after running gmultipath destroy disk1:

Code:
[root@PHL-FreeNAS ~]# zpool import                                             
   pool: SATA1                                                                 
     id: 9342728976671780986                                                   
  state: ONLINE                                                                
action: The pool can be imported using its name or numeric identifier.        
config:                                                                       
                                                                               
        SATA1                                         ONLINE                   
          gptid/f8bab269-f97b-11e3-85e5-00219bfcec72  ONLINE                   
[root@PHL-FreeNAS ~]# zpool import SATA1                                       
cannot import 'SATA1': I/O error                                               
        Recovery is possible, but will result in some data loss.               
        Returning the pool to its state as of Mon Apr  6 17:22:53 2015         
        should correct the problem.  Approximately 15 seconds of data          
        must be discarded, irreversibly.  After rewind, several                
        persistent user-data errors will remain.  Recovery can be attempted    
        by executing 'zpool import -F SATA1'.  A scrub of the pool             
        is strongly recommended after recovery.                                
[root@PHL-FreeNAS ~]# zpool import -F SATA1                                    
cannot mount '/SATA1': failed to create mountpoint                             
[root@PHL-FreeNAS ~]#   
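That last error is a FreeNAS quirk rather than a pool problem: the root filesystem is mounted read-only, so ZFS can't create a /SATA1 mountpoint. The usual workaround (a sketch, untested against this exact system) is to import with an altroot under /mnt, where FreeNAS keeps its volumes:

Code:
# import with an altroot so mountpoints are created under the
# writable /mnt tree instead of the read-only root filesystem
zpool import -F -R /mnt SATA1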
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
If I run the same commands after rebooting (once multipath/disk1 comes back), I get even less:


Code:
[root@PHL-FreeNAS ~]# zpool import                                                                                                 
[root@PHL-FreeNAS ~]# zpool import SATA1                                                                                           
cannot import 'SATA1': no such pool available 
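A plausible explanation for the pool vanishing whenever the multipath is active (hedged, since it can't be verified from here): the multipath provider is one sector smaller than the raw disks, because gmultipath keeps its metadata in the last sector, and ZFS expects two of its four labels at the end of the device. Comparing labels through each path would confirm it:

Code:
# dump ZFS labels as seen through the multipath provider vs. the raw paths
zdb -l /dev/multipath/disk1
zdb -l /dev/da2
zdb -l /dev/da5
# compare sizes; the provider should be one sector smaller
diskinfo -v /dev/da2 /dev/multipath/disk1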
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
If anyone has any suggestions, they would be appreciated; I'm basically shooting in the dark xD

Every time I reboot, things go back to the way they were: multipath/disk1 becomes visible again and the volume still isn't. The only way the volume becomes visible for import is if I run gmultipath destroy disk1. I also tried zpool import -R /mnt -F -n SATA1, and this SEEMS like it works, but it's not visible in the GUI. When I then run zpool status, it shows the pool twice, but that's only until I reboot, and then everything goes back to the way it was.
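One note on that command: per the zpool(8) man page, -n combined with -F is a dry run; it reports whether the rewind recovery would succeed but does not actually import anything, which would explain why nothing sticks. A sketch of the difference:

Code:
# dry run: checks whether the rewind would work, makes no changes
zpool import -R /mnt -F -n SATA1
# the real thing: actually attempts the rewind import
zpool import -R /mnt -F SATA1

Also worth knowing: the FreeNAS GUI only lists volumes recorded in its own configuration database, so a pool imported at the CLI won't show up there; the GUI's Auto Import handles both the import and the database entry.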

HALP! I need to get this back up and running and I'm at a total loss =[
 

Alvin

Explorer
Joined
Aug 12, 2013
Messages
65
What driver are you using for the Fibre Channel card, and is it loaded?
Also, is the SAN visible? Check with camcontrol devlist.
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
The HBA is okay and the SAN is visible (two of the three volumes are visible, mounted, and working). The problem is that third volume, which is visible only under certain conditions yet still doesn't mount.
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
Here is the output from camcontrol devlist


Code:
[root@PHL-FreeNAS ~]# camcontrol devlist                                                                                           
<Promise VTrak E610f 0328>         at scbus0 target 0 lun 0 (da0,pass0)                                                            
<Promise VTrak E610f 0328>         at scbus0 target 0 lun 1 (da1,pass1)                                                            
<Promise VTrak E610f 0328>         at scbus0 target 0 lun 2 (da2,pass2)                                                            
<Promise VTrak E610f 0328>         at scbus1 target 0 lun 0 (da3,pass3)                                                            
<Promise VTrak E610f 0328>         at scbus1 target 0 lun 1 (da4,pass4)                                                            
<Promise VTrak E610f 0328>         at scbus1 target 0 lun 2 (da5,pass5)                                                            
<Dell VIRTUAL DISK 1028>           at scbus2 target 0 lun 0 (da6,pass6)                                                            
<ATA WDC WD2502ABYS-1 3B04>        at scbus3 target 0 lun 0 (pass7)                                                                
<TEAC DVD-ROM DV28SV D.0J>         at scbus4 target 1 lun 0 (cd0,pass8)                                                            
[root@PHL-FreeNAS ~]#                                                                                                              
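For anyone reading along: each LUN shows up once per fibre path (scbus0 and scbus1 here), which is why gmultipath pairs devices. A quick way to confirm which da devices are the same LUN (a sketch; -S prints only the serial number):

Code:
# the same LUN reports the same serial number on both paths
camcontrol inquiry da2 -S
camcontrol inquiry da5 -S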
                         
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
Don't know if it's worth noting, but multipath/disk1 (Optimal) consists of da2 (Passive) and da5 (Active).
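If it helps to double-check, gmultipath can report that directly (a sketch):

Code:
# members, Active/Passive roles, and provider state for one device
gmultipath list disk1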
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
:( Anyone?

I would hate to remove the volume; there is a considerable amount of data on there =[
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The warnings that go for hardware RAID are equally valid for schemes such as this.
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
According to the SAN (and, from what I can tell, FreeNAS), the hardware is fine, thankfully =]

Regardless, the problem persists =[ I've almost given up on this, but I'm really hesitant to rebuild the volume.
 
Joined
Oct 2, 2014
Messages
925
What are the hardware specs? I think Ericloewe meant that the issues you're having typically occur when using hardware RAID with FreeNAS.
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
The SAN is a Promise Technology VTrak E610f. The drives are in hardware RAID arrays:
SAS0 - RAID10
SATA0 - RAID1
SATA1 - RAID50

This is connected via Fibre Channel to the FreeNAS box:
Dell PowerEdge
Build: FreeNAS-9.2.1.2-RELEASE-x64 (002022c)
Platform: Intel(R) Xeon(R) CPU E3110 @ 3.00GHz
Memory: 4048MB

However, at this point it may be a little late. Due to time constraints I had to create a new volume, so I doubt I'll be able to get the old one back unless you guys know any tricks, haha.

Now it's onto trying to rebuild my linux file systems :(
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Honestly, you pretty much killed yourself by doing gmultipath destroy. That changes the geometry of the disks, because the multipath metadata takes up the last sector of each disk. That was your "game over".
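To make the geometry point concrete (a sketch; the exact on-disk layout depends on how the pool was built): gmultipath's metadata occupies the last sector of each member disk, so the provider is one sector smaller than the raw device, and ZFS keeps two of its four labels at the end of whatever device it was given. Any change in device size strands those end labels:

Code:
# the raw path and the multipath provider differ by one sector
diskinfo /dev/da2                # mediasize of the raw disk
diskinfo /dev/multipath/disk1    # 512 bytes smaller while multipath exists
# after a destroy, check which of the four ZFS labels still line up
zdb -l /dev/da2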

But, in the larger picture, it is beyond "very well known" that running FreeNAS with less than the 8GB RAM minimum is a death cry for ZFS. So I wouldn't be surprised if you killed your pool because of that alone. We've seen it dozens of times; you certainly won't be the last.

Hardware RAID + ZFS = lost data

That's the hard truth. Multiple very serious mistakes were made here, so I'm not too surprised (though disappointed that you didn't listen to our warnings) that things are where they are.

I do wish you luck. I'd recommend you read over our docs and rebuild using more appropriate hardware. FreeNAS is one of those things that, if you do it right, will do very well by you. But if you don't do *everything* right, you stand a very real chance of waking up one morning to problems and never seeing that data again.
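For the rebuild, the standard guidance is to present the disks to FreeNAS individually (JBOD/pass-through on the controller, or an HBA flashed to IT mode) and let ZFS supply the redundancy. A minimal sketch, with hypothetical device names:

Code:
# redundancy handled by ZFS instead of the RAID controller;
# raidz2 survives any two disk failures, and ZFS can detect and
# repair per-disk checksum errors it could never see through
# a hardware RAID volume
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

In FreeNAS itself the volume would be created through the GUI so it lands in the configuration database, but the resulting layout is the same.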
 

j-rod

Dabbler
Joined
Apr 6, 2015
Messages
11
I definitely appreciate the feedback; this whole thing has been one giant learning experience for me. Thankfully, the lost data is recoverable; it'll just take some time to reassemble it.

The crazy thing is, although the hardware was outside of the recommended specs (I did read the recommendations, but cost and available equipment were factors =] ), the FreeNAS box ran beautifully for quite a while. Even with the memory lacking and the hardware RAID, it was never a problem. This all started because of the controller on the SAN; had it never gone down, chances are things would still be running. That's not to say recovery wouldn't have been easier if I were running to spec, but all in all, I am still very impressed with FreeNAS, even in my flawed environment.

That being said, once the co-lo is online and everything is backed up, I am going to be doing some serious hardware upgrades, including a new SAN and FreeNAS box (I like the open-source stuff). This time around I won't use hardware RAID, though =]

Thanks again, I'll be back here with more questions soon, I'm sure =]
 