Restarted, lost volume


streetlamp
Cadet | Joined Jun 17, 2013 | Messages: 3
Hi all, Linux/FreeNAS noob here.

I recently got a FreeNAS setup and had it running strong for about a month. The other night we had severe storm warnings, and since I don't have a UPS I decided to shut down the system in case we lost power. When I restarted the next day I found an error and couldn't access the volume; unfortunately I no longer have the exact error message.

At that point I could still see the volume, but I had read a few other forum posts about exporting and re-importing the volume, so I exported it through the GUI. Now I can't re-import it or do anything with it.

Some console output is below; any help would be greatly appreciated.

Code:
                                                                               
[root@freenas ~]# camcontrol dev list                                         
<SAMSUNG HD501LJ CR10>            at scbus0 target 0 lun 0 (pass0,da0)       
<SAMSUNG HD501LJ CR10>            at scbus0 target 1 lun 0 (pass1,da1)       
<SAMSUNG HD501LJ CR10>            at scbus0 target 2 lun 0 (pass2,da2)       
<SAMSUNG HD501LJ CR10>            at scbus0 target 3 lun 0 (pass3,da3)       
<SAMSUNG HD501LJ CR10>            at scbus0 target 4 lun 0 (pass4,da4)       
<SAMSUNG HD501LJ CR10>            at scbus0 target 5 lun 0 (pass5,da5)       
<SAMSUNG HD501LJ CR10>            at scbus0 target 6 lun 0 (pass6,da6)       
<HP v125w 1.00>                    at scbus5 target 0 lun 0 (pass7,da7) 


Code:
[root@freenas ~]# gpart show                                                   
=>      34  976773090  raid5/NASty5  GPT  (2.7T) [CORRUPT]                   
        34        94                - free -  (47k)                         
        128    4194304            1  freebsd-swap  (2.0G)                     
    4194432  972578692            2  freebsd-zfs  (463G)                     
                                                                               
=>    63  7827329  da7  MBR  (3.7G)                                           
      63  1930257    1  freebsd  [active]  (942M)                             
  1930320      63      - free -  (31k)                                       
  1930383  1930257    2  freebsd  (942M)                                       
  3860640    3024    3  freebsd  (1.5M)                                       
  3863664    41328    4  freebsd  (20M)                                       
  3904992  3922400      - free -  (1.9G)                                     
                                                                               
=>      0  1930257  da7s1  BSD  (942M)                                         
        0      16        - free -  (8.0k)                                   
      16  1930241      1  !0  (942M) 


Code:
[root@freenas ~]# zpool status                                                 
no pools available 
 

paleoN
Wizard | Joined Apr 22, 2012 | Messages: 1,403
Code:
                                                                               
=>      34  976773090  raid5/NASty5  GPT  (2.7T) [CORRUPT]                   
        34        94                - free -  (47k)                         
        128    4194304            1  freebsd-swap  (2.0G)                     
    4194432  972578692            2  freebsd-zfs  (463G)                     
 
Looks to be your problem right there. Is this a single-stripe ZFS pool on hardware RAID5?
Code:
zpool import

zdb -l /dev/raid5/NASty5p2
Or whatever the actual path is to the RAID5 disk.
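In case the exact device node isn't obvious, a quick way to see which providers exist (a sketch; the paths below are examples based on the gpart output above, not confirmed):

Code:
ls /dev/raid5        # should list NASty5 plus its partitions, e.g. NASty5p2
glabel status        # maps gptid/ and other labels to the underlying devices
gpart show -l        # like gpart show, but also prints partition labels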

You also forgot to mention your FreeNAS version.
 

streetlamp
Cadet | Joined Jun 17, 2013 | Messages: 3
Sorry, I just got back around to trying to figure this out.
This is FreeNAS 8.3.1.
zpool import does nothing, and zpool status shows no pools available.
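One variant worth noting here (a sketch, assuming the pool's data partitions are exposed under /dev/gptid) is to point the import scan at that directory instead of letting it walk all of /dev:

Code:
# Sketch only: -d makes zpool import scan a specific directory of device nodes.
zpool import -d /dev/gptid               # list any pools found on gptid providers
zpool import -d /dev/gptid -f poolname   # hypothetical follow-up once a pool shows up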

I used
Code:
gpart recover raid5/NASty5

which got rid of the CORRUPT flag:
Code:
[root@freenas] ~# gpart show
=>    63  7827329  da7  MBR  (3.7G)
      63  1930257    1  freebsd  [active]  (942M)
  1930320      63      - free -  (31k)
  1930383  1930257    2  freebsd  (942M)
  3860640    3024    3  freebsd  (1.5M)
  3863664    41328    4  freebsd  (20M)
  3904992  3922400      - free -  (1.9G)
 
=>        34  5860638653  raid5/NASty5  GPT  (2.7T)
          34          94                - free -  (47k)
        128    4194304            1  freebsd-swap  (2.0G)
    4194432  972578692            2  freebsd-zfs  (463G)
  976773124  4883865563                - free -  (2.3T)
 
=>      0  1930257  da7s1  BSD  (942M)
        0      16        - free -  (8.0k)
      16  1930241      1  !0  (942M)


zdb seems to see something

Code:
[root@freenas] ~# zdb
biggiesmalls:
    version: 28
    name: 'biggiesmalls'
    state: 0
    txg: 4
    pool_guid: 15953681985580829583
    hostid: 155295792
    hostname: 'freenas.local'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 15953681985580829583
        create_txg: 4
        children[0]:
            type: 'raidz'
            id: 0
            guid: 7801677219120891812
            nparity: 1
            metaslab_array: 31
            metaslab_shift: 35
            ashift: 9
            asize: 3485687611392
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 2453748634043803478
                path: '/dev/gptid/611f2100-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/611f2100-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 1696696682218549933
                path: '/dev/gptid/616996d8-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/616996d8-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 13489200959984321196
                path: '/dev/gptid/61af12ad-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/61af12ad-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 17193127502085437770
                path: '/dev/gptid/61fc07c4-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/61fc07c4-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 573737378232088921
                path: '/dev/gptid/6245451e-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/6245451e-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4
            children[5]:
                type: 'disk'
                id: 5
                guid: 13013286292227126104
                path: '/dev/gptid/628d5d22-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/628d5d22-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4
            children[6]:
                type: 'disk'
                id: 6
                guid: 4336766267479159020
                path: '/dev/gptid/62d46bc3-c1c6-11e2-9736-0002b3a9ac16'
                phys_path: '/dev/gptid/62d46bc3-c1c6-11e2-9736-0002b3a9ac16'
                whole_disk: 1
                create_txg: 4


However, this is all I get from zdb -l on /dev/da0 through da6:

Code:
[root@freenas] ~# zdb -l /dev/da0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
 

paleoN
Wizard | Joined Apr 22, 2012 | Messages: 1,403
"Which got rid of the CORRUPT error"
Running that this early wasn't the best idea.

"zdb seems to see something"
What's the actual pool configuration? Correct me if I'm wrong, but you appear to have seven 500 GB drives in a RAID5 configuration, on which there is a 2 GB swap partition and a 463 GB ZFS partition. Hmm, that's a bit odd now that I think about it. Is the RAID5 configuration something old? The zdb output appears to indicate it's actually using the individual disks, or rather the GPT partitions on them.
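A sketch of that check, reading the labels from one member's GPT partition rather than the raw disk (the gptid path is copied from the zdb output above; the other six members would be checked the same way):

Code:
# Sketch only: point zdb -l at a member partition listed by zdb, not at the raw da0 device.
zdb -l /dev/gptid/611f2100-c1c6-11e2-9736-0002b3a9ac16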

"However, this is all I get from zdb -l on /dev/da0 through da6"
That's not what I asked for. However, if there is old GRAID metadata on the disks, that would stop ZFS from being able to import them, since GRAID grabs the disks first. If that's the case, then see this thread.
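One way to check for that, sketched under the assumption that a geom_raid or geom_raid5 class is what's grabbing the disks:

Code:
# Sketch only: see whether a software-RAID GEOM class has claimed the disks.
kldstat | grep -i raid     # is a geom_raid / geom_raid5 kernel module loaded?
geom raid status           # status of the GRAID class, if that class is loaded
geom raid5 status          # same check for the third-party geom_raid5 class
dmesg | grep -i raid       # did anything taste RAID metadata at boot?

If any of those show the seven Samsung drives as members of an array, that metadata is what would need to be dealt with (per the thread mentioned above) before ZFS can see the partitions.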
 

streetlamp
Cadet | Joined Jun 17, 2013 | Messages: 3
The pool configuration is RAID-Z, and it's all fresh; the disks were all wiped and pooled when I bought this system. The pool is ~2.7 TB, and 463 GB is what is currently in use.
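If the pool really is a plain RAID-Z of the seven disks, a quick sanity check (a sketch; it assumes each member disk should then carry its own GPT with a freebsd-zfs partition) would be:

Code:
# Sketch: each RAID-Z member should show its own GPT. If gpart only reports
# raid5/NASty5, something like a leftover geom_raid5 array is still sitting
# in front of the disks and hiding their partition tables.
gpart show da0
gpart show da1
# ...and so on through da6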
 