Upgraded drives - Error when trying to get volume to expand


zfrogz

Dabbler
Joined
Jun 29, 2012
Messages
43
I've got a 6-drive RAIDZ2 running FreeNAS 8.3. I started out with 4x2TB and 2x3TB drives, but I've now replaced all of the 2TB drives with 3TB ones. After replacing the last drive, the volume didn't grow automatically, so I dug around the forums to see what I'd missed. I found the "zpool online -e poolname device" command and ran it for each drive. It ran without a problem on every drive except one, which gave me this error:

zpool online -e vol0 gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f
cannot expand gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f: no such device in pool
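(Side note for anyone searching later: as far as I understand it, the pool only grows by itself if the autoexpand property is on, and it's off by default on this version. Checking and turning it on would look like this:)
Code:
zpool get autoexpand vol0
zpool set autoexpand=on vol0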

When I run zpool status, I get this:

[root@freenas] ~# zpool status
pool: vol0
state: ONLINE
scan: resilvered 1.38T in 9h55m with 0 errors on Thu Dec 6 04:38:04 2012
config:

NAME STATE READ WRITE CKSUM
vol0 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/c52171ac-cd5d-11e1-bb5d-f46d0473ba2f ONLINE 0 0 0
gptid/83396047-d165-11e1-b3fe-f46d0473ba2f ONLINE 0 0 0
gptid/99f2a7bc-3a04-11e2-ba1d-f46d0473ba2f ONLINE 0 0 0
gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f ONLINE 0 0 0
gptid/1e31f0fe-3e97-11e2-8ceb-f46d0473ba2f ONLINE 0 0 0
gptid/87ccd894-3f4e-11e2-92e0-f46d0473ba2f ONLINE 0 0 0

errors: No known data errors

Any ideas? Everything checks out fine and I've detached the old drives already. Thanks.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Any ideas? Everything checks out fine and I've detached the old drives already. Thanks.
Seems odd. First, please use [code][/code] tags instead of quoting output. Then post the output of:
Code:
zpool status -v

camcontrol devlist

gpart show

glabel status
 

zfrogz

Dabbler
Joined
Jun 29, 2012
Messages
43
Sorry about that. Here are the results:
Code:
[root@freenas] ~# zpool status -v
  pool: vol0
 state: ONLINE
  scan: scrub in progress since Thu Dec  6 11:48:57 2012
        3.62T scanned out of 8.47T at 193M/s, 7h20m to go
        0 repaired, 42.68% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol0                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/c52171ac-cd5d-11e1-bb5d-f46d0473ba2f  ONLINE       0     0     0
            gptid/83396047-d165-11e1-b3fe-f46d0473ba2f  ONLINE       0     0     0
            gptid/99f2a7bc-3a04-11e2-ba1d-f46d0473ba2f  ONLINE       0     0     0
            gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f  ONLINE       0     0     0
            gptid/1e31f0fe-3e97-11e2-8ceb-f46d0473ba2f  ONLINE       0     0     0
            gptid/87ccd894-3f4e-11e2-92e0-f46d0473ba2f  ONLINE       0     0     0

errors: No known data errors

Code:
[root@freenas] ~# camcontrol devlist
<WDC WD30EZRX-00MMMB0 80.00A80>    at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD30EZRX-00MMMB0 80.00A80>    at scbus1 target 0 lun 0 (pass1,ada1)
<ST3000DM001-9YN166 CC4H>          at scbus2 target 0 lun 0 (pass2,ada2)
<ST3000DM001-9YN166 CC4H>          at scbus3 target 0 lun 0 (pass3,ada3)
<ST3000DM001-9YN166 CC4B>          at scbus4 target 0 lun 0 (pass4,ada4)
<ST3000DM001-9YN166 CC4B>          at scbus4 target 1 lun 0 (pass5,ada5)
<Kingston DataTraveler SE9 PMAP>   at scbus6 target 0 lun 0 (pass6,da0)

Code:
[root@freenas] ~# gpart show
=>        34  5860533101  ada0  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338703     2  freebsd-zfs  (2.7T)

=>        34  5860533101  ada1  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338703     2  freebsd-zfs  (2.7T)

=>        34  5860533101  ada2  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada3  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada4  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>        34  5860533101  ada5  GPT  (2.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5k)

=>      63  15240513  da0  MBR  (7.3G)
        63   1930257    1  freebsd  (942M)
   1930320        63       - free -  (31k)
   1930383   1930257    2  freebsd  [active]  (942M)
   3860640      3024    3  freebsd  (1.5M)
   3863664     41328    4  freebsd  (20M)
   3904992  11335584       - free -  (5.4G)

=>      0  1930257  da0s1  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)

=>      0  1930257  da0s2  BSD  (942M)
        0       16         - free -  (8.0k)
       16  1930241      1  !0  (942M)

Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
gptid/c52171ac-cd5d-11e1-bb5d-f46d0473ba2f     N/A  ada0p2
gptid/83396047-d165-11e1-b3fe-f46d0473ba2f     N/A  ada1p2
gptid/99f2a7bc-3a04-11e2-ba1d-f46d0473ba2f     N/A  ada2p2
gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f     N/A  ada3p2
gptid/1e31f0fe-3e97-11e2-8ceb-f46d0473ba2f     N/A  ada4p2
gptid/87ccd894-3f4e-11e2-92e0-f46d0473ba2f     N/A  ada5p2
                             ufs/FreeNASs3     N/A  da0s3
                             ufs/FreeNASs4     N/A  da0s4
                    ufsid/5009cfdd91a55783     N/A  da0s1a
                            ufs/FreeNASs1a     N/A  da0s1a
                            ufs/FreeNASs2a     N/A  da0s2a

Strange, huh?
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
Have you tried rebooting or exporting/importing the pool? Post the output of:
Code:
zdb -C vol0

zpool list

ls -al /dev/gptid/
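
If you want to try the export/import from the CLI rather than the GUI, it's roughly the following (the GUI's detach/auto-import is usually the safer route on FreeNAS, so treat this as a sketch):
Code:
zpool export vol0
zpool import vol0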
 

zfrogz

Dabbler
Joined
Jun 29, 2012
Messages
43
I think I got it. I ran that first command you suggested and it gave me this:
Code:
[root@freenas] ~# zdb -C vol0

MOS Configuration:
        version: 28
        name: 'vol0'
        state: 0
        txg: 1003988
        pool_guid: 152496064410700730
        hostid: 2483920184
        hostname: 'freenas.local'
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 152496064410700730
            children[0]:
                type: 'raidz'
                id: 0
                guid: 17347345487035827117
                nparity: 2
                metaslab_array: 23
                metaslab_shift: 36
                ashift: 12
                asize: 17990643351552
                is_log: 0
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 570781809171005220
                    path: '/dev/gptid/c52171ac-cd5d-11e1-bb5d-f46d0473ba2f'
                    phys_path: '/dev/gptid/c52171ac-cd5d-11e1-bb5d-f46d0473ba2f'
                    whole_disk: 0
                    DTL: 131
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 4223052869589719356
                    path: '/dev/gptid/83396047-d165-11e1-b3fe-f46d0473ba2f'
                    phys_path: '/dev/gptid/83396047-d165-11e1-b3fe-f46d0473ba2f'
                    whole_disk: 0
                    DTL: 134
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 10249169688719891529
                    path: '/dev/gptid/99f2a7bc-3a04-11e2-ba1d-f46d0473ba2f'
                    phys_path: '/dev/gptid/99f2a7bc-3a04-11e2-ba1d-f46d0473ba2f'
                    whole_disk: 1
                    DTL: 189
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 17863706504511067176
                    path: '/dev/gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f'
                    phys_path: '/dev/gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f'
                    whole_disk: 1
                    DTL: 130
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 16103866033388932060
                    path: '/dev/gptid/1e31f0fe-3e97-11e2-8ceb-f46d0473ba2f'
                    phys_path: '/dev/gptid/1e31f0fe-3e97-11e2-8ceb-f46d0473ba2f'
                    whole_disk: 1
                    DTL: 129
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 4535765049891193324
                    path: '/dev/gptid/87ccd894-3f4e-11e2-92e0-f46d0473ba2f'
                    phys_path: '/dev/gptid/87ccd894-3f4e-11e2-92e0-f46d0473ba2f'
                    whole_disk: 1
                    DTL: 169

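In case it helps someone else, the GUID for a particular gptid can be pulled out of that output with something like:
Code:
zdb -C vol0 | grep -B 2 'c1da9baa'
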
The original command worked on that drive once I used the GUID 17863706504511067176 instead of gptid/c1da9baa-3a62-11e2-b68d-f46d0473ba2f:
Code:
zpool online -e vol0 17863706504511067176

I had previously tried rebooting and exporting/reimporting the volume without success, but once that drive accepted the command, I rebooted and the volume expanded to its proper size:
Code:
[root@freenas] ~# zpool list
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
vol0  16.3T  10.0T  6.27T    61%  1.00x  ONLINE  /mnt
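
One way to sanity-check that the new space really landed on the raidz vdev is to look at per-device capacity:
Code:
zpool iostat -v vol0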

Bug?
 