9.2 problem expanding pool

Status
Not open for further replies.

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
I've had this same problem on two different 9.2 setups. I don't want to file a bug because I'm not sure if it's a bug or something I'm missing.

Basically, I have an existing pool. I added another HBA and a shelf of drives, then ran:

Code:
zpool add tank1 raidz2 da24 da25 da26 da27

Then it just hangs. Nothing in top, nothing in /var/log/messages, and I'm not able to run any other ZFS commands. If I do zfs list or zpool status in a new terminal window, they just hang too.

Not sure what's up. The only way to get things back to normal is to reboot the system. It's gonna be a long Friday night in the data center.
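A minimal diagnostic sketch, assuming stock FreeBSD 9.x tooling as shipped with FreeNAS; the PID below is a placeholder you'd take from the ps output:

Code:
# Preview the vdev layout without actually modifying the pool (dry run)
zpool add -n tank1 raidz2 da24 da25 da26 da27

# If the real command hangs, find its PID and dump its kernel stack
# to see which kernel function it is blocked in
ps aux | grep '[z]pool'
procstat -kk 1234   # 1234 = placeholder PID from the ps output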
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can you post the output of zpool status, the hardware used, and your FreeNAS version?
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
FreeNAS-9.2.0-RELEASE-x64 (ab098f4)
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
262088 MB RAM
24 Samsung 840 Pro SSDs in a JBOD connected via external SAS to an LSI HBA.

Trying to add another JBOD's worth of the same disks on an identical LSI HBA.
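Before adding the vdev, it's worth confirming the new shelf's disks actually enumerate. A quick check, assuming the LSI HBAs use FreeBSD's mps(4) driver and the new shelf would appear as da24 and up (device names are placeholders from this setup):

Code:
# List every disk the CAM layer can see; the new shelf's disks
# should show up here before zpool add will work
camcontrol devlist

# Confirm the driver probed the second controller and its enclosure
dmesg | grep -i mps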
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Code:
[root@ds0] ~# zpool status
  pool: tank1
 state: ONLINE
  scan: scrub repaired 0 in 4h24m with 0 errors on Sat Jan 18 04:24:42 2014
config:

    NAME        STATE     READ WRITE CKSUM
    tank1       ONLINE       0     0     0
      raidz2-0  ONLINE       0     0     0
        da0     ONLINE       0     0     0
        da1     ONLINE       0     0     0
        da2     ONLINE       0     0     0
        da3     ONLINE       0     0     0
      raidz2-1  ONLINE       0     0     0
        da4     ONLINE       0     0     0
        da5     ONLINE       0     0     0
        da6     ONLINE       0     0     0
        da7     ONLINE       0     0     0
      raidz2-2  ONLINE       0     0     0
        da8     ONLINE       0     0     0
        da9     ONLINE       0     0     0
        da10    ONLINE       0     0     0
        da11    ONLINE       0     0     0
      raidz2-3  ONLINE       0     0     0
        da12    ONLINE       0     0     0
        da14    ONLINE       0     0     0
        da15    ONLINE       0     0     0
        da16    ONLINE       0     0     0
      raidz2-4  ONLINE       0     0     0
        da17    ONLINE       0     0     0
        da18    ONLINE       0     0     0
        da19    ONLINE       0     0     0
        da13    ONLINE       0     0     0
      raidz2-5  ONLINE       0     0     0
        da20    ONLINE       0     0     0
        da21    ONLINE       0     0     0
        da22    ONLINE       0     0     0
        da23    ONLINE       0     0     0
    logs
      mfid0     ONLINE       0     0     0
    cache
      mfid1     ONLINE       0     0     0
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Code:
[root@ds0] ~# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank1               4.58T   754G   231M  /mnt/tank1
tank1/ISOs           227K   754G   227K  /mnt/tank1/ISOs
tank1/Users         1.60T   754G  1.60T  /mnt/tank1/Users
tank1/esx           1.72T   754G  1.04T  /mnt/tank1/esx
tank1/iscsi-groups  1.26T   754G  1.25T  -
tank1/test           221K   754G   221K  /mnt/tank1/test
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, I have no clue. I'd have to start digging into your setup to figure out what the problem is.

You're taking a major risk with a log drive that isn't mirrored.
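A sketch of one way to mirror the SLOG, assuming the pool is at version 19 or later (which allows log device removal) and a second suitable device is available; mfid2 is a placeholder name for that second device:

Code:
# Remove the single log device (non-destructive on pool version >= 19)
zpool remove tank1 mfid0

# Re-add it as a mirrored log vdev alongside the second device
zpool add tank1 log mirror mfid0 mfid2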
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Wait... your pool is built on whole disks. That's not really supported in FreeNAS. It sounds like you did everything from the CLI, which we keep telling people not to do. If the WebGUI supports the intended function, you should use it; the CLI is for when you can't do what needs to be done, and only then after you're 100% sure it's safe to do from the CLI. Go around FreeNAS's back and it can get grumpy.

Not sure if that's your problem, but the question then becomes: what else isn't working because it's not set up the way FreeNAS expects?
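As a sketch of how to tell the two cases apart: the FreeNAS WebGUI partitions each member disk with GPT (a freebsd-swap slice plus a freebsd-zfs slice), while a CLI-built pool on bare devices has no partition table at all. gpart shows which one you have:

Code:
# A GUI-built member shows a GPT table with freebsd-swap
# and freebsd-zfs partitions
gpart show da0

# On a whole-disk member this instead fails with:
#   gpart: No such geom: da0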
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Maybe you're right. Ugh. Not sure about the whole-disks comment, though. I created the pool in the CLI, exported it, and then used the auto-import when I built this server. Seemed like a good idea at the time; I couldn't get the slider thing to work correctly in the GUI. It's been up and running fine since this summer.

As for the ZIL, I thought pool version 28 fixed the problem of losing your pool because of a dead ZIL?
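A quick way to check where the pool stands, as a sketch using stock zpool commands:

Code:
# Show the pool's on-disk version (28 is the last pre-feature-flags version)
zpool get version tank1

# List what each version added; version 19 introduced log device removal,
# which is what makes a failed ZIL survivable
zpool upgrade -v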
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You won't lose the pool from a dead ZIL, but you will lose whatever data was still in the ZIL when it died, and you may not even be able to ascertain what was lost.

In short, you should stop and go back to the drawing board with that server. Not that you have much choice, since you can't add more disks anyway...
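For context on what actually flows through the ZIL: only synchronous writes do, and the per-dataset sync and logbias properties control that routing. A sketch of how to inspect them:

Code:
# sync=standard|always|disabled controls whether writes go through the ZIL;
# logbias=latency routes them to the SLOG, logbias=throughput bypasses it
zfs get sync,logbias tank1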
 