Volume Manager shows all my disks as available

Status: Not open for further replies.

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
Running FreeNAS 9.3-STABLE-201601181840 on an iXsystems 24-bay filer.

I just added 11 new drives to the 13 drives I had originally purchased with the machine. I'm trying to extend the original pool, and I'm getting some odd behavior in the Volume Manager. Here is the original configuration:
-----------
Code:
[root@anas2] /mnt/tank# zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 26h12m with 0 errors on Mon Aug 22 02:12:56 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/0bb6b55d-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/0c23d38a-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/0c8c6999-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/0cf6f473-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/0d63d000-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/0dd36558-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
          raidz2-2                                      ONLINE       0     0     0
            gptid/57a455b2-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/5816fdad-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/58820272-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/58fc2a4d-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/596aae61-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
            gptid/59d60b15-c6ab-11e5-9be4-0cc47a204b38  ONLINE       0     0     0
        logs
          gptid/0e4e13f7-c6ab-11e5-9be4-0cc47a204b38    ONLINE       0     0     0
        cache
          gptid/0e085869-c6ab-11e5-9be4-0cc47a204b38    ONLINE       0     0     0
        spares
          gptid/6b6a49c2-c6ab-11e5-9be4-0cc47a204b38    AVAIL

errors: No known data errors


------------

I want to extend the pool with the 11 new disks as one raidz2 vdev. When I go into the Volume Manager to do this and select "tank" as the Volume to extend, it shows the following information:
[screenshot: upload_2016-8-25_10-41-43.png]


I'm presented with all 22 disks as available. If I select the first 11 disks, it seems as if it is referencing the first 11 disks in the array and not the 11 new drives I just installed:

[screenshot: upload_2016-8-25_10-45-23.png]


If I select the + button under available disks, all 22 disks get added to the volume, in an 11x2x8.0TB configuration:

[screenshot: upload_2016-8-25_10-52-50.png]


This is a bit unnerving, because it looks like the Volume Manager has no idea about the underlying ZFS structure already in place and is about to wipe out all the data currently on the pool. If I try this manually, I'm still presented with all the disks as if they were available. I can select the last 11 drives:

[screenshot: upload_2016-8-25_10-57-34.png]


I'm not sure what will happen here if I select Add Volume. For some reason this just doesn't look right. Am I doing something wrong here to extend this volume, or is the GUI out of sync with the underlying ZFS system?
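For anyone wanting to sanity-check things from the shell before trusting the GUI, something along these lines should show which physical devices the pool's gptid labels actually live on and which disks are genuinely untouched (a rough sketch, nothing FreeNAS-specific assumed):

Code:
# Map the gptid/... labels from "zpool status" back to daX devices.
# Any device carrying a label that appears in the pool is NOT free,
# regardless of what the Volume Manager offers.
glabel status | grep gptid

# List every disk the controllers are presenting to the OS.
camcontrol devlist

# Show existing partition tables; freshly installed, untouched drives
# will not appear here at all, since they have no GPT yet.
gpart show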
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
Code:
[pmorris@anas2 ~]$ zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  29.8G   539M  29.2G         -      -     1%  1.00x  ONLINE  -
tank            87T  78.5T  8.48T         -    48%    90%  1.00x  ONLINE  /mnt
[pmorris@anas2 ~]$


[screenshot: upload_2016-8-26_6-43-57.png]
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Yeah that's definitely weird. Disks currently part of a pool (especially one that shows up under zpool list, lol) should not be "available" for a volume extension in the volume manager.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
Exactly what I thought. I tried to add the drives to the pool manually last night, and guess what, nothing happened! Looks like the GUI is hosed up. Must be time for a reboot and upgrade. For grins, here is a tail of the pool history.
Code:
2016-06-15.11:45:05 zfs clone tank/jails/.warden-template-pluginjail@clean tank/jails/bacula-sd_1
2016-06-15.11:46:57 zfs create -o mountpoint=/tank/jails/.warden-template-standard -p tank/jails/.warden-template-standard
2016-06-15.11:50:03 zfs snapshot tank/jails/.warden-template-standard@clean
2016-06-15.11:50:31 zfs clone tank/jails/.warden-template-standard@clean tank/jails/anas2bak-bacula-sd
2016-06-15.11:54:06 zfs destroy -fr tank/jails/bacula-sd_1
2016-06-16.06:47:58 zfs snapshot tank/jails/anas2bak-bacula-sd@manual-20160616
2016-06-24.13:20:42 zfs inherit compression tank/scratch
2016-06-24.13:20:42 zfs inherit dedup tank/scratch
2016-06-24.13:20:47 zfs set volsize=250G tank/scratch
2016-07-07.14:12:57 zfs create -o casesensitivity=sensitive -o aclmode=restricted -o refquota=500M tank/PintoMigration
2016-07-07.14:13:12 zfs destroy -r tank/PintoMigration
2016-07-07.14:14:10 zfs create -o casesensitivity=sensitive -o aclmode=restricted -o refquota=500G tank/PintoMigrate
2016-07-08.15:12:39 zfs create -o casesensitivity=sensitive -o refquota=3T tank/ugradspace
2016-07-10.00:00:18 zpool scrub tank
2016-07-11.13:33:34 zfs inherit compression tank/testzvol
2016-07-11.13:33:34 zfs inherit dedup tank/testzvol
2016-07-11.13:33:39 zfs set volsize=2T tank/testzvol
2016-07-24.04:33:39 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 9066354443635770804
2016-07-24.04:33:39 zpool set cachefile=/data/zfs/zpool.cache tank
2016-08-16.11:32:30 zfs clone tank/jails/.warden-template-pluginjail@clean tank/jails/crashplan_1
2016-08-21.00:00:17 zpool scrub tank
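
(For reference, a tail like the one above can presumably be produced with the stock ZFS history command, nothing FreeNAS-specific needed:)

Code:
# ZFS keeps its own log of every pool-modifying command; show the last 20.
zpool history tank | tail -n 20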
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
Interesting thread on the multipath. Now you have me digging into something I haven't looked at either... Thanks
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
This is a "Certified" system in other words "It all on you baby, ask the community for help". They only give you a warranty on the hardware no software or system support. For the amount of disk space for the price, I can live with that.

From reading the post you provided, it is apparent that multipath is a configuration issue not well covered by SuperMicro or iXsystems. I was planning on rebooting the machine this morning anyhow, so that should detect the multipath disks and make things settle down a bit. And since I am rebooting, I'll upgrade to 9.10 afterwards. I'll report back after the reboot/upgrade and let everyone know what happened.
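As a rough sketch (assuming FreeNAS is using the standard GEOM multipath layer here), the state of the multipath setup after the reboot can be checked from the shell like this:

Code:
# Show each multipath device and whether its paths are OPTIMAL or DEGRADED.
gmultipath status

# More detail: which daX providers sit behind each multipath/diskN device.
gmultipath list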
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Gotcha, well best of luck. I would recommend just seeing if the reboot does the trick. I don't use jails or plug-ins, but have seen some posts about them being finicky with 9.10.1 so take that with a grain of salt as well.
 

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
OK, here's the scoop: I shut down and power-cycled the filer. On reboot the system found the multipath disks and set up the configuration for them. I now realize why I had 22 drives being displayed: the multipath configuration had not been applied, so each drive was seen by both controllers and the system thought there were 11 disks on each controller, hence 22 drives. After the reboot the drives show up as you would expect, and I was able to extend the pool. However, my original configuration was two 6-disk vdevs, and when I tried to add the 11-disk vdev I was greeted with the message:

Code:
 You are trying to add a virtual device consisting of 11 device(s) in a pool that has a virtual device consisted of 6 device(s) 


So clicking "manual setup" button to add the drives was the way to go here.
Patching 9.3 went smoothly, and the upgrade to 9.10 was just as smooth, with only one little issue that I have been assured is no issue:

Code:
 WARNING: Aug. 26, 2016, 10:55 a.m. - Firmware version 20 does not match driver version 21 for /dev/mps1. Please flash controller to P21 IT firmware.


Apparently someone pulled the trigger on the 21 driver before the 21 firmware had been released; as the story goes, the 21 driver is backward compatible with the 20 firmware, so no worries? We will see.
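If anyone wants to see the mismatch the alert is talking about for themselves, the driver logs both versions at boot; a quick sketch (exact version strings will of course differ per system):

Code:
# The mps(4) driver prints its own version and the controller firmware
# version when it attaches; grep them out of the boot messages.
dmesg | grep mps

# If LSI's sas2flash utility is present, it lists firmware per controller.
sas2flash -listall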

Thanks for all the pointers. Got where I needed to be!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Glad it all worked out for you.
Apparently someone pulled the trigger on the 21 driver before the 21 firmware had been released; as the story goes, the 21 driver is backward compatible with the 20 firmware, so no worries? We will see.
As far as this goes, from my understanding the alert can be ignored and silenced for now. There is a thread somewhere regarding that; I have ignored the alerts on my system.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Apparently someone pulled the trigger on the 21 driver before the 21 firmware had been released; as the story goes...
As far as I understand the story, there is no version 21 firmware; it's not a matter of the driver having been released early, it's a matter of the version number being bumped inappropriately.
 

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
So that means only the version number reference has changed, and the driver is actually version 20?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
No, the driver has been bumped to version 21, but not in connection with a firmware release (actual or anticipated). There's a pretty lengthy thread in the Hardware forum about this change, but as I understand it there isn't any version 21 firmware released or expected.
 