Will upgrading my SAS controller automagically let FreeNAS "see" the extra space on my drives?

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
Just follow the commands in the link I posted earlier.

I went and did so, but I could not export the pool from FreeNAS; I kept getting errors about files in /var/log/ being in use, so I assume some important log files are stored on the pool.

Instead, I made a FreeBSD installer memory stick and used its live environment to repair the disks (the gpart recover and gpart resize steps). Interestingly, the four disks connected to the motherboard were just fine: their ZFS partitions took up the entire disk and were not "corrupted". The eight connected to the SAS controller, however, were "corrupted", and I had to expand the partitions on those (the recovery step left a single 3.5K block free at the end of each disk, which the four disks connected to the mobo do not have).
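
For reference, the repair on each SAS-attached disk boiled down to roughly the following (a sketch; da8 stands in for each affected disk, and index 2 is the freebsd-zfs partition):
Code:
# rewrite the backup GPT header so gpart stops reporting the table as corrupt
gpart recover da8
# grow partition 2 (freebsd-zfs) to take up the remaining free space
gpart resize -i 2 da8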

The layout of a SAS-connected disk looks like this:
Code:
=>        40  7814037095  da8  GPT  (3.6T)
          40          88       - free -  (44K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  7809842696    2  freebsd-zfs  (3.6T)
  7814037128           7       - free -  (3.5K)

While disks connected to the motherboard look like this:
Code:
=>        40  7814037088  da11  GPT  (3.6T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.6T)

I booted FreeNAS back up afterwards, but the pool is still only showing 15TB. I verified that FreeNAS sees the partitions correctly (with gpart show), but even after rebooting again, the pool size has not changed.
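
For diagnosis, one way to check whether ZFS at least sees the new space without having claimed it yet (a sketch; the expandsize property should show any unclaimed capacity):
Code:
# EXPANDSZ lists space ZFS knows about but has not grown into yet
zpool list -o name,size,expandsize
zpool get autoexpand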
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I booted FreeNAS back up afterwards, but the pool is still only showing 15TB. I verified that FreeNAS sees the partitions correctly (with gpart show), but even after rebooting again, the pool size has not changed.
Is the autoexpand flag 'on' for the pool? You can check with zpool get all | grep autoexpand
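
If it turns out to be off, it can be switched on with something like this (a sketch; substitute your actual pool name):
Code:
# enable automatic expansion when larger devices/partitions appear
zpool set autoexpand=on yourpool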
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
Is the autoexpand flag 'on' for the pool?
My pool is labeled "BigNas", for reference. Results:
Code:
$    zpool get all | grep autoexpand
BigNas        autoexpand                     on                             local
freenas-boot  autoexpand                     off                            default
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
the four disks connected to the motherboard were just fine: their ZFS partitions took up the entire disk and were not "corrupted"
Just to note, that would be because the motherboard isn't the old SAS controller, so the LBA limit of your SAS controller doesn't apply to those disks at all.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
post the output of zpool status -v

Code:
[user@freenas ~]$ zpool status -v
  pool: BigNas
 state: ONLINE
  scan: resilvered 871G in 0 days 02:44:34 with 0 errors on Fri May 24 22:09:26 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        BigNas                                          ONLINE       0     0     0
          raidz3-0                                      ONLINE       0     0     0
            da1p2                                       ONLINE       0     0     0
            da2p2                                       ONLINE       0     0     0
            gptid/5321b92f-7e10-11e9-9302-0cc47a9cd5b6  ONLINE       0     0     0
            gptid/20e6a71f-7e7b-11e9-ac2f-0cc47a9cd5b6  ONLINE       0     0     0
            gptid/a621ef7f-75da-11e9-9302-0cc47a9cd5b6  ONLINE       0     0     0
            da4p2                                       ONLINE       0     0     0
            da0p2                                       ONLINE       0     0     0
            da7p2                                       ONLINE       0     0     0
            da6p2                                       ONLINE       0     0     0
            da5p2                                       ONLINE       0     0     0
            da3p2                                       ONLINE       0     0     0
        spares
          gptid/213759b2-7f08-11e9-b18b-0cc47a9cd5b6    AVAIL

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: resilvered 3.09G in 0 days 00:01:10 with 0 errors on Sat May 25 11:44:02 2019
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
            ada0p2  ONLINE       0     0    48

errors: No known data errors

Please ignore the boot drive error. I took the time while replacing the SAS card to also install two SSDs to replace my USB boot sticks, and it looks like I did so just in time. The SSDs are devices ada0 and ada1.
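
(For my own reference, the status output's suggested fix looks like this once I trust the new SSDs; a sketch:)
Code:
# clear the recorded checksum errors on the boot mirror
zpool clear freenas-boot
# then scrub to confirm they don't come back
zpool scrub freenas-boot
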
Just to note, that would be because the motherboard isn't the old SAS controller, so the LBA limit of your SAS controller doesn't apply to those disks at all.
Thanks for pointing that out; I won't worry about it, then.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
So, this is a bit of a problem. FreeNAS uses the gptid because the daX assignment can and does change. Let's first get your pool expanded...

run zpool online -e BigNas /dev/gptid/a621ef7f-75da-11e9-9302-0cc47a9cd5b6
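
If the pool doesn't grow after that single device, keep in mind that on a raidz every member has to be expanded before the space appears, so the same command may need repeating for each one (a sketch in sh syntax, using the member names from your zpool status output):
Code:
# repeat for every member of raidz3-0
for dev in da0p2 da1p2 da2p2 da3p2 da4p2 da5p2 da6p2 da7p2 \
           gptid/5321b92f-7e10-11e9-9302-0cc47a9cd5b6 \
           gptid/20e6a71f-7e7b-11e9-ac2f-0cc47a9cd5b6 \
           gptid/a621ef7f-75da-11e9-9302-0cc47a9cd5b6
do
    zpool online -e BigNas /dev/$dev
done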
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
run zpool online -e BigNas /dev/gptid/a621ef7f-75da-11e9-9302-0cc47a9cd5b6
??? okay:
[Attached screenshot: console output showing the pool has expanded to its full size]

??¿??¿¿?¿??

You're gonna have to explain that one to me. How does forcing an already online disk to be online fix the pool size?
Never mind, I looked at the zpool man page:
Code:
 zpool online [-e] pool device ...

         Brings the specified physical device online.

         This command is not applicable to spares or cache devices.

         -e      Expand the device to use all available space. If the device
                 is part of a mirror or raidz then all devices must be
                 expanded before the new space will become available to the
                 pool.

As for "first get your pool expanded...", are you saying I should next force the system to use the daX drive names?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Basically, zpool online -e forces ZFS to recheck everything and actually autoexpand, because sometimes it doesn't do so on its own, or in cases like this where you've changed something manually.
are you saying I should next force the system to use the daX drive names?
No, the opposite: he's saying you should not be using the daX/adaX names, because those are transient and can change on a reboot, leaving your pool degraded/dead even though the disks are actually present.

It's great that you seem to have gotten your expansion, but unfortunately the disks should have been attached by gptid instead of by /dev name (or, well, the disks should have been replaced AFTER installing the new controller, as that would at least have reduced the number of disks needing this manual process).
I'm not sure of the best way to fix that, so I await Mlovelace's next steps.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
So, the expansion worked, based on the picture. I'm guessing you ended up with the daXp2 associations in the pool instead of the gptids because you did the replacements through the CLI. The only way I know of to correct that issue is to replace the drives properly in the GUI, which means each one of those needs to be offlined in the GUI and then replaced, again in the GUI. It will resilver, picking up the appropriate gptid association, and then you move on to the next. There might be another way of doing it without resilvering, but I'm not aware of one. You'll have to engage your google-fu for that.
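
A quick way to keep tabs on each pass (a sketch; after each GUI replace, the scan: line shows resilver progress and the count of gptid entries should go up by one per completed replacement):
Code:
# watch resilver progress, then confirm the member re-appears by gptid
zpool status BigNas | grep -A 2 'scan:'
zpool status BigNas | grep -c gptid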
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
You can do gptid at the command line, but getting them is a little awkward. The gptids are in /dev/gptid, but they aren't clearly mapped to a drive.
You can get the gptid from gpart list and map them that way, but that would be essentially the same as replacing in the GUI and would have to resilver anyway. This would have been the better way to do the partition changes and the re-online part, but since that's been missed now anyway, I don't think it matters whether you do CLI or GUI. The GUI would probably be easier.
Code:
ls /dev/gptid
gpart list da0
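
To tie the two together, the gptid label is just the partition's rawuuid, so something like this pulls the mapping out for one disk (a sketch):
Code:
# the rawuuid of daXp2 is the <uuid> in /dev/gptid/<uuid>
gpart list da0 | grep -E 'Name:|rawuuid:'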
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
The gptids are in /dev/gptid, but they aren't clearly mapped to a drive.
glabel status -s is your friend...

Code:
root@homenas:~ # glabel status -s
gptid/6a53ce85-6804-11e9-b91d-40167e7bd334  N/A  ada0p1
gptid/46e29b22-6841-11e9-bfe8-40167e7bd334  N/A  ada1p2
gptid/3df530d0-6c00-11e9-8c88-40167e7bd334  N/A  da0p2
gptid/0e9988a4-6bbd-11e9-bb70-001b2134932c  N/A  da1p2
gptid/15f247c1-6bbd-11e9-bb70-001b2134932c  N/A  da2p2
gptid/1247750e-6bbd-11e9-bb70-001b2134932c  N/A  da3p2
gptid/19ab3796-6bbd-11e9-bb70-001b2134932c  N/A  da4p2
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Oh, right, haha, I forgot about that one; that would make it less onerous to remap via the CLI.
 

keboose

Explorer
Joined
Mar 5, 2016
Messages
92
I'm guessing you ended up with the daXp2 associations in the pool instead of the gptids because you did the replacements through the CLI.
Nope! I replaced all the drives via the GUI. The disks now listed by gptid were even listed as daX at the time; they must have been "fixed" somewhere along the way. At no point was I ever presented with a device ID when replacing disks, just da0 through da11.

Right now, da8 is my hot spare. If I removed it from the pool and replaced a drive with it, it would stay as da8 the whole way.
 