Upgrading RAID card, drives won't have 2TB limit anymore. What happens?

Status
Not open for further replies.

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Do you reckon that auto-expand will kick in, in these circumstances?
That's what I wonder. There's no disk change; the drives just suddenly appear larger. ZFS will probably be oblivious unless it specifically queries every drive's size when mounting the pool, and I'm not sure I'd write ZFS to do that - it's an incredibly obscure edge case.

I'm not even sure manual expansion would work. I'm fairly certain the controller swap is safe for the existing data, though. Not completely, and it's not something I'd try without a backup or additional information about ZFS' behavior.
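
For anyone who does try it, checking whether ZFS has noticed the extra space (and whether autoexpand is even set) is cheap - something along these lines, with "tank" standing in for the real pool name:

Code:
# zpool get autoexpand,expandsize tank
# zpool list tank

If EXPANDSZ stays at "-" after the controller swap, ZFS isn't reporting any expandable space.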

tl;dr - we need a guinea pig.
 

FlynnVT

Dabbler
Joined
Aug 12, 2013
Messages
36
I can't warrant or stand over my opinion as I haven't seen your exact scenario. Use this at your own risk!

When the discs are moved to a new controller, the raw block devices will increase in effective size.

Assuming a default FreeNAS disc layout (GPT label, swap partition, ZFS partition), the GPT consistency check will probably fail: the secondary label that was once at [tail-16KB] will now sit at [tail-16KB-extra_uncapped_space]. On FreeBSD, I think this means the partitions won't enumerate into /dev, so the ZFS devices won't appear for remounting or import.

No problem. Use "gpart recover /dev/<device_name>" to rebuild the secondary label at [tail-16KB] from the primary at [head+512]. You might need to reboot and reimport, but once done to all discs, the zpool should simply work again.
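
For what it's worth, a rough sketch of that recovery sequence on FreeBSD - da0 is a placeholder (repeat for each member disc), and "tank" stands in for the real pool name:

Code:
# gpart show da0        (should report the GPT as CORRUPT after the move)
# gpart recover da0     (rewrites the secondary label at the new end of the disc)
# gpart show da0        (partitions should enumerate again)
# zpool import          (with no arguments, just lists importable pools)
# zpool import tank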

It's a secondary question how you want to manage resizing the ZFS-containing partitions to fill the entire discs. Worst case, assuming redundant devices/vdevs, you can detach, repartition/resize and re-add each device one at a time; the zpool will resize once all are done. Best case, you might be able to simply export the pool, resize the ZFS-containing partitions and reimport with autoexpand enabled.
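
As a very rough sketch of that best-case route - assuming the stock FreeNAS layout where the freebsd-zfs partition is index 2, and with tank/da0 as placeholders (use whatever device names "zpool status" actually shows for the pool members, usually gptid/... labels on FreeNAS):

Code:
# zpool set autoexpand=on tank
# zpool export tank
# gpart resize -i 2 da0          (grow the ZFS partition to fill the disc; repeat per disc)
# zpool import tank
# zpool online -e tank da0p2     (belt and braces: ask ZFS to expand onto the new space)
# zpool list tank                (SIZE should grow once all members are done)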

I think I've seen this latter scenario work OK while messing with restricted-size whole-disc vdevs on gnop devices in the past: export the pool, destroy the gnop devices, re-import via the raw block devices, and ZFS will resize the zpool if autoexpand is enabled.
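
Roughly, that gnop experiment looked like this (testpool/da0 are placeholders, and the -s value is just a byte count somewhere below the real disc size):

Code:
# gnop create -s 2000000000000 /dev/da0   (cap the provider at roughly 2TB)
# zpool create testpool /dev/da0.nop
# zpool set autoexpand=on testpool
# zpool export testpool
# gnop destroy /dev/da0.nop               (remove the size cap)
# zpool import testpool                   (the pool is found on the raw da0 device)
# zpool list testpool                     (SIZE should now reflect the full disc)

If it doesn't grow on its own, a "zpool online -e testpool da0" should nudge it.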


Practice the whole thing beforehand in a VM if your data is important. Resizing the virtual discs from (e.g.) 2TB to 4TB should recreate the exact same scenario in a sandbox.
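
For example, with VirtualBox (just my assumption - any hypervisor that can grow a dynamically-allocated virtual disc will do), something like this turns a test disc into a 4TB one:

Code:
# VBoxManage modifymedium disk "freenas-test-disk1.vdi" --resize 4194304   (new size in MB, so ~4TB)

Then boot the FreeNAS VM and walk through the gpart recover / resize steps above before touching the real pool.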
 

clamschlauder

Dabbler
Joined
Feb 23, 2013
Messages
26
I would like to report that the drives are now running on my P20 M1015 without incident. The pools imported and shares are available.

All drives read as 3TB instead of the 2.2TB they were showing before. It looks like the pool did not expand, though.

Output AFTER moving to M1015.

Code:
# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT                                                 
Sharable      10.9T  2.40T  8.47T         -    12%    22%  1.00x  ONLINE  /mnt                                                    
Storage-Z3    15.9T  1.48T  14.4T         -     4%     9%  1.00x  ONLINE  /mnt


Code:
# zfs list                                                                                                      
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT                                      
Sharable                                                     1.16T  3.94T  1.16T  /mnt/Sharable                                   
Storage-Z3                                                    862G  7.93T   862G  /mnt/Storage-Z3                                 
Storage-Z3/.system                                           43.5M  7.93T  2.94M  legacy                                          
Storage-Z3/.system/configs-bf937a0da5564cb596eff5f8aba08500   219K  7.93T   219K  legacy                                          
Storage-Z3/.system/configs-f1ae6c68bbe041c7bb38cadeec088781   219K  7.93T   219K  legacy                                          
Storage-Z3/.system/cores                                     1.28M  7.93T  1.28M  legacy                                          
Storage-Z3/.system/rrd-32e30c2d33b04a61af2309a946fa0d51       219K  7.93T   219K  legacy                                          
Storage-Z3/.system/rrd-bf937a0da5564cb596eff5f8aba08500       219K  7.93T   219K  legacy                                          
Storage-Z3/.system/rrd-f1ae6c68bbe041c7bb38cadeec088781       219K  7.93T   219K  legacy                                          
Storage-Z3/.system/samba4                                     922K  7.93T   922K  legacy                                          
Storage-Z3/.system/syslog-32e30c2d33b04a61af2309a946fa0d51   1.28M  7.93T  1.28M  legacy                                          
Storage-Z3/.system/syslog-bf937a0da5564cb596eff5f8aba08500    493K  7.93T   493K  legacy                                          
Storage-Z3/.system/syslog-f1ae6c68bbe041c7bb38cadeec088781   35.5M  7.93T  35.5M  legacy                                          
Storage-Z3/config_backups                                     310K  7.93T   310K  /mnt/Storage-Z3/config_backups                  
freenas-boot                                                  527M  13.9G    31K  none                                            
freenas-boot/ROOT                                             520M  13.9G    25K  none                                            
freenas-boot/ROOT/Initial-Install                               1K  13.9G   509M  legacy                                          
freenas-boot/ROOT/default                                     520M  13.9G   514M  legacy                                          
freenas-boot/grub                                            6.79M  13.9G  6.79M  legacy  


I don't remember exactly, and stupid me for not grabbing this info before, but I'm pretty sure the output of "zpool list" is the same as it was before the switch.
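
Presumably the next step is FlynnVT's partition-resize route above. Before touching anything, it's worth confirming what the discs and pools currently report - da0 is a placeholder for each member disc:

Code:
# gpart show da0                        (look for free space after the freebsd-zfs partition)
# zpool status Storage-Z3               (note the device names of the pool members)
# zpool get autoexpand,expandsize Storage-Z3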
 