Error extending pool - LOG and CACHE

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
I'm on the latest stable FreeNAS 11.3.

I'm attempting to replace the SLOG and L2ARC (cache) disks on my system and am getting the following error when trying to add either a new SLOG or a new L2ARC. I previously had a SLOG and L2ARC, but used the GUI to remove both successfully. I get the error below whether I try to add both at once or add either one individually.

Code:
Error: Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 219, in wrapper
    response = callback(request, *args, **kwargs)
  File "./freenasUI/api/resources.py", line 1421, in dispatch_list
    request, **kwargs
  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 450, in dispatch_list
    return self.dispatch('list', request, **kwargs)
  File "./freenasUI/api/utils.py", line 252, in dispatch
    request_type, request, *args, **kwargs
  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 482, in dispatch
    response = method(request, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 1384, in post_list
    updated_bundle = self.obj_create(bundle, **self.remove_api_resource_names(kwargs))
  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 2175, in obj_create
    return self.save(bundle)
  File "./freenasUI/api/utils.py", line 493, in save
    form.save()
  File "./freenasUI/storage/forms.py", line 282, in save
    return False
  File "./freenasUI/storage/forms.py", line 273, in save
    pool = c.call('pool.update', add.id, {'topology': topology}, job=True)
  File "/usr/local/lib/python3.7/site-packages/middlewared/client/client.py", line 513, in call
    return jobobj.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/client/client.py", line 276, in result
    raise ClientException(job['error'], trace={'formatted': job['exception']})
middlewared.client.client.ClientException: 'HOLE'

I also thought it was odd that this same "HOLE" entry appears on my only pool's status page (screenshot attached). What should I do?
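In case it's useful, the traceback shows the GUI failing inside the middleware's pool.update call. Here's roughly the same thing done from a shell with midclt, just as a sketch; the pool ID of 1 and the exact topology layout are guesses on my part, not something I've verified:
Code:
# Ask the middleware directly to add a log and a cache vdev, bypassing the web UI.
# Replace the pool ID (1) and device names with your own; the topology structure
# below is an assumption about the 11.3 API and may need adjusting.
midclt call -job pool.update 1 '{"topology": {"log": [{"type": "STRIPE", "disks": ["da1"]}], "cache": [{"type": "STRIPE", "disks": ["da3"]}]}}'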
 

Attachments

  • Screen Shot 2020-02-07 at 6.32.05 PM.png (31.5 KB)

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
Additional info:
  • I have 128 GB of ECC memory and a Xeon E5-2630 @ 2.30 GHz (if that matters)
  • I've rebooted in between tries
 

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
Sorry, one more thing:
  • I tried "QUICK" wiping the disks via the GUI and then adding them again, with the same result (a rough CLI equivalent of that wipe is sketched below)
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
I'm not super familiar with 11.3, but my guess is that the SLOG/L2ARC devices were not properly removed or reconfigured. First, check plain old ZFS with a good ol' zpool status. That'll tell you what ZFS thinks the log/cache layout should be. If you see devices listed as log or cache devices, it would indicate that removing them was not fully successful. If you do not see them, then my next place to look would be configs specific to FreeNAS.
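A quick sketch of how I'd compare the two views (assuming midclt and python3.7 are available, which they should be on 11.3; the pool name is just an example):
Code:
# What ZFS itself thinks the pool layout is - any leftover log/cache vdevs show up here.
zpool status main_pool

# What the FreeNAS middleware has stored for the same pool.
midclt call pool.query '[["name", "=", "main_pool"]]' | python3.7 -m json.tool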
 

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
Thanks for the response! zpool status looks normal (see below). What sort of FreeNAS-specific configs should I be looking for?
Code:
# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:52 with 0 errors on Tue Feb  4 03:45:52 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da2p2     ONLINE       0     0     0

errors: No known data errors

  pool: main_pool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 05:31:41 with 0 errors on Sun Feb  2 05:31:42 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    main_pool                                       ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/97f9f59a-0c5b-11e8-95e0-00270e107dc0  ONLINE       0     0     0
        gptid/761840a6-e536-11e7-9bf0-0015176195f2  ONLINE       0     0     0
        gptid/d5422f16-e816-11e7-b74c-0015176195f2  ONLINE       0     0     0

errors: No known data errors
 
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
That behavior seems odd to me. It might be that the log/cache devices were removed but not erased, and that is what caused the failure when adding them back. That doesn't explain why the UI would show "HOLE" while zpool status shows no log/cache devices at all, though.

If no one else has a great idea for something you may have done wrong or can do to fix it, I would suggest you consider filing this as a bug. I've played around with log/cache devices in 11.2 and have not seen "HOLE" appear at any time.
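If the removed-but-not-erased theory is right, the old SLOG/L2ARC disks may still carry stale ZFS labels. A minimal sketch of checking for and clearing them (da1/da3 are example device names; make absolutely sure you point these at the right disks, since labelclear is destructive):
Code:
# Show any leftover ZFS label on the old log/cache devices.
zdb -l /dev/da1
zdb -l /dev/da3

# If stale labels turn up, wipe them before trying to re-add the disks.
zpool labelclear -f /dev/da1
zpool labelclear -f /dev/da3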
 

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
Decided to try adding both from the command line and was successful. See attached for what the GUI now shows. I'm assuming this is a GUI bug in 11.3.
Code:
# zpool add main_pool cache /dev/da3
# zpool add main_pool log /dev/da1
# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:52 with 0 errors on Tue Feb  4 03:45:52 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      da2p2     ONLINE       0     0     0

errors: No known data errors

  pool: main_pool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 05:31:41 with 0 errors on Sun Feb  2 05:31:42 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    main_pool                                       ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/97f9f59a-0c5b-11e8-95e0-00270e107dc0  ONLINE       0     0     0
        gptid/761840a6-e536-11e7-9bf0-0015176195f2  ONLINE       0     0     0
        gptid/d5422f16-e816-11e7-b74c-0015176195f2  ONLINE       0     0     0
    logs
      da1                                           ONLINE       0     0     0
    cache
      da3                                           ONLINE       0     0     0

errors: No known data errors
 

Attachments

  • Screen Shot 2020-02-08 at 10.20.53 AM.png (37.8 KB)
PhiloEpisteme

Joined
Oct 18, 2018
Messages
969
llamb said:
Decided to try adding both from the command line and was successful. See attached for what the GUI now shows. I'm assuming this is a GUI bug in 11.3.
I would guess you're right. I would also suggest you not use the CLI for this as a permanent configuration. FreeNAS makes use of certain partitions, flags, etc. with its drives, and if you don't set things up in exactly the same way, unexpected behavior may result. But it is good to confirm that it should work.
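If you want to back the CLI change out and redo it the supported way later, something like this should detach both devices cleanly; removing log and cache vdevs doesn't touch the data vdevs (device names taken from your zpool status output):
Code:
# Detach the CLI-added SLOG and L2ARC so they can be re-added through the GUI/API later.
zpool remove main_pool da1
zpool remove main_pool da3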
 

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
Thanks @PhiloEpisteme! I'll remove it via CLI to put it back how it was. Thank you for the advice!
 

ralphte

Cadet
Joined
Jun 1, 2014
Messages
5
Hey @llamb, I did some research on this and the short answer is that the problem has been fixed. I was running into the same problem right after upgrading to 11.3. Here is the fix: https://jira.ixsystems.com/browse/N...ewared.client.client.ClientException: 'HOLE'". The problem is in py-libzfs (https://github.com/freenas/py-libzfs), which is what ends up showing the HOLE device in the web interface.

The real question is how you would get this fix. Because of the way FreeNAS is built, you can't just change a setting or patch some code to fix this; you need a whole new build. At this point there are two options: you can wait for a new release in the 11.3 train, which will include the fix, OR you can build your own image using this git repo: https://github.com/freenas/build/. WARNING: this is NOT for beginners, as you will need a clean install of FreeBSD 11 with 100 GB of free space and 16 GB of RAM. It also takes a while to build; mine took over 5 hours, but your time may vary depending on CPU. The current master build did fix the problem and everything is working fine for me now.
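Very roughly, the build goes something like this; the exact make targets are from memory and should be treated as assumptions, so check the repo's README before relying on them:
Code:
# On a clean FreeBSD 11 box with ~100 GB free and 16 GB of RAM (targets assumed from
# the freenas/build README of that era - verify against the repo before running).
pkg install -y git
git clone https://github.com/freenas/build.git
cd build
make checkout   # fetch the FreeNAS source trees
make release    # build the image/ISO - this is the multi-hour part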
 

llamb

Dabbler
Joined
Feb 7, 2020
Messages
10
Great find, @ralphte! In that case, I think I'll wait for the next 11.3 release. Thanks for researching this!
 