Unable to import the pool, please help!

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
Hello,

Recently, our FreeNAS system suddenly stopped working and we have been unable to find the root cause. I have attempted two approaches to rebuild the corrupted FreeNAS; unfortunately, both attempts failed to import the pool.

1. I re-installed FreeNAS onto a new boot drive and tried to import the pool from the web GUI, but no luck. The pool simply does not appear in the drop-down list when importing.
2. I tried to mount and read the files from the corrupted FreeNAS boot drive separately using FreeBSD, but when I tried to mount the drive, I got an error that the pool was last in use on another system. It also shows the date and time the pool was last accessed.

Please provide your input on how to recover the data, whether by importing the pool or through any other method. Thank you
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hardware details needed. Please see: https://www.truenas.com/community/threads/forum-guidelines.45124

Without knowing something about your system configuration, I can only point you to generic guides, such as this:

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

and these:

Useful Commands
https://www.ixsystems.com/community/threads/useful-commands.30314/#post-195192

Hard Drive Troubleshooting Guide (All Versions of FreeNAS)
https://www.ixsystems.com/community...bleshooting-guide-all-versions-of-freenas.17/

CAM status: ATA Status Errors - thread
https://www.ixsystems.com/community/threads/cam-status-ata-status-error.16833/

While these might help you figure it out, we are more than happy to help; we just can't guess where to start without some more details.
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
Agenda: We were asked to recover the data from the corrupted FreeNAS system.

Hardware details:
Motherboard make and model - Asus TUF Gaming X570 Plus
CPU make and model - AMD Ryzen 7 2700X, eight cores
RAM quantity - 64GB
Hard drives - Total 8 x 4TB
1. Seagate IronWolf - NAS HD - ST4000VN008
2. Seagate IronWolf - NAS HD - ST4000VN008
3. WD Purple Surveillance HD - WD40PURZ
4. WD Red NAS HD - WD40EFRX
5. WD Red NAS HD - WD40EFRX
6. WD Red NAS HD - WD40EFRX
7. Toshiba NAS N300 - HDWQ140
8. Toshiba NAS N300 - HDWQ140
RAID configuration: Unknown (FreeNAS was brought to us to fix the RAID)
Boot drive: Intel SSD 660P series - SSDPEKNW010TB
Hard disk controllers: Not available
Network Cards: Intel Pro 1000 PT Dual

Actions Taken:
We created drive-to-drive clones of the above 8 x 4TB drives, using a Falcon imager, onto another eight 4TB drives (Seagate Barracuda 4TB - ST4000DM004), and cloned the corrupted boot drive onto a Seagate Barracuda 2TB (ST2000DM008).
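
(For anyone following along without a hardware imager, the software equivalent would be roughly the following, e.g. GNU ddrescue from a live environment; the device names here are placeholders for the source and destination drives:)
Code:
ddrescue -f -n /dev/ada0 /dev/ada1 /root/clone.map   # fast first pass, skip problem areas
ddrescue -f /dev/ada0 /dev/ada1 /root/clone.map      # second pass retries the bad sectors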

1. Arranged the HDDs in their respective slots and tried to boot from the cloned NAS boot drive
Result: Not able to boot

2. Installed a new FreeNAS OS onto a separate drive and tried to import the pool
Result: Not able to see the existing pool from the web GUI

3. Booted FreeBSD from a LiveCD, connected the NAS boot drive as well, and tried to read the existing pool, mount it, and copy the old config file. We followed the method from https://gmpreussner.com/reference/recovering-freenas-configuration-from-zfs-boot-drive
Result: I could see the pool online, but the disks under it were not listed

zpool status: [screenshot attached as 1.png]

zfs list: [screenshot attached as 1.png]
I was able to mount the freenas-boot/ROOT/11.3-U4.1 dataset, but did not see the data folder from which to copy the config file.

I also tried the above process on Ubuntu by installing the ZFS utilities, but the output remained the same. This is a showstopper for me.
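
For reference, the commands we ran per that guide were roughly the following (the altroot path is arbitrary, and the dataset name came from our zfs list output):
Code:
zpool import -f -R /mnt freenas-boot            # import the boot pool under an alternate root
mount -t zfs freenas-boot/ROOT/11.3-U4.1 /mnt   # mount the active boot environment
ls /mnt/data                                    # the config should be data/freenas-v1.db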

Please advise on how to recover the pool/data from the drives. This is the first time I am working on fixing a RAID.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Asus TUF Gaming X570 Plus
Are you still using that system board to troubleshoot or some other system?

You must know by what means the system board is communicating with the storage drives. That board only has 8 SATA ports, but you have 9 drives listed (including the boot drive).

The owner needs to give you some guidance also, because if the pool was encrypted, there will be no way to recover the data without the decryption key.

If you issue the command zpool import from the command line, the system should list the pools available to import.
This command searches all the devices the system can see, so if it does not find a pool to import, you either have no connection to the drives that contain the pool, the pool is damaged beyond recognition, or the pool is encrypted...

You might also try using the zpool import -F option as that will attempt to return a damaged pool to an importable state by discarding the last few transactions. Not all damaged pools can be recovered by using this option. If successful, the data from the discarded transactions is irretrievably lost.
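
For example, from the console (a sketch; substitute whatever pool name zpool import actually reports for "tank"):
Code:
zpool import              # scan all visible devices and list importable pools
zpool import -F -n tank   # dry run: reports whether discarding recent transactions would make the pool importable
zpool import -F tank      # actually attempt the rewind recovery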
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
Thanks for your reply.

Are you still using that system board to troubleshoot or some other system?
I am working on the same system.

You must know by what means the system board is communicating with the storage drives. That board only has 8 SATA ports, but you have 9 drives listed (including the boot drive).
The system has 8 slots for HDDs plus an NVMe slot on the motherboard itself; that is where the 1TB SSD boot drive was installed. I cloned the NVMe drive onto the 2TB Seagate HDD. Please see the system setup below.

I tried re-installing a fresh FreeNAS onto the corrupted boot drive, but during installation it asks to erase the partitions instead of offering a FreeNAS upgrade. So I used another drive separately as the boot drive.

[Photo of the lab setup attached as set-up.png]


The owner needs to give you some guidance also, because if the pool was encrypted, there will be no way to recover the data without the decryption key.
I will check that.

If you issue the command zpool import from the command line, the system should list the pools available to import.
This command searches all the devices the system can see, so if it does not find a pool to import, you either have no connection to the drives that contain the pool, the pool is damaged beyond recognition, or the pool is encrypted...

No luck; please find the screenshots below.

[Screenshot of zpool import output attached as zpool import.png]

[Screenshot of mount output attached as mount.png]


Now, what are my odds of restoring the data?

Thanks!
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
When I look at all those loose disks in the picture, it makes me shudder. Drives are specified to operate in certain orientations, which usually means either horizontal or vertical. Having them "fly around" means that they are only more or less vertical (i.e. not truly vertical), which puts additional stress on the mechanical components and therefore increases the risk of failure.
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
When I look at all those loose disks in the picture, it makes me shudder. Drives are specified to operate in certain orientations, which usually means either horizontal or vertical. Having them "fly around" means that they are only more or less vertical (i.e. not truly vertical), which puts additional stress on the mechanical components and therefore increases the risk of failure.
Those are not the original disks; they are working copies we are testing in the lab. The originals were mounted vertically, each disk supported by a small enclosure. In any case, I am able to detect all the connected drives.
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
The 2TB disk is where the clone of the corrupted FreeNAS boot drive resides.

Code:
root@lab:~# fdisk -l
Disk /dev/loop0: 255.58 MiB, 267980800 bytes, 523400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 54.98 MiB, 57626624 bytes, 112552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 62.9 MiB, 65105920 bytes, 127160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop3: 49.8 MiB, 52203520 bytes, 101960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop4: 29.9 MiB, 31334400 bytes, 61200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdb: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D437077B-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sdb1      128    4194431    4194304    2G FreeBSD swap
/dev/sdb2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sda: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D1F979D0-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sda1      128    4194431    4194304    2G FreeBSD swap
/dev/sda2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdh: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D1443642-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sdh1      128    4194431    4194304    2G FreeBSD swap
/dev/sdh2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdg: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CFC0F6B5-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sdg1      128    4194431    4194304    2G FreeBSD swap
/dev/sdg2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdc: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D2D6BB9A-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sdc1      128    4194431    4194304    2G FreeBSD swap
/dev/sdc2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sde: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CEF5104E-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sde1      128    4194431    4194304    2G FreeBSD swap
/dev/sde2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdd: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D3880F46-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sdd1      128    4194431    4194304    2G FreeBSD swap
/dev/sdd2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


Disk /dev/sdf: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000DM004-2CV1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D0902FF4-XXXX-XXXX-XXXX-4CEDFB750096

Device       Start        End    Sectors  Size Type
/dev/sdf1      128    4194431    4194304    2G FreeBSD swap
/dev/sdf2  4194432 7814037127 7809842696  3.7T FreeBSD ZFS


GPT PMBR size mismatch (2000409263 != 3907029167) will be corrected by write.
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sdi: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: 008-2FR102     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9712A011-XXXX-XXXX-XXXX-4CEDFB750096

Device      Start        End    Sectors   Size Type
/dev/sdi1      40     532519     532480   260M EFI System
/dev/sdi2  532520 2000396327 1999863808 953.6G FreeBSD ZFS


Disk /dev/sdj: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: 003-1ER162     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FF53D1C0-XXXX-XXXX-XXXX-C1E24C49770C

Device       Start        End    Sectors  Size Type
/dev/sdj1     2048    1050623    1048576  512M EFI System
/dev/sdj2  1050624 1953523711 1952473088  931G Linux filesystem
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Is the FreeNAS boot really relevant if the task is to recover the data? You could do a fresh install to some drive and try to import the pool. Possibly you could use FreeBSD 13 to have a more flexible rescue system at hand ...
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
Is the FreeNAS boot really relevant if the task is to recover the data? You could do a fresh install to some drive and try to import the pool. Possibly you could use FreeBSD 13 to have a more flexible rescue system at hand ...

Hi Patrick,

I did try to import the pool with FreeBSD 12.2, and also with fresh installations of FreeNAS 11.3-U4.5 and Ubuntu 20.04. All three resulted in "no pools available".

The commands I tried:
Code:
zpool import
zpool import -f /tmp/freenas
zpool import -d /dev
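
(Looking at these again, the second form is probably not doing what I intended: zpool import expects a pool name in that position, and an alternate root is specified with -R, while -d takes a directory of device nodes. The corrected forms would presumably be:)
Code:
zpool import -R /tmp/freenas <poolname>   # import under an alternate root; <poolname> is a placeholder
zpool import -d /dev                      # scan /dev explicitly for pool members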

When I connect the corrupted FreeNAS boot drive, the pool is visible, but the disks under that pool are not. So I tried to mount the corrupted FreeNAS boot environment to copy over the old config file, but the /data folder is missing.
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
but the /data folder is missing.
You should hopefully be able to find the daily autosaved config file in the pool data - see post #35 in this thread https://www.truenas.com/community/threads/update-went-wrong.74084/page-2#post-514298.

Then get the 11.3-U4.1 ISO from https://download.freenas.org/11.3/STABLE/ - install it to make a new boot drive, bring the system up and import the config file from the step above, then attach the drives, and I'd hope you can import the pool.
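
If the data pool can be imported on a rescue system, the daily config backups usually live in the pool's system dataset; roughly like this (a sketch; the pool name is a placeholder, and the configs-* dataset may need mounting by hand if it does not auto-mount):
Code:
zpool import -f -o readonly=on -R /mnt <poolname>   # import read-only, to be safe
ls /mnt/.system/configs-*/                          # one <date>.db per day, grouped by FreeNAS version
cp /mnt/.system/configs-*/11.3-U4.1/*.db /tmp/      # copy out the most recent backup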
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
When I connect the corrupted FreeNAS boot drive, the pool is visible, but the disks under that pool are not.
I think we might be having a terminology problem here.

When you attempt to import the pool under FreeNAS, using the GUI, does it find a pool to import?
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
You should hopefully be able to find the daily autosaved config file in the pool data - see post #35 in this thread

I did not find a configs directory under /var/db/system/. All I could see was samba4/ alone.

When you attempt to import the pool under FreeNAS, using the GUI, does it find a pool to import?

When I boot the system with a new FreeNAS 11.3-U4.1 install from a different drive, I do not find any pool to import in the GUI.

Next, when I connected the corrupted FreeNAS boot drive back alongside the new boot drive, as shown in the setup picture above, I saw the freenas-5123903XXXXXXXXX pool item, but I am getting the error below that the pool already exists.

Is the old pool not getting imported because of the new boot drive? Is it possible to import the pool from a specific drive?

Code:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 97, in main_worker
    res = loop.run_until_complete(coro)
  File "/usr/local/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 53, in _run
    return await self._call(name, serviceobj, methodobj, params=args, job=job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/worker.py", line 45, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 965, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 390, in import_pool
    'Failed to mount datasets after importing "%s" pool: %s', name_or_guid, str(e), exc_info=True
  File "libzfs.pyx", line 369, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/zfs.py", line 383, in import_pool
    zfs.import_pool(found, found.name, options, any_host=any_host)
  File "libzfs.pyx", line 870, in libzfs.ZFS.import_pool
libzfs.ZFSException: pool already exists
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 349, in run
    await self.future
  File "/usr/local/lib/python3.7/site-packages/middlewared/job.py", line 385, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.7/site-packages/middlewared/schema.py", line 961, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/plugins/pool.py", line 1934, in import_pool
    'cachefile': ZPOOL_CACHE_FILE,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1141, in call
    app=app, pipes=pipes, job_on_progress_cb=job_on_progress_cb, io_thread=True,
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1081, in _call
    return await self._call_worker(name, *args)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1101, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1036, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/middlewared/main.py", line 1010, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('pool already exists',)
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
I did not find a configs directory under /var/db/system/. All I could see was samba4/ alone.
Sounds like the system dataset was on the boot pool...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
When I boot the system with a new FreeNAS 11.3-U4.1 install from a different drive, I do not find any pool to import in the GUI.
Perhaps the storage pool was upgraded (feature flags) with a newer version of FreeNAS? Have you tried using TrueNAS Core as the boot media?
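
From the console, zpool import will also say so explicitly if feature flags are the problem (the status line mentions unsupported features). And since the GUI complains that the pool already exists, you might try importing by GUID under a different name (a sketch; substitute the numeric GUID that zpool import reports, and any new name you like):
Code:
zpool import                           # lists importable pools with status and GUID
zpool import -f <pool-guid> recovered  # import that pool by GUID under the name "recovered"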
 

Serious_Sam

Cadet
Joined
Dec 31, 2020
Messages
9
I am facing a similar issue with TrueNAS as well: the pool already exists. The error information is the same as above.

However, I got the GELI key from the device owner, but not the passphrase. I am still trying to figure out a way to recover the data.
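
If it turns out the pool was encrypted with a key only (no passphrase set), something like the following from a FreeBSD shell might work; the key path and gptid are placeholders, and each data partition has to be attached in turn:
Code:
geli attach -p -k /path/to/pool.key /dev/gptid/<partition-uuid>   # -p = key only, no passphrase
zpool import -f                                                   # re-scan once all members are attached
If a passphrase was set, -p would be dropped and the passphrase entered at the prompt; without it, the data cannot be decrypted.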
 
