TrueNAS SCALE Restore (Proxmox VM)

wkm001

Cadet
Joined
Dec 10, 2023
Messages
6
I've been running TrueNAS in a Proxmox VM for quite some time, using PCIe passthrough so TrueNAS has full access to my six 6TB drives attached to my 9211-8i card.

Today I put a new motherboard and CPU in my server, installed Proxmox, and restored my TrueNAS VM via Proxmox. The PCI address of the storage card changed. When I log into the TrueNAS web interface and go to Storage, I see this.

I click Add to Pool and can make these selections.

When I select each of the six drives, I get this message.

Once I have all six drives added to the data vdev, I see this error message and can't add the vdev.

Any ideas to restore my vdev/pool and keep my data?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
First off, never try to re-create a pool whose data you want to preserve. That destroys the data. Hopefully the error stopped you before any damage was done.

Assuming the PCIe passthrough for the LSI 9211-8i is correct, a simple lsblk should show all the disks. If it does, import your pool from the GUI.

If you want to verify that your pool looks good, try the command below, which only lists importable pools without importing them. Let the GUI do the actual import.
zpool import
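As a sketch of what to expect (the device names, pool id, and vdev layout below are illustrative, not taken from the original post), a healthy, importable pool shows up something like this:

```shell
# First confirm the passed-through disks are visible at all.
lsblk -o NAME,SIZE,TYPE        # all six 6TB data disks should be listed

# Read-only scan: lists pools available for import without importing them.
zpool import
#    pool: Pool01-RZ
#      id: 1234567890123456789      <- illustrative
#   state: ONLINE
#  action: The pool can be imported using its name or numeric identifier.
#  config:
#          Pool01-RZ    ONLINE
#            raidz1-0   ONLINE      <- layout illustrative
#              sda      ONLINE
#              ...
```

If `zpool import` reports the pool as ONLINE with all devices present, the GUI import should succeed.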


It is always helpful to know something about ZFS when using TrueNAS (SCALE or CORE). Some people get into trouble, even data-loss trouble, through lack of knowledge. That said, in simpler cases (i.e., not virtualized), TrueNAS works out of the box, and learning ZFS can happen over time.
 

wkm001

Cadet
Joined
Dec 10, 2023
Messages
6
Results from lsblk.


Results from zpool import.
 

wkm001

Cadet
Joined
Dec 10, 2023
Messages
6
I reset the config via System Settings > General > Reset to Defaults.

Then, under Storage > Import Pool, I can see the pool in the dropdown.


When I try to import, I get this error.


More info:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 438, in import_pool
    zfs.import_pool(found, pool_name, properties, missing_log=missing_log, any_host=any_host)
  File "libzfs.pyx", line 1265, in libzfs.ZFS.import_pool
  File "libzfs.pyx", line 1293, in libzfs.ZFS.__import_pool
libzfs.ZFSException: cannot import 'Pool01-RZ' as 'Pool01-RZ': one or more devices is currently unavailable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 115, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1383, in nf
    return func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 444, in import_pool
    self.logger.error(
  File "libzfs.pyx", line 465, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 442, in import_pool
    raise CallError(f'Failed to import {pool_name!r} pool: {e}', e.code)
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'Pool01-RZ' pool: cannot import 'Pool01-RZ' as 'Pool01-RZ': one or more devices is currently unavailable
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 427, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 465, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1379, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1247, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1459, in import_pool
    await self.middleware.call('zfs.pool.import_pool', guid, opts, any_host, use_cachefile, new_name)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1368, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1325, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1331, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1246, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1231, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EZFS_BADDEV] Failed to import 'Pool01-RZ' pool: cannot import 'Pool01-RZ' as 'Pool01-RZ': one or more devices is currently unavailable
 

wkm001

Cadet
Joined
Dec 10, 2023
Messages
6
The only thing different about this pool: I carved out a 20G portion of my NVMe storage and used it as a log device. In the lsblk output above, that drive is available too.
 

wkm001

Cadet
Joined
Dec 10, 2023
Messages
6
Results of zpool import.


How do I make the logs drive available for the import?
 

wkm001

Cadet
Joined
Dec 10, 2023
Messages
6
Got it working; the log drive was the issue. I ran this command in the Linux shell and was then able to replace the log drive in the GUI: zpool import <pool_name> -m
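For anyone who hits the same EZFS_BADDEV error, a sketch of the recovery sequence (the log-device name is a placeholder; `zpool remove` on a log vdev is standard ZFS, but check your pool layout with `zpool status` first):

```shell
# Import despite a missing log device; ZFS discards any un-replayed
# intent-log records (options conventionally precede the pool name):
zpool import -m Pool01-RZ

# Confirm the import and see the state of the missing log vdev:
zpool status Pool01-RZ

# Optionally detach the dead log vdev entirely (placeholder device name):
zpool remove Pool01-RZ <old-log-device>
```

After this, the pool imports without the log device, and a replacement log can be attached from the GUI.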
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If you truly imported the pool from the Unix shell, that can sometimes confuse the GUI. A simple reboot (from the GUI) will likely take care of any weirdness.

In general, all work should go through the TrueNAS middleware via the GUI or the TUI (Text User Interface, the command-line equivalent of the GUI). There are plenty of things that can be done in the Unix shell without impacting the TrueNAS middleware, but it takes some thought, and sometimes skill, to know what does and does not affect the middleware.
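As one example of staying inside the middleware from the command line: SCALE ships a `midclt` client for the middleware API (the `pool.query` method is part of that API; verify method names against your version's API docs):

```shell
# Ask the TrueNAS middleware, rather than raw zpool, for pool state,
# so what you see matches what the GUI sees:
midclt call pool.query | python3 -m json.tool
```

Changes made this way are visible to the middleware, so the GUI and the system state stay in sync.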
 