Main drive pool no access

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
Everything was working great yesterday: we were watching Plex and my kids were working on projects in the folder. We went to bed, and this morning there is no connection to the pool. I logged into the server and found this:

Core files for the following executables were found: /usr/sbin/smbd (Fri Nov 26 12:12:48 2021), /usr/sbin/smbd (Fri Nov 26 12:12:49 2021), [... the same /usr/sbin/smbd entry repeated for roughly a hundred core dumps, the last at Fri Nov 26 12:17:04 2021 ...]. Please create a ticket at https://jira.ixsystems.com/ and attach the relevant core files along with a system debug. Once the core files have been archived and attached to the ticket, they may be removed by running the following command in shell: 'rm /var/db/system/cores/*'.

2021-11-26 12:17:23 (America/New_York)

I tried to just export the disks and then reimport them, and I get this:
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 97, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1267, in nf
return func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 393, in import_pool
self.logger.error(
File "nvpair.pxi", line 404, in items
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 387, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1105, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1133, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 382, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 418, in __run_body
rv = await self.method(*([self] + args))
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1263, in nf
return await func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1131, in nf
res = await f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1489, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1310, in call
return await self._call(
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1281, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1208, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1182, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)

Please tell me I didn't lose everything. For Christmas I was going to get myself another disk shelf so that I'd have a full backup.
 

Attachments

  • debug-Autobots-20211126121548.tgz
    13.3 MB · Views: 128

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
I tried to just export the disks and then reimport them

What do you mean by exporting the disks? By playing trial and error before asking for help, you may well have destroyed that pool.

So as of now, what is returned by
Code:
zpool status -v
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
I imported a previous backup of my config and all it says is offline. When I run zpool status -v, it only gives me information on my other pool for my apps and VMs.
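For anyone hitting the same symptom: this is expected behavior when a pool has been exported, since `zpool status` only reports pools that are currently imported. A quick sketch of the distinction (no changes are made by either command):

```shell
# Shows only pools that are currently imported; an exported pool
# will not appear here at all.
zpool status -v

# With no arguments, scans attached disks for importable (exported)
# pools and lists them without importing anything.
zpool import
```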
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
If I go to Create Pool, I can see all the disks.
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
What do you mean by exporting the disks ??? By playing trial and error before asking help, you may well have destroyed that pool.
When I did the Export/Disconnect, I made sure the only thing checked was "Confirm Export/Disconnect". "Destroy data on this pool?" was not checked, because the disks were offline, as if I had already removed them.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
When I did the Export/Disconnect, I made sure the only thing checked was "Confirm Export/Disconnect". "Destroy data on this pool?" was not checked, because the disks were offline, as if I had already removed them.

Ok, so you exported / disconnected the pool and not the drives. As such, the pool may be recoverable.

So as of now, you have your server online, the disks are plugged in, the pool is not mounted and you see all the drives. You would now have to import the pool.

In the GUI, go to Storage / Pools and click Add.

The next screen will ask you either to create a new pool or to import an existing one. Select the import option and see if it offers your old pool. If it does, go ahead and import it; you should be good after that. If it does not, we will have to work harder to figure out whether the pool is recoverable and, if it is, how.
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
@Heracles I tried to import the pool again from the GUI, as I have done with other pools with no issues. I got an error:

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.9/concurrent/futures/process.py", line 243, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 97, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1267, in nf
return func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 393, in import_pool
self.logger.error(
File "nvpair.pxi", line 404, in items
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs.py", line 387, in import_pool
zfs.import_pool(found, new_name or found.name, options, any_host=any_host)
File "libzfs.pyx", line 1105, in libzfs.ZFS.import_pool
File "libzfs.pyx", line 1133, in libzfs.ZFS.__import_pool
libzfs.ZFSException: I/O error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 382, in run
await self.future
File "/usr/lib/python3/dist-packages/middlewared/job.py", line 418, in __run_body
rv = await self.method(*([self] + args))
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1263, in nf
return await func(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1131, in nf
res = await f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 1489, in import_pool
await self.middleware.call('zfs.pool.import_pool', pool['guid'], {
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1310, in call
return await self._call(
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1275, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1281, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1208, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1182, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('I/O error',)
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
Ok, so you exported / disconnected the pool and not the drives. As such, the pool may be recoverable.

So as of now, you have your server online, the disks are plugged in, the pool is not mounted and you see all the drives. You would now have to import the pool.

In the GUI, go to Storage / Pools and click Add.

The next screen will ask you either to create a new pool or to import an existing one. Select the import option and see if it offers your old pool. If it does, go ahead and import it; you should be good after that. If it does not, we will have to work harder to figure out whether the pool is recoverable and, if it is, how.
Just tried updating from TrueNAS-SCALE-22.02-RC.1-1 to TrueNAS-SCALE-22.02-RC.1-2 to see if that would help, but no change.
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
Ok, so you exported / disconnected the pool and not the drives. As such, the pool may be recoverable.

So as of now, you have your server online, the disks are plugged in, the pool is not mounted and you see all the drives. You would now have to import the pool.

In the GUI, go to Storage / Pools and click Add.

The next screen will ask you either to create a new pool or to import an existing one. Select the import option and see if it offers your old pool. If it does, go ahead and import it; you should be good after that. If it does not, we will have to work harder to figure out whether the pool is recoverable and, if it is, how.
If I go to the shell and run:
root@Autobots[~]# zpool import -f
pool: Optimus_Prime
id: 17918408099470146932
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and
the '-f' flag.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
config:

Optimus_Prime ONLINE

It was never accessed by another machine, so that's odd. It shows all the disks; then I try to force the import:

root@Autobots[~]# zpool import -f Optimus_Prime
cannot import 'Optimus_Prime': I/O error
Recovery is possible, but will result in some data loss.
Returning the pool to its state as of Fri Nov 26 03:20:09 2021
should correct the problem. Approximately 5 seconds of data
must be discarded, irreversibly. Recovery can be attempted
by executing 'zpool import -F Optimus_Prime'. A scrub of the pool
is strongly recommended after recovery.
root@Autobots[~]# zpool import -F Optimus_Prime
cannot import 'Optimus_Prime': pool was previously in use from another system.
Last accessed by Autobots.local (hostid=7e358ec) at Fri Nov 26 03:20:14 2021
The pool can be imported, use 'zpool import -f' to import the pool.

and we start this loop again
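For readers following along: the loop happens because each error message demands a different flag. Lowercase -f overrides the "last accessed by another system" hostid check, while uppercase -F attempts rewind recovery to an earlier transaction group, and passing only one at a time triggers the other complaint. As a sketch (and consistent with what eventually worked later in this thread), the two can be combined:

```shell
# -f  force import despite the stale hostid
#     ("pool was previously in use from another system")
# -F  rewind recovery: roll back to the last importable transaction
#     group, irreversibly discarding the last few seconds of writes
zpool import -f -F Optimus_Prime
```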
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Ok. So the first thing I would say is: stop playing trial and error. Your pool seems to be recoverable, but every time you try something new, you may very well destroy it once and for all.

I will look into this, and maybe other seniors on the forum will also have a look, but for your own safety, stop playing trial and error...
 

oumpa31

Patron
Joined
Apr 7, 2015
Messages
253
I was able to save everything. I had to use:
Code:
zpool import -m -F -f -R /mnt Optimus_Prime
That fixed things and forced the pool to mount on the system. Then I had to run:
Code:
zpool export Optimus_Prime
which exported it from the system. That allowed me to import it through the GUI.
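A breakdown of those flags, for anyone landing here with the same symptoms (the -R /mnt mount point follows the TrueNAS convention; adjust for your own system):

```shell
# -m      import even if log (SLOG) devices are missing
# -F      rewind recovery: roll back to the last importable transaction
#         group, discarding the last few seconds of writes
# -f      force import despite the "last accessed by another system"
#         hostid check
# -R /mnt set an alternate root so datasets mount under /mnt,
#         where TrueNAS expects them
zpool import -m -F -f -R /mnt Optimus_Prime

# Export cleanly so the middleware/GUI can re-import the pool
# and take ownership of it:
zpool export Optimus_Prime
```

As the earlier alert noted, a scrub of the pool is strongly recommended after a rewind recovery.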
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Good that you got access to your pool and data back. Still, trial and error often ends in more errors and catastrophe than success. Next time, you may be better off asking before gambling on your luck...

On to your backups now :smile:
 