Unable to go from Core to SCALE

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
I tried going from Core 13 U6.1 to SCALE 23.10.2. The first time it did the conversion, my pool was gone when the system came up, with SCALE claiming it was encrypted... it isn't. I rolled back to Core, and when I tried again I got the following.

Update

Error: 30 is not a valid PoolStatus

Since I have another server with my data, I am going to nuke my primary server and rebuild, as I no longer trust that the pool is stable. Just a heads up. If there is other info you want, let me know.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456

Failed to check for alert ZpoolCapacity: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
res = MIDDLEWARE._run(*call_args)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
return methodobj(*params)
File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
return f(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 87, in query
pools = [i.__getstate__(**state_kwargs) for i in zfs.pools]
File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 87, in query
pools = [i.__getstate__(**state_kwargs) for i in zfs.pools]
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 87, in <listcomp>
pools = [i.__getstate__(**state_kwargs) for i in zfs.pools]
File "libzfs.pyx", line 2489, in libzfs.ZFSPool.__getstate__
File "libzfs.pyx", line 2693, in libzfs.ZFSPool.healthy.__get__
File "libzfs.pyx", line 2675, in libzfs.ZFSPool.status_code.__get__
File "/usr/local/lib/python3.9/enum.py", line 384, in __call__
return cls.__new__(cls, value)
File "/usr/local/lib/python3.9/enum.py", line 702, in __new__
raise ve_exc
ValueError: 30 is not a valid PoolStatus
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/alert.py", line 740, in __run_source
alerts = (await alert_source.check()) or []
File "/usr/local/lib/python3.9/site-packages/middlewared/alert/source/zpool_capacity.py", line 48, in check
for pool in await self.middleware.call("zfs.pool.query"):
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1283, in call
return await self._call(
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1248, in _call
return await self._call_worker(name, *prepared_call.args)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1254, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1173, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1156, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
ValueError: 30 is not a valid PoolStatus

2024-03-05 11:17:20 (America/Los_Angeles)
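
For anyone who lands on this later: the "is not a valid PoolStatus" text comes from Python's enum machinery, not from ZFS itself. In the traceback above, libzfs hands the middleware a numeric pool status code (30 here, 31 later in the thread) and the bindings map it onto a PoolStatus enum; any code the enum doesn't define raises exactly this ValueError. A minimal sketch of that failure mode, with a made-up member list (the real py-libzfs enum defines far more codes):

import enum

class PoolStatus(enum.IntEnum):
    # Illustrative members only; not the real py-libzfs definition.
    OK = 0
    DEGRADED = 1
    FAULTED = 2

raw_code = 30  # numeric status reported by libzfs for the migrated pool

try:
    PoolStatus(raw_code)
except ValueError as err:
    print(err)  # -> "30 is not a valid PoolStatus"

So the message only means the pool is reporting a status code the installed bindings don't recognize; by itself it says nothing about data integrity one way or the other.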
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
I destroyed the pool, tried again, and got the above. I'm thinking the system is FUBAR... format/reload time.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
So now I totally reformatted the server... now when I create a pool it is locked and I am unable to unlock it. What is going on? I am trying to use the latest Cobia release of SCALE.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
res = MIDDLEWARE._run(*call_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
return self._call(name, serviceobj, methodobj, args, job=job)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
return methodobj(*params)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 50, in nf
res = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 181, in nf
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset.py", line 169, in do_update
with libzfs.ZFS() as zfs:
File "libzfs.pyx", line 529, in libzfs.ZFS.__exit__
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset.py", line 178, in do_update
self.update_zfs_object_props(properties, dataset)
File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset.py", line 247, in update_zfs_object_props
verrors.check()
File "/usr/lib/python3/dist-packages/middlewared/service_exception.py", line 70, in check
raise self
middlewared.service_exception.ValidationErrors: [EINVAL] properties.org.freenas:quota_warning: Property does not exist and cannot be inherited
[EINVAL] properties.org.freenas:quota_critical: Property does not exist and cannot be inherited
[EINVAL] properties.org.freenas:refquota_warning: Property does not exist and cannot be inherited
[EINVAL] properties.org.freenas:refquota_critical: Property does not exist and cannot be inherited

"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 201, in call_method
result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1342, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 180, in update
return await self.middleware._call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1342, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 197, in nf
rv = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf
res = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset.py", line 894, in do_update
await self.middleware.call('zfs.dataset.update', id, {'properties': props})
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1399, in call
return await self._call(
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1342, in _call
return await methodobj(*prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 180, in update
return await self.middleware._call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1350, in _call
return await self._call_worker(name, *prepared_call.args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1356, in _call_worker
return await self.run_in_proc(main_worker, name, args, job)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in run_in_proc
return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middlewared.service_exception.ValidationErrors: [EINVAL] properties.org.freenas:quota_warning: Property does not exist and cannot be inherited
[EINVAL] properties.org.freenas:quota_critical: Property does not exist and cannot be inherited
[EINVAL] properties.org.freenas:refquota_warning: Property does not exist and cannot be inherited
[EINVAL] properties.org.freenas:refquota_critical: Property does not exist and cannot be inherited
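
For reference, the org.freenas:quota_warning / *_critical entries are ZFS user properties the middleware keeps on datasets, and the validation error above is the middleware refusing to "inherit" a user property that was never set anywhere. A small read-only Python sketch that shells out to zfs get to see whether those properties exist at all; the dataset name is a placeholder, not a value from this system:

import subprocess

# Placeholder dataset; substitute the real pool/dataset name.
DATASET = "tank/mydata"
PROPS = ",".join([
    "org.freenas:quota_warning",
    "org.freenas:quota_critical",
    "org.freenas:refquota_warning",
    "org.freenas:refquota_critical",
])

# Equivalent to: zfs get -H -o property,value,source <props> <dataset>
# A value/source of "-" means the user property was never set on the dataset
# or any of its ancestors, which is what the inherit check is complaining about.
result = subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value,source", PROPS, DATASET],
    capture_output=True, text=True, check=True,
)
print(result.stdout, end="")

# One possible workaround (an assumption, not an official fix) is to set the
# property explicitly so a later inherit has something to act on, e.g.:
#   zfs set org.freenas:quota_warning=80 tank/mydata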
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Any ideas?
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Does Cobia create encrypted pools by default now? If so, how do I permanently unlock them?
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
So I installed Angelfish and a pool created normally. When I tried to update to Cobia I got:

Update

Error: 31 is not a valid PoolStatus
 

ABain

Bug Conductor
iXsystems
Joined
Aug 18, 2023
Messages
172
Did you try to go directly from Angelfish to Cobia?
Did you follow the migration guide provided on the docs website? We need to understand more about the process you followed and about what was configured prior to the upgrade.

 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
So I first had a TrueNAS Core machine at the latest version, and then I tried to do a conversion from Core to SCALE. When SCALE came back up it showed a locked pool and I was unable to unlock it. Then I deleted the pool while in SCALE (this was Cobia) and tried to recreate it. It created a blank pool that was locked, and I could not unlock it.

So then I totally reformatted the server and installed Cobia directly. I then created a pool, and it was created locked and I was unable to unlock it. I then reformatted the server again and installed Angelfish. When I installed Angelfish it created the pool correctly. When I tried to upgrade to Cobia, it gave me that invalid pool status error. So every time I tried to use Cobia it either gave me the invalid pool status error or it would just create a locked pool.

I finally said screw it, reformatted the machine for like the fourth time, reinstalled Core, and pulled all my data from my other Cobia server back over onto my TrueNAS Core machine. Honestly, at this point I'm going to take my backup server, format Cobia off of it, and go back to TrueNAS Core on both of my machines. At this point, I no longer trust TrueNAS SCALE to be a good steward of my data.
 

ABain

Bug Conductor
iXsystems
Joined
Aug 18, 2023
Messages
172
I then created a pool, and it was created locked and I was unable to unlock it

So you did not select the option to create an encrypted pool? A new pool should not be encrypted unless that option is selected in the UI; I have never seen this happen.
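
One quick, read-only sanity check (not an official procedure) that would help here: confirm from the shell whether the datasets are actually encrypted at the ZFS level, or whether the UI is only showing the pool as locked. The pool name below is a placeholder:

import subprocess

POOL = "tank"  # placeholder pool name

# Equivalent to: zfs get -r -o name,property,value encryption,keystatus,keyformat tank
result = subprocess.run(
    ["zfs", "get", "-r", "-o", "name,property,value",
     "encryption,keystatus,keyformat", POOL],
    capture_output=True, text=True, check=True,
)
print(result.stdout, end="")
# If every dataset reports encryption=off, nothing is really encrypted and the
# "locked" state would point at a middleware/UI problem rather than missing keys.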

If you have a debug from the system which was clean installed and a locked pool was created blocking you, I'd appreciate it if you could direct message me that so I can take a look and see if we can understand the issue.

Did you attempt to upgrade directly from Angelfish to Cobia? If so, this path is not supported; we specify that you need to step through the latest release of each version in order, so from Angelfish you'd need to upgrade to Bluefin first and then upgrade to Cobia.

 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
I recreated your exact problem accidentally, just playing around with a fresh VM of TrueNAS Core, upgrading to SCALE, and then trying to roll back. It took me less than 15 minutes to break it.

Including your first error and the second one - this was a BRAND NEW SYSTEM.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Well, rolling back breaks the system, as I found out by reading the documentation, but that's not where the problem was. I had a fully operational system on TrueNAS Core, and when I upgraded to SCALE it completely destroyed my pool. Luckily I had a backup, and I wound up just having to start completely over; I tried to upgrade again and it still destroyed the pool. I don't know why this was happening, and after messing with it for a good 4 to 6 hours, I finally just formatted the machine, loaded Core, replicated all my data back, and stuck with Core. My other SCALE machine will stay on SCALE, but honestly Core to SCALE right now, at least for me, is dangerous and destructive to your pool.
 

ABain

Bug Conductor
iXsystems
Joined
Aug 18, 2023
Messages
172
If this is reproducible, we would really appreciate bug tickets with debugs in the private uploads; if a debug is available before and after upgrade, that would be really helpful. We have not been able to reproduce this, so we really need some data to investigate this.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
This was one of those times that I wasn't going to go through the whole process again. It took my entire home network down in terms of data storage and the only reason I got everything back is because I was smart and have a second server that everything's replicating to. But I was able to have it happen twice and at that point I wasn't willing to go any further. Maybe at some point in the future I'll take the risk again because now I know what all is going to be entailed. If I decide to go through this again then I'll be more than happy to file a bug report.
 

ABain

Bug Conductor
iXsystems
Joined
Aug 18, 2023
Messages
172
This was one of those times that I wasn't going to go through the whole process again. It took my entire home network down in terms of data storage and the only reason I got everything back is because I was smart and have a second server that everything's replicating to. But I was able to have it happen twice and at that point I wasn't willing to go any further. Maybe at some point in the future I'll take the risk again because now I know what all is going to be entailed. If I decide to go through this again then I'll be more than happy to file a bug report.
Totally understand that. If you have a debug on the Core system you want to send me directly, it might help us identify what we need to configure to reproduce the issue; my concern is that unless we have a data point we will struggle to identify the root cause.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
No, I do not, as I had to fully wipe the system to get it to reinstall and not create corrupted pools... I will consider trying it again in the next couple of weeks...
 

ABain

Bug Conductor
iXsystems
Joined
Aug 18, 2023
Messages
172
No I do not as I had to fully wipe the system to get it to reinstall and not create corrupted pools....I will consider trying it again in the next couple of weeks....
Apologies, I thought from your previous post that you'd already recovered the system to Core, hence the ask for a debug from that config so we could try to replicate the Core environment.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Yeah, I'm back to Core on my primary servers. I'm not a huge fan of SCALE; it's just my own personal preference. But since I know that Core is pretty much done, I'm eventually going to try to migrate again. If it blows up again, I'll definitely make a bug report and reference this thread. I'll make sure I take a debug before I try the conversion, and we'll see what happens.
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
So I got back to the home office today and I have some quiet time, so I made a debug, saved the config, and I am now going from TrueNAS Core to the latest release of Cobia. Let's see what happens.
 