Citadel - Build Plan and Log

ctag

I downloaded the key.json file from the "Backups" ancestor dataset, edited it to point at the vbasftp dataset, and uploaded that as vbasftp's keyfile. The dataset unlocked! But after rebooting the TrueNAS system it is back to locked, so this is only a temporary workaround...

2023-01-08_11-44.png
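For reference (and if I recall the format correctly), the exported key file is just JSON mapping a dataset name to its hex key, so "pointing it at" another dataset is a one-line edit. A sketch with example dataset names and a truncated key:
Code:
# key.json as exported from the parent dataset (names and key are examples)
{"storage/backups": "1f2e3d4c...64 hex characters...9a"}

# edited copy, uploaded as the child's keyfile
{"storage/backups/vbasftp": "1f2e3d4c...64 hex characters...9a"}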
 

ctag

I haven't experienced this before:

As part of troubleshooting the dataset issues above, I turned off all the VMs and apps, and had the system reboot. Instead of showing the waiting screen while that was happening, I got Cloudflare's "yo stuffs broke" screen. Once the system was up a few minutes later, I got an email, and refreshed the web UI page to log back in. It showed the login screen, and when I entered the credentials and clicked through, it went back to the Cloudflare server down page and the system rebooted again.

I'm not sure, but the first thing that came to mind was that the URL was still "/ui/system/reboot" or similar, and somehow logging back in had triggered another reboot. If that's the case, it's not great.

* edit: I know that the system had come back up, and did reboot again, because it's sitting under my desk and I can hear the fans all stop during reboot.
 

ctag


Backing up ix-applications​

I recently tried to run backups of my ix-application into the storage pool, and did it wrong a few times before (maybe) figuring things out.

Apps: application/ix-applications
Backups: storage/backups/...

My first attempt was to directly replicate the ix-applications dataset into the storage pool. I made a dataset inside of storage/backups and set up a replication task in the GUI. It completed, but only about 50 MB of ~134 GB was transferred. For a while I suspected that maybe the total just hadn't updated in the UI yet, or was compressed, but eventually it was clear that the transfer had simply failed.
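In hindsight, comparing source and destination from the shell shows quickly whether anything meaningful was sent; something along these lines (the destination name is whatever was created under storage/backups):
Code:
# How much data the apps dataset actually holds
zfs list -o name,used,refer application/ix-applications
# What landed on the destination (recursive)
zfs list -r -o name,used,refer storage/backups/<backup-dataset>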

Eventually I heard about HeavyScript and read through the readme here: https://github.com/Heavybullets8/heavy_script

I set up the backups on a cron job, and then followed the instructions to make a slightly more complicated replication task. Suddenly 127 GB was backing up into the storage dataset, probably because now there were actual snapshots to send. I don't know how this works and shouldn't be using TN.
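My rough understanding is that replication sends snapshots, so until HeavyScript started creating them there was essentially nothing for the task to transfer. The snapshots it creates can be listed directly:
Code:
# Snapshots HeavyScript has taken of the apps dataset (most recent last)
zfs list -t snapshot -r -o name,used application/ix-applications | grep HeavyScript | tail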

Then I thought "gee, wouldn't it be great if I could back up my applications into an encrypted dataset?" and re-created the heavyscript dataset inside of storage/backups to be encrypted. After the replication failed, it returned this error:
Replication "Replicate heavyscript to backups" failed: Destination dataset 'storage/backups/heavyscript-enc' already exists and is its own encryption root. This configuration is not supported yet. If you want to replicate into an encrypted dataset, please, encrypt its parent dataset..

So I created a child dataset that inherited encryption from the parent, and tried again. This time the replication succeeded, but there was another alert:
The following datasets are not encrypted but are within an encrypted dataset: 'storage/enctest/heavyscript-testbkup' which is not supported behaviour and may lead to various issues.

And sure enough, the replication had just obliterated the child dataset's encryption, which seems like something it shouldn't do, but what do I know. So I still don't know what that first encryption-root error means, but it doesn't mean what I thought it meant.
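For anyone else hitting this, the state of the destination can at least be inspected with the standard ZFS properties; a sketch using the dataset names from the alerts above:
Code:
# Which datasets are encrypted, what their encryption root is, and their key status
zfs get -r -o name,property,value encryption,encryptionroot,keystatus storage/backups
zfs get -r -o name,property,value encryption,encryptionroot,keystatus storage/enctest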

I thought briefly about setting the encryption option inside of the replication task settings, but had no clue what it would do, and read in the TN docs:
e. (Optional) Select Encryption to add a second layer of encryption over the already encrypted dataset.

That's complete with a dead hyperlink to read more about replication encryption. So I wound up not even testing that.

Set things back up with the unencrypted backups and it's chugging away.
 

ctag


My current backup config​


On my system, apps are on an SSD-based pool, but I want backups to go to spinning disks and save some SSD space. My current understanding of HeavyScript is that it backs up some metadata and then takes a snapshot to actually preserve the apps. It also looks to me like reverting to an older state with HeavyScript is only possible if you're willing to revert all apps / the entire system back to that point.

I used this guide: https://truecharts.org/manual/SCALE/guides/backup-restore/

1690475569456.png


1690475752637.png


1690475786238.png


HeavyScript is run as a cron job:
1690476005842.png
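The cron entry boils down to something like the line below. The install path and flags are assumptions based on my reading of the HeavyScript readme at the time (-b 14 keeping 14 backups), so check the script's --help output for the current syntax:
Code:
# Nightly at 23:00 -- path and flags are examples, verify against your install
0 23 * * * bash /root/heavy_script/heavy_script.sh -b 14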


And then a replication task from that guide is set up:

1690476191095.png

1690476215913.png

1690476244535.png
 

ctag


Recovering files from a deleted app​

As part of bringing my machine back up to date after the large TrueCharts re-write earlier this year, I deleted most of my apps and just re-installed them, opting to start over in most cases. But while doing so I forgot about a custom HTML file I had placed in a WordPress app, and I wanted that file back! I wound up using this guide: https://truecharts.org/manual/SCALE/guides/pvc-access/

Part of the replication settings for the apps backup is that snapshots beginning with "ix-applications-backup..." are also copied. So I used these commands to find snapshots that pertain to my old, now-deleted app:
# zfs list -t snap > snapshots.txt
# grep -i "ix-app" snapshots.txt | grep -i wordpress | less

Then, inside of `less` I look for snapshots that aren't empty and account for some data:
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes@ix-applications-backup-HeavyScript_2023_07_09_23_00_01 0B - 205K -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes@ix-applications-backup-HeavyScript_2023_07_10_23_00_02 0B - 205K -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes@ix-applications-backup-HeavyScript_2023_07_11_23_00_01 0B - 205K -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes@ix-applications-backup-HeavyScript_2023_07_12_23_00_01 0B - 205K -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes@ix-applications-backup-HeavyScript_2023_07_13_23_00_01 0B - 205K -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_08_17_03_44 2.97M - 873M -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_08_23_00_01 3.95M - 874M -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_09_23_00_01 3.04M - 842M -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_10_23_00_02 1.77M - 841M -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_11_23_00_01 2.32M - 842M -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_12_23_00_01 1.58M - 842M -
storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_13_23_00_01 0B - 847M -
storage/backups/heavyscript/releases/wordpress-berocs@ix-applications-backup-HeavyScript_2023_07_08_17_03_44 0B - 222K -
storage/backups/heavyscript/releases/wordpress-berocs@ix-applications-backup-HeavyScript_2023_07_08_23_00_01 0B - 222K -
storage/backups/heavyscript/releases/wordpress-berocs@ix-applications-backup-HeavyScript_2023_07_09_23_00_01 0B - 222K -
storage/backups/heavyscript/releases/wordpress-berocs@ix-applications-backup-HeavyScript_2023_07_10_23_00_02 0B - 222K -
storage/backups/heavyscript/releases/wordpress-berocs@ix-applications-backup-HeavyScript_2023_07_11_23_00_01 0B - 222K -
storage/backups/heavyscript/releases/wordpress-berocs@ix-applications-backup-HeavyScript_2023_07_12_23_00_01 0B

Once I had the snapshot name, I searched for it in the snapshot screen of the TrueNAS UI, and then cloned it into a temporary dataset.
1690484017239.png
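The same clone can be done from the shell; a sketch using one of the snapshot names above and a throwaway target dataset:
Code:
# Clone the snapshot into a temporary dataset (target name is an example)
zfs clone storage/backups/heavyscript/releases/bns-wordpress/volumes/ix_volumes/bns-wordpress-data@ix-applications-backup-HeavyScript_2023_07_12_23_00_01 storage/temp-restore
# ...copy the file out from under /mnt/storage/temp-restore, then clean up:
zfs destroy storage/temp-restore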
 

ctag

I'm interested in setting up multi-report, but I get some output about partitions that winds up being sent as an email.

# ./multi_report.sh
Multi-Report v2.4.3 dtd:2023-06-16 (TrueNAS Scale 22.12.3.1)
Checking for Updates
Current Version 2.4.3 -- GitHub Version 2.4.3
No Update Required
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.

I assume that's because the partitions were laid out for 512-byte sectors instead of 4096-byte ones?
When I have time I need to look into what command is making that output, what it means, and if there's a way to correct it.
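That looks like the warning fdisk prints when a partition's start offset isn't a multiple of the disk's physical sector size, which would fit the 512 vs. 4096 theory. A way to check, with an example device name:
Code:
# Logical vs. physical sector sizes reported by the disk
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda
# fdisk prints the same per-partition boundary warning
fdisk -l /dev/sda
# parted can confirm whether a given partition (here: 1) is optimally aligned
parted /dev/sda align-check optimal 1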
 

ctag


Setting up tt-rss with docker-compose TrueCharts app​

In the spring of 2023 TrueCharts underwent a large re-write that required reinstalling apps, and it appears to have even broken some. For me, the tt-rss app no longer works. I didn't want to fully escape-hatch to a VM just for an RSS reader, so I turned to the docker-compose TrueCharts app as a stopgap.

Docker-Compose is also broken​

While the docker-compose app can be coerced into running, it doesn't work out-of-the-box for me. The app will start, but it won't launch the service defined in the compose YAML inside of it. It looks like many users watch this tutorial video and get confused, but that video is out of date now and not applicable.


Workaround Setup with tt-rss​

There are two key changes to get docker-compose working: don't use the UI field for the compose.yaml file, and don't use the ingress fields when setting up.

I also found that I needed to fully delete the app and re-create docker-compose a second time for these steps to work, so don't give up immediately if it doesn't all start working!

Set up the compose.yaml file manually​

You must already have the 'stable' TrueCharts catalog set up and synced on your TrueNAS instance.

In the TrueNAS UI, navigate to the 'Apps' page and then the 'Available Applications' tab. Click the 'Refresh All' button at the top if you haven't synced the catalog recently. Then search for 'docker' in the search bar at the top, and click 'Install' on docker-compose:
dc_install_1.PNG


In the app-setup pane that opens, it will ask for a docker compose file. Leave this field blank:
dc_1.png


Under "Networking and Service" enable expert config, and create a custom service:
dc_2.png


And then add your app's service port:
dc_3.png


Under "Storage and Persistence" add additional app storage to your docker yaml file:
dc_dockerfiles.PNG


I also added the following for /var/lib/docker but I don't remember why. Maybe try leaving this out, and see if things work.
dc_libdocker.PNG


Finish the dialog and click save. Wait for the app to finish installing and become 'active', then click on the triple-dot context menu and click "Shell".
dc_4.PNG


In the "Choose pod" dialog leave the default options and click the "Choose" button. From the shell, navigate to your compose.yaml file, and run 'docker compose -f /path/to/compose.yaml up -d'. It doesn't seem to work without the -f flag.
dc_shell.PNG
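Spelled out, the shell step is roughly the following (the path is an example for wherever the compose file was mounted in the storage step above):
Code:
# Bring the stack up in the background (it didn't work for me without -f)
docker compose -f /mnt/compose/compose.yaml up -d
# Confirm the containers started, and tail logs if they didn't
docker compose -f /mnt/compose/compose.yaml ps
docker compose -f /mnt/compose/compose.yaml logs --tail=50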


Now your compose service should be running, and will survive restarting the app!

Use external-service to access app​

This next step requires that you've set up ingress/traefik nonsense already.
I was not able to get the settings in docker-compose configured to let me access the app directly. What did work was setting up an external-service app and pointing that to the port we exposed above. Create an external-service, enable ingress, add a Host and set the HostName to your Cloudflare DNS subdomain for the app.
ext_1.png


Under "Networking and Services" configure the service IP to your TN IP and set the port.
ext_2.png


Save the dialog and the app should be accessible!
 

ctag

Got the snapshots alert again:
1697050418409.png


It's reminding me that FreeNAS used to be a way for me to learn neat things and tinker with BSD, and now TrueNAS is a way for me to learn little monkey-imitation rituals to trick the UI into doing the thing I want. Weird to feel nostalgic over such a sharp delineation.

Anyway, I wound up using the zfs-prune-snapshots shell tool:

1697052145119.png
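If it helps anyone: zfs-prune-snapshots takes an age threshold and an optional dataset to limit its scope. The flags here are from my reading of its README and may have changed, so do a dry run first:
Code:
# Dry run: show snapshots older than 30 days under the backup dataset, without deleting
zfs-prune-snapshots -n 30d storage/backups/heavyscript
# Then actually prune them
zfs-prune-snapshots 30d storage/backups/heavyscript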
 

ctag

Updated to TrueNAS-SCALE-22.12.4.2 yesterday, and it apparently didn't play super nice with HeavyScript. The script output looks OK though; it updated all the apps.

1699536617940.png
 

ctag

I tried following the directions in that alert, and manually ousting one snapshot, but it didn't seem to work. The alert just came back complaining about another snapshot timestamp. So I removed all of the snapshots from 11/8 and 11/9, which is frustrating and scary after a big system update.
2023-11-13_18-54.png
 

ctag

The alert came back again:
1699960426132.png



... And all of the snapshots I deleted yesterday are back?(!)
1699960623020.png
 

ctag

I also got this doozy, but it cleared itself:

New alerts:
  • Failed to check for alert ScrubPaused: concurrent.futures.process._RemoteTraceback: """ Traceback (most recent call last): File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker r = call_item.fn(*call_item.args, **call_item.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker res = MIDDLEWARE._run(*call_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run return self._call(name, serviceobj, methodobj, args, job=job) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/client/client.py", line 292, in __init__ raise ClientException('Failed connection handshake') middlewared.client.client.ClientException: Failed connection handshake """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/middlewared/plugins/alert.py", line 800, in __run_source alerts = (await alert_source.check()) or [] ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/alert/source/scrub_paused.py", line 18, in check for pool in await self.middleware.call("pool.query"): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call return await self._call( ^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1341, in _call return await methodobj(*prepared_call.args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf res = await f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf return await func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 156, in query result = await self.middleware.call( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call return await self._call( ^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1341, in _call return await methodobj(*prepared_call.args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf return await func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/read.py", line 169, in query result = await self._queryset_serialize( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/read.py", line 221, in _queryset_serialize return [ ^ File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/read.py", line 222, in await self._extend(data, extend, extend_context, extend_context_value, select) File "/usr/lib/python3/dist-packages/middlewared/plugins/datastore/read.py", line 235, in _extend data = await self.middleware.call(extend, data, extend_context_value) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call return await 
self._call( ^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1352, in _call return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/pool.py", line 188, in pool_extend pool |= self.middleware.call_sync('pool.pool_normalize_info', pool['name']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1420, in call_sync return self.run_coroutine(methodobj(*prepared_call.args)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1460, in run_coroutine return fut.result() ^^^^^^^^^^^^ File "/usr/lib/python3.11/concurrent/futures/_base.py", line 449, in result return self.__get_result() ^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result raise self._exception File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf return await func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 44, in nf res = await f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/pool.py", line 152, in pool_normalize_info if info := await self.middleware.call('zfs.pool.query', [('name', '=', pool_name)]): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1398, in call return await self._call( ^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1349, in _call return await self._call_worker(name, *prepared_call.args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1355, in _call_worker return await self.run_in_proc(main_worker, name, args, job) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in run_in_proc return await self.run_in_executor(self.__procpool, method, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ middlewared.client.client.ClientException: Failed connection handshake
 

ctag

I guess I'm going to try to re-do the stunt from last time: turn off all the tasks and delete all the snapshots.

1700047816826.png
 

ctag

I tried:
Code:
zfs list -t snap -o name -S creation | grep -i "heavy" | grep -i "storage/backups/heavyscript" | xargs -n 1 zfs destroy -rR

To remove all of the snapshots on the backup pool.

I should probably be a lot more careful here; there are plenty of warnings from the TrueCharts account that deleting snapshots (e.g. as instructed in the alert?) is going to break things. But those warnings are also a year old, which is from the before-times in Truechartopia.
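A small safety net for next time: zfs destroy has a dry-run mode, so the same pipeline can be previewed before anything is actually deleted:
Code:
# -n = dry run, -v = print what would be destroyed; drop the n to really delete
zfs list -t snap -o name -S creation | grep "storage/backups/heavyscript" | xargs -n 1 zfs destroy -nvrR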
 

ctag

I got some advice from the truecharts discord:
1700260524411.png


And deleted the backup:
1700260480173.png


Will wait to see if I need to delete either the ones older or newer than that specific entry.
 

ctag

Several days and no new alerts about the snapshots. That seems to have fixed it.
 