Replication between two TrueNAS SCALE systems using a cloudflared tunnel

gorrunyo

Dabbler
Joined
Sep 12, 2022
Messages
17
Hi,

I want to set up two TrueNAS SCALE boxes to perform replication tasks between themselves using a Cloudflare tunnel.

For that purpose, I'm running a cloudflared pod on each machine.

So, the goal now is to persuade ssh to direct traffic through the cloudflared pods.

I have achieved that by setting a ProxyCommand in the ~/.ssh/config file (and setting up key pairs, etc.).
Now I can ssh from one machine to another without any problem.
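
For context, the stanza in ~/.ssh/config is roughly the following (hostnames are placeholders, and the exact cloudflared invocation depends on how the pod exposes the binary; this is just the usual Cloudflare Access pattern):

# /root/.ssh/config (sketch, placeholder names)
Host remote-nas
    HostName nas.example.com
    User root
    IdentityFile ~/.ssh/id_ed25519
    # send the SSH traffic through the Cloudflare tunnel
    ProxyCommand cloudflared access ssh --hostname %h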

But when configuring a replication task in the GUI, specifying an SSH connection, and using the corresponding keys, I always get a timeout error.

Do the SSH commands used by TrueNAS/ZFS use the settings in the ~/.ssh/config file?

Any help or hint would be appreciated.

Cheers
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Do the SSH commands used by TrueNAS/ZFS use the settings in the ~/.ssh/config file?
replication uses the keys set up under System. I do not believe what you are trying to do is possible. TrueNAS is an appliance, and custom configs like this typically do not work well.
 

gorrunyo

Dabbler
Joined
Sep 12, 2022
Messages
17
Thanks for the reply,

I'm not messing with the keys; I'm just setting up a ProxyCommand in /root/.ssh/config. The SSH handshake should proceed as expected.
Perhaps the replication tasks are not running as root?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
the replication tasks run as the user you configure in the webUI settings. the webUI settings do not use .ssh user file configs at all.
system>ssh connections
system>ssh keypairs

if you run the commands by hand, they will use the user's .ssh config, but this doesn't apply to the webUI replication. Or at least, I do not know of a way to get it to.

I'm about 98% sure that you would have no choice but to construct your own replication stack via cron.

I think what you are trying to do is genuinely impossible, though perhaps you could hack it to tunnel to itself and then through the proxy, but I have no idea.
 

gorrunyo

Dabbler
Joined
Sep 12, 2022
Messages
17
I see,

the replication tasks run as the user you configure in the webUI settings
I couldn't find where to specify the user for replication tasks. Can you provide a hint?

if you run the commands by hand, they will use the user's .ssh config, but this doesn't apply to the webUI replication
What do you think about setting the ProxyCommand in /etc/ssh/ssh_config? That is a system-wide configuration file.
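
For illustration, the system-wide stanza would look about the same as the per-user one; whether the middleware's replication transport actually reads /etc/ssh/ssh_config is exactly the open question (the hostname is a placeholder):

# /etc/ssh/ssh_config (sketch)
Host remote-nas
    ProxyCommand cloudflared access ssh --hostname %h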
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
/etc/ssh/ssh_config?
I didn't know that was an option; however, TrueNAS is still an appliance and very possibly doesn't even read/use that.

additionally, changes to that file (if it even exists) will likely not persist through an update, the same as any other config changes that bypass the webUI.

it really sounds like what you want to do is use a full OS, not an appliance.
 

KevinDood

Cadet
Joined
Dec 4, 2023
Messages
2
I can confirm: I have two methods working to use an SSH jump host on FreeNAS 11.3, which should work in TrueNAS just the same.


If you select "Legacy" Transport option, then the replication uses traditional config files and SSH binary file.
Newer replication methods use Python library "Paramiko" and without extensive hacking, there isn't a way to get paramiko to use a jump host directly.

The side effect of using Legacy mode is that the log is very noisy, which is annoying if you get email alerts. The newer replication methods give you a lot more control over scheduling and such.

If you still go the Legacy route, use this in /root/.ssh/config on the FreeNAS host (a persistent config file):

Host jumphost
    User root
    Hostname some.public.ip.address
    Port 1234

Host offsite
    User root
    Hostname 192.168.1.100
    Port 4321
    ProxyJump root@jumphost

Steps:
1. Via the shell, run "ssh jumphost", accept the host keys, then add your SSH public key to ~/.ssh/authorized_keys on jumphost (or use ssh-copy-id; see the sketch after these steps).
2. Try again; jumphost should now be passwordless.
3. Run "ssh offsite", accept the host keys, then add the same SSH public key to authorized_keys again, this time on offsite.
4. Try again; offsite should be passwordless.
5. You may have to manually find the host key of your target server to populate in the settings:
ssh-keyscan -p 4321 offsite
6. Then add "offsite" in System -> SSH Connections, and use it in the replication task.



Keeping the modern "SSH" replication transport is still possible if we are a bit more clever: keep a persistent SSH tunnel to your offsite host, with automatic retries if the connection drops.

Assuming you already have the persistent /root/.ssh/config file from the earlier steps, create a script at /root/jump.sh:
#!/bin/bash

# Simple check that the connection actually reaches offsite, not just jumphost:
# localhost:1234 is the local end of the forwarded tunnel, and offsite runs ZFS,
# so if "zfs version" answers with its kmod line, the tunnel must be up.
if ! ssh -p 1234 127.0.0.1 zfs version 2>&1 | grep -q kmod; then
    echo "Restart SSH tunnel to offsite"
    ssh -f -N -L 1234:192.168.1.100:4321 jumphost
fi
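
Presumably the script also needs to be executable, and it is worth one manual run before handing it to cron (the zfs check is the same one the script itself uses):

chmod +x /root/jump.sh
/root/jump.sh                        # first run should report "Restart SSH tunnel to offsite"
ssh -p 1234 127.0.0.1 zfs version    # should now answer from the offsite box through the tunnel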
And a cron job. (* You can enable the output/errors if you wish to get email alerts when the connection drops.)
[Screenshot: cron job settings for /root/jump.sh]
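
For reference, the equivalent plain crontab entry would be something like the following (the five-minute interval is just a guess; the screenshot above shows the webUI cron task instead):

# run the tunnel check every 5 minutes; drop the redirection to get cron mail when the tunnel restarts
*/5 * * * * /root/jump.sh > /dev/null 2>&1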


Now your System -> SSH Connections entry is easy... and the "remote host key" button works.
[Screenshot: System -> SSH Connections settings for the offsite host]

Python Paramiko will be none the wiser. Logs and email alerts are nice and clean; see /var/log/zettarepl.log.
 