FreeNAS cryptovirus resilience

kenster

Dabbler
Joined
Nov 15, 2019
Messages
38
I'm new to FreeNAS and considering it very seriously as the file system of choice due to its ability to create snapshots and replicate them offsite. The snapshots are a great way to protect against cryptoviruses on the local network (barring a root compromise on the FreeNAS, obviously) that could delete my snapshots.

So my question is: if my local FreeNAS was root compromised by something sitting on my network that captures my root credentials, and I've set up remote snapshot replication to a remote FreeNAS, does the local root access give enough access to the remote FreeNAS to destroy the backup?

What happens if my pool was destroyed? Am I still able to recover the remote snapshots into a newly recreated pool?
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
If your local system is sending snapshots to a remote system and the local root is compromised, then yes, the remote duplicate could be destroyed. The local credentials would be enough to connect to the remote and destroy the snapshots.
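For illustration, a minimal sketch of what a push setup boils down to (pool and host names are hypothetical):

    # Incremental push from the local box; local root holds an SSH key
    # that the remote system accepts.
    zfs send -i tank/data@snap1 tank/data@snap2 | \
        ssh root@remote-nas zfs receive backup/data

    # Which is exactly why a compromised local root can also run:
    ssh root@remote-nas zfs destroy -r backup/data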

Another way to do this would be to have the remote system connect and pull the snapshots. Your local system wouldn't have a way to initiate the connection to the remote, so at worst it would have to wait for the connection and try to ride it back to destroy the backups. Nothing is impossible, but I would think having the remote side initiate the snapshot transfer would be harder to compromise.
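As a sketch of the pull direction (hypothetical names again), the backup box dials in, and the source never holds credentials for it:

    # Run on the remote backup system; 'replicator' is a hypothetical
    # restricted account on the source.
    ssh replicator@source-nas zfs send -i tank/data@snap1 tank/data@snap2 | \
        zfs receive backup/data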

If your pool is destroyed and the remote snapshots are available, you would absolutely be able to rebuild the local system and then restore the snapshots to your new local pool.
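A rough sketch of that recovery, assuming hypothetical names and a full replication stream on the remote:

    # Pull everything back into a freshly created local pool. -R sends
    # the dataset with all of its snapshots; -F lets the receive
    # overwrite the new, empty dataset.
    ssh replicator@remote-nas zfs send -R backup/data@latest | \
        zfs receive -F tank/data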
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
The best way to protect against crypto is to remove local admin from all systems/users. For a Windows shop, use LAPS. This alone removes 80%+ of attacks.
Then use an on-access virus/malware scanner like Bitdefender, and a good spam filter like Mimecast or Proofpoint.

Any online backup can be compromised if a root account is compromised.
This requires offline backups; for larger datasets the best bet is tape, LTO-8 or LTO-M8 (lower cost per TB).
 

kenster

Dabbler
Joined
Nov 15, 2019
Messages
38
fracai said:
If your local system is sending snapshots to a remote system and the local root is compromised, then yes, the remote duplicate could be destroyed. The local credentials would be enough to connect to the remote and destroy the snapshots.

Thanks for confirming.

fracai said:
Another way to do this would be to have the remote system connect and pull the snapshots. Your local system wouldn't have a way to initiate the connection to the remote, so at worst it would have to wait for the connection and try to ride it back to destroy the backups. Nothing is impossible, but I would think having the remote side initiate the snapshot transfer would be harder to compromise.

Lovely idea. I didn't know that was possible, but it seems like a fairly bulletproof way to approach things.

fracai said:
If your pool is destroyed and the remote snapshots are available, you would absolutely be able to rebuild the local system and then restore the snapshots to your new local pool.

Thanks for confirming this as well.

Jessep said:
The best way to protect against crypto is to remove local admin from all systems/users. For a Windows shop, use LAPS. This alone removes 80%+ of attacks.

Remove local admin, as in not having cached admin creds on a workstation?
LAPS is new to me; I shall investigate, thanks.

Jessep said:
Then use an on-access virus/malware scanner like Bitdefender, and a good spam filter like Mimecast or Proofpoint.

Just rolling out Bitdefender.

Jessep said:
This requires offline backups; for larger datasets the best bet is tape, LTO-8 or LTO-M8 (lower cost per TB).

I've come to the same conclusion, but I don't like the amount of work tape will take... it may be the only way, though.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,545
Depending on what the workflow in your environment looks like, you can set vfs_worm on SMB shares. https://www.samba.org/samba/docs/current/man-html/vfs_worm.8.html
The vfs_worm module controls the writability of files and folders depending on their change time and an adjustable grace period.

If the change time of a file or directory is older than the specified grace period, the write access will be denied, independent of further access controls (e.g. by the filesystem).

In the case that the grace period is not exceeded, the worm module will not impact any access controls.
I wish the unit of time was larger than seconds (because you can end up with some obnoxiously large numbers).
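For reference, a minimal smb.conf sketch (share name and path are hypothetical; the grace period below is 7 days expressed in seconds):

    [archive]
        path = /mnt/tank/archive
        # Deny writes to anything whose change time is older than
        # 7 days (604800 seconds).
        vfs objects = worm
        worm:grace_period = 604800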
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
kenster said:
Remove local admin, as in not having cached admin creds on a workstation?

No, it means do not allow local login with admin accounts; require user accounts to be non-admin.
https://blog.stealthbits.com/running-laps-in-the-race-to-security/

For our shop, IT admins have three accounts: user, elevated (local admin), and domain admin.

User for everyday tasks: email, helpdesk, etc. on a personal workstation.
Elevated for installs or anything else needing elevation (UAC); never used to log in to any system.
Domain admin for tasks requiring it: AD, cluster tasks, vCenter, etc. Only used to log in to servers.
  • Domain admin accounts all use MFA with hardware tokens (YubiKey).
 

kenster

Dabbler
Joined
Nov 15, 2019
Messages
38
So if we split management onto a separate VLAN for root access to FreeNAS, where the production network can't reach root, with a dedicated PC on that VLAN, plus pull backups to a remote location that isn't connected by any VPNs, we should have a fairly good plan in place? I'm not very keen on tape backups due to the constant labour required.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,464
kenster said:
something sitting on my network that captures my root credentials
If you're even mentioning this possibility, anything that would transmit the root credentials should be encrypted. Setting up the GUI to be served over HTTPS isn't difficult--see:
 

kenster

Dabbler
Joined
Nov 15, 2019
Messages
38
yeah, I'm just thinking out loud.

If my own PC was compromised and I wasn't aware of it, my SSH / HTTPS session to FreeNAS could be captured. Also, if my network was compromised and a SOCKS proxy was on the network, it could capture my traffic.

If I separate out root access to FreeNAS to a separate VLAN that has only one PC on it, used solely for management of this stuff, it would reduce the chance of my FreeNAS getting root compromised / snapshots deleted pretty much to zero.

Do you see any flaws in that logic?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Kenster,

This situation is the very case for which the 3rd copy is required (see the 3-copies rule in my signature).

Because copies 1 and 2 are online, they can both be affected by a common online event. By keeping a copy offline, that one is immune to any and every online event. It would only need to be brought online for a sync after a common incident destroyed both copies 1 and 2. For that, I just ensure both 1 and 2 are fine before booting up my 3rd server, and I power it back off once the sync is done.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
This is why tape is coming back strong in the industry.
We just bought enough tapes to hold 90 days offline to meet the offline requirements. We already had onsite backups, offsite backups, and quarterly tape. Now we are moving to daily tape as well as offline storage for all configuration data (switch configs, etc.).

If this is for home, just use two accounts: one for everyday use and one for elevation (admin). The issue is that most malware will use your logged-in creds to install or move laterally. If your logged-in account doesn't have privileges, it can't do much, which is why that step alone can stop 80%+ of attacks.
 

kenster

Dabbler
Joined
Nov 15, 2019
Messages
38
Yeah, this is where I flip-flop.

If the root user is never logged in to FreeNAS, that means my snapshots are basically safe, especially with a "pulled" copy of the snapshots at another location.

Having a shut-down backup that I turn on once a month on the same network is basically as risky, IMHO, as the scenario above. If the shut-down server was on a remote network / location pulling backups, then it's protected against anything on the source network that would try to compromise it when it starts up.

Tape seems like the only 100% sure way, but I really don't like the idea much at all.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
kenster said:
then it's protected against anything on the source network that would try to compromise it when it starts up
It occurs to me that the suggestion to have the remote server pull instead of the local server push just moves the vulnerability to the remote side instead of the local one. Anything that compromises that remote server can use the pull credentials to attack the local server.

Ultimately, perfect security is unattainable. Excuse me while I disable my firewall ;-)

Really, every piece is still just a link in a chain that can fail. Best practice is always going to be to minimize the attack surface and protect your keys. Even tape can be written over. There was even that example where the tape was spun back and forth very quickly, causing the reel to combust.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
kenster said:
if the root user is never logged in to FreeNAS, that means my snapshots are basically safe

Unfortunately, your root account is always logged in, by definition. Root is the representation of the kernel, so because your kernel is running, there are root processes running. A vulnerability in SSH will lead to a compromised root account even if you never log in interactively yourself.

As @fracai said, no security is perfect. The offline backup on the same site is exposed to the same physical threats as the main copy. But here, your question was about protecting the copy against a logical threat. A single copy can hardly be protected against both physical and logical threats.

Your second copy needs to stay as close as possible to the first one, so that the data you recover is as close as possible to what you lost in the incident. Also, the first FreeNAS is very robust by itself and will survive many logical incidents thanks to ZFS and snapshots, so protecting against physical incidents is the higher priority. An online copy is by far the easiest one to sync, and having it offsite still keeps it manageable remotely. That is why my second copy is online and offsite.

Only if a logical incident propagates to both of these copies will I need my third one. That case is so remote that losing a few days of data is good enough compared to losing everything. Also, the single threat that would destroy both the 1st and 2nd copies must be logical, so there is no need to mitigate physical threats with that 3rd copy. This is why my 3rd copy is offline but onsite: still easy to manage because it is onsite, but kept offline and turned on only after I have checked that the first 2 copies are still good.

Having a copy that is both offline and offsite would be even better, but again, nothing is perfect. Here, should someone come into my place, use physical access to gain root on the first system, then destroy the second remotely and then the third one, yes, that single incident will have destroyed everything. If you fear such a targeted attack, you need to protect yourself against government agencies like the NSA. It is doable, but you will live a miserable life... As for me, my iPhone and iPad, which I always keep with me, hold the very minimum I need, like a local copy of my password manager and some docs. So even after losing all of my servers, some data will survive...
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
TL;DR version:
You are not that important; the only way you are going to get crypto is through something YOU do.

Cryptoviruses are automated attacks, not manual ones. There won't be a person sitting on the other end attempting to circumvent controls.

I don't know of any current automated crypto attacks that work without local admin. This is NOT about local admin on FreeNAS; no one is likely to be writing FreeBSD attack code, and best practice would be to not have FreeNAS on the internet, certainly not the management interface. Crypto attacks are almost all behavioural attacks: phishing emails, dodgy websites, etc. The first line of defence is safe behaviour.

Where you are most likely to be compromised is Windows. Do not run as local admin on Windows; run as a normal user account. Use UAC for any admin tasks, like installing software, with a separate account that has admin privileges. The management login to FreeNAS should also be an admin account, ideally different again; however, it could likely be the same account name/password as your Windows admin account.

Even this wouldn't be 100% effective, mostly because you could still be tricked into installing the crypto yourself via dodgy software. Hence the requirement for backups.

Then, as noted by Heracles, none of this protects against physical attack.

Security is defence in depth and will never be 100% effective.
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
The easiest way would be to use a dedicated user for the replication task and an air-gapped laptop to administer the important FreeNAS.
And, of course, a root password that is not used elsewhere on your network.
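As a sketch of the dedicated-user idea using ZFS permission delegation (user and dataset names are hypothetical):

    # On the source: the replication user may only snapshot and send.
    zfs allow repluser send,snapshot,hold tank/data

    # On the backup box: it may only receive into its own tree.
    zfs allow repluser receive,create,mount backup/data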
 

kenster

Dabbler
Joined
Nov 15, 2019
Messages
38
Heracles said:
This is why my 3rd copy is offline but onsite.

This makes perfect sense. Thanks.


Jessep said:
TL;DR version:
You are not that important; the only way you are going to get crypto is through something YOU do.

Thanks for that; it also makes lots of sense. In that regard, snapshots will go a long way as one layer of protection.


John Doe said:
The easiest way would be to use a dedicated user for the replication task and an air-gapped laptop to administer the important FreeNAS.

And, of course, a root password that is not used elsewhere on your network.

My thoughts as well.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The "offline" or third backup can still be on disk. That is what I do. The offline nature of the backup doesn't require tape, only that the media can't be accessed outside of the backup process. Three separate servers withe three separate pools of disks. First server is shared to the network, second could be shared read-only, third server only pulls a backup and doesn't even have it's pool connected to a network share.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
You could schedule the remote backup to only be accessible at certain intervals, like once a week or once a month, depending on when the backup needs to be done. Then the only thing you need to do to prevent its "destruction" is not let it be contacted. I doubt a crypto virus is going to sit around and wait for the exact time to do damage. If you want, you can honeypot the remote site to look for root logins at times when they should not be happening.
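As a rough sketch of that kind of tripwire, assuming FreeBSD's default sshd logging to /var/log/auth.log and a working mail setup (the address is hypothetical):

    # Run from cron outside the backup window; any accepted root
    # login in the log at that point is suspicious.
    if grep -q "Accepted .* for root" /var/log/auth.log; then
        mail -s "unexpected root login on backup host" admin@example.com \
            < /var/log/auth.log
    fi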

fracai said:
If your local system is sending snapshots to a remote system and the local root is compromised, then yes, the remote duplicate could be destroyed. The local credentials would be enough to connect to the remote and destroy the snapshots.

That seems like a giant design flaw in FreeNAS and should be patched. The receiving system should be in control of how and when snapshots are destroyed, and there is no reason why snapshots should be sent by the root account anyway, as long as both systems agree on the authentication needed for the snapshots to be made.
 

subhuman

Contributor
Joined
Nov 21, 2019
Messages
121
fracai said:
It occurs to me that the suggestion to have the remote server pull instead of the local server push just moves the vulnerability to the remote side instead of the local one. Anything that compromises that remote server can use the pull credentials to attack the local server.
Would it?
If the remote server only had read permission, not write, on the local server, there's not much someone could do even if they got the remote server's credentials. Or am I missing something?
And as no_connection just mentioned right above me, there's no need for the local system to have root on the remote even if it does push out snapshots.
We can go a little broader and say that any automated root login is probably a bad idea.
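One concrete way to get that read-only property, as a sketch: the source can pin the backup host's key to a single command in authorized_keys, so even stolen pull credentials can only trigger a send (the wrapper script name is hypothetical; it would wrap the appropriate zfs send):

    # In ~repluser/.ssh/authorized_keys on the source; the key can only
    # run the forced command, never an interactive shell.
    command="/usr/local/bin/zfs-send-latest.sh",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... backup-host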
 