sshfs mounts

NASbox

Guru
Joined
May 8, 2012
Messages
650
I keep my music collection on my FreeNAS, and I am looking for a "semi-permanent" way to have it available on my linux desktop machine.

I would prefer not to use CIFS (I don't really trust its security, since it has been the vector for a number of recent attacks). I was thinking an easy way would be to simply do a read-only sshfs mount in my login profile. Does anyone see any problems with doing that?
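For reference, something like this is what I had in mind for the login profile (a rough sketch -- the host, user, and paths are just placeholders):

# read-only sshfs mount from my login profile -- host, user and paths are placeholders
mkdir -p ~/mnt/music
sshfs -o ro,reconnect,ServerAliveInterval=15 mediauser@freenas:/mnt/tank/music ~/mnt/music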

Any constructive feedback/suggestions would be much appreciated.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Why not use NFS? You could probably get sshfs to work, but why go through that hassle?
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Why not use NFS? You could probably get sshfs to work, but why go through that hassle?
Thanks @Samuel Tai for the reply... I have been using sshfs mounts for most of my FreeNAS access. I have scripts that do the mounts to predetermined mount points. I tend to use FN mostly as a library, so when I want to add stuff, I do a quick sshfs mount, upload my stuff, make a snapshot and then disconnect the mount.
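Roughly what my upload script does, simplified (the host and dataset names are just examples, and the snapshot step assumes my user has been delegated snapshot permission on that dataset):

# simplified upload workflow -- mount, copy, snapshot, unmount
sshfs user@freenas:/mnt/tank/music ~/mnt/music
cp -av ~/incoming/. ~/mnt/music/
ssh user@freenas zfs snapshot tank/music@upload-$(date +%Y%m%d)
fusermount -u ~/mnt/music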
I took a look at NFS a while back, and I found it hard to understand... mostly it didn't seem to have a logon, so I wondered how to keep it secure.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A generation of computer people ruined by Microsoft and their idiotic network share per-user authentication design.

NFS is the classic UNIX method of sharing files. It is literally no harder than mounting a disk. Your client users get the same permissions they would get on the server, unless you specify otherwise as part of the mount options. Your client itself needs to be authorized to access the mount, which is typically done by IP or network.

You can add the mount to your Linux desktop machine, probably in /etc/fstab, though it varies, and you can export it from the NAS as read-only if you wish. It will just always be there, it will just always work. The software is stupid-simple so there are no significant threat vectors. It works on UNIX, Linux, Mac, and Windows, though Windows is a bit marginal IMO.
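A read-only entry in the client's /etc/fstab might look roughly like this (the server name, export path, and mount point are examples; adjust to your setup):

# NFS mount on the Linux client -- names and paths are examples
freenas:/mnt/tank/music  /mnt/music  nfs  ro,nofail  0  0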

NFS, sharing files on networks since 1984.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
By the way, sshfs is insanely risky. To make sshfs work, you need ssh access to the server, which can potentially grant root access to the whole thing.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Hi @jgreco thanks for the reply...


A generation of computer people ruined by Microsoft and their idiotic network share per-user authentication design.

NFS is the classic UNIX method of sharing files. It is literally no harder than mounting a disk. Your client users get the same permissions they would get on the server, unless you specify otherwise as part of the mount options. Your client itself needs to be authorized to access the mount, which is typically done by IP or network.

You can add the mount to your Linux desktop machine, probably in /etc/fstab, though it varies, and you can export it from the NAS as read-only if you wish. It will just always be there, it will just always work. The software is stupid-simple so there are no significant threat vectors. It works on UNIX, Linux, Mac, and Windows, though Windows is a bit marginal IMO.

NFS, sharing files on networks since 1984.
When I looked at NFS (a long time ago), there were two issues that confused me:
1. The apparent lack of authorization other than by IP Address (range)
2. Dealing with permissions when the Server used different UIDs/GIDs than the clients.

By the way, sshfs is insanely risky. To make sshfs work, you need ssh access to the server, which can potentially grant root access to the whole thing.
I'm assuming that "which can potentially grant root access" only applies if the sshfs mount is set up as sshfs root@freenas:/mountpoint /localmount.
If the mount is sshfs lowprivilegeuser@freenas:/mountpoint /localmount then this wouldn't be an issue. Or am I missing something?
SSHFS is only available to the user who mounted it, so a non-root malicious process (if one were to exist) would also not have access.

Given that my question related to how I am going to handle a share for read-only media, I will take a look at NFS, which I know nothing about at this point.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
1. The apparent lack of authorization other than by IP Address (range)
2. Dealing with permissions when the Server used different UIDs/GIDs than the clients.
These are features. NFS was built in a world where all servers and clients could be assumed to be in the same administrative domain, where all computers were multi-user and users did not in general have root access, and where all systems shared a common UID/GID database like NIS, which was developed at the same time as NFS precisely for this purpose.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hi @jgreco thanks for the reply...


When I looked at NFS (a long time ago), there were two issues that confused me:
1. The apparent lack of authorization other than by IP Address (range)
2. Dealing with permissions when the Server used different UIDs/GIDs than the clients.

Those are potential problems, but only if you choose to make them such.

1) When you add a hard disk to your computer, where does the authorization to access it lie? (Hint: permissions on the filesystem, just like NFS)

2) Assigning different UIDs is a trainwreck, but you can handle that in various ways, including the obvious "use the same UID" answer.
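If you go the "use the same UID" route on the Linux side, the realignment is roughly this (back up first; the numbers and names are only examples):

# sketch: move a Linux user from uid/gid 1000 to 1001 to match the server -- example values only
usermod -u 1001 youruser
groupmod -g 1001 yourgroup
# fix ownership on anything that still carries the old ids
find / -xdev \( -uid 1000 -o -gid 1000 \) -exec chown -h 1001:1001 {} +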

I'm assuming that "which can potentially grant root access" only applies if the sshfs mount is set up as sshfs root@freenas:/mountpoint /localmount.
If the mount is sshfs lowprivilegeuser@freenas:/mountpoint /localmount then this wouldn't be an issue. Or am I missing something?
SSHFS is only available to the user who mounted it, so a non-root malicious process (if one were to exist) would also not have access.

Your typical desktop environment plays host to a wealth of threat vectors, ranging from web browsers and e-mail clients to other more esoteric threats.

If you let something log in to your FreeNAS via SSH, chances are good that sooner or later there will be a root exploit that can be used against the FreeNAS host. You are effectively relying on a relatively obscure FUSE client.

By way of comparison, NFS has been around since 1984, and really has had what few security issues existed shaken out of it. If you export a filesystem as read-only to a client, there is an incredibly high level of confidence that this is not risky to the files on the fileserver.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Thanks @Patrick M. Hausen / @jgreco I managed to get NFS working for read only shares.
These are features. NFS was built in a world where all servers and clients could be assumed to be in the same administrative domain, where all computers were multi-user and users did not in general have root access, and where all systems shared a common UID/GID database like NIS, which was developed at the same time as NFS precisely for this purpose.
Networks were also more secure in that access was limited to a much more restricted group of people. Given my circumstances I'm not sure that I am up for something like NIS to allow authentication. Without NIS, it appears that there is no "authentication" -- which in this case isn't an issue since there is nothing confidential on the shares that I set up, and since I have them set up as "Read Only" nothing can be modified by any other system unless it can actually log into FreeNAS.
Those are potential problems, but only if you choose to make them such.

1) When you add a hard disk to your computer, where does the authorization to access it lie? (Hint: permissions on the filesystem, just like NFS)
2) Assigning different UIDs is a trainwreck, but you can handle that in various ways, including the obvious "use the same UID" answer.
Re #1, Yes, and there is a step before that... logging on to the system. NFS depends on a "closed network"; anything on the authorized network could theoretically just connect and claim to be UID 1000 or whatever, couldn't it?
As for #2, I set up my users over a decade ago when I was a Windows user and UID/GID had no significance. I'm not sure how I would bring all my systems into alignment. FreeNAS starts at UID 1001, while most Linux distros set up UID 1000 for the main user, which means I have UID 1000 duplicated in different contexts and would have to move everything to a different UID.
Your typical desktop environment plays host to a wealth of threat vectors, ranging from web browsers and e-mail clients to other more esoteric threats.
Agreed!
If you let something log in to your FreeNAS via SSH, chances are good that sooner or later there will be a root exploit that can be used against the FreeNAS host. You are effectively relying on a relatively obscure FUSE client.

By way of comparison, NFS has been around since 1984, and really has had what few security issues existed shaken out of it. If you export a filesystem as read-only to a client, there is an incredibly high level of confidence that this is not risky to the files on the fileserver.
I definitely agree with you for the read-only shares that I originally asked about, and I have set them up. It appeared to be very easy, unless I have missed something important for proper security. I allowed trusted sub-networks access and made the shares read-only. As I said there is nothing confidential, so if an unauthorized program were able to read the shares, that would be the least of my problems, and it would need to steal other credentials before it could modify/delete anything on the NFS shares.

The only thing that I find a bit inconvenient is that I need 4 shares instead of 1. I have 4 datasets that are symlinked under a single directory. Using sshfs I am able to mount this directory read-only and read anything from any of the 4 datasets. I can also mount the whole pool r/w and add/remove or copy files. Naturally this carries a higher risk, but I only keep a r/w share connected when it is in use.

If I put the mounts in the fstab on my Linux system, what happens if my FreeNAS goes down/isn't up?

Does it cause the Linux machine to crash or prevent logon?

Is there a simple way to do proper authentication?
It would be useful to be able to have automatic access to a work dataset on FreeNAS.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Networks were also more secure in that access was limited to a much more restricted group of people. Given my circumstances I'm not sure that I am up for something like NIS to allow authentication. Without NIS, it appears that there is no "authentication" -- which in this case isn't an issue since there is nothing confidential on the shares that I set up, and since I have them set up as "Read Only" nothing can be modified by any other system unless it can actually log into FreeNAS.
I do something similar with my media. R/W access via SMB, R/O via NFS - this way I can easily mount the shares from the Infuse player on my Apple TV and iPad for playback.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Re #1, Yes, and there is a step before that... logging on to the system.

Really? Because I'm pretty sure no one logs into most of the systems on the networks I manage, and yet they make use of NFS. Not extensively, but it is definitely a thing.

The point I'm making is that the strategy behind NFS shares a lot of commonalities with the general UNIX design mindset. This is an inescapable conclusion. NFS does make allowances if you need different behaviours, for example with maproot or mapall.

NFS depends on a "closed network"; anything on the authorized network could theoretically just connect and claim to be UID 1000 or whatever, couldn't it?

Sure, if you've authorized it. Just like if you are on a UNIX box and you set the permissions on a directory to uid 1000, anyone holding uid 1000 can get into that directory.

If you apply a blanket policy of "allow hosts on this network to access the NFS", then obviously you get the behaviour you configured.

However, you can limit mounts to specific hosts, and you can make specific mountpoints read-only or read-write to specific hosts.

Paranoid admins might even take to the more extreme measures of locking down ARP address mappings, in the server, or in the switching architecture, or both.

This is incredibly simple technology compared to the complexities of CIFS and AD.
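On the FreeNAS side you set this up through the NFS sharing screen, but what it generates underneath is a standard FreeBSD /etc/exports; the effect is roughly along these lines (the paths, network, and host name are examples):

# read-only export to a trusted subnet, plus a mapall export to a single host -- example values
/mnt/tank/music   -ro -network 192.168.1.0 -mask 255.255.255.0
/mnt/tank/scratch -mapall=nasuser workstation.example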

If I put the mounts in the fstab on my Linux system, what happens if my FreeNAS goes down/isn't up?

Does it cause the Linux machine to crash or prevent logon?

If you use a hard mount, accesses will block until the mount recovers. This is designed for situations where traditional filesystem semantics are required.

You can use a soft mount, which allows interruption of accesses, which is friendlier to users but changes the behaviour a bit.
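In fstab terms it is just the mount options, roughly (the soft-mount timeout values here are only illustrative):

# hard mount (the default): accesses block until the server comes back
freenas:/mnt/tank/music  /mnt/music  nfs  ro,hard  0  0
# soft mount: accesses give up and return an error after the retries run out
freenas:/mnt/tank/music  /mnt/music  nfs  ro,soft,timeo=100,retrans=3  0  0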

Is there a simple way to do proper authentication? It would be useful to be able to have automatic access to a work dataset on FreeNAS.

Of course! None of this lets you magically access the share without authentication, it's just that the authentication happens on the client, not on the fileserver. You can use standard UNIX auth, YP, LDAP, etc. etc.

You can also set up stuff like the automounter to automatically mount an NFS share only when someone is trying to access it.
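With autofs on Linux, for example, the setup is roughly two small files (file names and package names vary a bit by distro; paths are examples):

# /etc/auto.master -- mount shares under /net/nas on demand, expire after 5 minutes idle
/net/nas  /etc/auto.nas  --timeout=300
# /etc/auto.nas -- the key is the directory name that appears under /net/nas
music  -ro,soft  freenas:/mnt/tank/music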
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Thanks @Patrick M. Hausen and @jgreco ... I am really enjoying the RO NFS mounts, makes things a lot easier... So far still mounting manually, so I've got to go a bit deeper on this.

Really? Because I'm pretty sure no one logs into most of the systems on the networks I manage, and yet they make use of NFS. Not extensively, but it is definitely a thing.

The point I'm making is that the strategy behind NFS shares a lot of commonalities with the general UNIX design mindset. This is an inescapable conclusion. NFS does make allowances if you need different behaviours, for example with maproot or mapall.

Sure, if you've authorized it. Just like if you are on a UNIX box and you set the permissions on a directory to uid 1000, anyone holding uid 1000 can get into that directory.

If you apply a blanket policy of "allow hosts on this network to access the NFS", then obviously you get the behaviour you configured.

However, you can limit mounts to specific hosts, and you can make specific mountpoints read-only or read-write to specific hosts.

Paranoid admins might even take to the more extreme measures of locking down ARP address mappings, in the server, or in the switching architecture, or both.

This is incredibly simple technology compared to the complexities of CIFS and AD.
Of course! None of this lets you magically access the share without authentication, it's just that the authentication happens on the client, not on the fileserver. You can use standard UNIX auth, YP, LDAP, etc. etc.

@jgreco ... I rearranged your post to group similar items together to make it easier to reply.

it's just that the authentication happens on the client.
I get it now... I think for my important media collection I like what @Patrick M. Hausen is doing... R/O NFS shares and do the R/W with sftp (which the Linux file browser natively supports) access when active file management is required. I would think that sftp should be better for security than Samba... Samba has to be compatible with an old/insecure MS protocol.

Maybe I'm a bit naive, but it seems to me that R/W NFS shares (or any NFS share with sensitive data) make life easier for a malicious actor/ransomware to move laterally through the network. If each movement requires a password, that improves security, and if logon time is minimal, exposure is more limited. Similar to the reason for not working as root and using sudo only when necessary.

I'm thinking that a R/W dataset might be a convenient way to offload a lot of work files like install ISOs and similar junk that isn't critical. Mapall, an isolated dataset, and tight IP control should be all the security that I need. It should also be OK as work space for sorting large files.

If you use a hard mount, accesses will block until the mount recovers. This is designed for situations where traditional filesystem semantics are required.

You can use a soft mount, which allows interruption of accesses, which is friendlier to users but changes the behaviour a bit.

You can also set up stuff like the automounter to automatically mount an NFS share only when someone is trying to access it.
Are either of these difficult to implement? Any idea where I can get more info? I'm not sure exactly what the correct search terms would be.

@jgreco thanks again for taking the time to write such a lengthy reply... I'm not a professional admin, but I'm learning a lot.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I get it now... I think for my important media collection I like what @Patrick M. Hausen is doing... R/O NFS shares and do the R/W with sftp (which the Linux file browser natively supports) access when active file management is required. I would think that sftp should be better for security than Samba... Samba has to be compatible with an old/insecure MS protocol.

Looking at it from a protocol, code, and design standpoint, I am not super-comfortable with SMB. SSH wasn't designed for this either, so while it is probably better code, you are abusing a protocol for something it wasn't meant for, and it is literally designed to spawn a shell. There are different issues with each. It may be hard to do a fair comparison.

Maybe I'm a bit naive, but it seems to me that R/W NFS shares (or any NFS share with sensitive data) make life easier for a malicious actor/ransomware to move laterally through the network.

Data can be examined on any mounted filesystem. I don't know how you'd use that to move laterally through the network though, unless maybe you were using it as homedir mounts and messing with the contents of ~/.ssh/ or store plaintext passwords on it for all your hosts.

Much of the purpose of NFS is simply to store data. For example, we have a wide-open ISO library here that maintains OS images and a bunch of other useful stuff. It's read-only everywhere except for one FreeBSD host that's allowed to run a web browser to download updates.

If each movement requires a password, that improves security, and if logon time is minimal, exposure is more limited.

Not really. You can already do that with scp. I have no idea how I'd do something that came anywhere near replicating a busy NFS server with SSHFS.

Are either of these difficult to implement? Any idea where I can get more info? I'm not sure exactly what the correct search terms would be.

@jgreco thanks again for taking the time to write such a lengthy reply... I'm not a professional admin, but I'm learning a lot.

Read your OS docs for how to do soft NFS mounts. This varies on some of the Linux variants.

The automounter is unfortunately available in a number of forms on Linux, ranging from am-utils to autofs to linux-amd. It's also a little bit bewildering to set up if you haven't set it up before, but there's better documentation out there now than there was in the late '80s.
 