Kerberised NFS help

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi All,

I am trying to get Kerberised NFS working in my environment but am struggling to put the final pieces of the puzzle together.

Environment: Win 2012R2 AD DCs, FreeNAS 11.3 server exporting over NFS with Kerberos enabled, RHEL 7 Linux clients.

Before joining the FreeNAS box to the domain I enabled Kerberised NFS and I can see that the machine account created in the AD has the relevant SPNs:

HOST/NFSSERVER
HOST/NFSSERVER.fqdn
nfs/NFSSERVER
nfs/NFSSERVER.FQDN
RestrictedKrbHost/NFSSERVER
RestrictedKrbHost/NFSSERVER.fqdn

On the FreeNAS box I then ran the following to add the nfs principal to the keytab file:

root@nfsserver[~]# net -k ads keytab add nfs
root@nfsserver[~]# ktutil -k /etc/krb5.keytab list
/etc/krb5.keytab:

Vno Type Principal Aliases
1 des-cbc-crc restrictedkrbhost/nfsserver.fqdn@REALM
1 des-cbc-crc restrictedkrbhost/NFSSERVER@REALM
1 des-cbc-md5 restrictedkrbhost/nfsserver.fqdn@REALM
1 des-cbc-md5 restrictedkrbhost/NFSSERVER@REALM
1 aes128-cts-hmac-sha1-96 restrictedkrbhost/nfsserver.fqdn@REALM
1 aes128-cts-hmac-sha1-96 restrictedkrbhost/NFSSERVER@REALM
1 aes256-cts-hmac-sha1-96 restrictedkrbhost/nfsserver.fqdn@REALM
1 aes256-cts-hmac-sha1-96 restrictedkrbhost/NFSSERVER@REALM
1 arcfour-hmac-md5 restrictedkrbhost/nfsserver.fqdn@REALM
1 arcfour-hmac-md5 restrictedkrbhost/NFSSERVER@REALM
1 des-cbc-crc host/nfsserver.fqdn@REALM
1 des-cbc-crc host/NFSSERVER@REALM
1 des-cbc-md5 host/nfsserver.fqdn@REALM
1 des-cbc-md5 host/NFSSERVER@REALM
1 aes128-cts-hmac-sha1-96 host/nfsserver.fqdn@REALM
1 aes128-cts-hmac-sha1-96 host/NFSSERVER@REALM
1 aes256-cts-hmac-sha1-96 host/nfsserver.fqdn@REALM
1 aes256-cts-hmac-sha1-96 host/NFSSERVER@REALM
1 arcfour-hmac-md5 host/nfsserver.fqdn@REALM
1 arcfour-hmac-md5 host/NFSSERVER@REALM
1 des-cbc-crc NFSSERVER$@REALM
1 des-cbc-md5 NFSSERVER$@REALM
1 aes128-cts-hmac-sha1-96 NFSSERVER$@REALM
1 aes256-cts-hmac-sha1-96 NFSSERVER$@REALM
1 arcfour-hmac-md5 NFSSERVER$@REALM
1 des-cbc-crc nfs/nfsserver.fqdn@REALM
1 des-cbc-crc nfs/NFSSERVER@REALM
1 des-cbc-md5 nfs/nfsserver.fqdn@REALM
1 des-cbc-md5 nfs/NFSSERVER@REALM
1 aes128-cts-hmac-sha1-96 nfs/nfsserver.fqdn@REALM
1 aes128-cts-hmac-sha1-96 nfs/NFSSERVER@REALM
1 aes256-cts-hmac-sha1-96 nfs/nfsserver.fqdn@REALM
1 aes256-cts-hmac-sha1-96 nfs/NFSSERVER@REALM
1 arcfour-hmac-md5 nfs/nfsserver.fqdn@REALM
1 arcfour-hmac-md5 nfs/NFSSERVER@REALM
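
For what it's worth, something like this should confirm that the nfs key in the keytab is actually usable (I believe this is the Heimdal kinit syntax, which is what FreeBSD ships):

kinit -t /etc/krb5.keytab nfs/nfsserver.fqdn@REALM
klist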

The linux client 'linclient' has been configured with the help of a Red Hat engineer and is working to the point that we can manually mount the NFS share as root:

[root@linclient ~]# mount -t nfs nfsserver.fqdn:/mnt/store/home /mnt/nfsserver_nfs -o vers=4.0,sec=krb5
[root@linclient ~]# mount | grep nfsserver
nfsserver.fqdn:/mnt/store/home on /mnt/nfsserver_nfs type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=krb5,clientaddr=CLIENTIP,local_lock=none,addr=SERVERIP)

root on the client has no permission to see any of the datasets under /mnt/nfsserver_nfs/ - I assume because none of the datasets are actually owned by root.

When I try to use automount maps so that users can log in to the client with their home directory mounted, I also get permission denied errors.

For the record:

root@nfsserver[~]# ls -l /mnt
total 5
-rw-r--r-- 1 root wheel 5 Jun 10 16:22 md_size
drwxr-xr-x 5 root wheel 5 Jun 24 13:37 store
root@nfsserver[~]# ls -l /mnt/store
total 162
drwxrwxrwx 61 root wheel 61 Jun 15 11:58 group
drwxrwxrwx 606 root wheel 606 Jun 15 11:58 home
drwxr-xr-x 9 root wheel 11 Jun 25 11:23 iocage

On the linux client I see the following error:
Jul 14 13:33:50 linclient automount[2530]: >> mount.nfs4: Operation not permitted
Jul 14 13:33:50 linclient automount[2530]: mount(nfs): nfs: mount failure nfsserver.fqdn:/mnt/store/home/username on /homes/username

The Red Hat engineer suspects that FreeNAS is getting in the way. I can't find any useful logs on the FreeNAS box that tell me whether or not this is true.

Any help you can provide would be much appreciated.

Many thanks,
Fab
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi All,

A sniff of the network traffic reveals the following error: NFS4ERR_WRONGSEC

If I understand this correctly, the client tries to access my NFS share and is told that the security flavor for the share is not the same as for the parent directories.

My exports look like this:
V4: / -sec=krb5:krb5i:krb5p
/mnt/store/home -alldirs -sec=krb5:krb5i:krb5p -network SUBNET1/24
/mnt/store/home -alldirs -sec=krb5:krb5i:krb5p -network SUBNET2/24
/mnt/store/home -alldirs -sec=krb5:krb5i:krb5p -network SUBNET3/24

The home directories I'm trying to automount live under /mnt/store/home/, e.g., /mnt/store/home/user1. I can mount /mnt/store/home but not /mnt/store/home/user1.

Can anyone help me debug this?

I came across this link but there's no solution given there.


Thanks,
Fab
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi all,

Naively, I assumed that checking the "All Dirs" option would mean that I could automount individual users home directories. It seems I was wrong.

Explicitly setting the exports

V4: / -sec=krb5:krb5i:krb5p
/mnt/store/home/user1 -sec=krb5:krb5i:krb5p -network SUBNET1/24
/mnt/store/home/user1 -sec=krb5:krb5i:krb5p -network SUBNET2/24
/mnt/store/home/user1 -sec=krb5:krb5i:krb5p -network SUBNET3/24

seems to make this work. With 700+ users, though, this is somewhat less than ideal. Can anyone tell me if this is how it is meant to work? If so, can this be scripted?

Edit: I now understand that I was mistaking 'dirs' for datasets. Silly mistake. Still, it would be nice if there were a way to achieve what I want without the above entries in /etc/exports for each user. It's slightly sub-optimal.
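
If scripting is the answer, I suppose the per-user lines could be generated from the dataset list, something like this (untested, and I gather the UI regenerates /etc/exports, so hand edits may not survive):

# list the child datasets of store/home, skipping the parent itself
zfs list -H -o name -r store/home | tail -n +2 | while read ds; do
  for net in SUBNET1/24 SUBNET2/24 SUBNET3/24; do
    echo "/mnt/$ds -sec=krb5:krb5i:krb5p -network $net"
  done
done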
 
Last edited:

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Dear All,

I just wanted to follow up on this to try and understand why, when using kerberised NFS (sec=krb5,krb5i,krb5p), I can't just have an NFS share at the top level and have users drop into their own datasets below that.

This works when the security is set to sys. I don't see why it can't work with sec set to one of the krb5 options. What am I missing?

Thanks,
Fab
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I'm not an IT specialist, but as a neutral observer I'm interested in the answer to your question. I'd note two points on reading https://www.freebsd.org/cgi/man.cgi?exports(5)

1. The -alldirs flag doesn't seem relevant to NFSv4
2. Quote: "All ZFS file systems in the subtree below the NFSv4 tree root must be exported."

Point two is echoed in the FreeNAS guide (section 13.3):
Each pool or dataset is considered to be a unique filesystem. Individual NFS shares cannot cross filesystem boundaries. Adding paths to share more directories only works if those directories are within the same filesystem.

Personally, I don't think the FreeNAS guide makes the distinction between NFSv3 and NFSv4 particularly clear.
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Thanks for this. Perhaps I should RTM a little more to see if I can figure this out.
 

statalently

Dabbler
Joined
Oct 11, 2019
Messages
35
Looking forward to hearing any developments as I am keen to test out kerberos NFS myself. Good luck!
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hey guys.

I'm not making much progress but wondered if anyone could answer the following.

Is it a bad idea to set 'sharenfs=on' using the zfs command via the CLI as well as using the UI to create exports? I've been doing the former when creating the dataset, because I've been using a script from another unix box, but I've read that one should use either /etc/exports or zfs's sharenfs option. It wouldn't really explain my issue with sec=sys vs sec=krb5, but I figure I should do things correctly at the very least.

Fab
 

csj

iXsystems
iXsystems
Joined
Oct 20, 2017
Messages
18
Hi @Fab Sidoli, I've responded to your Jira ticket but I'll also respond here for posterity's sake.

You have created zfs datasets for each home user. This means NFSv4 sees them as separate file systems. The NFSv4 protocol does not cross filesystem boundaries, so this is expected behavior. Here is the documentation that describes this behavior, straight out of the FreeBSD man pages:
All ZFS file systems in the subtree below the NFSv4 tree root must be exported. NFSv4 does not use the mount protocol and does permit clients to cross server mount point boundaries, although not all clients are capable of crossing the mount points.
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi,

As I said in my ticket, this seems to work for sec=sys. I'm not sure why this would be.

Thanks,
Fab
 

csj

iXsystems
iXsystems
Joined
Oct 20, 2017
Messages
18
NFSv3 and NFSv4 are similar in name alone. They are dramatically different protocols, and I almost always recommend against NFSv4 unless this is a "test" system and/or you're fully competent in all the changes that come with the protocol differences. I'll try my best to answer the question without muddying the water even more.

When you use NFSv4, if you use sec=sys, you're doing the following:
1. you're no longer authenticating the client
2. you're no longer authorizing using username@domain.name (you're resolving ONLY local users/groups to uid/gid values)
--this means permissions are still based off local uid/gid values on the truenas server

When you use NFSv4, if you use sec=krb5, you're doing the following:
1. you're authenticating the client (you're now validating the client using an external resource (kerberos))
2. you're authorizing access by using username@domain.name
--this means permissions are being based off username@domain.name

Making wild guesses:
1. krb5 could be failing because kerberos authentication isn't actually working with the truenas and/or the clients
2. when using ZFS and NFSv4, you have to export all sub-zfs datasets since NFSv4 doesn't use the traditional mount protocol found in NFSv3.
3. your permissions are incorrect at the top-level dataset and/or the sub-datasets.
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
OK. I understand what's happening now. My datasets are being created with the ZFS sharenfs option turned on, which gives the impression that sec=sys doesn't require individual mount points when shares are configured via the UI. I hadn't appreciated this was the case. Turning sharenfs=off breaks my ability to NFS mount using sec=sys.

I assume it's possible to set the security type with the sharenfs option? It's far simpler to script this way.
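
Something like this is what I have in mind, guessing at the syntax (on FreeBSD, sharenfs seems to take exports(5)-style options):

# untested sketch: set the export security flavours on the dataset itself
zfs set sharenfs="-sec=krb5:krb5i:krb5p" store/home/user1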
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Thanks for this. I understand how NFSv3 and v4 differ, hence my desire to do sec=krb5 with NFSv4.

As for your wild guesses, 1 and 3 are not the issues. krb5 authentication isn't failing - I can see that when I explicitly set the mount point in the UI. As I alluded to above, the issue is that I hadn't appreciated that sharenfs operates separately from the UI; that is to say, I should use one or the other but not both, since it's that that muddies the water. On my old Solaris boxes I simply used the sharenfs option and instinctively went with it because it's what I knew. More fool me.
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi,

Having understood what my issue is, I wondered whether you had any advice on how to manage the NFS shares. I have 700+ users, which makes manually adding entries in the UI infeasible.

It seems to me that using zfs sharenfs is nice because any options I set get replicated across to another box when I do replication.

If I use the UI route instead (i.e., set sharenfs=off for all datasets), how do I go about doing that?

Also, I believe I can use something like the following to "script" the share creation for each user, but is this robust and safe?

midclt call sharing.nfs.create '{"comment": "user nfs export", "security": ["KRB5", "KRB5I", "KRB5P"], "enabled": true, "paths": ["/mnt/path/to/user/homedir"], "networks": ["IP1/24", "IP2/24"]}'
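
If that call is sound, looping it over a user list seems simple enough (untested sketch; userlist.txt is hypothetical):

# one username per line in userlist.txt
while read username; do
  midclt call sharing.nfs.create '{"comment": "'"$username"' nfs export", "security": ["KRB5", "KRB5I", "KRB5P"], "enabled": true, "paths": ["/mnt/store/home/'"$username"'"], "networks": ["IP1/24", "IP2/24"]}'
done < userlist.txt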

How do I delete a share if I delete a user's home directory? If it's set on the dataset then destroying the dataset automatically deletes the share entry. This isn't the case if it's in the UI, again making management a pain. I either have to search in the UI and delete manually, or figure out how to do this via the terminal, which isn't easy as you need to use the id to delete, not the share path. To get the id I'd need to do something like the following.

midclt call sharing.nfs.query | jq '.[] | .id,.comment' | grep -B 1 ${username} | head -n 1

or

midclt call sharing.nfs.query | jq '.[] | .id,.paths' | grep -B 2 ${username} | head -n 1

Again, I'm not sure how robust this is.
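
Perhaps jq's select would be less fragile than grep -B (again, untested):

midclt call sharing.nfs.query | jq -r --arg u "$username" '.[] | select(.paths[] | endswith("/" + $u)) | .id'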

I really would appreciate some advice.
 

csj

iXsystems
iXsystems
Joined
Oct 20, 2017
Messages
18
Since you're using Kerberized NFSv4, you have access to the NFSv4 style ACLs which allow some pretty convenient avenues to get what you're doing done quite simply.

I would have tried something like this:

1. create a top-level zfs dataset (i.e. /mnt/tank/nfs/homes)
2. change the ACLs on the top-level dataset so that administrator users/groups have a full-control inheriting bit for any file/folder that gets created underneath, and at the same time block access for everyone else (understanding that you at least have to have a posix execute bit so users can traverse the directory structure)
3. share out a single NFS share via the webUI pointing to the top-level zfs dataset
4. create sub-dirs (not zfs datasets) underneath the top-level dataset for each user's home directory (i.e. mkdir /mnt/tank/nfs/homes/userA)
5. when you create the sub-dir (userA), I would add permissions ONLY allowing that user to see that directory - see the sketch below

The idea is that the user would be able to mount directly to the sub-dir.

If I'm understanding what you're trying to accomplish, I believe that setup would get you where you need to be.
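
Concretely, the per-user step might look something like this (the tank/nfs/homes layout, userA, and usergroup are placeholders, not your actual names):

mkdir /mnt/tank/nfs/homes/userA
chown userA:usergroup /mnt/tank/nfs/homes/userA
chmod 700 /mnt/tank/nfs/homes/userA
# or, with an inheriting NFSv4 ACL instead of plain mode bits:
setfacl -m u:userA:full_set:fd:allow /mnt/tank/nfs/homes/userA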

The sharenfs property is something that TrueNAS just doesn't use. Simply put, when FreeNAS was first written the sharenfs property wasn't tied into the kernel NFS daemon, so we used /etc/exports as the path of least resistance. Maybe we should revisit the idea of using that property as it clearly has some convenient aspects to it, but holy crap would that cause a lot of non-trivial, potentially breaking changes :eek:

Anyways, that's what I would have done if I was in your spot.
 
Last edited:

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi,

Yes, that's what I want to achieve, except that I'd have no quotas, which is why I went for datasets. It's also what I was used to in the old Solaris days.

Really, if there were a nice script to safely manage the exports in the UI I'd be happy with that - I don't particularly care that there would be 700+ entries in the UI.

How robust are my midclt commands above?

Else, is there a way of putting FW restrictions in place on the FreeNAS box to limit what can and can't get to my NFS shares? If so, I'd just set "sharenfs=sec=krb5,krb5i,krb5p" to get kerberised NFS and then do the firewalling some other way. I've tried various ways to specify multiple networks but this doesn't seem to work with sharenfs.

Thanks,
Fab
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
I just tried this midclt method and it seems to me that keeping track of the ids is going to be a pain in the butt. It feels like little thought has been given to NFS users....
 

csj

iXsystems
iXsystems
Joined
Oct 20, 2017
Messages
18
Ahh, so quotas are needed as well. Then yes, that puts a damper on my idea. However, you should be able to use the API to manage the exports. The API documentation can be found at: <ip of server>/api/docs

If you're okay with writing scripts that call midclt commands, then you should go ahead and write a script to talk directly to the API. If you're using python, for example, using the requests module and running the sharing.nfs.query method will give you a list of dicts that you can iterate over very easily. It becomes trivial to pull the "id" out of the returned data based on other qualifying filters.
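
And if python isn't an option, plain curl against the REST endpoint should do the same job (sketch; SERVER and the root password are placeholders, endpoint path as per the v2.0 API docs above):

curl -sku root:PASSWORD https://SERVER/api/v2.0/sharing/nfs | jq '.[] | {id, paths}'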

The reason the network/IP address FW restriction doesn't work is that you're using NFSv4 with Kerberos :smile: which is expected behavior. NFSv4 with Kerberos expects the ACLs on the filesystem to prevent unauthorized access. Another little "gotcha" that not many people understand when they choose to use NFSv4 with Kerberos.
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
I don't speak python (are you beginning to wonder what I'm doing here? :smile:) so I'm not sure how this works in practice. Do you have an example?

I'm happy with writing bash-type scripts that call midclt, and if I have some way of building a list of shares vs ids that ought to be good enough for now.
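
For example, would something along these lines be a sane way to drop a user's share (untested; assumes one path per share, and that a sharing.nfs.delete method exists to mirror create/query)?

#!/bin/sh
# find the share whose path is the user's home dir, then delete it by id
username="$1"
id=$(midclt call sharing.nfs.query | jq -r --arg p "/mnt/store/home/$username" '.[] | select(.paths[0] == $p) | .id')
[ -n "$id" ] && midclt call sharing.nfs.delete "$id"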

As for the FW issue, is this really the case? Do you mean a literal FW (i.e. IPFW) or do you mean "FW" via the exports options? For example, I can set which networks I share to using the UI and this seems to work. Not a strict FW in the true sense but it does the job. Surely an IPFW rule to restrict the NFS protocol to certain IPs is possible, or am I yet again off the mark? :frown:
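
Something like this is what I imagine (sketch, untested; assumes ipfw can be enabled on FreeNAS at all, and that NFSv4 only needs tcp/2049):

ipfw add 100 allow tcp from SUBNET1/24 to me 2049
ipfw add 110 deny tcp from any to me 2049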
 

Fab Sidoli

Contributor
Joined
May 15, 2019
Messages
114
Hi All,

Apologies for dragging this up again, but kerberised NFS mounts have stopped working for me again and I urgently need to get this working.

I had to temporarily uncheck the "Require Kerberos for NFSv4" option in Services -> NFS, and I'm not sure whether things broke when I went to re-enable it. I'm also not sure whether doing so requires me to leave the domain and rejoin.

Attached is a screenshot showing the config for Services -> NFS (note: I've redacted the BIND address IP, but it is set).

I have specific mount points set up in the UI to allow KRB5 mounts of each user's home share. On a Linux client, as root, I can mount /mnt/store/home using sec=krb5 (though ls then gives an I/O error), but I can't manually mount /mnt/store/home/username - I get permission denied from the server. Automounts for the users obviously don't work as a result.

#mount server.fqdn:/mnt/store/home /mnt/server_nfs/ -o vers=4.0,sec=krb5

mounts 'OK'

#umount
#mount server.fqdn:/mnt/store/home/username /mnt/server_nfs/ -o vers=4.0,sec=krb5
mount.nfs: access denied by server while mounting server.fqdn:/mnt/store/home/username

# cat /etc/exports (redacted)
V4: / -sec=krb5:krb5i:krb5p
/mnt/store/home -alldirs -sec=krb5:krb5i:krb5p -network SUBNET1
/mnt/store/home/username -sec=krb5:krb5i:krb5p -network SUBNET1
/mnt/store/home/username -sec=krb5:krb5i:krb5p -network SUBNET2
/mnt/store/home/username -sec=krb5:krb5i:krb5p -network SUBNET3
/mnt/store/home/username -sec=krb5:krb5i:krb5p -network SUBNET4

I have entries in /etc/zfs/exports that come from setting sharenfs, but sharenfs is specifically turned off for the user I'm testing with.

I can't see what is going on from the logs on this box so I don't know what is broken.

Any ideas?

Thanks,
Fab
 

Attachments

  • Screenshot 2020-09-21 at 13.53.31.png