NFSD Cache

KernelPanic

Dabbler
Joined: Apr 21, 2016
Messages: 16
Running FreeNAS 11.2-U5, serving iSCSI and NFS from twelve 2 TB Samsung 850s, mirrored. Performance is generally excellent.
iSCSI to VMware is flawless.
However, NFS is struggling. We serve various Red Hat clients from EL3 through EL8, using NFSv3, v4, or v4.1 depending on client support.
Previously this was handled by a Nexenta machine, which occasionally also had NFS hiccups.

Our issue? The NFS server's duplicate request cache (DRC) keeps filling to ridiculous levels.
From nfsstat -e:
Server Cache Stats:
    Inprog      Idem  Non-idem    Misses CacheSize   TCPPeak
         0         0         0  93635166    203997    204669

It simply keeps climbing and never comes back down. After a while, some of the NFSv4 clients start reporting I/O errors when executing files.
To allow it to operate this high, we set
vfs.nfsd.tcpcachetimeo: 300
vfs.nfsd.tcphighwater: 300000
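
For reference, we applied these at runtime with sysctl(8) and mirrored them under System -> Tunables (type "sysctl") so they survive a reboot; a minimal sketch from a root shell:

    # Raise the DRC limits at runtime (lost on reboot unless also
    # added as FreeNAS tunables of type "sysctl")
    sysctl vfs.nfsd.tcpcachetimeo=300      # seconds before cached TCP replies may be timed out
    sysctl vfs.nfsd.tcphighwater=300000    # entry count at which the cache starts trimming

    # Confirm the running values
    sysctl vfs.nfsd.tcpcachetimeo vfs.nfsd.tcphighwater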

Clearly that wasn't a long-term fix, so we disabled the cache:
vfs.nfsd.cachetcp: 0
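
That was set and verified the same way; a minimal sketch, assuming the stock FreeBSD sysctl interface. Whether flipping this to 0 also flushes entries already in the cache, or only prevents new ones from being added, we couldn't tell:

    # Stop nfsd caching replies for TCP client requests
    sysctl vfs.nfsd.cachetcp=0

    # Confirm it took effect
    sysctl vfs.nfsd.cachetcp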

However, it's still climbing. All of those options are still set, and the full nfsstat -s -e output is:

Server Info:
   Getattr   Setattr    Lookup  Readlink      Read     Write    Create    Remove
  89397989   1023424   2164520     21460  21845272   7640685    173838    264205
    Rename      Link   Symlink     Mkdir     Rmdir   Readdir  RdirPlus    Access
    104331      9312        87      4840      4572    876031   3340341  11702632
     Mknod    Fsstat    Fsinfo  PathConf    Commit   LookupP   SetClId SetClIdCf
         4    159823       525       157    110736         0       173       173
      Open  OpenAttr OpenDwnGr  OpenCfrm DelePurge   DeleRet     GetFH      Lock
   7682307         0       613    214874         0         0   1904287   4275483
     LockT     LockU     Close    Verify   NVerify     PutFH  PutPubFH PutRootFH
   3680480   4262033   7535249         0         0  50975993         0       592
     Renew RestoreFH    SaveFH   Secinfo RelLckOwn  V4Create
     29520    132901    145267         0     38188     25087
Server:
 Retfailed    Faults   Clients
         0         0        20
 OpenOwner     Opens LockOwner     Locks    Delegs
    203994    294183         4         6         0
Server Cache Stats:
    Inprog      Idem  Non-idem    Misses CacheSize   TCPPeak
         0         0         0  94207470    203996    204669

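To quantify the growth, we've been sampling CacheSize periodically; a rough sketch (the awk field position assumes the nfsstat layout shown above, where CacheSize is the fifth column on the line after the "Inprog" header):

    # Log the DRC CacheSize once a minute
    while :; do
        printf '%s ' "$(date '+%H:%M:%S')"
        nfsstat -e -s | awk '/Inprog/ { getline; print "CacheSize=" $5 }'
        sleep 60
    done
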
Any ideas on how to properly disable the NFS cache, so it stops climbing to ridiculous levels?
 

dlavigne

Guest
This is probably worth reporting at bugs.ixsystems.com. If you do, post the issue number here.
 