CIFS takes at least 30 seconds from login to present list of shares

Status
Not open for further replies.

MrAkai

Dabbler
Joined
Jun 30, 2014
Messages
23
We've had this problem since 9.2 but were waiting until we updated to 9.3 to ask about the issue in case something had changed.

We are running FreeNAS-9.3-STABLE-201503200528 with 2x10Gb Intel NICs, dual E5-2650s (16 physical cores total), and 256GB of RAM.

We have around 200TB of data being served.

When a user logs into SMB (with a Windows 2008 AD server handling auth, using the ad idmap backend), there is a delay of 30 or more seconds before a list of usable shares is returned to the user.

I've watched the transaction with Wireshark and tcpdump and there is no network traffic between the client and server during the delay, so I'm assuming the time is spent in Samba determining which shares the user has permissions for.

Is there a way to determine if a certain set of file permissions (or a certain set of AD data) is causing this delay?

Thanks!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
MrAkai said:
"When a user logs into SMB... there is a delay of 30 or more seconds before a list of usable shares is returned." (full post quoted above)
I'm not seeing that sort of delay on an AD member server with significantly less resources, but I also have significantly less data. This means I'll wave my hands around.
  1. How many shares?
  2. How is your pool configured (how many vdevs, what type of vdevs, how many disks per vdev)?
  3. Post /usr/local/etc/smb4.conf.
  4. Wireshark can be rather unhelpful for these sorts of things. Try increasing Samba's logging verbosity to 'debug', reproduce the issue, and review the generated log file (see the sketch after this list). The log file will be massive, so get some coffee and be ready to read something almost as uninteresting as an Ayn Rand novel.
  5. Get some figures on your ARC performance as well. (arc_summary, arcstat.py, zpool iostat) You may benefit from an L2ARC device.
  6. If you feel like living on the edge you can try disabling "store DOS attributes" per instructions here (#4): https://forums.freenas.org/index.php?threads/cifs-directory-browsing-slow-try-this.27751/ Things shouldn't go haywire, but Murphy hates people who skip straight to production. This problem typically wouldn't manifest itself in a simple listing of shares, but smbd tends to aggressively pre-cache things in a slightly neurotic way (because you really need to cache some random folder 3 directories deep in a share you rarely access). With 200TB of data, a little neurosis might add up to a big performance hit.
  7. Once you have more data, go stalk and harass @cyberjock in IRC. :D
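A quick sketch of item 4; smbcontrol ships with Samba, so no config edit or service restart is needed. The log path is my assumption of where 9.3 keeps Samba logs:

  # Raise the debug level of all running smbd processes to 10
  smbcontrol smbd debug 10
  # Reproduce the slow share listing, then read the (huge) log
  tail -f /var/log/samba4/log.smbd
  # Drop verbosity back down afterwards
  smbcontrol smbd debug 2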
 

MrAkai

Dabbler
Joined
Jun 30, 2014
Messages
23
Hand waving is appreciated, thanks :)

1> 10 shares configured
2> The pool has 9 RAIDZ2 vdevs with 10 HDDs each. ZIL is 2x8GB ZeusRAM, L2ARC is a 256GB SSD
3> Scrubbed smb4.conf as follows:
[global]
server min protocol = NT1
server max protocol = NT1
encrypt passwords = yes
dns proxy = no
strict locking = no
oplocks = yes
deadtime = 15
max log size = 51200
max open files = 7548041
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
getwd cache = yes
guest account = nobody
map to guest = Bad User
obey pam restrictions = yes
directory name cache size = 0
kernel change notify = no
dfree command = /usr/local/libexec/samba/dfree
panic action = /usr/local/libexec/samba/samba-backtrace
nsupdate command = /usr/local/bin/samba-nsupdate -g
server string = FreeNAS Server
ea support = yes
store dos attributes = yes
unix extensions = no
acl allow execute always = false
acl check permissions = true
dos filemode = yes
domain logons = no
idmap config *: backend = tdb
idmap config *: range = 90000001-100000000
server role = member server
netbios name = JUPITER
workgroup = XXXXXXXX
realm = AD.XXX.XXX
security = ADS
client use spnego = yes
cache directory = /var/tmp/.cache/.samba
local master = no
domain master = no
preferred master = no
winbind cache time = 7200
winbind offline logon = yes
winbind enum users = yes
winbind enum groups = yes
winbind nested groups = yes
winbind use default domain = no
winbind refresh tickets = yes
idmap config XXXXXXXX: backend = ad
idmap config XXXXXXXX: range = 1000-90000000
idmap config XXXXXXXX: schema mode = rfc2307
allow trusted domains = no
client ldap sasl wrapping = plain
template shell = /bin/sh
template homedir = /home/%D/%U
pid directory = /var/run/samba
smb passwd file = /var/etc/private/smbpasswd
private dir = /var/etc/private
create mask = 0664
directory mask = 0775
client ntlmv2 auth = yes
dos charset = CP437
unix charset = UTF-8
log level = 2
idmap config XXXXXXXX: range = 1000-89999999
reset on zero vc = yes
force create mode = 0664
force directory mode = 0775
acl group control = yes
nt acl support = no
aio write size = 16384
aio read size = 16384
aio write behind = true
min receivefile size = 16384
#speedup tuning, remove if any issues
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no


[VideoOffload]
path = /mnt/vol1/videooffload
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = no
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[departments]
path = /mnt/vol1/departments/Departments
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare
veto oplock files = /*.doc/*.docx/*.xls/*.xlsx/*.pptx/*.ppsx/*.ppt/*.pps


[ftp]
path = /mnt/vol1/ftp/ftp
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[public]
path = /mnt/vol1/public/Public
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[series]
path = /mnt/vol1/series/series
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = no
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[series_archive]
path = /mnt/vol1/series/series_archive
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = no
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[series_new]
path = /mnt/vol1/series/series_new
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[video_archive]
path = /mnt/vol1/videoarchive/video_archive
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[vmbackups]
path = /mnt/vol1/vmbackups
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = no
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare


[webimages]
path = /mnt/vol1/webimages
printable = no
veto files = /.snapshot/.windows/.mac/.zfs/
writeable = yes
browseable = yes
recycle:repository = .recycle/%U
recycle:keeptree = yes
recycle:versions = yes
recycle:touch = yes
recycle:directory_mode = 0777
recycle:subdir_mode = 0700
shadow:snapdir = .zfs/snapshot
shadow:sort = desc
shadow:localtime = yes
shadow:format = auto-%Y%m%d.%H%M-1w
shadow:snapdirseverywhere = yes
vfs objects = shadow_copy2 zfsacl aio_pthread streams_xattr
hide dot files = yes
guest ok = no
nfs4:mode = special
nfs4:acedup = merge
nfs4:chown = true
zfsacl:acesort = dontcare

4> I have plans to come in after hours/weekend to do the debug dance. I can't pull it off during the day due to client load. I will report back what I can figure out.
5> It looks like the ARC is very efficient (hit rates around 95%), but the L2ARC has about a 98% miss rate. Is there specific data (demand vs. prefetch, data vs. metadata) that might help figure out how best to tune this? (Some figure-gathering commands are sketched after this list.)
6> We're a mixed Mac and Windows house (mostly Mac at this point), and I successfully turned off the DOS stuff a month or so ago; it seems to be fine. The one big quirk we have is forcing the NT1 protocol. We do this because when we first switched to FreeNAS, our Adobe apps were saving corrupted files over SMB2. This seems to be a well-known issue with Adobe (they basically don't support saving to network shares). Forcing SMB1/NT1 on the Macs and the server worked around the problem.
7> After I've read The Fountainhead of debug logs, I'll post what I find here and also harass (politely, of course) cyberjock.
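For item 5, a minimal sketch using the tools anodos named (the pool name is taken from the share paths in the config above):

  # One-shot summary of ARC and L2ARC efficiency, including demand vs. prefetch
  arc_summary.py
  # Rolling one-second samples of ARC and L2ARC hits/misses
  arcstat.py 1
  # Per-vdev I/O, including the cache (L2ARC) and log (ZIL) devices
  zpool iostat -v vol1 1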

Thanks!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
We do this because when we first switched to FreeNAS, our adobe apps were saving corrupted files over SMB2.

I'll have to remember that for the next poor sucker who ends up in that situation. It's been occasionally rearing its ugly head since I joined this forum.

Still doesn't explain what the hell is causing it, though.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
MrAkai said:
"Hand waving is appreciated, thanks :) ..." (full reply, including the scrubbed smb4.conf, quoted above)

On a side-note, you can remove the "aio_pthread" vfs object. Per Jeremy Allison, "aio_pthread won't hurt on FreeNAS but won't help as it only provides async opens where each thread can have an independent user token (which isn't the case on the *BSD's, only on Linux at the moment). The basic pthread-based AIO path is now built into smbd directly." See here: https://lists.samba.org/archive/samba/2014-September/184928.html My default is to not enable something that doesn't work. :)
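To confirm what each share actually loads after editing, testparm (bundled with Samba) will echo the parsed config; a quick sketch using the config path from this thread:

  # Print the parsed config without prompting and pick out the per-share vfs stacks
  testparm -s /usr/local/etc/smb4.conf | grep "vfs objects"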

Also, is there a particular reason why you have set "shadow:snapdirseverywhere = yes"? This might incur a performance penalty.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Few comments:

  • If the cache is missing 98% of the time (which means hitting just 2% of the time), you've done something very, very wrong. I've never even seen values below about 40%, even when running deliberate workloads that would miss!
  • Get rid of any CIFS settings you added. Use the defaults. snapdirseverywhere is not the default, so clearly you have at least one. ;)
  • I've seen massive servers that have been fast, even listing directories with more than 100k files in less than 10 seconds flat.
  • ZIL is totally useless and should be removed.
  • L2ARC is almost always useless for CIFS, and is definitely useless if you are hitting just 2% of the time anyway.

Have you tried accessing an SMB share using smbclient from the CLI on FreeNAS? If it delays, then you know the problem is not the network or the client. With a delay that long, if there isn't disk activity to match, I'd say you should look at your AD setup. I'm betting the FreeNAS machine is trying to reach some machine or resource in AD that is unavailable or overloaded.
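For example, a minimal local test (the share name comes from the posted config; the account is a placeholder):

  # Time the share enumeration locally, taking the client and the network out of the picture
  time smbclient -L localhost -U 'DOMAIN\someuser'
  # Then time an actual directory listing inside one of the real shares
  time smbclient '//localhost/departments' -U 'DOMAIN\someuser' -c 'ls'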
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Few comments:

  • If the cache is missing 98% of the time (which means hitting just 2% of the time), you've done something very, very wrong. I've never even seen values below about 40%, even when running deliberate workloads that would miss!
  • Get rid of any CIFS settings you added. Use the defaults. snapdirseverywhere is not the default, so clearly you have at least one. ;)

To give a specific example: remove all aio parameters from your [global] config, especially "aio write behind = true", which is syntactically incorrect. :)

Overall, most Samba 'tuning' people espouse on the interweb-o-tubes is voodoo.
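If you want to see exactly how far a config has drifted, testparm will parse and print it; a sketch (adding -v includes Samba's built-in defaults):

  # Print the parsed config, with any syntax warnings up front
  testparm -s /usr/local/etc/smb4.conf
  # Dump every effective value and inspect the aio knobs specifically
  testparm -sv /usr/local/etc/smb4.conf | grep aio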
 

MrAkai

Dabbler
Joined
Jun 30, 2014
Messages
23
Thanks for the pointers, I'll try them out.

I'm not sure how shadow:snapdirseverywhere = yes got in there; it's certainly not listed in my extra params in the GUI for either CIFS or the shares themselves.

The only ones I have (for the CIFS service) are:
idmap config XXXXXXX: range = 1000-89999999
reset on zero vc = yes
force create mode = 0664
force directory mode = 0775
acl group control = yes
nt acl support = no
aio write size = 16384
aio read size = 16384
aio write behind = true
min receivefile size = 16384
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no

I'll start pulling these out and see if anything improves.

I will try the local connection test as well.

For the VFS modules, I did not manually add or remove any; they came over from the 9.2 config I imported (I had to reinstall rather than update because I was going from a GEOM mirror boot to the ZFS boot, and when I posted here about it, the recommendation was to wipe and restore the config). Should I go ahead and pull aio_pthread and streams_xattr (which doesn't seem to be an option in 9.3 at all)? Are there any vfs objects recommended beyond the default of zfsacl and shadow_copy2?

Any idea how I can disable shadow:snapdirseverywhere without just negating it in the extra params field? I tested by adding a new share, and shadow:snapdirseverywhere = yes was in the new share's configuration as well.

Thanks
 

MrAkai

Dabbler
Joined
Jun 30, 2014
Messages
23
Also, sorry to double-reply, but regarding your comment that the ZIL is useless: is that in reference to an SMB-centric workload, or to ZFS/FreeNAS in general?


Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
CIFS has no "sync write" in its spec. So you, by definition, cannot have a sync write if you can't even tell the Samba server it is a sync write.

Determining whether a ZIL is useful or not depends on what would request the sync write, what *could* acknowledge the sync write, and whether a sync write is enabled.

There's a bunch of yes/nos for what is and isn't supported for various features of FreeNAS. You pretty much have to know and understand your protocol and client machines to figure out if it is even supported, let alone actually using it.

This is why I tell people "if you don't know if you need it, you probably don't". If you did need it, you'd *know* that you are doing sync writes, and therefore you'd *know* you'd need a ZIL.
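A quick way to sanity-check whether sync writes are actually hitting the pool (standard ZFS tooling; the pool name comes from earlier in the thread):

  # Nonzero write activity on the log vdevs means clients really are issuing sync writes
  zpool iostat -v vol1 1
  # See whether any dataset overrides sync behavior (standard, always, or disabled)
  zfs get -r sync vol1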
 

MrAkai

Dabbler
Joined
Jun 30, 2014
Messages
23
Thanks. We do have a sync workload as well (NFS), so I'll plan on keeping the ZIL in this particular system.

Any ideas on shadow:snapdirseverywhere? I'd like to try removing it, but as far as I can tell it's set by default in generate_smb4_conf.py:
if task:
    confset1(smb4_shares, "shadow:snapdir = .zfs/snapshot")
    confset1(smb4_shares, "shadow:sort = desc")
    confset1(smb4_shares, "shadow:localtime = yes")
    confset1(smb4_shares, "shadow:format = auto-%%Y%%m%%d.%%H%%M-%s%s" %
             (task.task_ret_count, task.task_ret_unit[0]))
    confset1(smb4_shares, "shadow:snapdirseverywhere = yes")

Thanks
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thanks. We do have a sync workload as well (NFS), so I'll plan on keeping the ZIL in this particular system.

Ok, but does your client actually do NFS sync writes?

Like I said above, this is NOT something you can just say "yep, I'm using it" about.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
MrAkai said:
"We do have a sync workload as well (NFS)... Any ideas on shadow:snapdirseverywhere? As far as I can tell it's set by default in generate_smb4_conf.py." (post and code quoted above)

Odd. Didn't notice that was added to the generate smb4 script (still on 9.2.x for my production server).

Git commit is here:
https://github.com/freenas/freenas/...4fa2e6a#diff-142c75298569a3d6a5c1dcb5b7109845

I'm not sure how that parameter relates to binding IPs. You can test disabling it temporarily as follows:

  • Log in via SSH
  • Comment out the parameter
  • Type "service samba-server restart" (I think that's the right command).
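Spelled out as a throwaway session (service name per the caveat above; the sed invocation is the BSD flavor):

  # Keep a copy of the generated config first
  cp /usr/local/etc/smb4.conf /tmp/smb4.conf.bak
  # Comment out every line that sets the parameter
  sed -i '' 's/^shadow:snapdirseverywhere = yes/#&/' /usr/local/etc/smb4.conf
  # Restart Samba (on 9.3 the service may be samba_server with an underscore)
  service samba-server restart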

Based on the fact that you were having problems before that commit landed, I'd say this isn't the bug you're looking for.
 