SMBD process goes to 100% and copy speed collapses; ARC demand metadata request issue?

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
I read what you said about sync:
I actually have strict sync = no in the SMB service; can this change anything?
Also, does the "sync" parameter on the pool change anything about that?
Thanks for helping.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Yes, it can change things. But it's not the change I had in mind.

The SMB option strict sync = yes makes smbd call fsync(), which causes a ZIL commit unless (ZFS) sync is disabled at the dataset level. (Setting it to standard is not enough.)

My thinking, and the reason why I didn't fiddle with the (still crappy) macOS implementation of SMB, was this: I want to use ZFS to ensure a certain level of security and availability for my data. (That is also one of the reasons why I make backups.) If I now manipulate, for performance reasons, the parameters that partly guarantee this security and availability, then I don't need a ZFS system at all. On the other hand, ZFS provides a way to accelerate sync writes by using a (low latency!) SLOG device. That is still nowhere near as fast as async writes, but it is faster than sync writes on an HDD pool and it brings an additional level of security.
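
For reference, checking and changing the (ZFS) sync property per dataset from the shell looks like this. This is only a sketch; $POOL/$DATASET is a placeholder for one of your datasets:

Code:
# show the current setting (standard | always | disabled)
zfs get sync $POOL/$DATASET

# change it for that dataset only, e.g. back to the default
zfs set sync=standard $POOL/$DATASET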
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
I started the server and the Mac, so the ZFS cache was empty at this point.
I ran the command and erased an 8300-item folder.
Here are the screenshots:
Capture d’écran 2023-03-18 152102.png


Capture d’écran 2023-03-18 152256.png


1679150070427.png

Nothing really appears when erasing; these messages appear while connecting. Maybe I should enable some logging option somewhere (almost everything is at default settings on this install).
4 minutes to erase 180 GB / 8300 files...
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
During the erase sequence the smbd processes randomly spike to very high CPU usage, as usual.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
The problem with the Mac is that you don't really have a choice... AFP is deprecated, and anything other than SMB doesn't offer user-level integration.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
How long does it take to remove those 8300 files at the system level via the shell? Sorry for asking, but I don't have a clue about your system and don't use those fancy graphical tools to evaluate mine. I use tools like zpool/zfs, iperf, vmstat, time, iostat, fio, arc_summary and top/htop.

And apart from application evaluation, and since you are using a 3008 chip ... do you have some active cooling on board?
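
If it helps, a dead simple way to time that local removal with the time tool mentioned above (the path is only a placeholder, and be careful: this really deletes the folder):

Code:
time rm -rf /mnt/$POOL/$SHARE/$FOLDER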
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
The machine is a Supermicro rack server with a good bunch of fans :) It's an LSI 3008 in IT mode.
Yes, I understand. I don't know how to use these tools, but I'm trying to learn and to get it working.
Is there a command to test local pool performance?
I'll try to erase locally and let you know.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
As you seem to know SMB and the Mac: is the "SMB2/3 Apple extensions" option useful or not? It's enabled since I work with Macs, but can it mess anything up?
Also, are there other things to add for Mac shares to work? fruit... anything?
Thanks
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
It took about 30 seconds to rm the 8300-item folder locally.
Since it was already on the HDDs from yesterday's copy, I saw HDD activity during the process.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
I've tried another test:
I copied the 180 GB / 8300 files from the Mac to the server so that they went entirely into the ZFS cache; that took about 5 minutes.
When I then rm this new folder locally on the server, the rm command finishes almost instantly and the RAM used by the ZFS cache (180 GB) moves into "Services" usage, on top of the RAM already allocated to services. That certainly explains why the Services RAM fills up during the very long delete when I do it via SMB (from the macOS Finder).
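
In case it's useful, here is a sketch of how the ARC and its demand metadata counters can be watched from the shell while testing (arc_summary is one of the tools mentioned above; the sysctl names assume TrueNAS CORE / FreeBSD):

Code:
# summary of ARC size and hit/miss statistics
arc_summary | head -40

# raw counters: current ARC size and demand metadata hits/misses
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.demand_metadata_hits \
       kstat.zfs.misc.arcstats.demand_metadata_misses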
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
That sounds reasonable.

Concerning parameters: I try hard not to mess with the implementation. I'm using plain defaults (apart from enabling multichannel support and enforcing SMB3, but that's "officially supported" on the Mac side).
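
For context, those two deviations from the defaults boil down to a couple of Samba options. This is only a sketch of the corresponding smb.conf/aux parameters; if you try them, change one at a time and re-test:

Code:
server multi channel support = yes
server min protocol = SMB3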

For local performance tests I'd use fio:

Code:
cd /mnt/$ZROOT ;
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=50g --io_size=1500g --blocksize=128k --iodepth=16 --direct=1 --numjobs=16 --runtime=120 --group_reporting


Please bear in mind that my blocksize is 128k (mixed usage). With the command above (8 cores, 128 GB RAM, 10 × 2.5" SATA disks in a raidz2 dev pool) I get:

Code:
...
fio-3.28
Starting 16 processes
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
Jobs: 16 (f=16): [W(16)][5.8%][w=2219MiB/s][w=17.8k IOPS][eta 01m:53s]
Jobs: 16 (f=16): [W(16)][10.0%][w=1765MiB/s][w=14.1k IOPS][eta 01m:48s]
Jobs: 16 (f=16): [W(16)][14.2%][w=2774MiB/s][w=22.2k IOPS][eta 01m:43s]
Jobs: 16 (f=16): [W(16)][18.3%][w=2145MiB/s][w=17.2k IOPS][eta 01m:38s]
Jobs: 16 (f=16): [W(16)][22.5%][w=1043MiB/s][w=8344 IOPS][eta 01m:33s]
Jobs: 16 (f=16): [W(16)][26.7%][w=590MiB/s][w=4722 IOPS][eta 01m:28s] 
Jobs: 16 (f=16): [W(16)][30.8%][w=853MiB/s][w=6826 IOPS][eta 01m:23s]  
Jobs: 16 (f=16): [W(16)][35.0%][w=2477MiB/s][w=19.8k IOPS][eta 01m:18s]
Jobs: 16 (f=16): [W(16)][39.2%][w=281MiB/s][w=2247 IOPS][eta 01m:13s] 
Jobs: 16 (f=16): [W(16)][43.3%][w=797MiB/s][w=6376 IOPS][eta 01m:08s]  
Jobs: 16 (f=16): [W(16)][47.5%][w=453MiB/s][w=3626 IOPS][eta 01m:03s] 
Jobs: 16 (f=16): [W(16)][51.7%][w=618MiB/s][w=4944 IOPS][eta 00m:58s]  
Jobs: 16 (f=16): [W(16)][55.8%][w=403MiB/s][w=3221 IOPS][eta 00m:53s] 
Jobs: 16 (f=16): [W(16)][60.0%][w=495MiB/s][w=3958 IOPS][eta 00m:48s]  
Jobs: 16 (f=16): [W(16)][64.2%][w=2031MiB/s][w=16.3k IOPS][eta 00m:43s]
Jobs: 16 (f=16): [W(16)][68.3%][w=449MiB/s][w=3589 IOPS][eta 00m:38s] 
Jobs: 16 (f=16): [W(16)][72.5%][w=1642MiB/s][w=13.1k IOPS][eta 00m:33s]
Jobs: 16 (f=16): [W(16)][76.7%][w=605MiB/s][w=4843 IOPS][eta 00m:28s] 
Jobs: 16 (f=16): [W(16)][80.8%][w=264MiB/s][w=2108 IOPS][eta 00m:23s] 
Jobs: 16 (f=16): [W(16)][85.0%][w=819MiB/s][w=6555 IOPS][eta 00m:18s]  
Jobs: 16 (f=16): [W(16)][89.2%][w=289MiB/s][w=2308 IOPS][eta 00m:13s] 
Jobs: 16 (f=16): [W(16)][93.4%][w=1766MiB/s][w=14.1k IOPS][eta 00m:08s]
Jobs: 16 (f=16): [W(16)][97.5%][w=338MiB/s][w=2704 IOPS][eta 00m:03s] 
Jobs: 16 (f=16): [W(16)][100.0%][w=491MiB/s][w=3925 IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=16): err= 0: pid=83687: Sat Mar 18 16:34:01 2023
  write: IOPS=9499, BW=1187MiB/s (1245MB/s)(139GiB/120007msec); 0 zone resets
    clat (usec): min=22, max=242377, avg=1675.45, stdev=3282.40
     lat (usec): min=25, max=242380, avg=1680.25, stdev=3287.36
    clat percentiles (usec):
     |  1.00th=[   40],  5.00th=[   42], 10.00th=[   43], 20.00th=[   47],
     | 30.00th=[   65], 40.00th=[   80], 50.00th=[  235], 60.00th=[  807],
     | 70.00th=[ 1876], 80.00th=[ 3228], 90.00th=[ 5145], 95.00th=[ 6652],
     | 99.00th=[ 8848], 99.50th=[15139], 99.90th=[40633], 99.95th=[54264],
     | 99.99th=[81265]
   bw (  MiB/s): min=  191, max= 7042, per=100.00%, avg=1188.69, stdev=82.67, samples=3808
   iops        : min= 1519, max=56338, avg=9502.91, stdev=661.33, samples=3808
  lat (usec)   : 50=25.65%, 100=19.95%, 250=5.02%, 500=6.52%, 750=2.38%
  lat (usec)   : 1000=2.48%
  lat (msec)   : 2=9.07%, 4=13.54%, 10=14.64%, 20=0.38%, 50=0.30%
  lat (msec)   : 100=0.06%, 250=0.01%
  cpu          : usr=0.55%, sys=3.79%, ctx=760065, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1140062,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=1187MiB/s (1245MB/s), 1187MiB/s-1187MiB/s (1245MB/s-1245MB/s), io=139GiB (149GB), run=120007-120007msec


If I were you, I'd add a dead simple new dataset without any bells and whistles, share it via SMB (again, plain defaults) and then try to benchmark it (with Blackmagic Disk Speed Test or something similar).

Please post the output of testparm -s afterwards. Then try changing options/parameters, but only one at a time! Step by step. Otherwise you'll never know which change had what effect on your system.
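
Creating such a throwaway dataset from the shell would look roughly like this (RAID0/smbtest is just a placeholder name; doing it via the GUI is equally fine, this only shows what's involved):

Code:
zfs create RAID0/smbtest
zfs get sync,recordsize,compression RAID0/smbtest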
 
Last edited:

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Here are the fio results run locally on the pool (stripe of 12 HDDs, 128k blocksize):

root@truenas[/mnt/RAID0]# fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=50g --io_size=1500g --blocksize=128k --iodepth=16 --direct=1 --numjobs=16 --runtime=120 --group_reporting
TEST: (g=0): rw=write, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=psync, iodepth=16
...
fio-3.28
Starting 16 processes
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
TEST: Laying out IO file (1 file / 51200MiB)
Jobs: 16 (f=16): [W(16)][5.8%][w=4937MiB/s][w=39.5k IOPS][eta 01m:54s]
Jobs: 16 (f=16): [W(16)][10.0%][w=5371MiB/s][w=43.0k IOPS][eta 01m:48s]
Jobs: 16 (f=16): [W(16)][14.2%][w=5908MiB/s][w=47.3k IOPS][eta 01m:43s]
Jobs: 16 (f=16): [W(16)][18.3%][w=5022MiB/s][w=40.2k IOPS][eta 01m:38s]
Jobs: 16 (f=16): [W(16)][22.5%][w=7770MiB/s][w=62.2k IOPS][eta 01m:33s]
Jobs: 16 (f=16): [W(16)][26.7%][w=5953MiB/s][w=47.6k IOPS][eta 01m:28s]
Jobs: 16 (f=16): [W(16)][30.8%][w=7834MiB/s][w=62.7k IOPS][eta 01m:23s]
Jobs: 16 (f=16): [W(16)][35.0%][w=7534MiB/s][w=60.3k IOPS][eta 01m:18s]
Jobs: 16 (f=16): [W(16)][39.2%][w=7088MiB/s][w=56.7k IOPS][eta 01m:13s]
Jobs: 16 (f=16): [W(16)][43.3%][w=7126MiB/s][w=57.0k IOPS][eta 01m:08s]
Jobs: 16 (f=16): [W(16)][47.5%][w=7518MiB/s][w=60.1k IOPS][eta 01m:03s]
Jobs: 16 (f=16): [W(16)][51.7%][w=5597MiB/s][w=44.8k IOPS][eta 00m:58s]
Jobs: 16 (f=16): [W(16)][55.8%][w=7008MiB/s][w=56.1k IOPS][eta 00m:53s]
Jobs: 16 (f=16): [W(16)][60.3%][w=6071MiB/s][w=48.6k IOPS][eta 00m:48s]
Jobs: 16 (f=16): [W(16)][64.5%][w=7950MiB/s][w=63.6k IOPS][eta 00m:43s]
Jobs: 16 (f=16): [W(16)][69.2%][w=7142MiB/s][w=57.1k IOPS][eta 00m:37s]
Jobs: 16 (f=16): [W(16)][73.3%][w=6186MiB/s][w=49.5k IOPS][eta 00m:32s]
Jobs: 16 (f=16): [W(16)][77.5%][w=6376MiB/s][w=51.0k IOPS][eta 00m:27s]
Jobs: 16 (f=16): [W(16)][81.7%][w=7713MiB/s][w=61.7k IOPS][eta 00m:22s]
Jobs: 16 (f=16): [W(16)][85.8%][w=6219MiB/s][w=49.8k IOPS][eta 00m:17s]
Jobs: 16 (f=16): [W(16)][90.0%][w=5196MiB/s][w=41.6k IOPS][eta 00m:12s]
Jobs: 16 (f=16): [W(16)][94.2%][w=5554MiB/s][w=44.4k IOPS][eta 00m:07s]
Jobs: 16 (f=16): [W(16)][98.3%][w=6614MiB/s][w=52.9k IOPS][eta 00m:02s]
Jobs: 16 (f=16): [W(16)][100.0%][w=5951MiB/s][w=47.6k IOPS][eta 00m:00s]
TEST: (groupid=0, jobs=16): err= 0: pid=5804: Sat Mar 18 17:00:53 2023
  write: IOPS=50.4k, BW=6298MiB/s (6604MB/s)(738GiB/120002msec); 0 zone resets
    clat (usec): min=53, max=2852, avg=315.37, stdev=83.97
     lat (usec): min=55, max=2859, avg=316.79, stdev=84.14
    clat percentiles (usec):
     |  1.00th=[  165],  5.00th=[  210], 10.00th=[  231], 20.00th=[  251],
     | 30.00th=[  265], 40.00th=[  285], 50.00th=[  306], 60.00th=[  330],
     | 70.00th=[  351], 80.00th=[  375], 90.00th=[  412], 95.00th=[  445],
     | 99.00th=[  545], 99.50th=[  652], 99.90th=[  996], 99.95th=[ 1221],
     | 99.99th=[ 1598]
   bw (  MiB/s): min= 4326, max=11209, per=100.00%, avg=6305.68, stdev=75.56, samples=3776
   iops        : min=34608, max=89660, avg=50438.06, stdev=604.47, samples=3776
  lat (usec)   : 100=0.01%, 250=20.01%, 500=78.20%, 750=1.57%, 1000=0.12%
  lat (msec)   : 2=0.10%, 4=0.01%
  cpu          : usr=1.04%, sys=10.48%, ctx=6048367, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,6046137,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=6298MiB/s (6604MB/s), 6298MiB/s-6298MiB/s (6604MB/s-6604MB/s), io=738GiB (792GB), run=120002-120002msec
root@truenas[/mnt/RAID0]#
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
So ... your HDD pool seems fine per se (and so does the networking). Next, I'd do some benchmarks. Start with sequential loads on default shares. Watch zpool iostat $POOLNAME 1 ... and maybe gstat -p (for per-device analysis).
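
For example (pool name taken from your earlier output; just a sketch to run in a second shell while copying):

Code:
zpool iostat RAID0 1
gstat -p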

Please don't forget:

Code:
testparm -s


and maybe (just to be sure)

Code:
zpool status
 
Last edited:

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
It goes like this during a copy from the Mac:
capacity operations bandwidth
pool alloc free read write read write
---------- ----- ----- ----- ----- ----- -----
RAID0 1.96T 194T 3 33 23.6K 26.5M
root@truenas[~]# zpool iostat RAID0 1
capacity operations bandwidth
RAID0 1.97T 194T 0 0 0 0
RAID0 1.97T 194T 0 0 0 0
RAID0 1.97T 194T 0 0 0 0
RAID0 1.97T 194T 0 950 3.76K 761M
RAID0 1.97T 194T 0 899 0 785M
RAID0 1.97T 194T 0 1.07K 0 974M
RAID0 1.97T 194T 0 1.54K 0 1.39G
RAID0 1.97T 194T 0 963 0 769M
RAID0 1.98T 194T 0 1001 0 870M
RAID0 1.98T 194T 0 1.78K 0 1.53G
RAID0 1.98T 194T 0 958 0 817M
RAID0 1.98T 194T 0 0 0 0
RAID0 1.98T 194T 0 934
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
gstat:
(daX are the pool drives, adaX the boot-pool drives: 2 mirrored SATA SSDs)
Everything seems fine. I'll keep checking until it goes mad like yesterday (if it happens).
dT: 1.008s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 0 0 0 0.0 0 0 0.0 0.0| da0
0 0 0 0 0.0 0 0 0.0 0.0| da1
0 1 0 0 0.0 0 0 0.0 9.1| da2
0 1 0 0 0.0 0 0 0.0 17.7| da3
0 1 0 0 0.0 0 0 0.0 9.4| da4
0 5 0 0 0.0 4 16 0.7 0.5| da5
0 6 0 0 0.0 4 16 0.6 10.0| da6
0 5 0 0 0.0 4 16 0.6 0.5| da7
0 1 0 0 0.0 0 0 0.0 8.7| da8
0 1 0 0 0.0 0 0 0.0 8.7| da9
0 0 0 0 0.0 0 0 0.0 0.0| da10
0 0 0 0 0.0 0 0 0.0 0.0| da11
0 0 0 0 0.0 0 0 0.0 0.0| ada0
0 0 0 0 0.0 0 0 0.0
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
There seems to be nothing wrong. Not in the least. Again ...
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Sorry, I missed those:
^C
root@truenas[~]# zpool status
pool: RAID0
state: ONLINE
config:

NAME STATE READ WRITE CKSUM
RAID0 ONLINE 0 0 0
gptid/ae6a6da1-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae287c37-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae7703b2-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/aec26429-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/aec72d9e-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae71d71e-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae746c89-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/aec49c54-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae65776a-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae6f3138-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae6809b8-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0
gptid/ae6cfff3-c1bb-11ed-9f35-000e1e547490 ONLINE 0 0 0

errors: No known data errors

pool: boot-pool
state: ONLINE
scan: resilvered 1.61G in 00:00:04 with 0 errors on Thu Jan 12 19:46:36 2023
config:

NAME STATE READ WRITE CKSUM
boot-pool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
ada0p2 ONLINE 0 0 0
ada1p2 ONLINE 0 0 0

errors: No known data errors
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
testparm:

root@truenas[~]# testparm -s
Load smb config files from /usr/local/etc/smb4.conf
Loaded services file OK.
Weak crypto is allowed

Server role: ROLE_STANDALONE

# Global parameters
[global]
aio max threads = 2
bind interfaces only = Yes
disable spoolss = Yes
dns proxy = No
enable web service discovery = Yes
kernel change notify = No
load printers = No
logging = file
max log size = 5120
nsupdate command = /usr/local/bin/samba-nsupdate -g
registry shares = Yes
restrict anonymous = 2
server multi channel support = No
server role = standalone server
server string = TrueNAS Server
unix extensions = No
idmap config *: range = 90000001-100000000
fruit:nfs_aces = No
rpc_server:mdssvc = disabled
rpc_daemon:mdssd = disabled
idmap config * : backend = tdb
directory name cache size = 0
dos filemode = Yes
strict sync = No


[EN COURS]
access based share enum = Yes
ea support = No
mangled names = no
path = /mnt/RAID0/EN COURS
read only = No
smbd max xattr size = 2097152
vfs objects = catia fruit streams_xattr shadow_copy_zfs ixnas zfs_core aio_fbsd
fruit:resource = stream
fruit:metadata = stream
fruit:encoding = native
nfs4:chown = true
ixnas:dosattrib_xattr = false


[ATWIED]
access based share enum = Yes
ea support = No
mangled names = no
path = /mnt/RAID0/EN COURS/ATWIED
read only = No
smbd max xattr size = 2097152
vfs objects = catia fruit streams_xattr shadow_copy_zfs ixnas zfs_core aio_fbsd
fruit:resource = stream
fruit:metadata = stream
fruit:encoding = native
nfs4:chown = true
ixnas:dosattrib_xattr = false


[BLACK FLIES]
access based share enum = Yes
ea support = No
mangled names = no
path = /mnt/RAID0/EN COURS/BLACK FLIES
read only = No
smbd max xattr size = 2097152
vfs objects = catia fruit streams_xattr shadow_copy_zfs ixnas zfs_core aio_fbsd
fruit:resource = stream
fruit:metadata = stream
fruit:encoding = native
nfs4:chown = true
ixnas:dosattrib_xattr = false

[FILM3]
access based share enum = Yes
ea support = No
mangled names = no
path = /mnt/RAID0/EN COURS/FILM3
read only = No
smbd max xattr size = 2097152
vfs objects = catia fruit streams_xattr shadow_copy_zfs ixnas zfs_core aio_fbsd
fruit:resource = stream
fruit:metadata = stream
fruit:encoding = native
nfs4:chown = true
ixnas:dosattrib_xattr = false
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
The shares seem pretty standard, but the global section doesn't. Could you please post the general service settings, including _all_ aux parameters set? Please post the output of cat /etc/nsmb.conf from your Mac as well.
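
If it's easier than a screenshot, the SMB service settings can also be dumped from the TrueNAS shell; a sketch assuming the midclt middleware CLI that ships with TrueNAS CORE:

Code:
# dump the SMB service configuration (including aux parameters) as JSON
midclt call smb.config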
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
There is no nsmb.conf on the Mac; I haven't modified any settings (the command replies "no such file or directory", and nothing shows up with ls either).
SMB service parameters:
1679160072300.png
 