smbd process goes to 100% and copy speed collapses: ARC demand_metadata issue?

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
So, another test, still with my 670 GB / 55k-item set of audio files // TrueNAS 13.0-U4.

Clean empty dataset (always the same settings as in previous tests: no aux params; sync and atime disabled on the dataset).

Copy from the Mac: 24 minutes (as usual).
Delete from the Ubuntu 22.04.1 Files app: 11 minutes (versus 1h02 from macOS, as a reminder).

So way faster (but still not as fast as Mac to Mac: 4 minutes).
And smbd on the server never goes above 30% on the cores (instead of 100% when working from the macOS Finder).
ARC demand_metadata requests stay mostly under 10M (instead of 65M when deleting from the Finder).
My question is: does Ubuntu skip all the alternate streams/metadata/resource forks/whatever the Finder copies along with the files? Would that explain such a difference?

(screenshot attached: Capture d’écran 2023-03-20 200009.png)


So, what is the solution?
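For anyone reproducing these numbers, a simple way to watch the demand-metadata pressure from the TrueNAS shell while a Finder copy or delete runs is to poll the ARC kstat counters (a sketch for TrueNAS CORE / FreeBSD; the 5-second interval is arbitrary):

```shell
# Poll the ZFS ARC demand-metadata counters every 5 seconds
# (FreeBSD/TrueNAS CORE kstat names; press Ctrl-C to stop).
while true; do
    sysctl kstat.zfs.misc.arcstats.demand_metadata_hits \
           kstat.zfs.misc.arcstats.demand_metadata_misses
    sleep 5
done
```

Comparing how fast these counters climb during a Finder operation versus the same operation from Ubuntu should make the difference in metadata traffic visible directly on the server.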
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
iSCSI? :wink:
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Yes, surely the problem is the macOS/Finder way of working with SMB and/or metadata. But what can be done to correct it?
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
iSCSI on Macs... I'm not sure that would even be supported by Apple, or user-friendly for my clients / freelance users in post-production (they're not IT people).
I'm dealing with low-tech people.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
I'm afraid I don't know what else to do either, since the SMB implementation on macOS is somewhat "uncharitable" ... to put it kindly.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Looking for solutions, I've tested the following on the macOS side:

defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE
(so that the Finder does not write .DS_Store files to network shares)

and adding

[default]
dir_cache_max_cnt=0
signing_required=no

to /etc/nsmb.conf.

I don't see any difference.

Deletes still drive ARC demand_metadata requests to 60M, and the server is crawling...

:'(
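In case it helps anyone trying the same tweaks: these macOS-side commands (macOS-only, run in Terminal; note that nsmb.conf changes only apply to shares mounted after the edit) show whether the options actually took effect:

```shell
# Show the negotiated parameters for all mounted SMB shares
# (check the SMB_VERSION and SIGNING_* fields in the output):
smbutil statshares -a

# Confirm the Finder .DS_Store preference was recorded:
defaults read com.apple.desktopservices DSDontWriteNetworkStores
```

If `statshares` still reports signing on, unmount and remount the share so the new nsmb.conf settings are picked up.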
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Hi all :)

This morning I tested deleting the same 670 GB / 55k-item folder from macOS 10.14.6 (Mojave), to see whether it behaves differently from Monterey 12.6.3.
Share stats say SMB 3.0.2 (instead of 3.1.1 with Monterey); the rest is the same.
The result is even worse, with the "can't delete because "file" is in use" bug... 7 minutes for 300 (three hundred) items, so at this speed it could take days.
ARC demand_metadata requests go to 60M and smbd processes go to 100%, etc. Same same.

Still, if anybody has any idea, let me know.

Nicolas
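As a stopgap while client-side deletes are this slow, one option (assuming shell access to the TrueNAS box; the demo path below is a stand-in, not the real dataset path) is to delete the tree directly on the server, which avoids the per-file SMB metadata round-trips entirely:

```shell
# Stand-in for the real dataset path (on the server it would be
# something like /mnt/POOL/PROJECTS/FILM1/<folder_to_delete>):
DIR=/tmp/delete_demo
mkdir -p "$DIR/sub"
touch "$DIR/a" "$DIR/sub/b"

# Depth-first delete in a single process, with no per-file
# SMB request coming from a client:
find "$DIR" -mindepth 1 -delete
```

This doesn't fix the Finder behaviour, but for large cleanups it sidesteps it completely.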
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Again … just out of curiosity: Did you test Ventura already?
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Looking at how it reacts, here is my theory:

The Finder seems to send all the delete requests in one shot, saturating the server; the closer the delete process gets to the end, the faster files are actually deleted.

It's not a technical statement, but when you watch the item count, deletion is very slow at the beginning and for almost the whole process, then speeds up about a minute before the end, as the number of ARC demand_metadata requests diminishes.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Again … just out of curiosity: Did you test Ventura already?

Hello
That's my next test, but there are many other hardware and software compatibility issues with Ventura for now. So even if it works, it might just move the problem somewhere that's even worse for us.

I have a Ventura laptop; I'll test with it as soon as possible.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
I'm afraid I don't know what else to do either, since the SMB implementation on macOS is somewhat "uncharitable" ... to put it kindly.
The problem is that unfortunately there is no choice other than Apple for us in audio post... unless I'm willing to lose all my clients...
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
Could you describe your intended workflow? How many Users need to access what kind of data? Is there any privilege separation? Or is it just scratch space / "public storage"?
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Again … just out of curiosity: Did you test Ventura already?

Tested on Ventura... same as Monterey, nothing better.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
You _could_ virtualize TrueNAS and macOS (with TrueNAS attached via iSCSI/NFS) on your machine via ESXi, and share the attached TrueNAS data through the virtualized macOS.
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Our setup

Dataset structure:
POOL
+PROJECTS
++FILM1
++FILM2
...
FILM1 and FILM2 are child datasets of PROJECTS.

Let's assume 3 users:
PROJECTS_manager (in an admin group)
FILM1_user (in a users group)
FILM2_user (in a users group)

PROJECTS_manager must be able to access, and have full control over, everything in the PROJECTS dataset and the FILMx child datasets.
FILM1_user must ONLY be able to see FILM1, with full control in it.
FILM2_user must ONLY be able to see FILM2, with full control in it.
FILM1_user and FILM2_user should not see PROJECTS.
Clients are macOS Monterey via SMB.

That's how our current server is configured, and the TrueNAS test rig as well.
Client users will, for example, upload their Pro Tools sessions and audio files when a project goes to another department (about 10k items for a 40-minute episode), exchange audio rushes (also thousands of audio files...), and back up their work daily with a tool like FreeFileSync.
A lot of copy, move, delete...
For now we don't intend to edit directly on the server. Performance compared to local storage will never be enough without a huge investment (and even then it won't be as fast as local SSD or NVMe).
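For reference, that layout could be sketched as shell commands roughly like this (TrueNAS CORE / FreeBSD syntax; the pool, dataset, and user names come from the post above, but the group name and exact ACL entries are assumptions — in practice this is normally configured through the TrueNAS UI):

```shell
# Child datasets per film (names from the post):
zfs create POOL/PROJECTS
zfs create POOL/PROJECTS/FILM1
zfs create POOL/PROJECTS/FILM2

# NFSv4 ACLs (FreeBSD setfacl): the admin group gets inheritable
# full control from PROJECTS down (f = files, d = directories);
# each film user gets full control only on their own dataset.
setfacl -m g:admin:full_set:fd:allow /mnt/POOL/PROJECTS
setfacl -m u:FILM1_user:full_set:fd:allow /mnt/POOL/PROJECTS/FILM1
setfacl -m u:FILM2_user:full_set:fd:allow /mnt/POOL/PROJECTS/FILM2
```

Hiding PROJECTS itself from the film users would additionally require stripping their traverse rights on the parent, or exporting the FILMx datasets as separate shares.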
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
See above. But it will be a pain to test this "on the fly".
 

Nicolas_Studiokgb

Contributor
Joined
Aug 7, 2020
Messages
130
Yes, that could be a solution... a little complicated, but why not.

I've tested with sysctl vfs.zfs.arc.meta_prune=0.
Still very slow, with smbd processes at 100%.
ARC demand_metadata requests around 40M.

Anodos sent me some test commands to run (SMB profiling). I'll try them.
 

awasb

Patron
Joined
Jan 11, 2021
Messages
415
The sysctl won't speed things up. It prevents metadata pruning (and therefore further slowdowns caused by cache purges).
 
