Help with transfer speeds and optimization

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
Server specs:
TrueNAS Core 13.0-U2
CPU - 2X AMD EPYC 7542 32-Core Processors
MOBO - SuperMicro MBD-H12DSi-N6-O
RAM - 256GB DDR4-3200 PC4-25600 1Rx4 RDIMM ECC
NIC - Intel X540-T2

2 VDEVS
4X - Seagate IronWolf 125 4TB SSDs
8X - Seagate IronWolf 125 2TB SSDs

2 Boot Drives
2X SAMSUNG 980 250GB M.2

2 Special VDEV mirrored drives
2X SAMSUNG 980 1TB M.2

I have a tunable set up - vfs.zfs.arc.meta_min set to 4294967298 with a sysctl type - to stop metadata from being evicted from the ARC.
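For reference, this is roughly how I check and apply it from the shell (the persistent copy is the entry under System > Tunables with Type set to "sysctl"):
Code:
# Show the current ARC metadata floor
sysctl vfs.zfs.arc.meta_min

# Apply the value at runtime (the Tunables entry is what keeps it across reboots)
sysctl vfs.zfs.arc.meta_min=4294967298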

TrueNAS and servers are fairly new to me. We use this server for Photoshop, Illustrator, and occasional video editing. There is about 6TB of folders with smaller files, and currently we have 4 editors who work off the server.

When I am copying files to and from the server I get about 110MiB/s from what it is showing on the network interface tab on the TrueNAS dashboard. I am thinking that is the max I can get over a 1GbE network, but something seems off when using the Adobe programs. When the designers open Photoshop / Illustrator and go to open files, they get the spinning wheel for quite a while when trying to open folders or save files on the server.

Are there any hardware upgrades or software settings I can change to improve the speeds at all? I am working on purchasing a 10GbE switch soon to improve the connection; the computers and the server all have 10GbE NICs installed and ready to go. But is there anything I can do right now to improve the speeds, or do I have a bottleneck somewhere? I have it set up as SMB sharing only for use with Microsoft and Apple computers.
 
Joined
Oct 22, 2019
Messages
3,641
When I am copying files to and from the server I get about 110MiB/s from what it is showing on the network interface tab on the TrueNAS dashboard. I am thinking that is the max I can get over a 1GbE network
Those are good speeds for a 1GbE network.
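(1 Gbit/s works out to roughly 125 MB/s, or about 119 MiB/s, and after TCP/SMB overhead you typically land around 110-115 MiB/s, so you're essentially at wire speed.)

If you ever want to confirm the network path on its own, independent of SMB and the pool, something like iperf3 works, assuming it's available on both ends (the server IP below is just a placeholder):
Code:
# On the TrueNAS box
iperf3 -s

# On a workstation (substitute the server's real IP)
iperf3 -c 192.168.1.10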


When the designers open Photoshop / Illustrator and go to open files, they get the spinning wheel for quite a while when trying to open folders or save files on the server.
Does this "slow saving" behavior also occur on a local folder? (Do you have a weekly Cron Job that trims your SSD pool? Are you using "AutoTRIM" in the pool's options?)

Does it take a while to load the folder contents? (Do such folders contain many individual files?)
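On the TRIM question above: something along these lines from the shell will show whether autotrim is enabled on the pool, and can enable it or kick off a one-time manual TRIM (the pool name is just a placeholder; substitute yours):
Code:
# Check whether autotrim is enabled
zpool get autotrim <poolname>

# Enable it, or run a one-off manual TRIM
zpool set autotrim=on <poolname>
zpool trim <poolname>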


What does this reveal:
Code:
arc_summary | head -n 20
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Also, please show the output of zpool status. For best results with 10G networking, your pools should be SSD pools.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
Those are good speeds for a 1GbE network.
That is what I was thinking, but it just seems strange how long it takes for folders and their contents to load.
Does this "slow saving" behavior also occur on a local folder? (Do you have a weekly Cron Job that trims your SSD pool? Are you using "AutoTRIM" in the pool's options?)

Does it take a while to load the folder contents? (Do such folders contain many individual files?)
I do not have a Cron Job or AutoTRIM set up that I am aware of. Yes, when opening folders and their contents, it takes a while. Our server is made up of tons of folders for all of our clients, and then subfolders / files inside of those.
What does this reveal:
Code:
root@KJBRANDING[~]# arc_summary | head -n 20

------------------------------------------------------------------------
ZFS Subsystem Report                            Fri Feb 24 12:16:42 2023
FreeBSD 13.1-RELEASE-p1                                    zpl version 5
Machine: KJBRANDING.local (amd64)                       spa version 5000

ARC status:                                                      HEALTHY
        Memory throttle count:                                         0

ARC size (current):                                   53.7 %   136.8 GiB
        Target size (adaptive):                       53.8 %   137.0 GiB
        Min size (hard limit):                         3.1 %     8.0 GiB
        Max size (high water):                        31:1     254.8 GiB
        Most Frequently Used (MFU) cache size:        63.6 %    86.0 GiB
        Most Recently Used (MRU) cache size:           36.4 %   49.2 GiB
        Metadata Cache Size (hard limit):              75 %    191.1 GiB
        Metadata Cache Size (current):                  1.8 %    3.3 GiB
        Dnode Cache Size (hard limit):                 10.0 %   19.1 GiB
        Dnode Cache Size (current):                     2.1 %  403.4 MiB
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
Also, please show the output of zpool status. For best results with 10G networking, your pools should be SSD pools.
Code:
root@KJBRANDING[~]# zpool status
  pool: KJDATA-Z2
 state: ONLINE
  scan: scrub repaired 0B in 00:35:34 with 0 errors on Sun Jan 22 00:35:34 2023
config:

        NAME                                            STATE     READ WRITE CKSUM
        KJDATA-Z2                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/4d9ba04a-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4db3b8ea-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d5c35ae-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d42f6d5-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/4d689280-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d804afc-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d8f2a94-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d86d3e6-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d9e0bcb-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4daabf3d-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d620304-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
            gptid/4d497072-d85f-11ec-b763-a0369f81fecc  ONLINE       0     0     0
        special
          mirror-3                                      ONLINE       0     0     0
            gptid/0ad07767-dc4a-11ec-ad8b-a0369f81fecc  ONLINE       0     0     0
            gptid/0acbf913-dc4a-11ec-ad8b-a0369f81fecc  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
  scan: scrub repaired 0B in 00:00:05 with 0 errors on Wed Feb 22 03:45:05 2023
 
Joined
Oct 22, 2019
Messages
3,641
but it just seems strange how long it takes for folders and their contents to load.
For me, over an SMB share, browsing a folder with 20,000+ images in PS lists the contents almost instantly. (There are no PSD files within; they're all JPEGs and PNGs.)


What would you estimate is the total number of files between all users?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Your pool is severely unbalanced. You'd be better off detaching 4 disks from your 8-wide VDEV, and creating a 3rd VDEV for your RAIDZ2 pool. As it is, the bulk of your IOPS is wasted in your 8-wide VDEV. How did you come up with this layout in the first place?
 
Joined
Oct 22, 2019
Messages
3,641
Your pool is severely unbalanced.
They're all SSDs though, so isn't that less of an issue as compared to spinning HDDs?

Plus, depending on how many total files they're dealing with, he might need to bump the sysctl value a bit.

My pool, all HDDs, loads up and presents a folder of 20,000+ files almost instantly when I browse to it with PS over the SMB share. All of the metadata is being read from ARC, not from the HDDs.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
In our main share, there are about 1500 client folders. There are around 400,000 files total. When browsing for files using Photoshop, the files do show up, but they are greyed out for a bit until they load.
 
Joined
Oct 22, 2019
Messages
3,641
There are around 400,000 files total.
Holy moly.

To rule something out, try bumping the sysctl tunable to something like 34359738384.

It may seem "excessive", but it's only for a test. The initial value might not be high enough to prevent aggressive metadata eviction.

After some typical usage over time (and browsing these folders), check back on the arc_summary. See if you notice the metadata amount increasing. (i.e., "Metadata Cache Size (current)")
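From the shell, something like this would apply it at runtime for the test (to make it survive a reboot, update your existing Tunable entry instead) and then let you watch the metadata numbers without scrolling through the whole report:
Code:
# Raise the ARC metadata floor to ~32 GiB (runtime only)
sysctl vfs.zfs.arc.meta_min=34359738384

# After some normal browsing, check the metadata cache lines
arc_summary | grep "Metadata Cache Size"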
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
How did you come up with this layout in the first place?
When I built this, I was brand new to TrueNAS and server building in general, and I still am not that familiar with the terminology. I came from a Drobo, and this initial setup is what I came up with for doing the rsync transfer of our database from the Drobo to the TrueNAS. If there is a better way to create the pool, I am open to options. And is it possible to reconfigure the pool without losing data?
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
Holy moly.
I should have said that within those 1500 client folders are a total of around 400,000 files.
To rule something out, try bumping the sysctl tunable to something like 34359738384
I will go ahead and give that a try.
After some typical usage over time (and browsing these folders), check back on the arc_summary. See if you notice the metadata amount increasing.
Is there anything else I can adjust or change? Do you think I should be adjusting my pool and adding a VDEV? Are they unbalanced and creating a bottleneck?
 
Joined
Oct 22, 2019
Messages
3,641
Is there anything else I can adjust or change? Do you think I should be adjusting my pool and adding a VDEV? Are they unbalanced and creating a bottleneck?
One thing at a time. If you can leverage your RAM, you might not have to resort to rebuilding a pool or vdevs.

If you notice your "Metadata current" size increasing, it will hint that the metadata demands are higher than you first suspected.
 

jbarry14

Explorer
Joined
May 23, 2022
Messages
56
One thing at a time. If you can leverage your RAM, you might not have to resort to rebuilding a pool or vdevs.

If you notice your "Metadata current" size increasing, it will hint that the metadata demands are higher than you first suspected.
OK, I will start with that. I appreciate your help and advice. Thank you.
 