Would setting sync off on an SMB share improve its speed?

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
Would I be able in this case to add a SLOG? Because from what I understand, a SLOG is no good with SMB, which is usually async.
I'm kind of lost here. If you can give me a direction, I will gladly read it.

Ryzen 3 3200G 3.6GHz, Radeon Vega 4, AM4
ASUS TUF B450
G.Skill Aegis 2×8GB DDR4-3000 CL16
2× WD Red 1TB (I think they are 5400 RPM)
Transcend SSD
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Quoting from here:


always = ZFS will sync every write to disk immediately, even if ESXi (or whatever app) doesn't ask for it.
standard = ZFS will sync writes when the app asks for it (ESXi always syncs, at least on NFS)
disabled = ZFS won't sync writes whether asked or not. In case of a crash, you will lose a few seconds of writes.

If you have an SSD SLOG, it will be used to sync the write.


I think that sets it out relatively well... apologies to anyone in the forum here who wrote the same in a different or better way, but this was the first google result.

SMB shares will not be requesting sync writes, so if the shared dataset was not set to sync=always (remembering the default is standard), async writes are already the case. Setting it to disabled will change nothing, and it would eliminate the impact of a SLOG in any case.

Perhaps adding another VDEV and more RAM to your 16GB would be the best chance to speed things up.
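
If you want to confirm what the dataset is actually set to, something like this from the shell will show it (the pool/dataset names below are just examples, substitute your own):

Code:
# default is "standard"; SMB clients generally won't request sync writes anyway
zfs get sync tank/smbshare

# or check every dataset in the pool at once
zfs get -r sync tank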
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
SMB asks for sync writes when accessed from a Mac. Sync disabled is a good way to test whether sync was the reason for write slowness.
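
A minimal way to run that test, assuming the share lives on a dataset called tank/office (substitute your own names), would be:

Code:
# note the current value first (probably "standard")
zfs get sync tank/office

# disable sync just for the test; a crash during the test can lose in-flight writes
zfs set sync=disabled tank/office

# ...repeat the slow save / file copy from a client and time it...

# put it back when done
zfs set sync=standard tank/office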

More RAM helps with read speed for small files.

We really need some specifics. What is slow? Client OS, access pattern (tens of thousands of small files? Fewer big files? Read or write? If read, size of the typical dataset), observed throughput, desired throughput, for starters.

Once you define your issue clearly, you may find you are halfway to a solution.
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
Quoting from here:


always = ZFS will sync every write to disk immediately, even if ESXi (or whatever app) doesn't ask for it.
standard = ZFS will sync writes when the app asks for it (ESXi always syncs, at least on NFS)
disabled = ZFS won't sync writes whether asked or not. In case of a crash, you will lose a few seconds of writes.

If you have an SSD SLOG, it will be used to sync the write.


I think that sets it out relatively well... apologies to anyone in the forum here who wrote the same in a different or better way, but this was the first google result.

SMB shares will not be requesting sync writes, so if the shared dataset was not set to sync=always (remembering the default is standard), async writes are already the case. Setting it to disabled will change nothing, and it would eliminate the impact of a SLOG in any case.

Perhaps adding another VDEV and more RAM to your 16GB would be the best chance to speed things up.

The whole set of files weighs just 30G, so I thought 16GB of memory would be enough. Maybe I should replace the hard drives with SSDs?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The whole set of files weighs just 30G, so I thought 16GB of memory would be enough. Maybe I should replace the hard drives with SSDs?

As @Yorick was suggesting, we need to understand a bit more about what you think slow is and how you're set up before making any more recommendations of what will help.

RAM may help if you have a large volume of smaller files in that 30GB. SSD may help with transfer speeds or IOPS if those are the bottlenecks (maybe it's network or some other issue).
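
If you want to rule the network in or out before spending anything, a raw throughput test is cheap. A rough sketch, assuming iperf3 is available on both ends (the address below is just an example for the NAS):

Code:
# on the FreeNAS box: start a listener
iperf3 -s

# on a Windows 10 client:
iperf3 -c 192.168.1.10 -t 30

# a healthy link should show close to its rated speed (~940 Mbit/s for Gbit,
# ~94 Mbit/s for 100 Mbit); much less points at cabling, duplex or switch
# problems rather than the pool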
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
SMB asks for sync writes when accessed from a Mac. Sync disabled is a good way to test whether sync was the reason for write slowness.

More RAM helps with read speed for small files.

We really need some specifics. What is slow? Client OS, access pattern (tens of thousands of small files? Fewer big files? Read or write? If read, size of the typical dataset), observed throughput, desired throughput, for starters.

Once you define your issue clearly, you may find you are halfway to a solution.
On the client side there are Win10 systems, so I understand they do not ask for sync writes. The whole dataset is about 30G, so I thought it would do; the previous server had only 8 and it did not hang. It's an accounting office. I'm not sure about the transfers. They have exe files on the server which are somehow used on all the client sides. They also don't really have a database: only one client at a time can write to a certain profile. I'm not sure how big the files being loaded are, but I think there are big Excel files which can weigh 3-4GB all in all. The system runs OK most of the time, but sometimes between read operations it takes a few seconds to complete the operation; it is not running smoothly. I did check the speed of the LAN and it was fine (~95 Mbit/s). I wonder if I should just replace the hard drives with SSDs (that would be at my expense, because it's my mistake, I guess).
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
As @Yorick was suggesting, we need to understand a bit more about what you think slow is and how you're set up before making any more recommendations of what will help.

RAM may help if you have a large volume of smaller files in that 30GB. SSD may help with transfer speeds or IOPS if those are the bottlenecks (maybe it's network or some other issue).
I think there are many small files for their clients (it's an accounting office). I just thought they were all being loaded together. Usually they will work on one client at a time, but in some programs they load big Excel files. I can't say for sure because I don't work there; I'm just the computer guy (a bad one, obviously). I elaborated a bit more in my reply to @Yorick.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Alright, it's getting a little clearer. You have issues with read being slow at times, over a GBit link. There are many small files, so we're expecting some seek. You think the dataset will fit into 30GiB. You have 16GiB now for system and ARC, so about 12GiB of ARC.

Let's see whether it's an access pattern issue. From CLI, "zfs-stats -a", and take a look at the ARC statistics. You can "code" insert that here. That'll tell you whether more ARC (more RAM) is the answer.
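
For example, from an SSH session (exact tool names vary a little between FreeNAS versions, so treat this as a sketch):

Code:
# either of these prints the ARC summary
zfs-stats -a
arc_summary.py

# the raw counters are also available directly if neither tool is installed
sysctl kstat.zfs.misc.arcstats | grep -E 'hits|misses'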

Second, you have chosen a, shall we say, idiosyncratic motherboard. This has a Realtek® RTL8111H Ethernet chip, which is known to Not Be Awesome in FreeBSD. Quick and simple test: Try an Intel NIC add-in card and see whether that helps - at all, a lot, a little, TBD, but worth a test.

Top NICs: https://www.servethehome.com/buyers...as-servers/top-picks-freenas-nics-networking/

Lastly, this is really out there, but you have a Ryzen, so: Turn off C6 in BIOS. Sometimes called CoolNQuiet. See https://www.ixsystems.com/community/threads/frustrated-with-amd-ryzen-stability-on-11-2-u5.78263/

Some folk had better success with power state changes than pointing fingers at Realtek drivers.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
For others coming across this thread, while Ryzen support is getting better in FreeBSD, it's still not awesome. And as a rule, a consumer board without IPMI is not a great choice, not even in home use, and certainly not for an office. A conservative and affordable build is some form of SuperMicro current-gen X11, X11SCL-F, X11SCM-F, X11SCH-F depending on number of SATA ports desired. And a Pentium G5400, if file serve is all the unit is doing; or a Xeon E-2224 if it needs to do light (!) virtualization. Plus 32GiB or so of ECC memory.

Lest I seem too Intel-biased, there is of course a good AMD Epyc line: https://www.supermicro.com/en/products/aplus/solutions/SP3 . That's a different price point, though.
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
Alright, it's getting a little clearer. You have issues with read being slow at times, over a GBit link. There are many small files, so we're expecting some seek. You think the dataset will fit into 30GiB. You have 16GiB now for system and ARC, so about 12GiB of ARC.

Let's see whether it's an access pattern issue. From CLI, "zfs-stats -a", and take a look at the ARC statistics. You can "code" insert that here. That'll tell you whether more ARC (more RAM) is the answer.

Second, you have chosen a, shall we say, idiosyncratic motherboard. This has a Realtek® RTL8111H Ethernet chip, which is known to Not Be Awesome in FreeBSD. Quick and simple test: Try an Intel NIC add-in card and see whether that helps - at all, a lot, a little, TBD, but worth a test.

Top NICs: https://www.servethehome.com/buyers...as-servers/top-picks-freenas-nics-networking/

Lastly, this is really out there, but you have a Ryzen, so: Turn off C6 in BIOS. Sometimes called CoolNQuiet. See https://www.ixsystems.com/community/threads/frustrated-with-amd-ryzen-stability-on-11-2-u5.78263/

Some folk had better success with power state changes than pointing fingers at Realtek drivers.
Thank you for this thorough answer!
The link is not Gbit but 100 Mbit, limited by the cables and the switch (Cisco).
The dataset size on disk is 30GiB.
Is the problem only with the NIC (I wanted to buy an Intel NIC, but they were much more expensive)? Or is it the motherboard? I'm thinking of maybe buying a whole new computer and selling this one.
When I'm at the office I will try to run this command.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The link is not Gbit but 100 Mbit, limited by the cables and the switch (Cisco).

That makes it unlikely that moving to SSD will help you.
It'll also be good to do a basic scrub of the networking - just make sure there aren't any duplex mismatches / half-duplex links anywhere between the FreeNAS server and the test workstation.

I'd definitely look at the power state settings - C6 and "CoolNQuiet" will both interfere.

As for an alternative build, I gave my recommendations above. I'd find out where the bottleneck is before spending large on server-grade hardware, however.

An Intel i210 adapter is 30 bucks (example: https://www.newegg.com/startech-st1000spexi/p/N82E16833114139?Description=intel i210&cm_re=intel_i210-_-33-114-139-_-Product&quicklink=true), that seems a reasonable outlay if smartctl and adjusting power states get you nowhere.
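
Checking for duplex trouble only takes a minute; roughly (interface names are examples - re0 for the onboard Realtek, em0/igb0 for an Intel card):

Code:
# on FreeNAS: the "media:" line should read 100baseTX <full-duplex> (or 1000baseT)
ifconfig re0

# on the Cisco switch: per-port speed/duplex summary
show interfaces status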
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
That makes it unlikely that moving to SSD will help you.
It'll also be good to do a basic scrub of the networking - just make sure there aren't any duplex mismatches / half-duplex links anywhere between the FreeNAS server and the test workstation.

I'd definitely look at the power state settings - C6 and "CoolNQuiet" will both interfere.

As for an alternative build, I gave my recommendations above. I'd find out where the bottleneck is before spending large on server-grade hardware, however.

An Intel i210 adapter is 30 bucks (example: https://www.newegg.com/startech-st1000spexi/p/N82E16833114139?Description=intel i210&cm_re=intel_i210-_-33-114-139-_-Product&quicklink=true), that seems a reasonable outlay if smartctl and adjusting power states get you nowhere.
So I was at the office today, and what I found is that the workload comes nowhere near the full capacity of the server. I wasn't able to run all the tools recommended here (probably because I don't have much experience with the CLI), but I did see that there were 7G of free memory (and of course 0 swap), services using only 1.6G, and CPU load at 0-3% max. Under System there was something about processes: the mean was 0.23, the long-term was around 0.2, and the peak was 0.6. I also had support check the router; it did run some unnecessary processes (some checks), but after that they said everything seems fine. I also changed the CoolNQuiet setting to Typical Current Idle. After all that, they still get 1-2 second hangs after trying to save a small change. It doesn't happen every time, but about once every 10 or so saves. I must say there was hardly any workload that day: there were 5 workers, but only about 2 were active, and it even happened when just 1 worker was trying to save, so it doesn't seem to be about load.
The previous server had only 2GB of DDR2 (and I imagine its CPU is quite weak), and it ran quite smoothly.

I ran arc_summary.py and here are the results:
Code:
Last login: Tue Apr 21 13:37:34 on pts/1
FreeBSD 11.3-RELEASE-p6 (FreeNAS.amd64) #0 r325575+d5b100edfcb(HEAD): Fri Feb 21 18:53:26 UTC 2020

        FreeNAS (c) 2009-2020, The FreeNAS Development Team
        All rights reserved.
        FreeNAS is released under the modified BSD license.

        For more information, documentation, help or support, go here:
        http://freenas.org
Welcome to FreeNAS

Warning: settings changed through the CLI are not written to
the configuration database and will be reset on reboot.

root@GR-SRV[~]# arc_summary.py
System Memory:

        1.27%   175.41  MiB Active,     9.31%   1.25    GiB Inact
        32.93%  4.44    GiB Wired,      0.00%   0       Bytes Cache
        56.36%  7.60    GiB Free,       0.13%   18.59   MiB Gap

        Real Installed:                         14.00   GiB
        Real Available:                 99.19%  13.89   GiB
        Real Managed:                   97.04%  13.48   GiB

        Logical Total:                          14.00   GiB
        Logical Used:                   36.79%  5.15    GiB
        Logical Free:                   63.21%  8.85    GiB

Kernel Memory:                                  349.07  MiB
        Data:                           86.84%  303.14  MiB
        Text:                           13.16%  45.93   MiB

Kernel Memory Map:                              13.48   GiB
        Size:                           9.10%   1.23    GiB
        Free:                           90.90%  12.25   GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                15
        Mutex Misses:                           0
        Evict Skips:                            0

ARC Size:                               18.73%  2.34    GiB
        Target Size: (Adaptive)         100.00% 12.48   GiB
        Min Size (Hard Limit):          12.50%  1.56    GiB
        Max Size (High Water):          8:1     12.48   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       50.00%  6.24    GiB
        Frequently Used Cache Size:     50.00%  6.24    GiB

ARC Hash Breakdown:
        Elements Max:                           225.24k
        Elements Current:               99.99%  225.22k
        Collisions:                             383.36k
        Chain Max:                              4
        Chains:                                 11.28k
                                                                Page:  2
------------------------------------------------------------------------

ARC Total accesses:                                     113.58m
        Cache Hit Ratio:                97.28%  110.50m
        Cache Miss Ratio:               2.72%   3.09m
        Actual Hit Ratio:               97.26%  110.47m

        Data Demand Efficiency:         91.58%  2.46m
        Data Prefetch Efficiency:       92.78%  15.84k

        CACHE HITS BY CACHE LIST:
          Anonymously Used:             0.02%   26.76k
          Most Recently Used:           4.69%   5.18m
          Most Frequently Used:         95.29%  105.29m
          Most Recently Used Ghost:     0.00%   0
          Most Frequently Used Ghost:   0.00%   0

        CACHE HITS BY DATA TYPE:
          Demand Data:                  2.04%   2.25m
          Prefetch Data:                0.01%   14.69k
          Demand Metadata:              97.68%  107.94m
          Prefetch Metadata:            0.27%   294.71k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  6.70%   206.82k
          Prefetch Data:                0.04%   1.14k
          Demand Metadata:              86.68%  2.67m
          Prefetch Metadata:            6.58%   202.91k
                                                                Page:  3
------------------------------------------------------------------------

                                                                Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:                        22.15m
        Hit Ratio:                      0.66%   146.54k
        Miss Ratio:                     99.34%  22.00m

                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           1224
        vm.kmem_size                            14470336512
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.vol.immediate_write_sz          32768
        vfs.zfs.vol.unmap_sync_enabled          0
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.recursive                   0
        vfs.zfs.vol.mode                        2
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.dva_throttle_enabled        1
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.zio.taskq_batch_pct             75
        vfs.zfs.zil_slog_bulk                   786432
        vfs.zfs.zil_nocacheflush                0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   7
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.immediate_write_sz              32768
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.standard_sm_blksz               131072
        vfs.zfs.dtl_sm_blksz                    4096
        vfs.zfs.min_auto_ashift                 12
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.def_queue_depth            32
        vfs.zfs.vdev.queue_depth_pct            1000
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit_non_rotating131072
        vfs.zfs.vdev.aggregation_limit          1048576
        vfs.zfs.vdev.initializing_max_active    1
        vfs.zfs.vdev.initializing_min_active    1
        vfs.zfs.vdev.removal_max_active         2
        vfs.zfs.vdev.removal_min_active         1
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent60
        vfs.zfs.vdev.async_write_active_min_dirty_percent30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.validate_skip              0
        vfs.zfs.vdev.max_ms_shift               38
        vfs.zfs.vdev.default_ms_shift           29
        vfs.zfs.vdev.max_ms_count_limit         131072
        vfs.zfs.vdev.min_ms_count               16
        vfs.zfs.vdev.max_ms_count               200
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.space_map_ibs                   14
        vfs.zfs.spa_allocators                  4
        vfs.zfs.spa_min_slop                    134217728
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            60000
        vfs.zfs.deadman_synctime_ms             600000
        vfs.zfs.debug_flags                     0
        vfs.zfs.debugflags                      0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.max_missing_tvds_scan           0
        vfs.zfs.max_missing_tvds_cachefile      2
        vfs.zfs.max_missing_tvds                0
        vfs.zfs.spa_load_print_vdev_tree        0
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab_sm_blksz               4096
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold70
        vfs.zfs.metaslab.force_ganging          16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 18446744073709551615
        vfs.zfs.zfs_scan_checkpoint_interval    7200
        vfs.zfs.zfs_scan_legacy                 0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync_pct             20
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  1491105792
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.default_ibs                     15
        vfs.zfs.default_bs                      9
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_idistance            67108864
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.send_holes_without_birth_time   1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.per_txg_dirty_frees_percent     30
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.dbuf_cache_lowater_pct          10
        vfs.zfs.dbuf_cache_hiwater_pct          10
        vfs.zfs.dbuf_metadata_cache_overflow    0
        vfs.zfs.dbuf_metadata_cache_shift       6
        vfs.zfs.dbuf_cache_shift                5
        vfs.zfs.dbuf_metadata_cache_max_bytes   209321792
        vfs.zfs.dbuf_cache_max_bytes            418643584
        vfs.zfs.arc_min_prescient_prefetch_ms   6
        vfs.zfs.arc_min_prefetch_ms             1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_esize            0
        vfs.zfs.mfu_ghost_metadata_esize        0
        vfs.zfs.mfu_ghost_size                  0
        vfs.zfs.mfu_data_esize                  232592896
        vfs.zfs.mfu_metadata_esize              795225088
        vfs.zfs.mfu_size                        1581663232
        vfs.zfs.mru_ghost_data_esize            0
        vfs.zfs.mru_ghost_metadata_esize        0
        vfs.zfs.mru_ghost_size                  0
        vfs.zfs.mru_data_esize                  161916928
        vfs.zfs.mru_metadata_esize              44965888
        vfs.zfs.mru_size                        456947712
        vfs.zfs.anon_data_esize                 0
        vfs.zfs.anon_metadata_esize             0
        vfs.zfs.anon_size                       5029888
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  3349148672
        vfs.zfs.arc_free_target                 75313
        vfs.zfs.arc_kmem_cache_reap_retry_ms    1000
        vfs.zfs.compressed_arc_enabled          1
        vfs.zfs.arc_grow_retry                  60
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Your cache misses are mostly metadata... there may be some benefit to adding L2ARC in metadata only mode to assist with that. https://www.ixsystems.com/community/threads/l2arc-device-recommendation.76010/
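
If you decide to try it, the rough shape of the commands is (pool/dataset and device names are examples only, and the cache device can be removed again later):

Code:
# keep only metadata in L2ARC for the shared dataset
zfs set secondarycache=metadata tank/smbshare

# add a small SSD as a cache (L2ARC) device
zpool add tank cache ada2

# removing a cache device again is non-destructive
zpool remove tank ada2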

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Your ARC hit ratio is great, so more RAM won't help. Edit: Or maybe it will, because metadata is read back in after a save? Hmm. @sretalla has a point. I'd be cautious with L2ARC though, as that will use RAM as well. If you want to test with L2ARC, make sure it's small - 40GB would be a great SSD size to use for testing.

After all that, they still get 1-2 second hangs after trying to save a small change.

Okay, so writing, not reading. Getting closer still. ARC is not relevant to writing. From Win10, so unless you've changed the SMB share or dataset to force sync (what is the sync setting on there?), sync will be off.

What's the model number of those WD Reds? We've not looked at DM-SMR because 1TB drives don't use that, but let's just make sure.
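
You can read the model strings straight off the running box, something like (device names are examples - check which ones your pool actually uses):

Code:
# list the disks FreeBSD sees
camcontrol devlist

# full identify info, including the exact model number, for one drive
smartctl -i /dev/ada0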
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
Your cache misses are mostly metadata... there may be some benefit to adding L2ARC in metadata only mode to assist with that. https://www.ixsystems.com/community/threads/l2arc-device-recommendation.76010/
Thank you. What might be the reason for those misses? Honestly, I might be asking a stupid question, but misses sound to me like something that shouldn't happen. Is the system out of sync or something?
 

tomerg

Dabbler
Joined
Mar 9, 2020
Messages
37
Your ARC hit ratio is great, so more RAM won't help. Edit: Or maybe it will, because metadata is read back in after a save? Hmm. @sretalla has a point. I'd be cautious with L2ARC though, as that will use RAM as well. If you want to test with L2ARC, make sure it's small - 40GB would be a great SSD size to use for testing.



Okay, so writing, not reading. Getting closer still. ARC is not relevant to writing. From Win10, so unless you've changed the SMB share or dataset to force sync (what is the sync setting on there?), sync will be off.

What's the model number of those WD Reds? We've not looked at DM-SMR because 1TB drives don't use that, but let's just make sure.
Thank you.
So I kind of have to dive into this, but may I ask: does this mean the NIC is good? By the way, I live in Israel and not everything is available, especially in these times, so I hope I can find an SSD that small (or should I resize it myself?).
As for the WD, it is a Caviar Red WD10EFRX 1TB.
 