32GB or 64GB RAM for fastest 10Gbps SMB performance (Build Advice)

Teeps · Dabbler · Joined Sep 13, 2015 · Messages: 37
Hi,

I am building a new FreeNAS server and upgrading from a 1Gb to a 10Gb NIC. I mostly do 3D design work, rendering thousands of PNG files to the NAS and then re-importing those files into video editing software. I also use Photoshop, read large 3D files, and handle assorted production tasks.

My current build is X10SL7-F / E3-1230 v3 / 32GB; it will become a backup NAS after I build the new one. I'll also likely leave CrashPlan / Plex / Transmission running on this machine.

Considering this for the new build:
X10SL7-F
E3-1265L V3 (small chassis, potential thermal constraints)
32GB ECC
6 x 12TB WD Red Plus - RAID10 (~27TB) (quietest/coolest drives - studio apartment)
2 x SSD (jails / zil / L2ARC?)
Chelsio T520-CR 10Gb SFP+ card

My question: with the RAID10 pool and the 10Gb connection, can I significantly increase SMB performance by shelling out for an X11 board with 64GB of RAM and adding an L2ARC, or will I already be getting close to maximum single-user performance with the X10 setup?

Here's my current ARC performance. CrashPlan/Plex/Transmission may heavily skew this data one way or another, so if I don't run those services on the new server, I would guess it will look quite different.

Thanks for any thoughts on this. My first instinct is to just build the X10 version and experiment with using a couple of SSDs as an SLOG and a metadata-only L2ARC.
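If I go that route, this is roughly what I'd try from the shell. A minimal sketch only, assuming a hypothetical pool named tank, a renders dataset, and placeholder GPT labels for the two SSDs (none of these names are from my actual build, and in FreeNAS this would normally be done through the GUI):

Code:
# Add one SSD as a dedicated SLOG (log vdev) -- hypothetical device label
zpool add tank log gpt/slog0

# Add the other SSD as an L2ARC (cache vdev) -- hypothetical device label
zpool add tank cache gpt/l2arc0

# Restrict L2ARC to metadata only for the dataset holding the renders
zfs set secondarycache=metadata tank/renders

# Confirm the new vdev layout
zpool status tank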


Code:
root@freenas:~ # arc_summary.py
System Memory:

        0.98%   311.24  MiB Active,     45.71%  14.22   GiB Inact
        47.97%  14.92   GiB Wired,      0.00%   0       Bytes Cache
        2.32%   740.34  MiB Free,       3.02%   963.40  MiB Gap  

        Real Installed:                         32.00   GiB
        Real Available:                 99.74%  31.92   GiB
        Real Managed:                   97.47%  31.11   GiB

        Logical Total:                          32.00   GiB
        Logical Used:                   53.31%  17.06   GiB
        Logical Free:                   46.69%  14.94   GiB

Kernel Memory:                                  703.04  MiB
        Data:                           93.47%  657.11  MiB
        Text:                           6.53%   45.93   MiB

Kernel Memory Map:                              31.11   GiB
        Size:                           6.79%   2.11    GiB
        Free:                           93.21%  28.99   GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                1.76b
        Mutex Misses:                           1.43m
        Evict Skips:                            1.43m

ARC Size:                               52.05%  10.99   GiB
        Target Size: (Adaptive)         52.05%  10.99   GiB
        Min Size (Hard Limit):          17.83%  3.76    GiB
        Max Size (High Water):          5:1     21.11   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       67.19%  7.38    GiB
        Frequently Used Cache Size:     32.81%  3.60    GiB

ARC Hash Breakdown:
        Elements Max:                           1.58m
        Elements Current:               34.64%  547.87k
        Collisions:                             303.80m
        Chain Max:                              7
        Chains:                                 32.66k
                                                                Page:  2
------------------------------------------------------------------------

ARC Total accesses:                                     21.49b
        Cache Hit Ratio:                90.63%  19.47b
        Cache Miss Ratio:               9.37%   2.01b
        Actual Hit Ratio:               90.42%  19.43b

        Data Demand Efficiency:         98.36%  6.86b
        Data Prefetch Efficiency:       1.73%   1.57b

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           24.57%  4.79b
          Most Frequently Used:         75.19%  14.64b
          Most Recently Used Ghost:     0.13%   25.27m
          Most Frequently Used Ghost:   0.15%   29.85m

        CACHE HITS BY DATA TYPE:
          Demand Data:                  34.63%  6.74b
          Prefetch Data:                0.14%   27.18m
          Demand Metadata:              64.92%  12.64b
          Prefetch Metadata:            0.32%   61.35m

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  5.60%   112.63m
          Prefetch Data:                76.91%  1.55b
          Demand Metadata:              15.98%  321.63m
          Prefetch Metadata:            1.52%   30.56m
                                                                Page:  3
------------------------------------------------------------------------

                                                                Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:                        6.40b
        Hit Ratio:                      24.67%  1.58b
        Miss Ratio:                     75.33%  4.82b

                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           2378
        vm.kmem_size                            33401757696
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.vol.immediate_write_sz          32768
        vfs.zfs.vol.unmap_sync_enabled          0
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.recursive                   0
        vfs.zfs.vol.mode                        2
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.dva_throttle_enabled        1
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.zio.taskq_batch_pct             75
        vfs.zfs.zil_maxblocksize                131072
        vfs.zfs.zil_slog_bulk                   786432
        vfs.zfs.zil_nocacheflush                0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   7
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.immediate_write_sz              32768
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.standard_sm_blksz               131072
        vfs.zfs.dtl_sm_blksz                    4096
        vfs.zfs.min_auto_ashift                 12
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.def_queue_depth            32
        vfs.zfs.vdev.queue_depth_pct            1000
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit_non_rotating131072
        vfs.zfs.vdev.aggregation_limit          1048576
        vfs.zfs.vdev.initializing_max_active    1
        vfs.zfs.vdev.initializing_min_active    1
        vfs.zfs.vdev.removal_max_active         2
        vfs.zfs.vdev.removal_min_active         1
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent60
        vfs.zfs.vdev.async_write_active_min_dirty_percent30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.validate_skip              0
        vfs.zfs.vdev.max_ms_shift               38
        vfs.zfs.vdev.default_ms_shift           29
        vfs.zfs.vdev.max_ms_count_limit         131072
        vfs.zfs.vdev.min_ms_count               16
        vfs.zfs.vdev.max_ms_count               200
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.space_map_ibs                   14
        vfs.zfs.spa_allocators                  4
        vfs.zfs.spa_min_slop                    134217728
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            60000
        vfs.zfs.deadman_synctime_ms             600000
        vfs.zfs.debug_flags                     0
        vfs.zfs.debugflags                      0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.max_missing_tvds_scan           0
        vfs.zfs.max_missing_tvds_cachefile      2
        vfs.zfs.max_missing_tvds                0
        vfs.zfs.spa_load_print_vdev_tree        0
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab_sm_blksz               4096
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold70
        vfs.zfs.metaslab.force_ganging          16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 18446744073709551615
        vfs.zfs.zfs_scan_checkpoint_interval    7200
        vfs.zfs.zfs_scan_legacy                 0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync_pct             20
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  3426974924
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.default_ibs                     15
        vfs.zfs.default_bs                      9
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_idistance            67108864
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.send_holes_without_birth_time   1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.per_txg_dirty_frees_percent     30
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.dbuf_cache_lowater_pct          10
        vfs.zfs.dbuf_cache_hiwater_pct          10
        vfs.zfs.dbuf_metadata_cache_overflow    0
        vfs.zfs.dbuf_metadata_cache_shift       6
        vfs.zfs.dbuf_cache_shift                5
        vfs.zfs.dbuf_metadata_cache_max_bytes   505125248
        vfs.zfs.dbuf_cache_max_bytes            1010250496
        vfs.zfs.arc_min_prescient_prefetch_ms   6
        vfs.zfs.arc_min_prefetch_ms             1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_esize            1418575872
        vfs.zfs.mfu_ghost_metadata_esize        4478765568
        vfs.zfs.mfu_ghost_size                  5897341440
        vfs.zfs.mfu_data_esize                  4280783872
        vfs.zfs.mfu_metadata_esize              3986432
        vfs.zfs.mfu_size                        4791363584
        vfs.zfs.mru_ghost_data_esize            479363072
        vfs.zfs.mru_ghost_metadata_esize        5419480576
        vfs.zfs.mru_ghost_size                  5898843648
        vfs.zfs.mru_data_esize                  3625588736
        vfs.zfs.mru_metadata_esize              212480
        vfs.zfs.mru_size                        5920862720
        vfs.zfs.anon_data_esize                 0
        vfs.zfs.anon_metadata_esize             0
        vfs.zfs.anon_size                       1080832
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  5666084864
        vfs.zfs.arc_free_target                 173772
        vfs.zfs.arc_kmem_cache_reap_retry_ms    1000
        vfs.zfs.compressed_arc_enabled          1
        vfs.zfs.arc_grow_retry                  60
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_no_grow_shift               5
        vfs.zfs.arc_min                         4041001984
        vfs.zfs.arc_max                         22664339456
        vfs.zfs.abd_chunk_size                  4096
        vfs.zfs.abd_scatter_enabled             1
                                                                Page:  7
------------------------------------------------------------------------
 

Morris · Contributor · Joined Nov 21, 2020 · Messages: 120
I'm running 10Gb with 32GB and I'm happy with the performance. My workload is mostly sequential, so I can't compare directly to yours. Try dropping your 10Gb card into your existing server and see how it performs.
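When you test, one way to separate raw network throughput from pool throughput is an iperf3 run between the workstation and the NAS. A generic sketch, with 192.168.1.10 as a placeholder address for the NAS:

Code:
# On the FreeNAS box (server side)
iperf3 -s

# On the workstation (client side), a 10-second test toward the placeholder address
iperf3 -c 192.168.1.10 -t 10

# Reverse direction (NAS sends, workstation receives)
iperf3 -c 192.168.1.10 -t 10 -R

If iperf3 shows close to line rate in both directions, any remaining gap in SMB transfers is on the pool or protocol side rather than the network.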
 

Jessep · Patron · Joined Aug 19, 2018 · Messages: 379
iSCSI would likely perform better than SMB.
RAM (i.e. ARC instead of L2ARC) is always better.
L2ARC isn't really useful until you have 64GB or more RAM, as the L2ARC's header entries themselves use RAM.
Set the minimum metadata size for your ARC larger than the default (see the sketch after this list).
Based on your stated workflow, you would likely be much better served by a working pool on high-performance NVMe SSDs with good endurance, exporting to a separate spinning-disk pool when a project is done.
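For the metadata suggestion, a minimal sketch of how that might look from the shell on FreeBSD-based FreeNAS. The ~8 GiB floor is only an illustrative value, and the vfs.zfs.arc_meta_min tunable should be confirmed against your release before relying on it:

Code:
# Check the current ARC metadata limit and minimum
sysctl vfs.zfs.arc_meta_limit vfs.zfs.arc_meta_min

# Raise the metadata floor to ~8 GiB (8589934592 bytes) for this boot
sysctl vfs.zfs.arc_meta_min=8589934592

To make it persistent, add it as a sysctl tunable in the GUI (System -> Tunables) rather than setting it by hand at the console.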
 

Jessep · Patron · Joined Aug 19, 2018 · Messages: 379
E3-1265L V3 (small chassis, potential thermal constraints)
This ("L" CPU) doesn't work the way you think it does.
In addition, other than during scrubs, your TrueNAS CPU usage will likely be very low (1-5%).
 

Teeps · Dabbler · Joined Sep 13, 2015 · Messages: 37
Bump
I have a similar question about 32 GB vs 64 GB performance differences.
I went with the cheaper option and recreated the same NAS with the X10SL7-F and 32GB of RAM, and it works very well for my basic use case. Massive boost in network transfer rates.
 