SOLVED L2ARC size is >1TB? Also poor performance.


gusgus

Dabbler
Joined
Sep 15, 2014
Messages
13
About a year ago I noticed a drop in my SMB share performance. If I recall correctly, I didn't think much of it at the time because I had just updated FreeNAS to the latest release (I think it was a minor release update, like 9.10-Ux to 9.10-Uy, but I'm not sure). I assumed the L2ARC cache would rebuild itself and performance would return to normal. When it didn't, I chalked it up to some SMB performance regression. However, it never resolved itself.

Recently I happened to check the ZFS reports and saw that the L2ARC size is way oversized (3.5 TB average?? See the data below) and that my hit ratio has plummeted from 90% to ~20%. I understand that the hit ratio change indicates a shift in usage (more users and/or more files being accessed randomly), and usage has indeed been trending that way somewhat, so that part makes sense to me.

However, the huge L2ARC size does not make sense to me, and I suspect there is some misconfiguration or a bug. Prior to Jan 2017 the reported L2ARC size was much closer to my real L2ARC device size (a 250 GB SSD).
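
Side note: one quick way I've been sanity-checking whether the reported size is even physically plausible is to compare the arcstats counters against the cache device capacity. This is just a rough sketch of mine, assuming the standard FreeBSD kstat names (kstat.zfs.misc.arcstats.l2_size and l2_asize); adjust if your build exposes different ones.

Code:
#!/usr/bin/env python
# Sanity check (assumes the standard FreeBSD arcstats kstat names):
# compare the L2ARC size ZFS reports against the physical cache device.
import subprocess

def kstat(name):
    """Read a numeric kstat via 'sysctl -n' and return it as an int."""
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

l2_size = kstat("kstat.zfs.misc.arcstats.l2_size")    # logical bytes cached in L2ARC
l2_asize = kstat("kstat.zfs.misc.arcstats.l2_asize")  # bytes actually allocated on the device
device_bytes = 250 * 1000 ** 3                        # nominal 250 GB SSD (adjust to your device)

print("l2_size : %7.1f GiB" % (l2_size / 2.0 ** 30))
print("l2_asize: %7.1f GiB" % (l2_asize / 2.0 ** 30))
if l2_size > device_bytes:
    print("Reported L2ARC size exceeds the physical device; the accounting looks wrong.")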

I have also observed the same performance drop on my secondary system, which was updated at the same time. It is interesting that the same drop occurred on a system with a completely different usage scenario (the exact same oversized L2ARC size, but a different ARC size because it has less RAM). This seems to suggest that L2ARC behavior changed completely in the upgrade.

I have not tweaked the ZFS parameters on either system; both should be at the stock configuration.

Questions:
  • Are my L2ARC sizes normal?
  • If not, what can I do to fix them?
  • I used to see a pretty good speed improvement with my L2ARC. If there has been a change, would it be best to tweak my L2ARC parameters or remove L2ARC altogether, given the data below? I have read on tweaking but am not sure which would be better to start with, if any tweaking should be done at all (I'm leaning towards none).
  • Since I noticed the huge L2ARC yesterday, I have been dumping arc_summary.py results every 30 minutes via a script (a sketch of that logging approach is below this list). I noticed that the L2ARC in the primary system was marked as degraded yesterday, but today, after a reboot, it says healthy (there was no email warning for a degraded array, nor a red light in the web GUI). It is a single 250 GB SSD and is not configured for mirroring or striping. smartctl long and short tests pass. Should I be concerned about a degraded L2ARC in arc_summary.py?
  • If so, why would it intermittently flip between degraded and healthy without the array ever being flagged as unhealthy?
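
For anyone curious, the dump script mentioned above is nothing fancy; it is roughly along these lines (the paths here are placeholders rather than my actual ones), invoked from cron every 30 minutes:

Code:
#!/usr/bin/env python
# Sketch of the periodic logger (placeholder paths): append a timestamped
# arc_summary.py dump to a log file; run from cron every 30 minutes, e.g.
#   */30 * * * * /usr/bin/env python /root/log_arc_summary.py
import datetime
import subprocess

ARC_SUMMARY = "/usr/local/www/freenasUI/tools/arc_summary.py"
LOGFILE = "/mnt/tank/logs/arc_summary.log"   # placeholder destination

output = subprocess.check_output(["python", ARC_SUMMARY]).decode("utf-8", "replace")
with open(LOGFILE, "a") as log:
    log.write("===== %s =====\n" % datetime.datetime.now().isoformat())
    log.write(output + "\n")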

Report Data - Primary System - Usage: SMB with 0 to 3 light users and 1 heavy user, with regular rsync activity
RRD Graphs (images):
https://drive.google.com/open?id=0B_39nqHr-Y4taThvSFR5NzdBMXM
https://drive.google.com/open?id=0B_39nqHr-Y4tUFdHQkVWQ1VzVGc

Output of /usr/local/www/freenasUI/tools/arc_summary.py:
Code:
System Memory:

  0.17%  80.00  MiB Active,  1.06%  506.04  MiB Inact
  2.30%  1.07  GiB Wired,  0.01%  4.38  MiB Cache
  96.47%  44.96  GiB Free,  0.00%  0  Bytes Gap

  Real Installed:  48.00  GiB
  Real Available:  99.78%  47.90  GiB
  Real Managed:  97.31%  46.61  GiB

  Logical Total:  48.00  GiB
  Logical Used:  5.29%  2.54  GiB
  Logical Free:  94.71%  45.46  GiB

Kernel Memory:  354.74  MiB
  Data:  92.19%  327.05  MiB
  Text:  7.81%  27.69  MiB

Kernel Memory Map:  46.61  GiB
  Size:  1.42%  678.55  MiB
  Free:  98.58%  45.95  GiB
  Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
  Storage pool Version:  5000
  Filesystem Version:  5
  Memory Throttle Count:  0

ARC Misc:
  Deleted:  54
  Mutex Misses:  0
  Evict Skips:  0

ARC Size:  0.69%  324.54  MiB
  Target Size: (Adaptive)  100.00% 45.61  GiB
  Min Size (Hard Limit):  12.50%  5.70  GiB
  Max Size (High Water):  8:1  45.61  GiB

ARC Size Breakdown:
  Recently Used Cache Size:  50.00%  22.80  GiB
  Frequently Used Cache Size:  50.00%  22.80  GiB

ARC Hash Breakdown:
  Elements Max:  12.33k
  Elements Current:  100.00% 12.33k
  Collisions:  260
  Chain Max:  1
  Chains:  5
  Page:  2
------------------------------------------------------------------------

ARC Total accesses:  101.17k
  Cache Hit Ratio:  41.63%  42.12k
  Cache Miss Ratio:  58.37%  59.06k
  Actual Hit Ratio:  34.73%  35.14k

  Data Demand Efficiency:  67.95%  32.05k
  Data Prefetch Efficiency:  4.83%  352

  CACHE HITS BY CACHE LIST:
  Anonymously Used:  16.56%  6.97k
  Most Recently Used:  53.07%  22.35k
  Most Frequently Used:  30.38%  12.79k
  Most Recently Used Ghost:  0.00%  0
  Most Frequently Used Ghost:  0.00%  0

  CACHE HITS BY DATA TYPE:
  Demand Data:  51.70%  21.77k
  Prefetch Data:  0.04%  17
  Demand Metadata:  31.74%  13.37k
  Prefetch Metadata:  16.52%  6.96k

  CACHE MISSES BY DATA TYPE:
  Demand Data:  17.39%  10.27k
  Prefetch Data:  0.57%  335
  Demand Metadata:  75.86%  44.80k
  Prefetch Metadata:  6.18%  3.65k
  Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
  Passed Headroom:  90
  Tried Lock Failures:  133
  IO In Progress:  0
  Low Memory Aborts:  0
  Free on Write:  84
  Writes While Full:  0
  R/W Clashes:  0
  Bad Checksums:  0
  IO Errors:  0
  SPA Mismatch:  12.22m

L2 ARC Size: (Adaptive)  115.77  MiB
  Header Size:  0.00%  0  Bytes

L2 ARC Evicts:
  Lock Retries:  0
  Upon Reading:  0

L2 ARC Breakdown:  54.68k
  Hit Ratio:  0.00%  0
  Miss Ratio:  100.00% 54.68k
  Feeds:  10.61k

L2 ARC Buffer:
  Bytes Scanned:  693.89  GiB
  Buffer Iterations:  10.61k
  List Iterations:  42.44k
  NULL List Iterations:  195

L2 ARC Writes:
  Writes Sent:  100.00% 2.82k
  Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:  1.51m
  Hit Ratio:  0.41%  6.23k
  Miss Ratio:  99.59%  1.50m

  Page:  5
------------------------------------------------------------------------

  Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
  kern.maxusers  3401
  vm.kmem_size  50046877696
  vm.kmem_size_scale  1
  vm.kmem_size_min  0
  vm.kmem_size_max  1319413950874
  vfs.zfs.vol.unmap_enabled  1
  vfs.zfs.vol.mode  2
  vfs.zfs.sync_pass_rewrite  2
  vfs.zfs.sync_pass_dont_compress  5
  vfs.zfs.sync_pass_deferred_free  2
  vfs.zfs.zio.dva_throttle_enabled  1
  vfs.zfs.zio.exclude_metadata  0
  vfs.zfs.zio.use_uma  1
  vfs.zfs.zil_slog_limit  786432
  vfs.zfs.cache_flush_disable  0
  vfs.zfs.zil_replay_disable  0
  vfs.zfs.version.zpl  5
  vfs.zfs.version.spa  5000
  vfs.zfs.version.acl  1
  vfs.zfs.version.ioctl  7
  vfs.zfs.debug  0
  vfs.zfs.super_owner  0
  vfs.zfs.min_auto_ashift  12
  vfs.zfs.max_auto_ashift  13
  vfs.zfs.vdev.queue_depth_pct  1000
  vfs.zfs.vdev.write_gap_limit  4096
  vfs.zfs.vdev.read_gap_limit  32768
  vfs.zfs.vdev.aggregation_limit  131072
  vfs.zfs.vdev.trim_max_active  64
  vfs.zfs.vdev.trim_min_active  1
  vfs.zfs.vdev.scrub_max_active  2
  vfs.zfs.vdev.scrub_min_active  1
  vfs.zfs.vdev.async_write_max_active  10
  vfs.zfs.vdev.async_write_min_active  1
  vfs.zfs.vdev.async_read_max_active  3
  vfs.zfs.vdev.async_read_min_active  1
  vfs.zfs.vdev.sync_write_max_active  10
  vfs.zfs.vdev.sync_write_min_active  10
  vfs.zfs.vdev.sync_read_max_active  10
  vfs.zfs.vdev.sync_read_min_active  10
  vfs.zfs.vdev.max_active  1000
  vfs.zfs.vdev.async_write_active_max_dirty_percent  60
  vfs.zfs.vdev.async_write_active_min_dirty_percent  30
  vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
  vfs.zfs.vdev.mirror.non_rotating_inc  0
  vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
  vfs.zfs.vdev.mirror.rotating_seek_inc  5
  vfs.zfs.vdev.mirror.rotating_inc  0
  vfs.zfs.vdev.trim_on_init  1
  vfs.zfs.vdev.larger_ashift_minimal  0
  vfs.zfs.vdev.bio_delete_disable  0
  vfs.zfs.vdev.bio_flush_disable  0
  vfs.zfs.vdev.cache.bshift  16
  vfs.zfs.vdev.cache.size  0
  vfs.zfs.vdev.cache.max  16384
  vfs.zfs.vdev.metaslabs_per_vdev  200
  vfs.zfs.vdev.trim_max_pending  10000
  vfs.zfs.txg.timeout  5
  vfs.zfs.trim.enabled  1
  vfs.zfs.trim.max_interval  1
  vfs.zfs.trim.timeout  30
  vfs.zfs.trim.txg_delay  32
  vfs.zfs.space_map_blksz  4096
  vfs.zfs.spa_min_slop  134217728
  vfs.zfs.spa_slop_shift  5
  vfs.zfs.spa_asize_inflation  24
  vfs.zfs.deadman_enabled  1
  vfs.zfs.deadman_checktime_ms  5000
  vfs.zfs.deadman_synctime_ms  1000000
  vfs.zfs.debug_flags  0
  vfs.zfs.recover  0
  vfs.zfs.spa_load_verify_data  1
  vfs.zfs.spa_load_verify_metadata  1
  vfs.zfs.spa_load_verify_maxinflight  10000
  vfs.zfs.ccw_retry_interval  300
  vfs.zfs.check_hostid  1
  vfs.zfs.mg_fragmentation_threshold  85
  vfs.zfs.mg_noalloc_threshold  0
  vfs.zfs.condense_pct  200
  vfs.zfs.metaslab.bias_enabled  1
  vfs.zfs.metaslab.lba_weighting_enabled  1
  vfs.zfs.metaslab.fragmentation_factor_enabled  1
  vfs.zfs.metaslab.preload_enabled  1
  vfs.zfs.metaslab.preload_limit  3
  vfs.zfs.metaslab.unload_delay  8
  vfs.zfs.metaslab.load_pct  50
  vfs.zfs.metaslab.min_alloc_size  33554432
  vfs.zfs.metaslab.df_free_pct  4
  vfs.zfs.metaslab.df_alloc_threshold  131072
  vfs.zfs.metaslab.debug_unload  0
  vfs.zfs.metaslab.debug_load  0
  vfs.zfs.metaslab.fragmentation_threshold  70
  vfs.zfs.metaslab.gang_bang  16777217
  vfs.zfs.free_bpobj_enabled  1
  vfs.zfs.free_max_blocks  18446744073709551615
  vfs.zfs.no_scrub_prefetch  0
  vfs.zfs.no_scrub_io  0
  vfs.zfs.resilver_min_time_ms  3000
  vfs.zfs.free_min_time_ms  1000
  vfs.zfs.scan_min_time_ms  1000
  vfs.zfs.scan_idle  50
  vfs.zfs.scrub_delay  4
  vfs.zfs.resilver_delay  2
  vfs.zfs.top_maxinflight  32
  vfs.zfs.delay_scale  500000
  vfs.zfs.delay_min_dirty_percent  60
  vfs.zfs.dirty_data_sync  67108864
  vfs.zfs.dirty_data_max_percent  10
  vfs.zfs.dirty_data_max_max  4294967296
  vfs.zfs.dirty_data_max  4294967296
  vfs.zfs.max_recordsize  1048576
  vfs.zfs.zfetch.array_rd_sz  1048576
  vfs.zfs.zfetch.max_distance  8388608
  vfs.zfs.zfetch.min_sec_reap  2
  vfs.zfs.zfetch.max_streams  8
  vfs.zfs.prefetch_disable  0
  vfs.zfs.send_holes_without_birth_time  1
  vfs.zfs.mdcomp_disable  0
  vfs.zfs.nopwrite_enabled  1
  vfs.zfs.dedup.prefetch  1
  vfs.zfs.l2c_only_size  0
  vfs.zfs.mfu_ghost_data_esize  0
  vfs.zfs.mfu_ghost_metadata_esize  0
  vfs.zfs.mfu_ghost_size  0
  vfs.zfs.mfu_data_esize  7880704
  vfs.zfs.mfu_metadata_esize  564224
  vfs.zfs.mfu_size  18428416
  vfs.zfs.mru_ghost_data_esize  0
  vfs.zfs.mru_ghost_metadata_esize  0
  vfs.zfs.mru_ghost_size  0
  vfs.zfs.mru_data_esize  83743232
  vfs.zfs.mru_metadata_esize  7813632
  vfs.zfs.mru_size  291476992
  vfs.zfs.anon_data_esize  0
  vfs.zfs.anon_metadata_esize  0
  vfs.zfs.anon_size  32768
  vfs.zfs.l2arc_norw  1
  vfs.zfs.l2arc_feed_again  1
  vfs.zfs.l2arc_noprefetch  1
  vfs.zfs.l2arc_feed_min_ms  200
  vfs.zfs.l2arc_feed_secs  1
  vfs.zfs.l2arc_headroom  2
  vfs.zfs.l2arc_write_boost  8388608
  vfs.zfs.l2arc_write_max  8388608
  vfs.zfs.arc_meta_limit  12243283968
  vfs.zfs.arc_free_target  84755
  vfs.zfs.compressed_arc_enabled  1
  vfs.zfs.arc_shrink_shift  7
  vfs.zfs.arc_average_blocksize  8192
  vfs.zfs.arc_min  6121641984
  vfs.zfs.arc_max  48973135872
  Page:  7
------------------------------------------------------------------------


Report Data - Secondary System - Usage: Rsync Cold Storage
RRD Graphs:
https://drive.google.com/open?id=0B_39nqHr-Y4tVlNwTlg0RVJGdDA
https://drive.google.com/open?id=0B_39nqHr-Y4tRVBqM2ZEeHo0SFE

Output of /usr/local/www/freenasUI/tools/arc_summary.py:
Code:
System Memory:

  0.27%  84.41  MiB Active,  1.54%  490.43  MiB Inact
  2.90%  921.30  MiB Wired,  0.02%  6.62  MiB Cache
  95.27%  29.57  GiB Free,  0.00%  0  Bytes Gap

  Real Installed:  32.00  GiB
  Real Available:  99.57%  31.86  GiB
  Real Managed:  97.40%  31.04  GiB

  Logical Total:  32.00  GiB
  Logical Used:  6.08%  1.95  GiB
  Logical Free:  93.92%  30.05  GiB

Kernel Memory:  252.18  MiB
  Data:  89.02%  224.50  MiB
  Text:  10.98%  27.68  MiB

Kernel Memory Map:  31.04  GiB
  Size:  1.79%  567.73  MiB
  Free:  98.21%  30.48  GiB
  Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
  Storage pool Version:  5000
  Filesystem Version:  5
  Memory Throttle Count:  0

ARC Misc:
  Deleted:  53
  Mutex Misses:  0
  Evict Skips:  0

ARC Size:  0.89%  273.33  MiB
  Target Size: (Adaptive)  100.00% 30.04  GiB
  Min Size (Hard Limit):  12.50%  3.75  GiB
  Max Size (High Water):  8:1  30.04  GiB

ARC Size Breakdown:
  Recently Used Cache Size:  50.00%  15.02  GiB
  Frequently Used Cache Size:  50.00%  15.02  GiB

ARC Hash Breakdown:
  Elements Max:  8.68k
  Elements Current:  99.99%  8.68k
  Collisions:  677
  Chain Max:  1
  Chains:  14
  Page:  2
------------------------------------------------------------------------

ARC Total accesses:  82.21k
  Cache Hit Ratio:  78.54%  64.57k
  Cache Miss Ratio:  21.46%  17.64k
  Actual Hit Ratio:  74.24%  61.04k

  Data Demand Efficiency:  89.53%  49.98k
  Data Prefetch Efficiency:  7.03%  370

  CACHE HITS BY CACHE LIST:
  Anonymously Used:  5.47%  3.53k
  Most Recently Used:  65.05%  42.00k
  Most Frequently Used:  29.48%  19.03k
  Most Recently Used Ghost:  0.00%  0
  Most Frequently Used Ghost:  0.00%  0

  CACHE HITS BY DATA TYPE:
  Demand Data:  69.31%  44.75k
  Prefetch Data:  0.04%  26
  Demand Metadata:  25.22%  16.28k
  Prefetch Metadata:  5.43%  3.51k

  CACHE MISSES BY DATA TYPE:
  Demand Data:  29.65%  5.23k
  Prefetch Data:  1.95%  344
  Demand Metadata:  63.87%  11.27k
  Prefetch Metadata:  4.53%  800
  Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (HEALTHY)
  Passed Headroom:  0
  Tried Lock Failures:  38
  IO In Progress:  0
  Low Memory Aborts:  0
  Free on Write:  35
  Writes While Full:  0
  R/W Clashes:  0
  Bad Checksums:  0
  IO Errors:  0
  SPA Mismatch:  6.12m

L2 ARC Size: (Adaptive)  118.87  MiB
  Header Size:  0.00%  0  Bytes

L2 ARC Evicts:
  Lock Retries:  0
  Upon Reading:  0

L2 ARC Breakdown:  13.72k
  Hit Ratio:  0.00%  0
  Miss Ratio:  100.00% 13.72k
  Feeds:  11.41k

L2 ARC Buffer:
  Bytes Scanned:  374.60  GiB
  Buffer Iterations:  11.41k
  List Iterations:  45.62k
  NULL List Iterations:  250

L2 ARC Writes:
  Writes Sent:  100.00% 8.07k
  Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:  1.74m
  Hit Ratio:  1.12%  19.44k
  Miss Ratio:  98.88%  1.72m

  Page:  5
------------------------------------------------------------------------

  Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
  kern.maxusers  2375
  vm.kmem_size  33324769280
  vm.kmem_size_scale  1
  vm.kmem_size_min  0
  vm.kmem_size_max  1319413950874
  vfs.zfs.vol.unmap_enabled  1
  vfs.zfs.vol.mode  2
  vfs.zfs.sync_pass_rewrite  2
  vfs.zfs.sync_pass_dont_compress  5
  vfs.zfs.sync_pass_deferred_free  2
  vfs.zfs.zio.dva_throttle_enabled  1
  vfs.zfs.zio.exclude_metadata  0
  vfs.zfs.zio.use_uma  1
  vfs.zfs.zil_slog_limit  786432
  vfs.zfs.cache_flush_disable  0
  vfs.zfs.zil_replay_disable  0
  vfs.zfs.version.zpl  5
  vfs.zfs.version.spa  5000
  vfs.zfs.version.acl  1
  vfs.zfs.version.ioctl  7
  vfs.zfs.debug  0
  vfs.zfs.super_owner  0
  vfs.zfs.min_auto_ashift  12
  vfs.zfs.max_auto_ashift  13
  vfs.zfs.vdev.queue_depth_pct  1000
  vfs.zfs.vdev.write_gap_limit  4096
  vfs.zfs.vdev.read_gap_limit  32768
  vfs.zfs.vdev.aggregation_limit  131072
  vfs.zfs.vdev.trim_max_active  64
  vfs.zfs.vdev.trim_min_active  1
  vfs.zfs.vdev.scrub_max_active  2
  vfs.zfs.vdev.scrub_min_active  1
  vfs.zfs.vdev.async_write_max_active  10
  vfs.zfs.vdev.async_write_min_active  1
  vfs.zfs.vdev.async_read_max_active  3
  vfs.zfs.vdev.async_read_min_active  1
  vfs.zfs.vdev.sync_write_max_active  10
  vfs.zfs.vdev.sync_write_min_active  10
  vfs.zfs.vdev.sync_read_max_active  10
  vfs.zfs.vdev.sync_read_min_active  10
  vfs.zfs.vdev.max_active  1000
  vfs.zfs.vdev.async_write_active_max_dirty_percent  60
  vfs.zfs.vdev.async_write_active_min_dirty_percent  30
  vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
  vfs.zfs.vdev.mirror.non_rotating_inc  0
  vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
  vfs.zfs.vdev.mirror.rotating_seek_inc  5
  vfs.zfs.vdev.mirror.rotating_inc  0
  vfs.zfs.vdev.trim_on_init  1
  vfs.zfs.vdev.larger_ashift_minimal  0
  vfs.zfs.vdev.bio_delete_disable  0
  vfs.zfs.vdev.bio_flush_disable  0
  vfs.zfs.vdev.cache.bshift  16
  vfs.zfs.vdev.cache.size  0
  vfs.zfs.vdev.cache.max  16384
  vfs.zfs.vdev.metaslabs_per_vdev  200
  vfs.zfs.vdev.trim_max_pending  10000
  vfs.zfs.txg.timeout  5
  vfs.zfs.trim.enabled  1
  vfs.zfs.trim.max_interval  1
  vfs.zfs.trim.timeout  30
  vfs.zfs.trim.txg_delay  32
  vfs.zfs.space_map_blksz  4096
  vfs.zfs.spa_min_slop  134217728
  vfs.zfs.spa_slop_shift  5
  vfs.zfs.spa_asize_inflation  24
  vfs.zfs.deadman_enabled  1
  vfs.zfs.deadman_checktime_ms  5000
  vfs.zfs.deadman_synctime_ms  1000000
  vfs.zfs.debug_flags  0
  vfs.zfs.recover  0
  vfs.zfs.spa_load_verify_data  1
  vfs.zfs.spa_load_verify_metadata  1
  vfs.zfs.spa_load_verify_maxinflight  10000
  vfs.zfs.ccw_retry_interval  300
  vfs.zfs.check_hostid  1
  vfs.zfs.mg_fragmentation_threshold  85
  vfs.zfs.mg_noalloc_threshold  0
  vfs.zfs.condense_pct  200
  vfs.zfs.metaslab.bias_enabled  1
  vfs.zfs.metaslab.lba_weighting_enabled  1
  vfs.zfs.metaslab.fragmentation_factor_enabled  1
  vfs.zfs.metaslab.preload_enabled  1
  vfs.zfs.metaslab.preload_limit  3
  vfs.zfs.metaslab.unload_delay  8
  vfs.zfs.metaslab.load_pct  50
  vfs.zfs.metaslab.min_alloc_size  33554432
  vfs.zfs.metaslab.df_free_pct  4
  vfs.zfs.metaslab.df_alloc_threshold  131072
  vfs.zfs.metaslab.debug_unload  0
  vfs.zfs.metaslab.debug_load  0
  vfs.zfs.metaslab.fragmentation_threshold  70
  vfs.zfs.metaslab.gang_bang  16777217
  vfs.zfs.free_bpobj_enabled  1
  vfs.zfs.free_max_blocks  18446744073709551615
  vfs.zfs.no_scrub_prefetch  0
  vfs.zfs.no_scrub_io  0
  vfs.zfs.resilver_min_time_ms  3000
  vfs.zfs.free_min_time_ms  1000
  vfs.zfs.scan_min_time_ms  1000
  vfs.zfs.scan_idle  50
  vfs.zfs.scrub_delay  4
  vfs.zfs.resilver_delay  2
  vfs.zfs.top_maxinflight  32
  vfs.zfs.delay_scale  500000
  vfs.zfs.delay_min_dirty_percent  60
  vfs.zfs.dirty_data_sync  67108864
  vfs.zfs.dirty_data_max_percent  10
  vfs.zfs.dirty_data_max_max  4294967296
  vfs.zfs.dirty_data_max  3421345382
  vfs.zfs.max_recordsize  1048576
  vfs.zfs.zfetch.array_rd_sz  1048576
  vfs.zfs.zfetch.max_distance  8388608
  vfs.zfs.zfetch.min_sec_reap  2
  vfs.zfs.zfetch.max_streams  8
  vfs.zfs.prefetch_disable  0
  vfs.zfs.send_holes_without_birth_time  1
  vfs.zfs.mdcomp_disable  0
  vfs.zfs.nopwrite_enabled  1
  vfs.zfs.dedup.prefetch  1
  vfs.zfs.l2c_only_size  0
  vfs.zfs.mfu_ghost_data_esize  0
  vfs.zfs.mfu_ghost_metadata_esize  0
  vfs.zfs.mfu_ghost_size  0
  vfs.zfs.mfu_data_esize  7620608
  vfs.zfs.mfu_metadata_esize  391680
  vfs.zfs.mfu_size  14747136
  vfs.zfs.mru_ghost_data_esize  0
  vfs.zfs.mru_ghost_metadata_esize  0
  vfs.zfs.mru_ghost_size  0
  vfs.zfs.mru_data_esize  73773056
  vfs.zfs.mru_metadata_esize  955904
  vfs.zfs.mru_size  259650560
  vfs.zfs.anon_data_esize  0
  vfs.zfs.anon_metadata_esize  0
  vfs.zfs.anon_size  32768
  vfs.zfs.l2arc_norw  1
  vfs.zfs.l2arc_feed_again  1
  vfs.zfs.l2arc_noprefetch  1
  vfs.zfs.l2arc_feed_min_ms  200
  vfs.zfs.l2arc_feed_secs  1
  vfs.zfs.l2arc_headroom  2
  vfs.zfs.l2arc_write_boost  8388608
  vfs.zfs.l2arc_write_max  8388608
  vfs.zfs.arc_meta_limit  8062756864
  vfs.zfs.arc_free_target  56452
  vfs.zfs.compressed_arc_enabled  1
  vfs.zfs.arc_shrink_shift  7
  vfs.zfs.arc_average_blocksize  8192
  vfs.zfs.arc_min  4031378432
  vfs.zfs.arc_max  32251027456
  Page:  7
------------------------------------------------------------------------
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
gusgus said:
About a year ago I noticed a drop in my SMB share performance. If I recall correctly, I didn't think much of it at the time because I had just updated FreeNAS to the latest release (I think it was a minor release update, like 9.10-Ux to 9.10-Uy, but I'm not sure).

Is 9.10.2-U1 the version you are running right now (or a version from about that time)? Maybe you are seeing the effects of an old bug.
https://bugs.freenas.org/issues/19953
 

gusgus

Dabbler
Joined
Sep 15, 2014
Messages
13
Ah yes, this looks exactly like my problem! Good catch! :) I will update my secondary system to 11.0-U4, since the bug was fixed in 11.0-RC. I never expected that a bug in 9.10.2 would push me onto 11.0 :) Cheers!

Update: Updating from 9.10.2-U6 to 11.0-U4 via the GUI did get rid of the L2ARC sizing issue, and performance seems better. I'll keep testing a while longer on my secondary server before taking the plunge on the primary.
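
While I keep testing, I'm also watching the L2ARC hit ratio directly instead of waiting for the RRD graphs to accumulate. This is only a rough sketch, assuming the usual FreeBSD arcstats counters (l2_hits/l2_misses); adjust the sampling window to taste.

Code:
#!/usr/bin/env python
# Rough monitor (assumes the standard FreeBSD arcstats kstats): sample the
# L2ARC hit/miss counters twice and report the hit ratio over the interval.
import subprocess
import time

def kstat(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

def l2_counters():
    return (kstat("kstat.zfs.misc.arcstats.l2_hits"),
            kstat("kstat.zfs.misc.arcstats.l2_misses"))

h0, m0 = l2_counters()
time.sleep(300)                                   # five-minute sampling window
h1, m1 = l2_counters()

hits, misses = h1 - h0, m1 - m0
total = hits + misses
ratio = 100.0 * hits / total if total else 0.0
print("L2ARC over the last 5 min: %d hits, %d misses (%.1f%% hit ratio)" % (hits, misses, ratio))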
 