FreeNAS 9.3.1 L2ARC on 40GB partition - reporting 88GB size?


abhaxus
Cadet
A while back I acquired a 240GB Seagate 600 Pro to play around with L2ARC and SLOG on my FreeNAS box. Most of the time I use the box just as a home media server, but I occasionally use it as an iSCSI target for a Hyper-V host. I wasn't really concerned about performance; I just wanted to learn.

Yesterday I was looking at the Reporting tab in the WebGUI and noticed that it was reporting my L2ARC size as over 80GB. This is strange, because the partition is only 40GB. I added the partition using its UUID, following this guide: https://clinta.github.io/FreeNAS-Multipurpose-SSD/
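
To rule out the wrong partition being attached, one way to confirm what ZFS is actually using as the cache vdev is to map the gptid shown by zpool status back to a partition. A rough sketch, with the pool name (tank) and the gptid as placeholders:

Code:
# The cache device appears under "cache" in the pool layout, listed by gptid
zpool status tank

# Map that gptid back to a partition name such as ada0p2
glabel status | grep <gptid-from-zpool-status>

# Confirm that partition's size
gpart show ada0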

My FreeNAS box: latest 9.3.1
AMD FX 8320 underclocked/undervolted
Asus Crosshair V Formula-Z
32GB Crucial ECC (4x8GB DDR3L UDIMM)
M1015 (P20 firmware)
HP SAS Expander
Norco 4220
Seagate 600 Pro 240GB (attached to motherboard SATA port)
  40GB partition for L2ARC
  12GB partition for SLOG
12x Seagate 4TB NAS HDD
  (2 six-disk RAIDZ2 vdevs in one pool)
lz4 compression is enabled on the pool, but the data is mostly incompressible (video, music, pictures).
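
Since the data is mostly already-compressed media, it's easy to confirm how little lz4 is actually saving; a minimal check, assuming the pool is named tank:

Code:
# A value near 1.00x means compression has little effect on this data
zfs get compressratio tank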

Output of ./arcstat.py -f l2size:
Code:
l2size
89G


Output of gpart show ada0:
Code:
=>        34  468862061  ada0  GPT  (223G)
          34         94        - free -  (47K)
         128   25165824     1  freebsd-zfs  (12G)
    25165952   83886080     2  freebsd-zfs  (40G)
   109052032  359810063        - free -  (171G)
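
As a sanity check, the 40G figure can be recomputed from the sector counts above (512-byte sectors), which confirms the partition really is 40 GiB and that the reported 89G is more than double the physical space:

Code:
# 83886080 sectors * 512 bytes = 40 GiB
echo $((83886080 * 512 / 1024 / 1024 / 1024))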


I did see a post on the FreeNAS forums dating back to 9.1 where someone noted a similar error, but there was no resolution beyond the usual insults and a discussion of how improperly sized his L2ARC was. There was some thought that the L2ARC stats might be reflecting compression, but that was deemed inaccurate for 9.1.x, and the command suggested for checking the actual allocated size (arcstat.py -f l2asize) does not work in 9.3.1 either.
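
Since arcstat.py -f l2asize isn't available here, a possible workaround is to read the raw kernel counters directly; these should be the same stats the script wraps:

Code:
# l2_size  = logical (uncompressed) bytes the L2ARC thinks it holds
# l2_asize = bytes actually allocated on the cache device
sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_asize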

Apologies if this has come up elsewhere; I've done about an hour of googling and found nothing.

I guess my main (and only) question is: is this something I should be concerned about, or is it just normal behavior? I don't really have any need for the cache/log devices with my current use case.
 

Adrian
Contributor
I have the same problem with FreeNAS-9.10.2-U1 (86c7ef5) on a new FreeNAS Mini XL with iXsystems log and cache devices and 8 x WD 6TB Red disks: a 116G cache device is currently showing an l2asize of 2.5T.

Any suggestions as to how I can help? I could set up remote access (ssh+pubkey) for an expert to investigate.

At present I am bulk-copying data to it via robocopy on Windows from an overly full FreeNAS Mini, so the ARC stats are of course appalling. I understand that an L2ARC can be counterproductive; I do make some use of NFS, and I might be better off using the cache device as a mirror for the log, as they appear to be identical devices.
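
If I do end up repurposing the cache SSD as a log mirror, my understanding is the rough sequence would be something like the following (pool name and gptids are placeholders, and this is untested):

Code:
# Drop the cache vdev (non-destructive; it only holds copies of pool data)
zpool remove tank gptid/<cache-partition-uuid>

# Attach the freed device to the existing log device to form a mirrored log
zpool attach tank gptid/<log-partition-uuid> gptid/<cache-partition-uuid>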

Various configuration details are below; some are voluminous.
arcstat
camcontrol devlist
gpart show
arc_summary.py, which shows L2ARC DEGRADED (could this be something similar to https://bugs.freenas.org/issues/3418, i.e. L2 compression IO errors?) - a quick cross-check against the on-device allocation is sketched right after this list
sysctl kstat.zfs.misc.arcstats
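
As a quick cross-check of the inflated figure, the space ZFS reports as allocated on the cache device can be compared with the raw L2ARC counters; the former cannot exceed the device's ~116G capacity. Pool name (tank) is a placeholder:

Code:
# Per-vdev capacity and allocation, including the cache device
zpool iostat -v tank

# The raw counters behind the multi-terabyte figure
sysctl kstat.zfs.misc.arcstats.l2_size kstat.zfs.misc.arcstats.l2_asize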

Code:
[root@freenasxl ~]# arcstat.py -f l2asize
l2asize
   2.5T


Code:
[root@freenasxl ~]# camcontrol devlist
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus1 target 0 lun 0 (pass1,ada1)
<2.5" SATA SSD 3MG2-P M150821>     at scbus2 target 0 lun 0 (pass2,ada2) (LOG)
<2.5" SATA SSD 3MG2-P M150821>     at scbus3 target 0 lun 0 (pass3,ada3) (CACHE)
<16GB SATA Flash Drive SFDK002A>   at scbus4 target 0 lun 0 (pass4,ada4)
<Marvell Console 1.01>             at scbus9 target 0 lun 0 (pass5)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus10 target 0 lun 0 (pass6,ada5)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus11 target 0 lun 0 (pass7,ada6)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus12 target 0 lun 0 (pass8,ada7)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus13 target 0 lun 0 (pass9,ada8)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus14 target 0 lun 0 (pass10,ada9)
<WDC WD60EFRX-68L0BN1 82.00A82>    at scbus15 target 0 lun 0 (pass11,ada10)


Code:
[root@freenasxl ~]# gpart show ada2
=>       34  242255597  ada2  GPT  (116G)
         34         94        - free -  (47K)
        128  242255496     1  freebsd-zfs  (116G)
  242255624          7        - free -  (3.5K)

[root@freenasxl ~]# gpart show ada3
=>       34  242255597  ada3  GPT  (116G)
         34         94        - free -  (47K)
        128  242255496     1  freebsd-zfs  (116G)
  242255624          7        - free -  (3.5K)


Code:
[root@freenasxl ~]# arc_summary.py
System Memory:

        0.13%   40.78   MiB Active,     2.17%   691.07  MiB Inact
        69.84%  21.70   GiB Wired,      0.00%   0       Bytes Cache
        27.86%  8.66    GiB Free,       0.00%   0       Bytes Gap

        Real Installed:                         32.00   GiB
        Real Available:                 99.82%  31.94   GiB
        Real Managed:                   97.29%  31.08   GiB

        Logical Total:                          32.00   GiB
        Logical Used:                   70.83%  22.67   GiB
        Logical Free:                   29.17%  9.33    GiB

Kernel Memory:                                  301.86  MiB
        Data:                           90.83%  274.18  MiB
        Text:                           9.17%   27.68   MiB

Kernel Memory Map:                              31.08   GiB
        Size:                           60.36%  18.76   GiB
        Free:                           39.64%  12.32   GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                36.43m
        Mutex Misses:                           36.09k
        Evict Skips:                            36.09k

ARC Size:                               119.50% 35.94   GiB
        Target Size: (Adaptive)         100.00% 30.08   GiB
        Min Size (Hard Limit):          12.50%  3.76    GiB
        Max Size (High Water):          8:1     30.08   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       48.59%  17.46   GiB
        Frequently Used Cache Size:     51.41%  18.48   GiB

ARC Hash Breakdown:
        Elements Max:                           21.30m
        Elements Current:               100.00% 21.30m
        Collisions:                             58.06m
        Chain Max:                              18
        Chains:                                 4.04m
                                                                Page:  2
------------------------------------------------------------------------

ARC Total accesses:                                     366.86m
        Cache Hit Ratio:                0.31%   1.12m
        Cache Miss Ratio:               99.69%  365.74m
        Actual Hit Ratio:               0.30%   1.10m

        Data Demand Efficiency:         1.99%   22.80m
        Data Prefetch Efficiency:       28.29%  813

        CACHE HITS BY CACHE LIST:
          Most Recently Used:           94.53%  1.06m
          Most Frequently Used:         4.06%   45.48k
          Most Recently Used Ghost:     1.60%   17.90k
          Most Frequently Used Ghost:   21.42%  240.00k

        CACHE HITS BY DATA TYPE:
          Demand Data:                  40.46%  453.35k
          Prefetch Data:                0.02%   230
          Demand Metadata:              58.13%  651.29k
          Prefetch Metadata:            1.39%   15.53k

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  6.11%   22.34m
          Prefetch Data:                0.00%   583
          Demand Metadata:              93.89%  343.40m
          Prefetch Metadata:            0.00%   1.58k
                                                                Page:  3
------------------------------------------------------------------------

L2 ARC Summary: (DEGRADED)
        Passed Headroom:                        378.00k
        Tried Lock Failures:                    11.34k
        IO In Progress:                         479
        Low Memory Aborts:                      13
        Free on Write:                          167.83k
        Writes While Full:                      365.52k
        R/W Clashes:                            0
        Bad Checksums:                          41.43k
        IO Errors:                              0
        SPA Mismatch:                           23.14m

L2 ARC Size: (Adaptive)                         2.51    TiB
        Header Size:                    0.07%   1.72    GiB

L2 ARC Evicts:
        Lock Retries:                           194
        Upon Reading:                           0

L2 ARC Breakdown:                               365.74m
        Hit Ratio:                      0.02%   82.94k
        Miss Ratio:                     99.98%  365.65m
        Feeds:                                  406.97k

L2 ARC Buffer:
        Bytes Scanned:                          17.78   TiB
        Buffer Iterations:                      406.97k
        List Iterations:                        1.63m
        NULL List Iterations:                   745.58k

L2 ARC Writes:
        Writes Sent:                    100.00% 402.79k
                                                                Page:  4
------------------------------------------------------------------------

DMU Prefetch Efficiency:                        1.12b
        Hit Ratio:                      3.25%   36.34m
        Miss Ratio:                     96.75%  1.08b

                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           2380
        vm.kmem_size                            33369219072
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.mode                        2
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.dva_throttle_enabled        1
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.zil_slog_limit                  786432
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   7
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.queue_depth_pct            1000
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent  60
        vfs.zfs.vdev.async_write_active_min_dirty_percent  30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.larger_ashift_minimal      0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.space_map_blksz                 4096
        vfs.zfs.spa_min_slop                    134217728
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.debug_flags                     0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled  1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold  70
        vfs.zfs.metaslab.gang_bang              16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 18446744073709551615
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  3429916262
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_distance             8388608
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.send_holes_without_birth_time   1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_esize            432815104
        vfs.zfs.mfu_ghost_metadata_esize        1084612096
        vfs.zfs.mfu_ghost_size                  1517427200
        vfs.zfs.mfu_data_esize                  0
        vfs.zfs.mfu_metadata_esize              0
        vfs.zfs.mfu_size                        5742592
        vfs.zfs.mru_ghost_data_esize            30801920
        vfs.zfs.mru_ghost_metadata_esize        15867030016
        vfs.zfs.mru_ghost_size                  15897831936
        vfs.zfs.mru_data_esize                  16170037760
        vfs.zfs.mru_metadata_esize              2496000
        vfs.zfs.mru_size                        16397537792
        vfs.zfs.anon_data_esize                 0
        vfs.zfs.anon_metadata_esize             0
        vfs.zfs.anon_size                       46077952
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  8073869312
        vfs.zfs.arc_free_target                 56518
        vfs.zfs.compressed_arc_enabled          1
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         4036934656
        vfs.zfs.arc_max                         32295477248
                                                                Page:  7
------------------------------------------------------------------------


Code:
[root@freenasxl /usr/local/www/freenasUI/tools]# sysctl kstat.zfs.misc.arcstats
kstat.zfs.misc.arcstats.demand_hit_predictive_prefetch: 307
kstat.zfs.misc.arcstats.sync_wait_for_async: 127
kstat.zfs.misc.arcstats.arc_meta_min: 2018467328
kstat.zfs.misc.arcstats.arc_meta_max: 2451952296
kstat.zfs.misc.arcstats.arc_meta_limit: 8073869312
kstat.zfs.misc.arcstats.arc_meta_used: 2417005512
kstat.zfs.misc.arcstats.memory_throttle_count: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 773623
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 1684188
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 19935036013568
kstat.zfs.misc.arcstats.l2_write_pios: 416857
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 421047
kstat.zfs.misc.arcstats.l2_write_full: 378443
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 9057
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 489
kstat.zfs.misc.arcstats.l2_write_in_l2: 125560950
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 23445765
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 379155
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 11675
kstat.zfs.misc.arcstats.l2_padding_needed: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 1924117448
kstat.zfs.misc.arcstats.l2_asize: 2842632573440
kstat.zfs.misc.arcstats.l2_size: 2874728680448
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 42734
kstat.zfs.misc.arcstats.l2_abort_lowmem: 13
kstat.zfs.misc.arcstats.l2_free_on_write: 174593
kstat.zfs.misc.arcstats.l2_evict_l1cached: 11240
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 194
kstat.zfs.misc.arcstats.l2_writes_lock_retry: 313
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_done: 416857
kstat.zfs.misc.arcstats.l2_writes_sent: 416857
kstat.zfs.misc.arcstats.l2_write_bytes: 3354007813120
kstat.zfs.misc.arcstats.l2_read_bytes: 143234560
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_feeds: 421047
kstat.zfs.misc.arcstats.l2_misses: 372911182
kstat.zfs.misc.arcstats.l2_hits: 90528
kstat.zfs.misc.arcstats.mfu_ghost_evictable_metadata: 1119464448
kstat.zfs.misc.arcstats.mfu_ghost_evictable_data: 440291840
kstat.zfs.misc.arcstats.mfu_ghost_size: 1559756288
kstat.zfs.misc.arcstats.mfu_evictable_metadata: 0
kstat.zfs.misc.arcstats.mfu_evictable_data: 0
kstat.zfs.misc.arcstats.mfu_size: 7636992
kstat.zfs.misc.arcstats.mru_ghost_evictable_metadata: 15966968320
kstat.zfs.misc.arcstats.mru_ghost_evictable_data: 19660800
kstat.zfs.misc.arcstats.mru_ghost_size: 15986629120
kstat.zfs.misc.arcstats.mru_evictable_metadata: 3386880
kstat.zfs.misc.arcstats.mru_evictable_data: 16057168384
kstat.zfs.misc.arcstats.mru_size: 16308777472
kstat.zfs.misc.arcstats.anon_evictable_metadata: 0
kstat.zfs.misc.arcstats.anon_evictable_data: 0
kstat.zfs.misc.arcstats.anon_size: 26392064
kstat.zfs.misc.arcstats.other_size: 217762176
kstat.zfs.misc.arcstats.metadata_size: 204475392
kstat.zfs.misc.arcstats.data_size: 36639651840
kstat.zfs.misc.arcstats.hdr_size: 70650496
kstat.zfs.misc.arcstats.overhead_size: 135770624
kstat.zfs.misc.arcstats.uncompressed_size: 16534946816
kstat.zfs.misc.arcstats.compressed_size: 16207035904
kstat.zfs.misc.arcstats.size: 39056657352
kstat.zfs.misc.arcstats.c_max: 32295477248
kstat.zfs.misc.arcstats.c_min: 4036934656
kstat.zfs.misc.arcstats.c: 32295477248
kstat.zfs.misc.arcstats.p: 18752217369
kstat.zfs.misc.arcstats.hash_chain_max: 19
kstat.zfs.misc.arcstats.hash_chains: 4061577
kstat.zfs.misc.arcstats.hash_collisions: 60381800
kstat.zfs.misc.arcstats.hash_elements_max: 22178047
kstat.zfs.misc.arcstats.hash_elements: 22178046
kstat.zfs.misc.arcstats.evict_l2_skip: 1940
kstat.zfs.misc.arcstats.evict_l2_ineligible: 69519360
kstat.zfs.misc.arcstats.evict_l2_eligible: 1552698425856
kstat.zfs.misc.arcstats.evict_l2_cached: 3399798332928
kstat.zfs.misc.arcstats.evict_not_enough: 3150332
kstat.zfs.misc.arcstats.evict_skip: 22524132
kstat.zfs.misc.arcstats.mutex_miss: 39813
kstat.zfs.misc.arcstats.deleted: 37633676
kstat.zfs.misc.arcstats.allocated: 424552841
kstat.zfs.misc.arcstats.mfu_ghost_hits: 269371
kstat.zfs.misc.arcstats.mfu_hits: 46583
kstat.zfs.misc.arcstats.mru_ghost_hits: 22678
kstat.zfs.misc.arcstats.mru_hits: 1098011
kstat.zfs.misc.arcstats.prefetch_metadata_misses: 4893
kstat.zfs.misc.arcstats.prefetch_metadata_hits: 20138
kstat.zfs.misc.arcstats.prefetch_data_misses: 587
kstat.zfs.misc.arcstats.prefetch_data_hits: 230
kstat.zfs.misc.arcstats.demand_metadata_misses: 350209363
kstat.zfs.misc.arcstats.demand_metadata_hits: 675795
kstat.zfs.misc.arcstats.demand_data_misses: 22791045
kstat.zfs.misc.arcstats.demand_data_hits: 468027
kstat.zfs.misc.arcstats.misses: 373005890
kstat.zfs.misc.arcstats.hits: 1164190
 

Adrian
Contributor
Thanks.

I have found complaints on a Linux ZFS forum that when a similar problem affects the L2ARC it badly impairs performance, so I have removed the cache device.

It looks like the upstream FreeBSD issue is fixed, in CURRENT at least.

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=216178 (long) contains:

A commit references this bug:

Author: avg
Date: Sat Feb 25 17:03:49 UTC 2017
New revision: 314274
URL: https://svnweb.freebsd.org/changeset/base/314274

Log:
l2arc: try to fix write size calculation broken by Compressed ARC commit

While there, make a change to not evict a first buffer outside the
requested eviciton range.

To do:
- give more consistent names to the size variables
- upstream to OpenZFS

PR: 216178
Reported by: lev
Tested by: lev
MFC after: 2 weeks

Changes:
head/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
 