Odd l2arc size?


ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I added an L2ARC to see if I could reduce the busy time on the drives. My FreeNAS box is a seedbox.


Code:
kristoffer@freenas:~ % zpool status tank
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 9h49m with 0 errors on Sun Jan 17 09:49:07 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/68cf78ef-8016-11e5-ac06-000c2975a9ee  ONLINE       0     0     0
            gptid/6b4ac5c1-8016-11e5-ac06-000c2975a9ee  ONLINE       0     0     0
            gptid/6dbfbdba-8016-11e5-ac06-000c2975a9ee  ONLINE       0     0     0
            gptid/7040d2ea-8016-11e5-ac06-000c2975a9ee  ONLINE       0     0     0
            gptid/72c9b0f2-8016-11e5-ac06-000c2975a9ee  ONLINE       0     0     0
            gptid/754e37fa-8016-11e5-ac06-000c2975a9ee  ONLINE       0     0     0
        cache
          gptid/6fe8f9ce-c6f1-11e5-adf2-000c2975a9ee    ONLINE       0     0     0

errors: No known data errors
kristoffer@freenas:~ % arcstat.py -f l2size
    l2size
      622G
kristoffer@freenas:~ %

How come the L2ARC is reported as 622 GB?!

Code:
kristoffer@freenas:~ % gpart show | grep da1
=>        34  188743613  da1  GPT  (90G)

da1 at mpt0 bus 0 scbus2 target 1 lun 0
da1: <VMware Virtual disk 1.0> Fixed Direct Access SCSI-2 device
da1: Serial Number 6000c29eccfe5afbe8219d62e5256d4c
da1: 320.000MB/s transfers (160.000MHz DT, offset 127, 16bit)
da1: Command Queueing enabled
da1: 92160MB (188743680 512 byte sectors: 255H 63S/T 11748C)
da1: quirks=0x40<RETRY_BUSY>


What have I done? :)
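For what it's worth, the CAM probe above pins down the real device size: 188743680 sectors of 512 bytes is exactly 90 GiB, matching the (90G) label from gpart. A quick arithmetic check, using only numbers copied from the output above:

```python
# Sanity-check the cache device size from the gpart/CAM output above.
SECTORS = 188_743_680   # from "da1: 92160MB (188743680 512 byte sectors ...)"
SECTOR_SIZE = 512       # bytes per sector

size_bytes = SECTORS * SECTOR_SIZE
size_gib = size_bytes / 2**30   # GiB, as FreeBSD's "(90G)" label counts

print(size_bytes)  # 96636764160
print(size_gib)    # 90.0
```

So whatever arcstat.py is reporting, the backing device really is 90 GiB.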
 

Attachments

  • l2arc.PNG
  • l2arczpool.PNG

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Are we supposed to guess how big your L2ARC device is?

My guess is you added a cache device that is 800GB. Am I close? Did I win? :smile:
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I pasted the logs; there you can see it's 90 GB. No need to guess :)

Sent from my HTC One via Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What does "sysctl kstat.zfs.misc.arcstats.l2_size" say?

arc_summary.py does a better job of producing output anyways. Run it and post the output in CODE tags.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
All I can think of is that you're somehow mistaken. If you look at the attached graph, I would say that it looks like you had a 90G L2ARC device up until the wee hours of Saturday morning, then removed it, and now there's something else there. If you screwed around with the NAS from the CLI, perhaps you somehow screwed up the L2ARC. What device is gptid/6fe8f9ce-c6f1-11e5-adf2-000c2975a9ee on?
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I had a 70G L2ARC before via VMware; I just increased the disk size, removed the old L2ARC, and added this one.

Code:
sysctl kstat.zfs.misc.arcstats.l2_size
kstat.zfs.misc.arcstats.l2_size: 1055413339136

L2 ARC Size: (Adaptive)                         982.92  GiB
        Header Size:                    0.19%   1.85    GiB
Really odd.
I added the disk via the GUI; maybe I'll remove it and re-add it via the CLI. There is only that one 90 GB disk, carved out of a 240 GB SSD via VMware; the other six 4 TB disks are the raidz2 tank pool.
So if it's not a bug and the reported size is correct, then the cache would have to be the tank pool itself.

But da1 is the VM disk, as you can see from the logs too. So the two reports contradict each other.
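For scale, the raw kstat value quoted in this thread can be converted by hand. This sketch (using the byte count from the sysctl output and the 90 GiB device size from gpart) shows the kernel thinks the L2ARC holds roughly 983 GiB, almost eleven times the size of the backing device:

```python
# Convert the raw kstat byte count into GiB and compare with the device.
L2_SIZE_BYTES = 1_055_413_339_136   # kstat.zfs.misc.arcstats.l2_size
DEVICE_GIB = 90                     # actual da1 size, from gpart show

l2_gib = L2_SIZE_BYTES / 2**30
print(round(l2_gib, 2))               # 982.93, in line with arc_summary's 982.92 GiB
print(round(l2_gib / DEVICE_GIB, 1))  # 10.9 -- ~11x larger than the device
```

That an in-memory counter can exceed the physical device like this is what suggests stale accounting from the removed cache device rather than real cached data.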
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Code:
System Memory:

        1.33%   342.99  MiB Active,     3.53%   909.25  MiB Inact
        86.54%  21.79   GiB Wired,      0.03%   7.99    MiB Cache
        8.56%   2.15    GiB Free,       0.02%   4.98    MiB Gap

        Real Installed:                         28.00   GiB
        Real Available:                 92.77%  25.97   GiB
        Real Managed:                   96.95%  25.18   GiB

        Logical Total:                          28.00   GiB
        Logical Used:                   89.11%  24.95   GiB
        Logical Free:                   10.89%  3.05    GiB

Kernel Memory:                                  802.67  MiB
        Data:                           97.05%  778.96  MiB
        Text:                           2.95%   23.71   MiB

Kernel Memory Map:                              25.99   GiB
        Size:                           67.61%  17.57   GiB
        Free:                           32.39%  8.42    GiB
                                                                Page:  1
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                45.21m
        Recycle Misses:                         7.40m
        Mutex Misses:                           3.89k
        Evict Skips:                            3.89k

ARC Size:                               100.00% 18.72   GiB
        Target Size: (Adaptive)         100.00% 18.72   GiB
        Min Size (Hard Limit):          12.50%  2.34    GiB
        Max Size (High Water):          8:1     18.72   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       13.08%  2.45    GiB
        Frequently Used Cache Size:     86.92%  16.27   GiB
        IO In Progress:                         1.41k
        Low Memory Aborts:                      4
        Free on Write:                          889.04k
        Writes While Full:                      160.19k
        R/W Clashes:                            3.23k
        Bad Checksums:                          6.36m
        IO Errors:                              1.10m
        SPA Mismatch:                           22.78m

L2 ARC Size: (Adaptive)                         982.92  GiB
        Header Size:                    0.19%   1.85    GiB

L2 ARC Evicts:
        Lock Retries:                           161
        Upon Reading:                           12

L2 ARC Breakdown:                               63.83m
        Hit Ratio:                      21.77%  13.90m
        Miss Ratio:                     78.23%  49.94m
        Feeds:                                  299.99k

L2 ARC Buffer:
        Bytes Scanned:                          17.01   TiB
        Buffer Iterations:                      299.99k
        List Iterations:                        16.42m
        NULL List Iterations:                   2.02m

L2 ARC Writes:
        Writes Sent:                    100.00% 275.23k
                                                                Page:  4
------------------------------------------------------------------------

File-Level Prefetch: (HEALTHY)
DMU Efficiency:                                 1.11b
        Hit Ratio:                      79.58%  885.58m
        Miss Ratio:                     20.42%  227.19m

        Colinear:                               227.19m
          Hit Ratio:                    0.05%   120.41k
          Miss Ratio:                   99.95%  227.07m

        Stride:                                 853.56m
          Hit Ratio:                    99.93%  852.92m
          Miss Ratio:                   0.07%   632.15k

DMU Misc:
        Reclaim:                                227.07m
          Successes:                    0.88%   2.01m
          Failures:                     99.12%  225.06m

        Streams:                                32.68m
          +Resets:                      0.11%   36.72k
          -Resets:                      99.89%  32.65m
          Bogus:                                0
                                                                Page:  5
------------------------------------------------------------------------

                                                                Page:  6
------------------------------------------------------------------------

ZFS Tunable (sysctl):
        kern.maxusers                           1998
        vm.kmem_size                            29493795840
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.l2c_only_size                   1038265752576
        vfs.zfs.mfu_ghost_data_lsize            206046208
        vfs.zfs.mfu_ghost_metadata_lsize        331117056
        vfs.zfs.mfu_ghost_size                  537163264
        vfs.zfs.mfu_data_lsize                  17250142208
        vfs.zfs.mfu_metadata_lsize              339456
        vfs.zfs.mfu_size                        17370234368
        vfs.zfs.mru_ghost_data_lsize            18641485824
        vfs.zfs.mru_ghost_metadata_lsize        924658176
        vfs.zfs.mru_ghost_size                  19566144000
        vfs.zfs.mru_data_lsize                  377534464
        vfs.zfs.mru_metadata_lsize              15378944
        vfs.zfs.mru_size                        500144640
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       494592
        vfs.zfs.l2arc_norw                      0
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                0
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               40000000
        vfs.zfs.l2arc_write_max                 10000000
        vfs.zfs.arc_meta_limit                  5025952800
        vfs.zfs.arc_shrink_shift                5
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         2512976400
        vfs.zfs.arc_max                         20103811200
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              4294967296
        vfs.zfs.dirty_data_max                  2789042585
        vfs.zfs.free_max_blocks                 131072
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.recover                         0
        vfs.zfs.space_map_blksz                 32768
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.trim.enabled                    1
        vfs.zfs.txg.timeout                     5
        vfs.zfs.min_auto_ashift                 9
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.larger_ashift_minimal      0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.mirror.non_rotating_seek_inc       1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset        1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           2
        vfs.zfs.vdev.scrub_min_active           1
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      3
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent       60
        vfs.zfs.vdev.async_write_active_min_dirty_percent       30
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.version.ioctl                   4
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.mode                        2
                                                                Page:  7
------------------------------------------------------------------------
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
What log or command output is that? That's a lot of useful info.

EDIT: Thanks! Looks like it's part of FreeNAS: arc_summary.py
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Code:
 zfs list

NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                           1.46G  13.9G    31K  none
freenas-boot/ROOT                                      1.43G  13.9G    25K  none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648      13.4M  13.9G   619M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201512121950      18.7M  13.9G   625M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201601181840      1.40G  13.9G   627M  /
freenas-boot/ROOT/Initial-Install                       152K  13.9G   510M  legacy
freenas-boot/ROOT/Wizard-2015-10-31_22:29:59              1K  13.9G   510M  legacy
freenas-boot/ROOT/default                               596K  13.9G   511M  legacy
freenas-boot/grub                                      27.1M  13.9G  6.79M  legacy
tank                                                   4.82T  9.22T   208K  /mnt/tank
tank/.system                                           13.2M  9.22T   208K  legacy
tank/.system/configs-1ea613efe35c4745be02d805c47c80b9  5.71M  9.22T  5.71M  legacy
tank/.system/configs-5ece5c906a8f4df886779fae5cade8a5   192K  9.22T   192K  legacy
tank/.system/cores                                     1.36M  9.22T  1.36M  legacy
tank/.system/rrd-1ea613efe35c4745be02d805c47c80b9       192K  9.22T   192K  legacy
tank/.system/rrd-5ece5c906a8f4df886779fae5cade8a5       192K  9.22T   192K  legacy
tank/.system/samba4                                    3.51M  9.22T  3.51M  legacy
tank/.system/syslog-1ea613efe35c4745be02d805c47c80b9   1.12M  9.22T  1.12M  legacy
tank/.system/syslog-5ece5c906a8f4df886779fae5cade8a5    719K  9.22T   719K  legacy
tank/Backup                                             377G  9.22T   377G  /mnt/tank/Backup
tank/Media                                             1.99T  9.22T  1.99T  /mnt/tank/Media
tank/incomming                                         2.45T   493G  2.45T  /mnt/tank/incomming
tank/jails                                             7.32G  9.22T  21.7M  /mnt/tank/jails
tank/jails/.warden-template-VirtualBox-4.3.12           834M  9.22T   834M  /mnt/tank/jails/.warden-template-VirtualBox-4.3.12
tank/jails/.warden-template-pluginjail                  576M  9.22T   576M  /mnt/tank/jails/.warden-template-pluginjail
tank/jails/.warden-template-standard                   2.63G  9.22T  2.63G  /mnt/tank/jails/.warden-template-standard
tank/jails/Irssi                                       39.3M  9.22T  2.67G  /mnt/tank/jails/Irssi
tank/jails/couchpotato_1                                243M  9.22T   818M  /mnt/tank/jails/couchpotato_1
tank/jails/crashplan_1                                  411M  9.22T   986M  /mnt/tank/jails/crashplan_1
tank/jails/seedbox                                     2.48G  9.22T  3.64G  /mnt/tank/jails/seedbox
tank/jails/transmission_1                               133M  9.22T   708M  /mnt/tank/jails/transmission_1
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My guess is you did something unknowingly evil and wrong when modifying the L2ARC device. Detach it from the system, delete it from ESXi (including on-disk), reboot the NAS, create a new virtual disk of the desired size, and attach the disk, please. Then report what you see.

People think virtualizing FreeNAS is a great idea, but I've got years of experience to the contrary.... so many forum users have come up with interesting edge cases and things that "shouldn't" be possible.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Wow, I'm out of it today. Sorry about that.
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
My guess is you did something unknowingly evil and wrong when modifying the L2ARC device. Detach it from the system, delete it from ESXi (including on-disk), reboot the NAS, create a new virtual disk of the desired size, and attach the disk, please. Then report what you see.

People think virtualizing FreeNAS is a great idea, but I've got years of experience to the contrary.... so many forum users have come up with interesting edge cases and things that "shouldn't" be possible.
I'll do that as soon as the sick kids at home are finished with the movie they're watching.

But removing it shouldn't mess with the pool itself, right?

Sent from my HTC One via Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Right. Disconnecting the L2ARC isn't that hard, follow the instructions in the manual.
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I just reattached it and it's all fine.

L2 ARC Buffer:
        Bytes Scanned: 19.91 TiB

Odd, though, that it reported an ever-increasing size.

I was too fast there! :) It's beyond 200 GB now again. There is something going on between VMware and FreeNAS.

I'll tinker with it later tonight.
 
Last edited: