ZFS NOT using enough RAM?


papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
I currently have a system with 16GB of RAM, no log or cache devices, and two storage arrays: one 5x4TB RAIDZ1 extending another 5x1TB RAIDZ1. The system is on FreeNAS 9.1.0-RC1 x64 and it's only using about 5GB of RAM. When my pool had just the first array (5TB of raw storage) it was exactly the same. Thinking it was a tunable, I deleted all the tunables, waiting for the GUI to become responsive again between deletions (a bug already reported by another user) until there were none left. The system did autotune a few things, and there are no longer any sysctls.
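For anyone checking the same thing, the ARC's configured limits and its live size can be read straight from sysctl (a minimal sketch, assuming the stock FreeNAS 9.1 shell; the exact values will differ per system):

Code:
# configured floor and ceiling for the ARC, plus its current size in bytes
sysctl vfs.zfs.arc_min vfs.zfs.arc_max kstat.zfs.misc.arcstats.size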

Current and typical workloads: NFS constantly uploading and downloading with the BitTorrent Sync app, and dumping drives via SFTP from a laptop with a USB dock.

Note: I'm also only getting 25MB/s over gigabit LAN; I'll explore that in another post/thread.
 

Attachments: top on server.png, tunables.png

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What does the FreeNAS UI say for RAM usage? Here's mine:


Also post your FreeNAS version and verify you didn't accidentally install the x86 version.
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
What does the FreeNAS UI say for RAM usage? Here's mine:


Also post your FreeNAS version and verify you didn't accidentally install the x86 version.

Sure, no problem. I see in the UI and in the terminal that it's x64, and RAM usage is consistent between the UI and the shell.

Code:
[smurfy@nas] /# uname -v
FreeBSD 9.1-STABLE #0 r+7f710c8: Fri Jul 12 15:24:36 PDT 2013    root@build.ixsystems.com:/tank/home/alfred/fn/9.1/os-base/amd64/tank/home/alfred/fn/9.1/FreeBSD/src/sys/FREENAS.amd64

Code:
[smurfy@nas] /mnt/stor/temp# arc_summary.py | more
 
System Memory:
 
        1.11%   176.50  MiB Active,     1.54%   243.36  MiB Inact
        34.17%  5.29    GiB Wired,      0.01%   1.14    MiB Cache
        63.16%  9.77    GiB Free,       0.00%   784.00  KiB Gap
 
        Real Installed:                         16.00   GiB
        Real Available:                 99.68%  15.95   GiB
        Real Managed:                   96.97%  15.47   GiB
 
        Logical Total:                          16.00   GiB
        Logical Used:                   37.45%  5.99    GiB
        Logical Free:                   62.55%  10.01   GiB
 
Kernel Memory:                                  4.64    GiB
        Data:                           99.52%  4.62    GiB
        Text:                           0.48%   22.71   MiB
 
Kernel Memory Map:                              5.94    GiB
        Size:                           75.35%  4.47    GiB
        Free:                           24.65%  1.46    GiB
                                                                Page:  1
------------------------------------------------------------------------
 
ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0
 
ARC Misc:
        Deleted:                                15.62m
        Recycle Misses:                         243.30k
        Mutex Misses:                           26.63k
        Evict Skips:                            26.63k
 
ARC Size:                               83.46%  4.63    GiB
        Target Size: (Adaptive)         83.46%  4.63    GiB
        Min Size (Hard Limit):          12.50%  709.91  MiB
        Max Size (High Water):          8:1     5.55    GiB
 
ARC Size Breakdown:
        Recently Used Cache Size:       93.76%  4.34    GiB
        Frequently Used Cache Size:     6.24%   295.70  MiB
 
ARC Hash Breakdown:
        Elements Max:                           156.68k
        Elements Current:               89.06%  139.54k
        Collisions:                             6.14m
        Chain Max:                              10
        Chains:                                 28.96k
                                                                Page:  2
------------------------------------------------------------------------
 
ARC Efficiency:                                 19.88m
        Cache Hit Ratio:                84.59%  16.82m
        Cache Miss Ratio:               15.41%  3.06m
        Actual Hit Ratio:               83.95%  16.69m
 
        Data Demand Efficiency:         99.75%  12.39m
        Data Prefetch Efficiency:       1.84%   2.86m
 
        CACHE HITS BY CACHE LIST:
          Most Recently Used:           32.64%  5.49m
          Most Frequently Used:         66.60%  11.20m
          Most Recently Used Ghost:     0.48%   81.40k
          Most Frequently Used Ghost:   0.46%   78.18k
 
        CACHE HITS BY DATA TYPE:
          Demand Data:                  73.49%  12.36m
          Prefetch Data:                0.31%   52.80k
          Demand Metadata:              25.75%  4.33m
          Prefetch Metadata:            0.45%   74.95k
 
        CACHE MISSES BY DATA TYPE:
          Demand Data:                  1.00%   30.59k
          Prefetch Data:                91.75%  2.81m
          Demand Metadata:              4.89%   149.80k
          Prefetch Metadata:            2.36%   72.45k
                                                                Page:  3
------------------------------------------------------------------------
 
 
File-Level Prefetch: (HEALTHY)
 
DMU Efficiency:                                 271.63m
        Hit Ratio:                      95.23%  258.67m
        Miss Ratio:                     4.77%   12.96m
 
        Colinear:                               12.96m
          Hit Ratio:                    0.01%   1.19k
          Miss Ratio:                   99.99%  12.96m
 
        Stride:                                 255.66m
          Hit Ratio:                    100.00% 255.66m
          Miss Ratio:                   0.00%   1.11k
 
DMU Misc:
        Reclaim:                                12.96m
          Successes:                    0.31%   40.44k
          Failures:                     99.69%  12.92m
 
        Streams:                                3.01m
          +Resets:                      0.01%   253
          -Resets:                      99.99%  3.01m
          Bogus:                                0
                                                                Page:  5
------------------------------------------------------------------------
 
 
ZFS Tunable (sysctl):
        kern.maxusers                           384
        vm.kmem_size                            6616865280
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        8271081600
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_lsize            4267428864
        vfs.zfs.mfu_ghost_metadata_lsize        398398976
        vfs.zfs.mfu_ghost_size                  4665827840
        vfs.zfs.mfu_data_lsize                  40632320
        vfs.zfs.mfu_metadata_lsize              14857216
        vfs.zfs.mfu_size                        64355328
        vfs.zfs.mru_ghost_data_lsize            118489088
        vfs.zfs.mru_ghost_metadata_lsize        182491648
        vfs.zfs.mru_ghost_size                  300980736
        vfs.zfs.mru_data_lsize                  4523603968
        vfs.zfs.mru_metadata_lsize              1164288
        vfs.zfs.mru_size                        4602661888
        vfs.zfs.anon_data_lsize                 0
        vfs.zfs.anon_metadata_lsize             0
        vfs.zfs.anon_size                       72368128
        vfs.zfs.l2arc_norw                      1
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                1
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  1488794688
        vfs.zfs.arc_meta_used                   334030848
        vfs.zfs.arc_min                         744397344
        vfs.zfs.arc_max                         5955178752
        vfs.zfs.dedup.prefetch                  1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.write_limit_override            0
        vfs.zfs.write_limit_inflated            51377037312
        vfs.zfs.write_limit_max                 2140709888
        vfs.zfs.write_limit_min                 33554432
        vfs.zfs.write_limit_shift               3
        vfs.zfs.no_write_throttle               0
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.block_cap                256
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                0
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     4
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 32
        vfs.zfs.write_to_degraded               0
        vfs.zfs.mg_alloc_failures               9
        vfs.zfs.check_hostid                    1
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_synctime                1000
        vfs.zfs.recover                         0
        vfs.zfs.txg.synctime_ms                 1000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 0
        vfs.zfs.vdev.cache.max                  16384
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.ramp_rate                  2
        vfs.zfs.vdev.time_shift                 29
        vfs.zfs.vdev.min_pending                4
        vfs.zfs.vdev.max_pending                10
        vfs.zfs.vdev.larger_ashift_disable      0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.trim_max_pending           64
        vfs.zfs.vdev.trim_max_bytes             2147483648
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.use_uma                     0
        vfs.zfs.snapshot_list_prefetch          0
        vfs.zfs.version.ioctl                   3
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
                                                                Page:  7
------------------------------------------------------------------------
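Worth noting from the summary above: the ARC sits at 4.63 GiB against a 5.55 GiB max, so it is already close to its configured ceiling; it is the ceiling itself (vfs.zfs.arc_max = 5955178752 bytes, about 5.55 GiB) that looks low for a 16GB machine. A quick way to watch the two side by side (sketch, stock shell assumed):

Code:
# live ARC size vs. its hard cap; a size hovering near arc_max means
# ZFS is using everything it has been allowed to use
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max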
 

Attachments: ram.png

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Very strange. Try disabling autotune and deleting all of the entries made by autotune. Post any sysctls and tunables you have left. Something is wrong; I just have no clue what.
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
The strange thing is that I can't. The GUI freezes on "Please wait", though the server is still working (my SFTP transfer is humming along normally). I try to use the GUI but can't, so I close the tab and reopen it, but it will only reload 4-5 minutes later, with the tunable still there. For the time being I will try to fully disable autotune and see if there's a change.

I very much appreciate the help.
 

Attachments: frozen.png

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Did you turn off Autotune first?
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Yes, I turned it off first and then deleted the entries. The system was very slow after a few reboots, so I turned autotune back on to get the performance I was getting before. It tends to use ~10 of the 16GB; I'm guessing ZFS keeps the rest free for other processes the server may run (e.g. scrubbing).
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Nope, autotune is not perfect, but it generally gets you started in the right direction. You can tweak the values to use more or less RAM; I'd recommend tweaking it a bit to use more of it. I'd tell you the value to change, but my system is down while I test new RAM, so I can't check what I did. And you can turn off autotune now, since the values are already present.
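As a rough sketch of the arithmetic involved (an illustrative rule of thumb, not an official formula): take physical RAM, keep a few GiB of headroom for the OS and services, and feed the remainder to vfs.zfs.arc_max:

Code:
# physical RAM in bytes (example output for a 16 GiB box)
sysctl -n hw.physmem
17179869184
# leave ~3 GiB headroom; the result is a candidate vfs.zfs.arc_max value
echo "17179869184 - 3 * 1024^3" | bc
13958643712

On FreeNAS the value goes in through System -> Tunables and takes effect after a reboot; vfs.zfs.arc_max is a boot-time loader tunable, not something that can be changed with sysctl on a running system.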
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Looked at my autotune values and they are an almost exact match to yours.

Now post your system information: CPU, motherboard, add-on cards, etc. You should always include that in your first post. How are you determining that you're getting 25MB/sec throughput? Include your network layout as well.
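One way to separate the network from the disks and protocols (a sketch, assuming iperf is available on both ends; FreeNAS 9.x ships with it, and the hostname here is made up):

Code:
# on the FreeNAS box: start an iperf server
iperf -s
# on the client: push traffic for 30 seconds; healthy gigabit shows ~940 Mbit/s
iperf -c nas.local -t 30

If iperf reports close to wire speed, the bottleneck is the protocol (SFTP's encryption overhead is a common culprit) or the disks rather than the LAN.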
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Sorry for the slow response.
I'm seeing that speed over SFTP using FileZilla on a Mac mini with OS X 10.8.4 (2011 model) or an older Lenovo with Fedora 19. When I use NFS it's almost half that on average, transferring data from the internal drives (the Lenovo has a Samsung 840 Pro SSD). Interestingly, if the pool is scrubbing I'll see speeds range from 180-275MB/s, and if the Mac mini indexes the NFS share, the ethernet monitor in Reporting for ale0 averages 85MB/s. RAM usage is 10/16GB, the CPU is under 25% total utilization, and the pool is 25% full; the system seems like it's barely breaking a sweat.
Sysctls (all generated by autotune):
kern.ipc.maxsockbuf 2097152
net.inet.tcp.recvbuf_max 2097152
net.inet.tcp.sendbuf_max 2097152

Tunables (all generated by autotune):
vfs.zfs.arc_max 10589900727
vm.kmem_size 11766556364
vm.kmem_size_max 14708195456
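A quick way to confirm which of those values are actually live (sketch, stock shell assumed; the three network sysctls apply immediately, while the vm.* and ARC entries are boot-time tunables and only change after a reboot):

Code:
# the autotune-generated values as the running kernel sees them
sysctl kern.ipc.maxsockbuf net.inet.tcp.recvbuf_max net.inet.tcp.sendbuf_max
sysctl vfs.zfs.arc_max vm.kmem_size vm.kmem_size_max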
Hardware:
- Cat5e wiring; gigabit LAN ports, switch, and router; RJ-45 runs are 50ft, same speed on 6ft lengths or a direct connection
- PCIe: LSI 9211-8i HBA; Supermicro AOC-SG-i2 dual-port NIC (non-aggregated)
- AMD Phenom II X6 1045T (CPU load rarely touches 25%)
- FreeNAS (newest 9.1.0 x64) on a 16GB Lexar USB 2.0 flash drive
- 16GB (4x4GB) DDR2 PC2-5300 (passed 8 hours of memtest)
- 430W Antec EarthWatts PSU; ASUS M4A78-E motherboard, BIOS v2603
- Case: old Lian Li mid-ATX with two Icy Dock 5-in-3 MB975SP-B cages
- RAIDZ2 [10 x 4TB Seagate NAS ST4000VN000]
- NAS is backed up to USB 2.0 external hard drives via SFTP, changing soon to USB 3.0
ZFS config:
- autotune enabled, LZ4 compression (other settings had no effect)
- dedupe off, atime off, HDD sleep 60 min, acoustic level: min, power level: 64 = intermediate w/standby
- SMART tests (long & short) every 7 days, auto scrub every 10 days
- snapshots every 15 min, held for 6 weeks
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
the ram is using 10/16gb, cpu is under 25% total utilization, 25% used space on the pool.....the system seems like it's barely breaking a sweat.
What does this line do?
vfs.zfs.arc_max 10589900727

Disable Autotune.
Sysctls (all generated by autotune)
Keep the Sysctls.

vm.kmem_size 11766556364
vm.kmem_size_max 14708195456
Delete these two.

vfs.zfs.arc_max 10589900727
Set this around 13G. A bit more or less depending on what else is running.
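For the tunable field, that is 13 GiB expressed in bytes (arithmetic sketch; after a reboot the same sysctl should echo the new value back):

Code:
# 13 GiB in bytes for the vfs.zfs.arc_max tunable
echo "13 * 1024^3" | bc
13958643712
# after rebooting with the new tunable in place:
sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 13958643712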

-smart (long & short) passes every 7 days, auto scrub every 10 days
This seems a bit excessive.
 

papageorgi

Explorer
Joined
Jul 16, 2013
Messages
51
Funny you mention the scrub; I changed it last night because I agree it was too much. I had set it that way because they're not enterprise drives (they're consumer NAS drives) and I'm not using ECC RAM, but I really don't need that frequency, so I changed it to 30 days and may make it even longer. All the entries and values were created by autotune, but I will make sure it's disabled and delete/modify the other entries. Thanks for the help; I'm excited to do this when I get home (I haven't set up a VPN yet). Once I've done this and tested it, I'll check the performance and update with a post.

Update: I haven't tested the speed yet, though the server is significantly more responsive and is also using much more RAM, as I wanted. Thanks to you all.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
-smart (long & short) passes every 7 days, auto scrub every 10 days
This seems a bit excessive.

I totally agree. I do scrubs every 14 days (the 1st and 14th of the month) and long SMART tests on the 7th and 21st. Short tests add literally no value: if a hard drive is having problems that would show up on a short test, you'll already have gotten error emails, errors in the footer, performance problems, or the drive will have dropped out. Even at 14-day intervals I'm borderline on the "excessive" scale, IMO.
 