Sparx · Contributor · Joined: Apr 18, 2017 · Messages: 107
So I'm new to FreeNAS, and all the features and the stability look great. Hope the Docker stuff comes soon, as I started with Corral just as it was dropped! =)
I'm coming from openmediavault, switching mainly because of the instability of that platform.
HW:
Xeon-D 1528
64GB 2133 RDIMM
8x 6TB SAS over an LSI 3008 HBA, configured as RAIDZ2 in one big pool with ~10 volumes under the main pool. No snapshots, L2ARC, or ZIL.
Intended as a file server that can write big files (over 10GB) quickly. Read speed is not super important.
The pool was created in openmediavault and the pool upgrade was done in Corral. I've seen no issues with the pool in either 9.10.2-U3 (currently running) or openmediavault 3.0.70-something.
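Since the pool was originally built on Linux and only upgraded later, I'm not sure it uses the same ashift FreeNAS would pick today. This is roughly how I'd check it; I'm assuming /data/zfs/zpool.cache is where FreeNAS keeps the cache file, so treat that path as a guess:
Code:
# pool version / feature flags
zpool get version pool
# ashift per vdev (the cache file path is an assumption on my part)
zdb -U /data/zfs/zpool.cache pool | grep ashift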
So with my OMV box I never really drop below 800MB/s when writing.
Code:
openmediavault dd
root@openmediavault:/pool/users/asdf# dd if=/dev/zero of=testfile bs=1M count=1k
1073741824 bytes (1.1 GB) copied, 0.404014 s, 2.7 GB/s
root@openmediavault:/pool/users/asdf# dd if=/dev/zero of=testfile bs=1M count=10k
10737418240 bytes (11 GB) copied, 11.6819 s, 919 MB/s
root@openmediavault:/pool/users/asdf# dd if=/dev/zero of=testfile bs=1M count=100k
107374182400 bytes (107 GB) copied, 135.652 s, 792 MB/s
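Worth noting that these are zero-writes, so if compression is enabled they mostly measure how fast zeros get eaten. This is just how I'd double-check the dataset settings on both boxes (I'm assuming pool/users/asdf is an actual dataset and not just a directory):
Code:
zfs get compression,sync,recordsize pool/users/asdf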
If I do the same in FreeNAS I get a drop after count=4k with the same dd command.
Code:
freenas dd
[root@freenas] /mnt/pool/users/asdf# dd if=/dev/zero of=testfile1 bs=1M count=1k
1073741824 bytes transferred in 0.550282 secs (1951257394 bytes/sec)
[root@freenas] /mnt/pool/users/asdf# dd if=/dev/zero of=testfile1 bs=1M count=4k
4294967296 bytes transferred in 2.256826 secs (1903100833 bytes/sec)
[root@freenas] /mnt/pool/users/asdf# dd if=/dev/zero of=testfile1 bs=1M count=10k
10737418240 bytes transferred in 28.753772 secs (373426423 bytes/sec)
[root@freenas] /mnt/pool/users/asdf# dd if=/dev/zero of=testfile1 bs=1M count=100k
107374182400 bytes transferred in 410.500419 secs (261568996 bytes/sec)
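To see where the drop actually happens I figured I'd watch the pool from a second shell while the dd runs. Nothing fancy, just something like this (the 1-second interval is arbitrary):
Code:
# per-vdev throughput once a second while dd is running
zpool iostat -v pool 1
# or a per-disk view of the physical providers
gstat -p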
I was trying to compare the ZFS tunables but didn't really find the golden ticket. And yes, I have been playing around with some tunables to see if I could fix it.
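For completeness, this is the kind of thing I've been poking at, set at runtime with sysctl. The values are just examples of what I tried, not recommendations, and I realize some of these may only stick if added under System -> Tunables instead:
Code:
sysctl vfs.zfs.txg.timeout=10
sysctl vfs.zfs.dirty_data_max=13490000000
sysctl vfs.zfs.vdev.async_write_max_active=20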
While I'm asking around: is it possible to have the drives spin down to save some power without ZFS going haywire?
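On the spin-down part, what I had in mind was something along the lines of camcontrol, though I honestly don't know whether the standby command even applies to SAS drives behind the LSI 3008, or whether ZFS housekeeping would just keep waking them up:
Code:
# list the disks CAM sees
camcontrol devlist
# ask da0 to spin down after 30 minutes idle (guessing at whether SAS honours this)
camcontrol standby da0 -t 1800
Anyway, here is the arc_summary.py output in case it helps: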
Code:
arc_summary.py
[root@freenas] /mnt/pool/users/asdf# arc_summary.py
System Memory:
        0.51%   322.41  MiB Active,     0.34%   215.81  MiB Inact
        51.59%  32.08   GiB Wired,      0.01%   3.88    MiB Cache
        47.56%  29.58   GiB Free,       0.00%   0       Bytes Gap
        Real Installed:                         64.00   GiB
        Real Available:                 99.80%  63.87   GiB
        Real Managed:                   97.37%  62.19   GiB
        Logical Total:                          64.00   GiB
        Logical Used:                   53.45%  34.21   GiB
        Logical Free:                   46.55%  29.79   GiB
Kernel Memory:                                  380.51  MiB
        Data:                           92.73%  352.83  MiB
        Text:                           7.27%   27.68   MiB
Kernel Memory Map:                              79.84   GiB
        Size:                           39.10%  31.22   GiB
        Free:                           60.90%  48.62   GiB
                                                                Page:  1
------------------------------------------------------------------------
ARC Summary: (HEALTHY)
        Storage pool Version:                   5000
        Filesystem Version:                     5
        Memory Throttle Count:                  0
ARC Misc:
        Deleted:                                36
        Mutex Misses:                           0
        Evict Skips:                            0
ARC Size:                               7.51%   4.32    GiB
        Target Size: (Adaptive)         100.00% 57.48   GiB
        Min Size (Hard Limit):          13.31%  7.65    GiB
        Max Size (High Water):          7:1     57.48   GiB
ARC Size Breakdown:
        Recently Used Cache Size:       53.36%  30.67   GiB
        Frequently Used Cache Size:     46.64%  26.81   GiB
ARC Hash Breakdown:
        Elements Max:                           253.17k
        Elements Current:               16.03%  40.59k
        Collisions:                             4.00k
        Chain Max:                              2
        Chains:                                 27
                                                                Page:  2
------------------------------------------------------------------------
ARC Total accesses:                                     26.29k
        Cache Hit Ratio:                58.84%  15.47k
        Cache Miss Ratio:               41.16%  10.82k
        Actual Hit Ratio:               41.61%  10.94k
        Data Demand Efficiency:         46.82%  9.11k
        CACHE HITS BY CACHE LIST:
          Anonymously Used:             29.29%  4.53k
          Most Recently Used:           52.74%  8.16k
          Most Frequently Used:         17.97%  2.78k
          Most Recently Used Ghost:     0.00%   0
          Most Frequently Used Ghost:   0.00%   0
        CACHE HITS BY DATA TYPE:
          Demand Data:                  27.58%  4.27k
          Prefetch Data:                0.00%   0
          Demand Metadata:              43.12%  6.67k
          Prefetch Metadata:            29.29%  4.53k
        CACHE MISSES BY DATA TYPE:
          Demand Data:                  44.79%  4.85k
          Prefetch Data:                0.00%   0
          Demand Metadata:              46.34%  5.01k
          Prefetch Metadata:            8.86%   959
                                                                Page:  3
------------------------------------------------------------------------
                                                                Page:  4
------------------------------------------------------------------------
                                                                Page:  5
------------------------------------------------------------------------
VDEV Cache Summary:                             10.13k
        Hit Ratio:                      39.75%  4.03k
        Miss Ratio:                     22.42%  2.27k
        Delegations:                    37.83%  3.83k
                                                                Page:  6
------------------------------------------------------------------------
ZFS Tunable (sysctl):
        kern.maxusers                           4423
        vm.kmem_size                            85726392320
        vm.kmem_size_scale                      1
        vm.kmem_size_min                        0
        vm.kmem_size_max                        1319413950874
        vfs.zfs.vol.unmap_enabled               1
        vfs.zfs.vol.mode                        2
        vfs.zfs.sync_pass_rewrite               2
        vfs.zfs.sync_pass_dont_compress         5
        vfs.zfs.sync_pass_deferred_free         2
        vfs.zfs.zio.dva_throttle_enabled        1
        vfs.zfs.zio.exclude_metadata            0
        vfs.zfs.zio.use_uma                     1
        vfs.zfs.zil_slog_limit                  786432
        vfs.zfs.cache_flush_disable             0
        vfs.zfs.zil_replay_disable              0
        vfs.zfs.version.zpl                     5
        vfs.zfs.version.spa                     5000
        vfs.zfs.version.acl                     1
        vfs.zfs.version.ioctl                   7
        vfs.zfs.debug                           0
        vfs.zfs.super_owner                     0
        vfs.zfs.min_auto_ashift                 12
        vfs.zfs.max_auto_ashift                 13
        vfs.zfs.vdev.queue_depth_pct            1000
        vfs.zfs.vdev.write_gap_limit            4096
        vfs.zfs.vdev.read_gap_limit             32768
        vfs.zfs.vdev.aggregation_limit          131072
        vfs.zfs.vdev.trim_max_active            64
        vfs.zfs.vdev.trim_min_active            1
        vfs.zfs.vdev.scrub_max_active           8
        vfs.zfs.vdev.scrub_min_active           4
        vfs.zfs.vdev.async_write_max_active     10
        vfs.zfs.vdev.async_write_min_active     1
        vfs.zfs.vdev.async_read_max_active      10
        vfs.zfs.vdev.async_read_min_active      1
        vfs.zfs.vdev.sync_write_max_active      10
        vfs.zfs.vdev.sync_write_min_active      10
        vfs.zfs.vdev.sync_read_max_active       10
        vfs.zfs.vdev.sync_read_min_active       10
        vfs.zfs.vdev.max_active                 1000
        vfs.zfs.vdev.async_write_active_max_dirty_percent  60
        vfs.zfs.vdev.async_write_active_min_dirty_percent  30
        vfs.zfs.vdev.mirror.non_rotating_seek_inc  1
        vfs.zfs.vdev.mirror.non_rotating_inc    0
        vfs.zfs.vdev.mirror.rotating_seek_offset  1048576
        vfs.zfs.vdev.mirror.rotating_seek_inc   5
        vfs.zfs.vdev.mirror.rotating_inc        0
        vfs.zfs.vdev.trim_on_init               1
        vfs.zfs.vdev.larger_ashift_minimal      0
        vfs.zfs.vdev.bio_delete_disable         0
        vfs.zfs.vdev.bio_flush_disable          0
        vfs.zfs.vdev.cache.bshift               16
        vfs.zfs.vdev.cache.size                 134217728
        vfs.zfs.vdev.cache.max                  134217728
        vfs.zfs.vdev.metaslabs_per_vdev         200
        vfs.zfs.vdev.trim_max_pending           10000
        vfs.zfs.txg.timeout                     5
        vfs.zfs.trim.enabled                    1
        vfs.zfs.trim.max_interval               1
        vfs.zfs.trim.timeout                    30
        vfs.zfs.trim.txg_delay                  32
        vfs.zfs.space_map_blksz                 4096
        vfs.zfs.spa_min_slop                    134217728
        vfs.zfs.spa_slop_shift                  5
        vfs.zfs.spa_asize_inflation             24
        vfs.zfs.deadman_enabled                 1
        vfs.zfs.deadman_checktime_ms            5000
        vfs.zfs.deadman_synctime_ms             1000000
        vfs.zfs.debug_flags                     0
        vfs.zfs.recover                         0
        vfs.zfs.spa_load_verify_data            1
        vfs.zfs.spa_load_verify_metadata        1
        vfs.zfs.spa_load_verify_maxinflight     10000
        vfs.zfs.ccw_retry_interval              300
        vfs.zfs.check_hostid                    1
        vfs.zfs.mg_fragmentation_threshold      85
        vfs.zfs.mg_noalloc_threshold            0
        vfs.zfs.condense_pct                    200
        vfs.zfs.metaslab.bias_enabled           1
        vfs.zfs.metaslab.lba_weighting_enabled  1
        vfs.zfs.metaslab.fragmentation_factor_enabled  1
        vfs.zfs.metaslab.preload_enabled        1
        vfs.zfs.metaslab.preload_limit          3
        vfs.zfs.metaslab.unload_delay           8
        vfs.zfs.metaslab.load_pct               50
        vfs.zfs.metaslab.min_alloc_size         33554432
        vfs.zfs.metaslab.df_free_pct            4
        vfs.zfs.metaslab.df_alloc_threshold     131072
        vfs.zfs.metaslab.debug_unload           0
        vfs.zfs.metaslab.debug_load             0
        vfs.zfs.metaslab.fragmentation_threshold  70
        vfs.zfs.metaslab.gang_bang              16777217
        vfs.zfs.free_bpobj_enabled              1
        vfs.zfs.free_max_blocks                 18446744073709551615
        vfs.zfs.no_scrub_prefetch               0
        vfs.zfs.no_scrub_io                     0
        vfs.zfs.resilver_min_time_ms            3000
        vfs.zfs.free_min_time_ms                1000
        vfs.zfs.scan_min_time_ms                1000
        vfs.zfs.scan_idle                       50
        vfs.zfs.scrub_delay                     0
        vfs.zfs.resilver_delay                  2
        vfs.zfs.top_maxinflight                 512
        vfs.zfs.delay_scale                     500000
        vfs.zfs.delay_min_dirty_percent         60
        vfs.zfs.dirty_data_sync                 67108864
        vfs.zfs.dirty_data_max_percent          10
        vfs.zfs.dirty_data_max_max              16864249856
        vfs.zfs.dirty_data_max                  6745699942
        vfs.zfs.max_recordsize                  1048576
        vfs.zfs.zfetch.array_rd_sz              1048576
        vfs.zfs.zfetch.max_distance             33554432
        vfs.zfs.zfetch.min_sec_reap             2
        vfs.zfs.zfetch.max_streams              8
        vfs.zfs.prefetch_disable                1
        vfs.zfs.send_holes_without_birth_time   1
        vfs.zfs.mdcomp_disable                  0
        vfs.zfs.nopwrite_enabled                1
        vfs.zfs.dedup.prefetch                  0
        vfs.zfs.l2c_only_size                   0
        vfs.zfs.mfu_ghost_data_esize            0
        vfs.zfs.mfu_ghost_metadata_esize        0
        vfs.zfs.mfu_ghost_size                  0
        vfs.zfs.mfu_data_esize                  360448
        vfs.zfs.mfu_metadata_esize              295936
        vfs.zfs.mfu_size                        6443520
        vfs.zfs.mru_ghost_data_esize            0
        vfs.zfs.mru_ghost_metadata_esize        0
        vfs.zfs.mru_ghost_size                  0
        vfs.zfs.mru_data_esize                  4424614912
        vfs.zfs.mru_metadata_esize              3023360
        vfs.zfs.mru_size                        4608163840
        vfs.zfs.anon_data_esize                 0
        vfs.zfs.anon_metadata_esize             0
        vfs.zfs.anon_size                       32768
        vfs.zfs.l2arc_norw                      0
        vfs.zfs.l2arc_feed_again                1
        vfs.zfs.l2arc_noprefetch                0
        vfs.zfs.l2arc_feed_min_ms               200
        vfs.zfs.l2arc_feed_secs                 1
        vfs.zfs.l2arc_headroom                  2
        vfs.zfs.l2arc_write_boost               8388608
        vfs.zfs.l2arc_write_max                 8388608
        vfs.zfs.arc_meta_limit                  15430750617
        vfs.zfs.arc_free_target                 113069
        vfs.zfs.compressed_arc_enabled          1
        vfs.zfs.arc_shrink_shift                7
        vfs.zfs.arc_average_blocksize           8192
        vfs.zfs.arc_min                         8212864512
        vfs.zfs.arc_max                         61723002470
                                                                Page:  7
------------------------------------------------------------------------
There is something strange with the mapping of disks as far as I can see.
Code:
zpool status
[root@freenas] /mnt/pool/users/asdf# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  ada0p2      ONLINE       0     0     0

errors: No known data errors

  pool: pool
 state: ONLINE
  scan: scrub canceled on Sat Apr 22 00:38:43 2017
config:

	NAME                          STATE     READ WRITE CKSUM
	pool                          ONLINE       0     0     0
	  raidz2-0                    ONLINE       0     0     0
	    da0p1                     ONLINE       0     0     0
	    da4p1                     ONLINE       0     0     0
	    da2p1                     ONLINE       0     0     0
	    da3p1                     ONLINE       0     0     0
	    da6p1                     ONLINE       0     0     0
	    gpt/zfs-1758ced06bdfef91  ONLINE       0     0     0
	    da1p1                     ONLINE       0     0     0
	    da5p1                     ONLINE       0     0     0
But the GUI volume manager shows all drives da0 through da7, without the GPT partition labels.
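In case it helps with the mapping question, this is how I'd match the gpt/zfs-... label back to a physical disk, just with the stock tools:
Code:
# show which daX device backs the gpt/zfs-... label
glabel status | grep 1758ced06bdfef91
# and the partition layout of each disk
gpart show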