deasmi
Dabbler
Joined: Mar 21, 2013
Messages: 14
I have a zpool whose performance has become awful.
It has a number of iSCSI zvols on it that are presented to VMware; initial accesses can sometimes take > 5 s, and there are latency spikes of > 7 s.
After looking for complicated solutions, I've noticed it's over 60% full, so that's almost certainly the cause.
If I get it back below 60% (or 50%), is it likely to recover, or am I better off destroying it altogether and starting again?
Thanks
nas1# zpool status vol1
pool: vol1
state: ONLINE
scan: scrub repaired 0 in 2h4m with 0 errors on Sun May 10 05:04:06 2015
config:
NAME STATE READ WRITE CKSUM
vol1 ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
gptid/b9457b59-4a28-11e3-b141-000c29ec0891 ONLINE 0 0 0
gptid/b9cc4265-4a28-11e3-b141-000c29ec0891 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
gptid/ba530c5d-4a28-11e3-b141-000c29ec0891 ONLINE 0 0 0
gptid/badc1cdf-4a28-11e3-b141-000c29ec0891 ONLINE 0 0 0
errors: No known data errors
nas1# zpool get all vol1
NAME PROPERTY VALUE SOURCE
vol1 size 920G -
vol1 capacity 43% -
vol1 altroot /mnt local
vol1 health ONLINE -
vol1 guid 14124041658513480580 default
vol1 version - default
vol1 bootfs - default
vol1 delegation on default
vol1 autoreplace off default
vol1 cachefile /data/zfs/zpool.cache local
vol1 failmode continue local
vol1 listsnapshots off default
vol1 autoexpand on local
vol1 dedupditto 0 default
vol1 dedupratio 1.00x -
vol1 free 517G -
vol1 allocated 403G -
vol1 readonly off -
vol1 comment - default
vol1 expandsize - -
vol1 freeing 0 default
vol1 fragmentation 17% -
vol1 leaked 0 default
vol1 feature@async_destroy enabled local
vol1 feature@empty_bpobj active local
vol1 feature@lz4_compress active local
vol1 feature@multi_vdev_crash_dump enabled local
vol1 feature@spacemap_histogram active local
vol1 feature@enabled_txg active local
vol1 feature@hole_birth active local
vol1 feature@extensible_dataset enabled local
vol1 feature@embedded_data active local
vol1 feature@bookmarks enabled local
vol1 feature@filesystem_limits enabled local
vol1 feature@large_blocks enabled local
nas1#
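Since the question hinges on occupancy and fragmentation figures, one way to watch them as space is freed is to parse the script-friendly output of `zpool list -Hp` (`-H` suppresses headers and tab-separates fields; `-p` prints exact values). The sketch below is illustrative, not a FreeNAS tool: the `check_pool` helper and the 80% / 50% warning thresholds are assumptions, and the sample line approximates the `vol1` values shown above.

```python
# Sketch: parse one line of `zpool list -Hp -o name,size,allocated,capacity,fragmentation`
# output and flag pools past a threshold. check_pool and the 80%/50%
# thresholds are illustrative assumptions, not FreeNAS defaults.

def check_pool(line, cap_warn=80, frag_warn=50):
    # -H output is tab-separated: name, size (bytes), allocated (bytes),
    # capacity (percent), fragmentation (percent)
    name, size, alloc, cap, frag = line.strip().split("\t")
    cap, frag = int(cap), int(frag)
    warnings = []
    if cap >= cap_warn:
        warnings.append(f"{name}: {cap}% full (>= {cap_warn}%)")
    if frag >= frag_warn:
        warnings.append(f"{name}: {frag}% fragmented (>= {frag_warn}%)")
    return warnings

# Sample line approximating the vol1 output above (920G size, 403G allocated,
# 43% capacity, 17% fragmentation):
sample = "vol1\t987842478080\t432717955072\t43\t17"
print(check_pool(sample))  # at 43% / 17% this prints no warnings: []
```

At the 43% capacity shown in the pasted output this stays quiet; a pool at, say, 85% full would trip the first warning.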