Storage performance degraded to the point of being unusable

tbaror

Contributor
Joined
Mar 20, 2013
Messages
105
Hello All,

I have the system spec below (in use for almost 3 years already), running the latest version, FreeNAS-11.2-U5 (Build Date: Jun 24, 2019 18:41), with auto-tune enabled.
The storage is mostly used for XCP-ng VM guests running on an NFS (v4) SR dataset, plus some dev-team SMB shares. All was working well until issues started to appear around 4 months ago; in recent days it has become impossible to work with the VMs, to the point where the system is unusable: VMs get stuck and stop responding, and the same goes for SMB.
To determine whether the storage itself is at fault, I ran the following tests:
Network testing: using iperf3 from the XCP-ng host to the storage; the results below reflect full 10Gbit network throughput.
Code:
[11:54 vnxe03-r15 ~]# iperf3 -c 10.100.1.10
Connecting to host 10.100.1.10, port 5201
[  4] local 10.100.1.52 port 56708 connected to 10.100.1.10 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1008 MBytes  8.46 Gbits/sec    0    567 KBytes
[  4]   1.00-2.00   sec  1.09 GBytes  9.39 Gbits/sec    0    628 KBytes
[  4]   2.00-3.00   sec  1.09 GBytes  9.36 Gbits/sec    0    782 KBytes
[  4]   3.00-4.00   sec  1009 MBytes  8.46 Gbits/sec    0    822 KBytes
[  4]   4.00-5.00   sec   982 MBytes  8.24 Gbits/sec    0    895 KBytes
[  4]   5.00-6.00   sec  1.05 GBytes  9.04 Gbits/sec    0    895 KBytes
[  4]   6.00-7.00   sec  1019 MBytes  8.54 Gbits/sec    0    895 KBytes
[  4]   7.00-8.00   sec   941 MBytes  7.89 Gbits/sec    0    895 KBytes
[  4]   8.00-9.00   sec  1011 MBytes  8.48 Gbits/sec    0    895 KBytes
[  4]   9.00-10.00  sec   949 MBytes  7.95 Gbits/sec    0    895 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  9.99 GBytes  8.58 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  9.99 GBytes  8.58 Gbits/sec                  receiver


Storage testing (internal): I ran the following command:
Code:
dd if=/dev/zero of=/mnt/po01/vm_datastoretest.dat bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 205.693829 secs (101955028 bytes/sec)

The result is only about 100 MB/s, which is really poor; I remember that when the storage was new, the same test was more than 10 times faster.
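(Side note on this test: dd from /dev/zero writes highly compressible data, so if lz4 compression is enabled on the pool the reported rate may not reflect real disk throughput. A rough sketch of an alternative test with incompressible data, assuming fio is available on the box and using a throwaway file under the same pool:)
Code:
# sequential 1M writes of incompressible data; adjust size and path as needed
fio --name=seqwrite --filename=/mnt/po01/fio_test.dat --rw=write --bs=1M --size=4g --ioengine=posixaio --end_fsync=1 --refill_buffers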
System events
The only events I can see are the following:
Code:
nfsrv_cache_session: no session


Given the description above and the fact that the system is not reporting any issues, I really don't know where to go from here. Any ideas?
Please advise.
Thanks

SYSTEM SPEC
For the VM dataset I used the following setting:
Code:
set sync=disabled for just that dataset
The storage structure is as follows:

Disk pool
Shows healthy, with 4x RAIDZ2 vdevs of 9 disks each; model Vendor: SEAGATE, Product: ST10000NM0096, Revision: E001
NVMe drives
LOG
Mirror of 2x INTEL SSDPE2ME800G4
CACHE
Stripe of 2x INTEL SSDPE2ME800G4
System overall health

In addition, the server IPMI shows that all hardware components are healthy, including the NVMe drives.
Chassis model and system
SSG-6048R-E1CR60L
CPU: 2x Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Memory: 512 GB
Network: 4x 10G NICs
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Not an ideal setup for VM storage, but that aside, how full is your pool? You may also look into free-space fragmentation.
 

tbaror

Contributor
Joined
Mar 20, 2013
Messages
105
Not an ideal setup for VM storage, but that aside, how full is your pool? You may also look into free-space fragmentation.
Thanks for the answer.
How can I check free-space fragmentation on the storage?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Thanks for the answer.
How can I check free-space fragmentation on the storage?
zpool list should show the FRAG column, which gives an overall average, along with CAP for how full the pool is.
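For example, something along these lines (a sketch; the pool name po01 is taken from the dd path in your post):
Code:
# show size, usage, fragmentation and health for the pool
zpool list -o name,size,allocated,free,fragmentation,capacity,health po01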

I assume that you are still using the default recordsize of 128K as well?
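To verify, something like this should work (the dataset name below is a guess based on the path in your dd test; substitute your actual NFS dataset):
Code:
# show recordsize plus the sync and compression settings for the VM dataset
zfs get recordsize,sync,compression po01/vm_datastore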
 