I just noticed that my CPU has been hovering at about 25%, which equates to one core being fully consumed. I don't know what might be causing it, but I have a hunch. I am currently using VMware to clone a VM from an iSCSI zvol to an NFS dataset/share (both on the same underlying ZFS pool). Perhaps there is something special about the VMware clone operation that causes excessive NIC interrupts, or builds up a deep queue on the interface?
If I SSH to the server and run `top -SPH` I see:
Code:
last pid: 57338;  load averages: 1.19, 1.24, 1.19    up 23+06:51:36  16:12:44
788 processes: 6 running, 764 sleeping, 18 waiting
CPU 0:  0.0% user,  0.0% nice,   1.9% system,  0.0% interrupt, 98.1% idle
CPU 1:  0.0% user,  0.0% nice,  12.1% system,  0.0% interrupt, 87.9% idle
CPU 2:  0.0% user,  0.0% nice,   1.9% system,  0.0% interrupt, 98.1% idle
CPU 3:  0.0% user,  0.0% nice,   100% system,  0.0% interrupt,  0.0% idle
Mem: 175M Active, 9846M Inact, 13G Wired, 51M Cache, 299M Free
ARC: 10G Total, 1472M MFU, 8124M MRU, 690K Anon, 747M Header, 388M Other
Swap: 20G Total, 16K Used, 20G Free

  PID USERNAME  PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
    0 root      -92    -     0K  9728K CPU3    3  21.5H 100.00% kernel{em0 que}
   11 root      155 ki31    0K    64K CPU0    0 553.0H  98.19% idle{idle: cpu0}
   11 root      155 ki31    0K    64K CPU1    1 552.6H  95.26% idle{idle: cpu1}
   11 root      155 ki31    0K    64K RUN     2 552.3H  93.90% idle{idle: cpu2}
    7 root      -16    -     0K    32K psleep  1  95:01   7.76% pagedaemon{pagedaemon}
57100 root       20    0 26064K  4468K CPU2    2   0:03   0.59% top
   11 root      155 ki31    0K    64K RUN     3 533.0H   0.00% idle{idle: cpu3}
....SNIP...
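One way I could confirm what that busy kernel thread is actually doing is a kernel stack dump (a sketch only; `procstat` is a stock FreeBSD tool, PID 0 is the kernel, and the `em0` pattern just matches the thread name shown by top):

```shell
#!/bin/sh
# Sketch: dump the kernel stack of the busy "em0 que" thread.
# procstat -kk prints kernel thread stack traces; FreeBSD-only,
# so guard for portability.
if command -v procstat >/dev/null 2>&1; then
    procstat -kk 0 | grep 'em0' || echo "no em0 kernel thread found"
else
    echo "procstat is a FreeBSD tool; not available on this host"
fi
```

If the stack shows the thread spinning in the driver's receive/transmit path rather than sleeping, that would point at the NIC rather than ZFS.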
It's just a wild guess, but it looks like my em0 interface has a large queue, which in turn is causing trouble. This system is not under heavy I/O load.
Code:
[root@zfs] ~# gstat -bI5s -f 'da[0-7]p2|ada[0-1]p2|ada0p1'
dT: 5.016s  w: 5.000s  filter: da[0-7]p2|ada[0-1]p2|ada0p1
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      6      0      0    0.0      6     57    0.2     0.3  da0p2
    0      6      0      0    0.0      6     57    0.6     0.7  da1p2
    0      6      0      0    0.0      6     57    0.6     0.6  da2p2
    0      6      0      0    0.0      6     57    0.6     0.7  da3p2
    0      7      0      0    0.0      7    160    1.0     2.1  da4p2
    0      8      0      0    0.0      7    160    0.8     2.2  da5p2
    0      2      0      0    0.0      2     54    0.2     0.8  da6p2
    0      2      0      0    0.0      2     54    0.2     0.9  da7p2
    0     16      0      0    0.0      8    160    0.1     1.8  ada0p1
    0     15      0      4    0.2     15    757    0.1     0.2  ada0p2
    0      3      0      0    0.0      2    124    0.3     0.9  ada1p2
    0      3      0      0    0.0      2    124    0.3     0.8  ada2p2
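To test the queue/interrupt hunch directly, something like the following should show whether em0 is taking an unusual interrupt rate or dropping packets (a sketch using stock FreeBSD tools; the interface name is from above, and the guards are only there so it fails gracefully):

```shell
#!/bin/sh
# Sketch: check whether em0 is interrupt-bound or queueing/dropping.
# vmstat and netstat are stock FreeBSD tools; guarded so the script
# degrades gracefully on other systems.

# Per-device interrupt totals and rates; a hot em0 line supports the theory.
vmstat -i 2>/dev/null | grep 'em0' || \
    echo "no em0 interrupt line (or non-FreeBSD vmstat)"

# em0 packets, errors, and drops: one-second samples, five displays.
netstat -w 1 -q 5 -I em0 -d 2>/dev/null || \
    echo "netstat -w/-I/-d form not supported here"
```

Steadily climbing drop counters during the clone would back up the large-queue guess.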
The em0 interface is used to access an NFS share as well as iSCSI (there are two other interfaces used for the iSCSI multipath).
Code:
[root@zfs] ~# ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=4019b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,VLAN_HWTSO>
        ether 00:15:17:6a:6c:3f
        inet 172.16.1.1 netmask 0xffffff00 broadcast 172.16.1.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
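Since TSO4/VLAN_HWTSO are enabled above, a common experiment with em(4) NICs stuck in the kernel queue thread is to disable TSO temporarily and re-run the clone (a sketch only; this changes live interface settings, so it is guarded and the re-enable line is left commented):

```shell
#!/bin/sh
# Sketch: temporarily disable TCP segmentation offload on em0 and watch
# whether the kernel{em0 que} thread calms down.  This changes live NIC
# state, so only act when running as root and the interface exists.
IF=em0    # interface name taken from the ifconfig output above
if [ "$(id -u)" -eq 0 ] && ifconfig "$IF" >/dev/null 2>&1; then
    ifconfig "$IF" -tso           # disable TSO4/TSO6
    # ... re-run the VMware clone here and watch top -SPH ...
    # ifconfig "$IF" tso          # re-enable when done testing
else
    echo "would run: ifconfig $IF -tso (needs root and a real $IF)"
fi
```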
I'm running FreeNAS-9.10-STABLE-201606270534 (dd17351)