jason.rohm
Dabbler
- Joined
- May 7, 2013
- Messages
- 25
Build FreeNAS-9.10.1-U1 (ff51a5d)
Platform Intel(R) Xeon(R) CPU E5504 @ 2.00GHz
Memory 73688MB
System Time Mon Oct 10 13:56:25 CDT 2016
Uptime 1:56PM up 6 days, 15:37, 1 user
Load Average 0.26, 0.24, 0.25
Some time ago I moved my lab ESXi 5.5 environment to NFS due to limitations with file-based iSCSI extents and the UNMAP primitive not being implemented. That issue seems to be resolved now. About two weeks ago I created an iSCSI zvol extent on the same hardware and moved most of my machines over to it. At the same time, I upgraded to 9.10.1-U1 from 9.3.something_stable (don't recall). The uptime above reflects when I rebooted and moved the data, so it has been stable in this configuration for roughly five days.
I noticed something interesting and I'm searching for an explanation. Prior to the upgrade, my ARC and L2ARC hit rates were consistently around 94% and 3%. As of this morning they are 83.8% and 0.6% respectively. I haven't "felt" any difference in performance, but hitting spinning disk roughly 10% more often makes me wonder why, and whether some tuning needs to be done.
Thoughts and feedback are welcome and requested.
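For anyone wanting to check the same numbers on their own box: the percentages above are just hits / (hits + misses) computed from the ZFS ARC counters (on FreeBSD/FreeNAS these are exposed as the `kstat.zfs.misc.arcstats.*` sysctls, e.g. `hits`, `misses`, `l2_hits`, `l2_misses`, which is what tools like `arc_summary` read). A minimal sketch of the math, with illustrative counter values only:

```python
# Sketch of the ARC/L2ARC hit-ratio calculation. Counter names mirror the
# FreeBSD kstat.zfs.misc.arcstats sysctls (hits/misses, l2_hits/l2_misses);
# the sample values below are made up for illustration, not from my system.

def hit_ratio(hits: int, misses: int) -> float:
    """Percentage of cache lookups served from the cache."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# Example: a drop from ~94% to ~83.8% ARC hits means the miss rate goes
# from ~6% to ~16.2% -- nearly 3x as many reads falling through to
# L2ARC or spinning disk, even though the headline change looks like "10%".
before = hit_ratio(940, 60)    # ~94.0% ARC hits before the upgrade
after = hit_ratio(838, 162)    # 83.8% after
```

That nonlinearity in the miss rate is why a 10-point drop in the hit ratio is worth chasing down even when latency doesn't "feel" different yet.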
Hardware configuration is:
HP DL380G6
Samsung 850 Pro ZIL
Samsung 850 Pro x2 ARC (striped on onboard array controller)
LSI 9201-16I
WD Red 2.5" 1TB SATA x12 (configured as 4 x 3 Z1)
Intel 9xx series 10G dual port NICs attached to Cisco Nexus 5010 w/9k MTU