(From a complete FN newbie with no Linux or BSD fluency. Please treat gently.)
Hastily installed a new copy of FN onto a less-than-ideal but available machine, in order to avert a near-disaster with an inherited, complex Linux/DRBD/ESXi setup that had failing drives and far too steep a learning curve for me to manage crisis avoidance. All critical data has been successfully moved to the FN box, and ESXi is now more or less happily talking to FN.
Many thanks to the FN team for that.
Now waiting on sundry bits of hardware to arrive to create a more suitable platform.
The present scenario has one diskless machine running ESXi (small Windows SBS, Linux firewall, etc.). The VMs reside on the primary FN machine, and there is also a secondary FN machine. Each of these machines is connected to a consumer-grade 8-port gigabit switch (subnet 192.128.218.xxx) that is used exclusively for server-to-server traffic. They are also connected, via a 24-port gigabit switch, to the rest of the LAN (subnet 192.128.16.xxx). In all three machines the system drive is a USB stick, typically 8GB or 16GB.
Almost all of the NICs involved are Intel: integrated (motherboard), PCI, or PCIe. One, on the original life-saver machine, is a Realtek. That machine, which will be replaced when the new hardware arrives, is presently the primary FN box. Both FN boxes use RAIDZ2. The primary box has a 4-bay hot-swap enclosure containing 1TB drives; the secondary has an 8-bay hot-swap enclosure containing 750GB drives.
The transfer of data from FN1 to FN2 is extremely slow, even with Intel NICs at both ends. A zfs send/recv of a volume of approximately 300GB delivers about 8MB/s. A copy of the same volume of data from FN2 to another dataset on FN2 achieves about 61MB/s; not great, but acceptable given the read/write contention. This suggests that the FN1-to-FN2 transfer is network-constrained, not disk-constrained. The network utilisation, as shown on the GUI Reporting page, never exceeds 100Mbps.
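In case it's useful, here is the sort of raw-throughput test I understand can separate the network from the discs entirely (iperf ships with FN; the 192.128.218.x address below is a placeholder for FN2's actual server-server address):

    # On FN2 (the receiver), start an iperf server:
    iperf -s
    # On FN1 (the sender), run a 30-second test against FN2's
    # server-server address:
    iperf -c 192.128.218.x -t 30

If iperf reports something near 900Mbps, the wire and NICs are fine and the problem lies in the zfs send/recv path; if it also reports ~100Mbps, the network itself is the bottleneck.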
The FN1 box also communicates with the ESXi box. Initially, this connection used the Realtek adapter. Every 30 to 40 minutes there is a sustained burst of activity lasting about 7 to 10 minutes, and the FN network graph and the vSphere network graph agree: the traffic runs at about 600Mbps.
Because this 'heartbeat' workload is so predictable, as a test I swapped the cables between the Intel and Realtek NICs and swapped their (fixed) IP addresses. When the Intel NIC was doing the job, the speed again dropped to less than 100Mbps; when the original assignments/cables were restored, the transfer rate went back up to 600Mbps or so. The Realtek was significantly out-performing the Intel. More accurately, the Intel was massively under-performing.
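For what it's worth, this is how I've been checking the NICs for trouble from the FN shell (em0 is what FreeBSD calls my Intel NIC and re0 the Realtek; the names may differ on other hardware):

    # Per-interface packet and error counters; climbing Ierrs/Oerrs
    # or Colls would point at a bad cable, switch port, or duplex
    # mismatch
    netstat -i
    # Negotiated media and link status for the Intel NIC
    ifconfig em0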
Everything I have read about FN suggests that (barring 10GbE) Intel is the way to go for NICs, and that autonegotiation should be good enough. ifconfig output confirms that all NICs are operating at 1000Mbps.
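If autonegotiation itself were the culprit, my understanding is that FreeBSD lets you pin the speed and duplex instead. A sketch of what I could try from the shell (again, em0 being my Intel NIC; the same media string can apparently go in the Options field of the GUI's Network → Interfaces page so that it survives a reboot):

    # Force the Intel NIC to gigabit full-duplex instead of autoselect
    ifconfig em0 media 1000baseT mediaopt full-duplex
    # Verify the change took effect
    ifconfig em0

I appreciate that forcing one end while the switch stays on auto can itself cause a duplex mismatch, so I have held off doing this without advice.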
Can someone please shed some light on where my actions/thinking have gone astray, and/or on what I can do to get the Intel NICs up to acceptable performance levels? I should add that I have not, as far as I know, altered any config files other than through the GUI.