I am trying to determine which network approach would best maximize network performance on a new FreeNAS server I am building. The endpoints would be a mix of 1Gb and 10Gb clients (e.g. a desktop with a single 1Gb NIC, and a VMware host with either multiple 1Gb NICs or a single 10Gb NIC). I have both 1Gb and 10Gb copper (not fabric) switches available to use. From a simplicity standpoint, I could just use 1Gb throughout the infrastructure, even on the VMware host, but I worry about the overall throughput a single 1Gb NIC can sustain for FreeNAS iSCSI or NFS. In addition, I am not sure how effective a LAG using LACP would be on FreeNAS. In short:
1. Has much testing been done on FreeNAS with a 10Gb back end (10Gb NIC on FreeNAS, to a 10Gb switch, to both 1Gb and 10Gb endpoints) to serve both 1Gb and 10Gb clients?
2. Or would it be just as easy to use all-1Gb infrastructure and configure LACP on FreeNAS, the switch, and the VMware host to increase performance as the environment grows?
3. Or is it possible to have both 1Gb and 10Gb interfaces on the FreeNAS box, each on a different VLAN/subnet, to serve the 1Gb and 10Gb clients that way?
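For context, here is roughly what I picture for options 2 and 3 at the command level — just a sketch using FreeBSD's lagg(4) syntax (FreeNAS would normally configure this through the GUI, and the interface names and addresses below are placeholders, not my actual hardware):

```shell
# Option 2 sketch: aggregate two 1Gb NICs (igb0/igb1 are placeholders)
# into an LACP lagg; the switch ports must also be configured for LACP.
ifconfig igb0 up
ifconfig igb1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 192.168.10.5/24

# Option 3 sketch: separate 1Gb and 10Gb interfaces on different subnets,
# so 1Gb and 10Gb clients each reach the server over their own network.
ifconfig em0 192.168.1.5/24    # 1Gb clients
ifconfig ix0 192.168.20.5/24   # 10Gb storage network
```

My understanding is that LACP hashes each flow to a single link, so a lone iSCSI or NFS session would still be capped at 1Gb even with the lagg — which is part of why I am asking.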
Thanks