FreeNAS 9.3
default MTU
VMware 6.0.0 build 3620759
There is no load and no VMs actually running; the system, end to end, exists for backup and disaster recovery, so that if a physical machine dies we can boot up a P2V copy of it.
In general iSCSI has been working /great/ so far on this FreeNAS system, and I'm in the process of building a third FreeNAS system, identical to the first, on FreeNAS 10 — so if 10 is the solution, I'll find that out later this week.
Basically, other than some testing, there is no load on the FreeNAS system or on the two 10G private storage networks connected to two fresh, clean VMware ESXi 6.0.0 systems.
Bringing up the first iSCSI share was a cakewalk; everything just worked. The reason I'm bringing up a second one is that I ran into an issue that needed a larger block size on the VMware side. Basically, the default 1M VMware block size has a file size limit that is slightly smaller than a machine I'm trying to P2V for disaster planning. So rather than wipe out the work I've done to this point, I was hoping to just spin up a second iSCSI share and build a VMFS filesystem on it with a larger block size. Seems simple enough.
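For anyone following along: if I have the VMFS-3 numbers right (the block-size/file-size coupling applies to VMFS-3-style datastores; this is my own arithmetic, not vendor documentation), the maximum file size is roughly the block size times 256Ki file blocks, which is why a 1 MB block caps files around 256 GB:

```python
def vmfs3_max_file_gb(block_size_mb: int) -> int:
    """Approximate VMFS-3 maximum file size (GB) for a given block size (MB).

    max file size ~= block size * 256Ki blocks, less a little overhead.
    """
    return block_size_mb * 256  # 256Ki blocks * N MB, expressed in GB

for block_mb in (1, 2, 4, 8):
    print(f"{block_mb} MB block -> ~{vmfs3_max_file_gb(block_mb)} GB max file")
```

So a VMDK just over 256 GB needs at least a 2 MB block size on that kind of datastore.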
Given this is for disaster purposes, I'm tempted to throw performance away, use NFS, and just call this a fail... but if I do, the resulting system will not be as useful in a disaster: it's a build server, and running a build over an NFS mount is less desirable.
The problem I am seeing is that the second target never shows up in VMware. I've even tried turning CHAP off.
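One low-level sanity check I can do from any machine on the path: confirm the portal for the second target is even reachable on its iSCSI port. This is just a sketch (the address below is a placeholder for the FreeNAS portal IP; it only proves TCP reachability on 3260, not a successful iSCSI login or discovery):

```python
import socket

def portal_reachable(host: str, port: int = 3260, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder address -- substitute the portal IP bound to the second target.
    print(portal_reachable("10.0.0.10"))
```

If this succeeds but discovery still never lists the second target, the problem is more likely in the portal/target/extent bindings on the FreeNAS side than in the network.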
To debug the issue, I tried connecting from my laptop, which runs Windows 10 (a heavyweight workstation-class laptop, a Dell 4700). From there I noticed that the first connection attempt failed; after a couple of retries, and a 10-second or so wait on the third try, it finally connected.
To be fair, the laptop test is from my desk, and it is not a member of either of the 10G private layer 2 networks that my VMware servers use. Instead I'm running over layer 3, over a path known to consistently perform at 95% of 1G wire speed in both directions.
We have a massively overbuilt network. We manufacture an embedded system that under typical conditions performs well below 100 Mbit/s, so the gig ports on the edge, and the 10G ports that make up the main infrastructure, are under almost no load at all. In general the network is highly segmented, so isolation is wrapped around every engineer and what he is testing. Our target device has a DHCP server, so isolation is enforced: when an engineer does something wrong, he only impacts his own little VLAN.
For our production SANs we have a different setup, but those are well isolated, though they use the same network design I'm using with FreeNAS, with the exception of the MTU. On my production SANs we run with jumbo frames, but on FreeNAS I have stuck to the relevant recommendations from the forums, even if they seem overly paranoid... like obsessing over MTU. I've never had a problem with MTUs on the production side, but we sort of know what we are doing... and on that side we are doing critical performance testing to scale a management software product of ours to handle millions of devices under management... but I digress.
I am using FreeNAS as a place to store archive data and idle, production-ready backups, in case something dies.
In parallel with building the third FreeNAS system, I'm also building a new physical build server, so the FreeNAS/VMware system is a second-level backup.
At this point, all I'm trying to do is get the iSCSI share mounted on ESXi. It doesn't even have to work well... I just didn't want to stoop to NFS... but I will if I have to. I didn't want to set the bar that low; I was hoping the disaster plan was capable of some modest level of performance to hold us over for the time it takes to spin up new hardware. I don't think NFS will provide that, but it would suffice in a disaster, I suppose.
I've got to be doing something wrong, but I don't know what. Obviously the iSCSI share works at some level, but it feels like it must be timing out on something. It came up slowly and glitchily enough on Windows that I suspect VMware is timing out on something that only happens during the initiation of the connection.
Ideas?