Two servers connecting to iSCSI target

Joined
Jun 24, 2015
Messages
2
I am setting up a home lab environment that I can use for testing purposes. I have installed FreeNAS on a VMware Workstation VM and given it a 20 GB vmdk to use as a RAID 0. I've presented it as an iSCSI target and have two Server 2012 R2 VMs pointing at it. They can both see the drive. As a simple test I created a folder on the drive from Server A, expecting to see the new folder on Server B, but this does not happen. Server B does not show the new folder until I restart that server, at which point it shows up fine. Does anyone know why this is happening?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A "folder on the drive"? So what filesystem are you using? NTFS?

You cannot have multiple clients accessing an iSCSI device unless there's an intelligent cluster-aware filesystem involved, or you have some other mitigation strategy figured out. Each operating system caches information about what's out there on the disk, so when one of your Server 2012 machines writes something to the disk, the other doesn't see it because it is still holding stale copies of those blocks in its cache.

NTFS is not a cluster-aware filesystem, so if you're using NTFS on that device, this will not work and cannot be fixed - the filesystem is inherently incapable of it.
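
Here's a minimal sketch of the failure mode, as a hypothetical Python simulation (the names and structure are made up for illustration; real NTFS caching is far more involved). Each client keeps a private cache of the shared disk's directory blocks, nothing ever invalidates that cache when the other client writes, and a reboot "fixes" things only because it throws the cache away:

```python
# Hypothetical simulation of two non-cluster-aware clients sharing one
# block device. Purely illustrative; not real NTFS or iSCSI code.

shared_disk = {"root_dir": []}  # the iSCSI target's actual blocks

class Initiator:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # local block cache, never invalidated by peers

    def read_dir(self):
        # Serve from cache if present -- this is the whole problem:
        # nothing tells this client that another client changed the disk.
        if "root_dir" not in self.cache:
            self.cache["root_dir"] = list(shared_disk["root_dir"])
        return self.cache["root_dir"]

    def create_folder(self, folder):
        # Write through to the disk and update our own cache only.
        shared_disk["root_dir"].append(folder)
        self.cache["root_dir"] = list(shared_disk["root_dir"])

    def reboot(self):
        self.cache.clear()  # a reboot drops the stale cache

server_a, server_b = Initiator("A"), Initiator("B")
server_b.read_dir()                  # B caches an empty directory
server_a.create_folder("NewFolder")  # A writes to the shared disk
print(server_b.read_dir())           # [] -- B still sees its stale cache
server_b.reboot()
print(server_b.read_dir())           # ['NewFolder'] -- appears after reboot
```

And it's actually worse than the sketch suggests: both sides cache writes too, so their lazily flushed metadata updates can land on top of each other and corrupt the filesystem outright.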

This topic seems to come up so darn often. I kind of wish people would notice the warning in the manual.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
iSCSI is like hooking up a USB drive: you can't connect it to multiple machines at once (unless the OSes are using a cluster-aware filesystem).
 
Joined
Oct 2, 2014
Messages
925
This is going to sound stupid, and I am not a VMware ESXi expert, but... I have an iSCSI SAN (Ubuntu server) and I have it connected to two ESXi hosts via iSCSI. They're sharing two LUNs. Is ESXi doing something special to be able to connect two servers to a single iSCSI target?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
This is going to sound stupid, and I am not a VMware ESXi expert, but... I have an iSCSI SAN (Ubuntu server) and I have it connected to two ESXi hosts via iSCSI. They're sharing two LUNs. Is ESXi doing something special to be able to connect two servers to a single iSCSI target?
ESXi uses a CLUSTER-AWARE filesystem
 
jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This is going to sound stupid, and I am not a VMware ESXi expert, but... I have an iSCSI SAN (Ubuntu server) and I have it connected to two ESXi hosts via iSCSI. They're sharing two LUNs. Is ESXi doing something special to be able to connect two servers to a single iSCSI target?

Just a little bit. This actually gets back to the problem we were tackling in the original thread. See, a cluster-aware filesystem has to avoid doing certain things; in particular, it cannot assume that things like the metadata in its local cache are up to date.

VMware's VMFS is kind of a specialized bastard hybrid because it mainly needs to do one thing well, which is to provide exclusive access to a file resource to a single hypervisor. So what happens is that when you spin up VM 3 on Host B, VMFS locks that resource and makes it so that other hypervisors cannot also spin up that VM or access its data. This means that Host B knows without a doubt that all the blocks for the vmdk for VM 3 are its to play with, and that it need not concern itself with any cluster-awareness for all the blocks that comprise the VM 3 vmdk. Meanwhile VM 4 over on Host A is accessing nearby blocks on the same VMFS volume because those were allocated to VM 4, and VM 9 is happily running on Host C.

There isn't any actual block-level cluster access management going on for the blocks within those vmdk's, and so this means that your ESXi host can do things like local SSD caching ("Flash Read Cache") of the VM data because no other ESXi host would have updated the data in those blocks. This would normally be a problem with a generic cluster-aware filesystem, because you can't really trust local caches of data if you don't know what has been touched on disk.

What VMFS mainly has to worry about is managing its own metadata (for the VMFS filesystem). That DOES need to be fully cluster-aware, so that multiple hosts manipulating it are not negatively affecting each other. If you try to create two new VMs, one on each host, at exactly the same time, for example, both of them start reading the VMFS root directory at the same time for a place to add "MyNewVM-1" and "MyNewVM-2". Without some form of cooperative locking and access, both of them would find the first free space in the directory, create an entry, and write it back to storage... and that'd be in the same spot. And both would try to allocate the subdirectory and free space for their VMs and probably do THAT in the same space as well. That'd be a catastrophe. So it is instead done cooperatively, so that "MyNewVM-1" gets put in root directory slot 8 and "MyNewVM-2" gets put in root directory slot 9, and the free blocks are cooperatively allocated: "/MyNewVM-1" is at block 10,000 and "/MyNewVM-2" is at block 10,020, "MyNewVM-1.vmdk" is allocated from block 10,030 to 10,429, and "MyNewVM-2.vmdk" from 10,430 to 10,829.
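
To make the race concrete, here's a hypothetical Python sketch (a threading.Lock standing in for VMFS's SCSI-level locking, which works quite differently under the hood): without the lock, two hosts can scan the directory, pick the same free slot, and clobber each other; with it, the second host sees the first host's entry and takes the next slot.

```python
import threading

# Hypothetical stand-in for VMFS metadata on the shared LUN; names and
# slot numbers are made up for illustration.
root_dir = {}                      # directory slot number -> VM name
metadata_lock = threading.Lock()   # stands in for VMFS's cluster lock

def create_vm_racy(vm_name):
    # What happens WITHOUT cooperative locking: each host scans for the
    # first free slot, then writes. Run concurrently from two hosts,
    # both can pick the same slot and overwrite each other's entry.
    slot = 0
    while slot in root_dir:
        slot += 1
    root_dir[slot] = vm_name

def create_vm_cooperative(vm_name):
    # Take the cluster-wide lock first, so the scan-and-write is atomic
    # with respect to every other host touching the same metadata.
    with metadata_lock:
        slot = 0
        while slot in root_dir:
            slot += 1
        root_dir[slot] = vm_name
    # Once the metadata is committed, the blocks allocated to this VM
    # belong to its host alone -- no further locking is needed to write
    # the vmdk itself.

create_vm_cooperative("MyNewVM-1")   # lands in slot 0
create_vm_cooperative("MyNewVM-2")   # lands in slot 1, not on top of slot 0
print(root_dir)                      # {0: 'MyNewVM-1', 1: 'MyNewVM-2'}
```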

The problem is that VMFS has to signal all the locking for those metadata updates through the SAN filer, which is fairly slow. However, once all that is done and the VMs are spun up, they do not need to worry about further locking updates, and the VMs are basically just allowed to write whatever they like to their vmdk files, as though they were actual disks.

If you're still confused, please squeak.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
mod note: I've moved this subthread from the Off-Topic thread back into the original thread, because it is totally relevant to the original post.
 
Joined
Oct 2, 2014
Messages
925
(jgreco's VMFS explanation quoted in full above)
Oh @jgreco, you and @cyberjock are truly great people :) Thank you for this. I tagged cyber because he has provided more answers than I will admit :P
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I was about to write something like what jgreco wrote, but as I scrolled through the thread he beat me to it. :P Definitely a good post, jgreco!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, and it's an interesting reminder that "cluster-aware" means different things in different contexts. Any technique that allows you to get your stuff done while not stomping on (and being aware of) the next guy's stuff could count as "cluster-aware". VMFS counts primarily because there's no reason two hypervisors would be trying to work on a single VM (and if you try, you'll find VMFS refuses to let you). Handy fun fact: if you run shell scripts from your ESXi host, those files also get locked, so you cannot run a shell script from mounted shared storage on two ESXi boxes at once. This also mostly eliminates the need for cache-coherence techniques (where, more or less, the VMFS solution is "don't do that").
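
The semantics are roughly what you get from ordinary advisory file locking. A rough Python analogue (using POSIX fcntl.flock, which is not what VMFS actually uses, just the same idea; the lock file name is made up): the second would-be opener is refused outright rather than silently sharing the file.

```python
import fcntl

# Rough analogue only: POSIX advisory locks via flock(), standing in
# for VMFS's own SCSI-level locking.

f1 = open("MyNewVM-1.vmdk.lck", "w")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # first "host" wins the lock

f2 = open("MyNewVM-1.vmdk.lck", "w")
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # second "host" is refused
except BlockingIOError:
    print("locked by another host; refusing to power on the VM")
```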
 