iSCSI-3 Persistent Reservations under Hyper-V Server 2012


GladeCreek

Cadet
Joined
Jan 7, 2013
Messages
2
I have seen several nicely written guides on how to set up Cluster Shared Volumes on FreeNAS, and I have read forum posts of people doing it successfully, but I've also seen quite a few posts from folks who cannot get it to work. I'm in the 'not working' category.

I'm running FreeNAS 8.3.0p1, and I've tried both device extents and file extents with similar results. The attached servers are running Hyper-V 2012 RTM and can see the presented volumes (initialize, format, etc.), but I run into a snag when attempting to use them as CSV volumes. Below is the output from the cluster validation. I thought perhaps the verbose messages might jog something in someone's mind.

Is this really a problem with the underlying iSCSI package that FreeNAS uses? Or could it be something as simple as the target authentication settings?

If anyone has this working as CSV storage with any version of the Microsoft OS, please post your settings so that I can try to emulate them in my lab. Thanks very much.


Validate SCSI-3 Persistent Reservation
Description: Validate that storage supports the SCSI-3 Persistent Reservation commands.
Start: 1/7/2013 12:37:17 PM.

Validating Test Disk 1 for Persistent Reservation support.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING for Test Disk 1 from node HYPERV04.GCT.local.
Issuing call to Persistent Reservation RESERVE on Test Disk 1 from node HYPERV04.GCT.local.
Issuing Persistent Reservation READ RESERVATION on Test Disk 1 from node HYPERV04.GCT.local.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING for Test Disk 1 from node HYPERV03.GCT.local.
Issuing call to Persistent Reservation RESERVE on Test Disk 1 from node HYPERV03.GCT.local.
Issuing call Persistent Reservation PREEMPT on Test Disk 1 from unregistered node HYPERV03.GCT.local. This is expected to fail.
Issuing call to Persistent Reservation RESERVE on Test Disk 1 from node HYPERV03.GCT.local.
Failure issuing call to Persistent Reservation RESERVE on Test Disk 1 from node HYPERV03.GCT.local when that node has successfully registered. It is expected to succeed. The requested resource is in use.
.
Test Disk 1 does not provide Persistent Reservations support for the mechanisms used by failover clusters. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.

Validating Test Disk 0 for Persistent Reservation support.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING for Test Disk 0 from node HYPERV04.GCT.local.
Issuing call to Persistent Reservation RESERVE on Test Disk 0 from node HYPERV04.GCT.local.
Issuing Persistent Reservation READ RESERVATION on Test Disk 0 from node HYPERV04.GCT.local.
Issuing Persistent Reservation REGISTER AND IGNORE EXISTING for Test Disk 0 from node HYPERV03.GCT.local.
Issuing call to Persistent Reservation RESERVE on Test Disk 0 from node HYPERV03.GCT.local.
Issuing call Persistent Reservation PREEMPT on Test Disk 0 from unregistered node HYPERV03.GCT.local. This is expected to fail.
Issuing call to Persistent Reservation RESERVE on Test Disk 0 from node HYPERV03.GCT.local.
Failure issuing call to Persistent Reservation RESERVE on Test Disk 0 from node HYPERV03.GCT.local when that node has successfully registered. It is expected to succeed. The requested resource is in use.
.
Test Disk 0 does not provide Persistent Reservations support for the mechanisms used by failover clusters. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.
Failure issuing call to Persistent Reservation RESERVE on Test Disk 0 from node HYPERV04.GCT.local when that node has successfully registered. It is expected to succeed. The requested resource is in use.
.
Failure issuing call to Persistent Reservation RESERVE on Test Disk 1 from node HYPERV04.GCT.local when that node has successfully registered. It is expected to succeed. The requested resource is in use.
.
Stop: 1/7/2013 12:37:18 PM.
Test failed. Please look at the test log for more information.
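
(For anyone who wants to poke at this outside the wizard: the sequence above is roughly the SPC-3 REGISTER AND IGNORE EXISTING / RESERVE / READ RESERVATION dance. A rough sketch of the same calls using sg_persist from sg3_utils on a Linux test box - the device name and keys here are made up for illustration:

# Register a key for this initiator, ignoring any existing registration
sg_persist --out --register-ignore --param-sark=0xaaaa /dev/sdb
# Take a Write Exclusive - Registrants Only (type 5) reservation with that key
sg_persist --out --reserve --param-rk=0xaaaa --prout-type=5 /dev/sdb
# Read back the current reservation and the registered keys
sg_persist --in --read-reservation /dev/sdb
sg_persist --in --read-keys /dev/sdb

If the target mishandles any of these, you'd expect the wizard to fail the disk the way the log above shows.)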
 

jerquiaga

Cadet
Joined
Jan 25, 2013
Messages
3
I've experienced the same issue. I was hoping that someone would have responded and had a thought. I've read all the same things about people successfully setting up FreeNAS to work as iSCSI storage for Hyper-V. Part of me wonders if there is something different about the way that Hyper-V works between 2008 R2 and 2012.

On the other hand, I haven't really seen anything that seems to indicate that it works any better for VMware clustering.

What's really hard is that there are all kinds of posts saying that it works, and just as many saying that it doesn't.
 

jerquiaga

Cadet
Joined
Jan 25, 2013
Messages
3
Follow-up

So, on a whim I set up a couple of Server 2008 R2 servers in VirtualBox, and they successfully pass the cluster validation tests, so there must be something different about the way Server 2012 does persistent reservations (or the way the validation test works). I'm not sure if there is a good way to figure that difference out. Anyone have any ideas?



 

kailord81

Cadet
Joined
Mar 28, 2013
Messages
1
Re: Follow-up

jerquiaga, for Windows 2012 try this workaround - use the cluster.exe command to add the disk instead of using the Failover Cluster console - it works for me.

Present the LUN to both nodes.
On one of the nodes, bring the disk online, format it, and give it a drive letter, e.g. Q:.
On the same node, install the Failover Cluster Command Interface feature.
Open a command prompt and run these commands:
cluster res "res1" /create /group:"Available Storage" /type:"Physical Disk"
cluster res "res1" /priv diskpath="Q:"
cluster res "res1" /on
You can now open the Failover Cluster console and verify the disk.

You can repeat the above steps for other LUNs that you want to add - just replace "res1" with another descriptive name.
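
(If you prefer PowerShell, the same three steps should map onto the FailoverClusters cmdlets - a rough sketch, with the resource name and drive letter as placeholders:

Import-Module FailoverClusters
# Create a Physical Disk resource in the Available Storage group
Add-ClusterResource -Name "res1" -Group "Available Storage" -ResourceType "Physical Disk"
# Point it at the formatted volume by drive letter
Get-ClusterResource "res1" | Set-ClusterParameter -Name DiskPath -Value "Q:"
# Bring the resource online
Start-ClusterResource "res1"

I used cluster.exe myself, so take the PowerShell version as untested.)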
 

jerquiaga

Cadet
Joined
Jan 25, 2013
Messages
3
Re: Follow-up

Thanks for the possible workaround. I may give this a shot, but I've largely moved on to using Illumian and Napp-it for storage, since it seems to work as-is. It's a little more work to get set up initially, but it seems to be quite efficient.
 
Joined
Apr 28, 2013
Messages
1
Re: Follow-up

Hi Kailord,

I saw your note on this thread and thought I would get in touch. I attempted your workaround, but the cluster service was complaining of 'there are no more endpoints..., System error 1753'. I attempted to start the cluster service, but it would not start. Most articles reference using cluster.exe to manage the cluster, but I am struggling to understand how this works. If you have any other tips, they would be gratefully appreciated.

I thought about upgrading my FreeNAS as well, but I can't find any information as to whether the latest version has a fix for this issue.

Kind regards,

John
 

mstrent

Dabbler
Joined
Oct 11, 2012
Messages
21
Re: Follow-up

Watching this thread. Hoping to use (test) FreeNAS (and buy a TrueNAS) with Hyper-V and iSCSI.
 

GladeCreek

Cadet
Joined
Jan 7, 2013
Messages
2
Well, in the end I chose to use the iSCSI target built in to Windows 2012. It's fully compliant, with no shenanigans. As far as I know, FreeNAS doesn't support SPC-3, and forcing the cluster through tricks is just inviting a disaster down the road. I mean, the point of clustering is high availability and reliability, right? IMO, you'd be way better off using local storage on the Hyper-V servers and using Hyper-V's SAN-less live migrations - slower, but the same function. Also, Hyper-V 2012 R2 adds a bunch of new features to ensure reliability, failover, and such without a SAN.

And finally, in a similar vein: now that Hyper-V (2012 R2) supports simultaneous shared access to SMB shares from multiple hosts, you can just use a simple file share for clustering (no iSCSI) and get the exact same result without the complications. A sketch of what that looks like is below.
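
(Roughly, on the file server - the share, domain, and host names here are made up, and the Hyper-V hosts' computer accounts need full access on both the share and NTFS:

# Share a folder for VM storage; grant the Hyper-V hosts' machine accounts full control
New-SmbShare -Name "VMStore" -Path "D:\VMStore" -FullAccess "GCT\HYPERV03$", "GCT\HYPERV04$", "GCT\Domain Admins"
# Mirror the same permissions on the NTFS side
icacls "D:\VMStore" /grant "GCT\HYPERV03$:(OI)(CI)F" "GCT\HYPERV04$:(OI)(CI)F"

Then point the VM configuration and VHDX paths at \\fileserver\VMStore. Untested from my end in exactly that form, so treat it as a starting point.)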

Hope that helps.
 
Joined
Nov 4, 2013
Messages
4

I originally tried 2012 R2 for many of the reasons you mention, but I had trouble getting it onto the hardware (HP N40L MicroServer).

It's not massively critical that this works anyway; I was tinkering more for educational purposes. We are about to go live with a new Hyper-V environment at work, and I just wanted to play around with Windows clustering.

Thanks for the informative post!

Cheers,

Edit: On a side note, how have you found the performance of the iSCSI target built into 2012?
 

Steven Sedory

Explorer
Joined
Apr 7, 2014
Messages
96
The "workaround" above seemed to render the same results, not to mention there were steps left out. The cluster must be formed first, for example.

BUT, I do have a current and apparently working cluster with two Server 2012 R2 nodes and FreeNAS iSCSI. Allow me to explain:
After not being able to make sense of the reservation differences between 2008 R2 and 2012 R2, I began to try all different variations of settings on both ends, still getting the same results in the validation process. Before I completely gave up and decided which FreeNAS alternative to go with, I came across this article: http://gabrewer.com/2013/01/misleading-error-in-cluster-validation-wizard/

Though the context is a bit different than ours here, the article reports false negatives in validation of iSCSI-3 Persistent Reservations for those upgrading from 2008 R2 to 2012 (not R2). It apparently has something to do with Storage Spaces not being used, and thus gives a "warning" (not a "failure" like we're getting). Though very different from an objective standpoint, it gave me a glimpse of hope that possibly something similar was happening, especially since no one seems to be able to pin down the key differences in reservations between the two versions of Server (that I can find).

So, I went ahead and created a cluster with the iSCSI-3 Persistent Reservations failure and its related tests being the only fault. I added four targets to the cluster, one being a 4GB quorum disk (MS says 500MB is enough), and three 480GB drives to be used as CSVs. I have since created two VMs, Windows 7 Pro and Server 2008 R2, on the same CSV. I live-migrated the Windows 7 VM back and forth as I was installing the OS on the Server 2008 VM. No event log errors or warnings. No apparent issues whatsoever so far.
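
(For anyone repeating this: once the cluster exists, the disks can be promoted to CSVs from PowerShell - a rough sketch; the resource names are whatever your cluster assigned, so check Get-ClusterResource first:

Import-Module FailoverClusters
# List the disk resources sitting in Available Storage
Get-ClusterResource | Where-Object { $_.ResourceType -eq "Physical Disk" }
# Promote one of them to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"

The volume then shows up under C:\ClusterStorage on every node.)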

I'm going to continue to test out this cluster, adding many more VMs to it over the next week or two. The downside, of course, even if there are no problems, is that MS won't support a cluster that hasn't passed validation. I don't think I've ever called MS for support, but I'd hate to be stuck in a place where I can't if need be.

If anyone else would like to jump on board and give this a try, that would be awesome. I'm stuck with almost no choice but to go into production as-is, so the more thumbs up, the lower the risk. It would be greatly appreciated.
 

Vishal Patel

Cadet
Joined
Dec 18, 2013
Messages
2
How did it go, Steven? Did you end up in production? I have a FreeNAS iSCSI target working right now with a two-node cluster, both running Server 2012 R2. My validation fails, and errors show up in the event log, but failover and live migrations work perfectly. Has anyone else got this resolved? I would love to get rid of these meaningless errors.
 

Steven Sedory

Explorer
Joined
Apr 7, 2014
Messages
96

Hi Vishal,

This bug is worth reading: https://bugs.freenas.org/issues/4003#change-19962 Please, please add to it and state your concern. Many people are halted by this, though the team probably won't prioritize it until they hear a louder concern.

Long story short, it didn't go well. You're right, live migration seems to work like a charm. But I did what Nick mentioned in the bug thread above, which was to restart one of the nodes, and sure enough I ended up corrupting one of the VMs to an unrecoverable state after a restart or two.

The SCSI-3 Persistent Reservations are apparently very important. You'll notice in the event logs that it will say something to the effect of "tried to use [this or that] resource, but the resource is in use". The reservations are supposed to seamlessly hand off ownership of the Cluster Shared Volumes, but they can't, because FreeNAS's iSCSI target is not up to speed with what Server 2012 R2 is asking for (from what I understand). A sketch of what that handoff looks like at the SCSI level is below.
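
(The handoff is essentially a PR PREEMPT: the node taking over uses its own registered key to kick out the current holder's key. A rough sketch with sg_persist from sg3_utils - the device name and keys are made up for illustration:

# Node B, already registered with key 0xbbbb, preempts node A's key 0xaaaa
# and takes over the type-5 (Write Exclusive - Registrants Only) reservation
sg_persist --out --preempt --param-rk=0xbbbb --param-sark=0xaaaa --prout-type=5 /dev/sdb
# Confirm node B now holds the reservation
sg_persist --in --read-reservation /dev/sdb

If the target rejects the PREEMPT, ownership never moves, which would match the "resource is in use" errors.)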

Per the advice in the bug thread above, I've sadly moved to NexentaStor. Their "Community" edition, vs. Enterprise, similar to FreeNAS vs. TrueNAS, seamlessly supports the reservations. The issue is, their community license states that it is not to be used in production. It is also limited to 16TB raw. My VM storage is under that limit, but I definitely need it for production. I will probably have to purchase their Enterprise version to cover my butt, though their sales team hasn't responded to my inquiries for over two weeks now.

I have NexentaStor set up, but man, is it hairy compared to FreeNAS. I REALLY hope FreeNAS gets this fixed very soon. Sadly, it will be too late for the current SAN I'm implementing, as it will likely be in full production before it's fixed (unless it's fixed this week).
 

Vishal Patel

Cadet
Joined
Dec 18, 2013
Messages
2
Hi Steven,
I was finally able to get back to some testing this week and found that OI + napp-it works without a problem. Storage validation passed with all greens.
If anyone else has had a bad experience with this, please feel free to respond. I installed OI and then ran "wget -O - www.napp-it.org/nappit | perl" to get napp-it installed. Email me if you need some info regarding this.
 

Steven Sedory

Explorer
Joined
Apr 7, 2014
Messages
96
Hi Vishal,

Good to know that that's an option as well. So far NexentaStor has been working out fine. I'm not sure if you're keeping an eye on the bug report I posted above, but there may be a fix soon. We'll see.
 