ESXi 6.5 + iSCSI zvol share - Immediately shows "Normal, Degraded" in vSphere

Status
Not open for further replies.

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
I've progressed through the article on b3n.org about setting up FreeNAS as a VM. I've given it 2 dedicated cores and 16 GB of dedicated RAM.

I followed all the recommendations as best I could. I have the following setup:
Supermicro X10SRH-CLN4F
Intel Xeon E5-2650 (12 cores with Hyper-Threading)
LSI 3008 (onboard) flashed to IT mode
64 GB DDR4-2400 RAM
6 × 2 TB WD Red (set up in RAID 10)
2 × 100 GB Intel S3700 (mirrored) as SLOG (ZIL)

I installed embedded vSphere (ESXi) 6.5, the most recent release.
I also installed FreeNAS-9.10.2-U3 (e1497f269).

I followed this guide as best I could, and it worked very well up to this point. But as you'll see here, my device is shown as "Normal, Degraded."

Screen Shot 2017-06-04 at 12.14.02 AM.png

As an aside, I have to hit rescan/refresh on the iSCSI software adapter, followed by a refresh, before it shows up as a device. But more importantly, I see the device as "Normal, Degraded." I'm not sure if this is due to not having multipathing. I have no external storage (it's all attached to the motherboard), so I don't see the point in having a second path, unless it's just to make this degraded message go away.
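For reference, the same rescan can be triggered from the ESXi shell instead of clicking through the UI (a minimal sketch, assuming SSH/ESXi Shell is enabled on the host):
Code:
# Rescan all storage adapters (HBAs) for new devices
esxcli storage core adapter rescan --all

# Rescan for VMFS volumes on any newly discovered devices
vmkfstools -V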

Any idea whether I should troubleshoot this further, or whether it's just a quirk of iSCSI/multipathing? Attached are snapshots of my FreeNAS iSCSI share configuration.
[Attached screenshots: FreeNAS iSCSI share configuration, Screen Shot 2017-06-04 at 12.14.02 AM.png through 12.38.17 AM.png]
 

Attachments

  • Screen Shot 2017-06-04 at 12.46.45 AM.png

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
Thanks for the reply!

Yes, that was also one of the articles I stumbled on. I ran that in the ESXi shell and rebooted, but the warning didn't go away.
 

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
I also think I found a solution to the mounting-after-power-on issue. I added the following to my /etc/local.sh:
Code:
# Custom boot VSA script

# Sleep while we wait for the FreeNAS VM to come online (120 seconds)
sleep 120

# Rescan the HBAs
/sbin/esxcfg-rescan -A

# Rescan for datastores
/sbin/vmkfstools -V

# Restart the services on the host
/sbin/services.sh restart

http://virtuallyhyper.com/2012/09/esxi-4-1-wait-for-local-vsa-to-start-before-starting-other-vms/
https://kb.vmware.com/selfservice/m...nguage=en_US&cmd=displayKC&externalId=2043564
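If it's hard to tell whether the script actually ran after a reboot, one option is to have it write breadcrumbs to syslog with logger, which is available in the ESXi shell (an optional tweak, not part of the original script):
Code:
# Optional: wrap the rescan commands with breadcrumbs,
# then check /var/log/syslog.log after a reboot
logger "local.sh: starting iSCSI rescan after boot delay"
/sbin/esxcfg-rescan -A
/sbin/vmkfstools -V
logger "local.sh: iSCSI rescan finished"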
 

bmac6996

Dabbler
Joined
Sep 11, 2014
Messages
22
Same here. I created some iSCSI storage, but it shows as degraded right away for me. I tried the above with the same results.
 

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
I haven't had the time, but I wonder if creating a second target will make the degraded state go away. If it does, I'd say remove the second path, since multipath isn't necessary. I'd try it myself, but I'm revisiting the "scan on startup" issue, as the above doesn't seem to persist.
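If you want to see what vSphere actually considers degraded before adding a second path, listing the paths and sessions for the LUN can help (a quick read-only check from the ESXi shell; device and adapter names will differ on your host):
Code:
# Show every path ESXi sees to its storage devices (look for the FreeNAS LUN)
esxcli storage core path list

# Show the active iSCSI sessions on the software adapter
esxcli iscsi session list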


 

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
For what it's worth, I was finally able to get vMA to tell my host to scan for adapters. What a pain in the #@%! However, it's much more elegant than having to remember to rescan after every reboot. I'm just learning Linux, so take all this with a grain of salt.

I did all of this with the root account. You can't log in to vMA as root by default, however; I had to add root to one of the groups that can log in. That said, I'd try the rest below as the default vi-admin account first and see how far you get. I may circle back and try to make this all work with a new account to avoid using root.

Disclaimer: I've yet to audit the following for security purposes, but I'll post what I have.

First, log in to your vMA server over SSH, then run:
Code:
vifp addserver *ip here*


You'll then enter your vi-admin credentials. This will persist after a reboot.
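To confirm the host was actually registered (and that it survives a reboot of the vMA appliance), you can list the registered targets:
Code:
vifp listservers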

Then, try
Code:
vifptarget -s *ip here*


If it yells at you about a thumbprint, you may need to follow this:
https://communities.vmware.com/thread/516559

If, after vifptarget, your shell prompt now shows your server's IP in [ ], it worked and you can issue commands to your host.
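For example, after a successful vifptarget the prompt looks something like this (the IP here is just a placeholder):
Code:
vi-admin@vma:~[192.168.1.50]$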

Try the following
Code:
esxcli storage core adapter rescan --all


If your iSCSI adapter in ESXi is configured correctly, you should now see your ZFS storage in the storage section of your vSphere web host. Make sure your ZFS storage isn't already visible beforehand, or you'll never know whether the above was successful.
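You can also check from the command line instead of the web UI, if you prefer (a couple of read-only commands):
Code:
# The FreeNAS zvol should show up as an iSCSI disk in the device list
esxcli storage core device list

# List mounted datastores / filesystems
esxcli storage filesystem list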

Once all the above works, you can automate it.

I created a "enable_vifptarget" script in my /home/vi-admin/bin/ directory. This is the contents
Code:
#!/bin/bash
# Pick up the vi-admin environment
source $HOME/.bashrc
# vifptarget has to be sourced so it can set the fastpass target in this shell
source /opt/vmware/vma/bin/vifptarget -s *IP Here*
# esxcli runs as a normal command against the fastpass target (no source needed)
esxcli storage core adapter rescan --all


Now make a crontab entry. You may need to set this up as "vi-admin" rather than root, since sudo crontab -e edits root's crontab, I believe; a bit of googling will clear that up.

do
Code:
sudo crontab -e

then
Code:
@reboot . /etc/skel/.bashrc ; bash -l -c '/home/vi-admin/bin/enable_vifptarget'
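To double-check which crontab the entry actually landed in (root's versus vi-admin's), list both:
Code:
sudo crontab -l   # root's crontab
crontab -l        # vi-admin's crontab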


You need to make sure your vMA finishes initializing after your FreeNAS appliance when your ESXi system reboots. Ideally, it wouldn't start booting until FreeNAS is completely up; otherwise it may execute the above commands too soon. I think I told mine to wait 3 minutes to start after a reboot, which is easily done within vSphere with some right-clicking in the right menus.
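If you'd rather inspect the start order and delays from the command line than through the UI, the autostart manager can be queried on the ESXi host itself (read-only examples only; the exact arguments for changing entries vary, so check vim-cmd hostsvc/autostartmanager on your build before modifying anything):
Code:
# On the ESXi host: list VMs and their IDs
vim-cmd vmsvc/getallvms

# Show the current autostart order and per-VM delays
vim-cmd hostsvc/autostartmanager/get_autostartseq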

Now, reboot your host and see if the above does its job.
 
Joined
Nov 24, 2017
Messages
3
Hello.

Maybe this will help someone. :)

I had the same problem (iSCSI Status: Normal, Degraded).
6d939351263d9d2d54fb4ac41493d652.jpg


The fix is easy: just add two VMkernel NICs.

fc55f59c4b2688e7a6b0a5def8f56a66.jpg
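For anyone who prefers the command line, the same thing can be done with esxcli port binding (a sketch; vmk1/vmk2 and vmhba64 are just example names, so check yours with the list commands first):
Code:
# Find the VMkernel interfaces and the software iSCSI adapter name
esxcli network ip interface list
esxcli iscsi adapter list

# Bind two VMkernel NICs to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2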
 

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
I figured as much. However, my setup is virtualized, so I'm not sure what the need would be for multiple adapters. Maybe it does need the logical paths for optimal efficiency? I may just throw in another path to quiet the OCD.
 
Joined
Oct 18, 2016
Messages
1
I just fixed my degraded state by doing the exact same thing. My problem was a little more cumbersome. Originally, I had assigned two physical NICs to my management vSwitch, and that caused my iSCSI storage adapter port binding to fail, since apparently you're not allowed to have two physical NICs on the vSwitch and also bind two VMkernels for the fix described in this thread. Maybe there's a way, but I didn't want to throw another patch onto the host to make something work that normally wouldn't on a standard build.

In short:
I had to remove a NIC from my management vSwitch, create a dedicated vSwitch, add the custom port groups, and add the spare NIC to that. Finally, I was able to bind two VMkernels to my iSCSI adapter configuration.
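For reference, a rough CLI equivalent of those steps looks something like this (a sketch only; the vSwitch, port group, NIC, and IP names are assumptions, and the second port group/VMkernel would be added the same way):
Code:
# Create a dedicated vSwitch for iSCSI and attach the spare physical NIC
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI --uplink-name=vmnic1

# Add a port group and a VMkernel interface for iSCSI traffic
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static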

If you know what you're doing, you know what order to do all this in. Speaking of order, it's quite specific, but if you're like me, ESXi is smart enough to error out when you're doing the wrong thing. With a bit of trial and error and a little common sense, I got there.

Anyway, I worked through all that, and now the degraded state has disappeared from my server. As a result, I have a single NIC for management (plus two VMkernel NICs on the other vSwitch that can be used to manage the server if the primary fails). But more importantly, I have a dedicated gigabit NIC for iSCSI traffic.

So, a success. I'm so stoked, and many thanks!
 

Attachments

  • Image 010.png