Suddenly: No iSCSI, and cannot connect to GUI

Status
Not open for further replies.

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
I shut my environment down tonight so I could do some maintenance (memory swaps to deal with MCA: errors). Did it by the book; everything seemed fine.

When I brought my FreeNAS server back up, I could not connect to the GUI, and my VMware servers could only see an NFS share. No targets. I can only get to the console because my server has an iLO port (IPMI) configured; otherwise, I'd have nothing. I can ping my gateway from the host, but that's it.

After much reading, I figured out how to get to the error messages:


Starting istgt
istgt version 0.5 (20121028)
normal mode
using kqueue
using host atomic
istgt_lu.c:1915:istgt_lu_add_unit: ***ERROR*** LU2: no LUN0
(more stuff about no lun0)
WARNING: failed to start istgt

I haven't made any changes to the system in weeks, and the system had been running for some 77 days.

All my VMs are on the iSCSI volumes. Everything looks like it should work:
all disks are found and listed
all partitions are listed
all zfs volumes are listed

my istgt.config has: LUN0 Storage /dev/zvol/zPool_01/VM_iSCSI_01 auto (and another logical unit for LUN1)
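Since istgt aborts with "no LUN0" when a LogicalUnit section has no usable LUN0 backing device, one quick sanity check is to pull every Storage path out of the config and confirm the device node actually exists. A minimal sketch (the inlined config fragment and the awk pattern are illustrative stand-ins, not the exact file on this box):

```shell
# Extract each "LUNn Storage <path> ..." device path from an istgt-style
# config fragment and report whether the device node is present.
conf='
[LogicalUnit1]
  LUN0 Storage /dev/zvol/zPool_01/VM_iSCSI_01 auto
[LogicalUnit2]
  LUN0 Storage /dev/zvol/zPool_01/VM_iSCSI_02 auto
'
printf '%s\n' "$conf" |
awk '$1 ~ /^LUN[0-9]+$/ && $2 == "Storage" { print $3 }' |
while read -r dev; do
    if [ -e "$dev" ]; then
        echo "present: $dev"
    else
        echo "MISSING: $dev"
    fi
done
```

A MISSING line for one of the zvols would be consistent with the "LU2: no LUN0" abort above.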

I just can't figure out why the target provider won't start.

I really need some help to get this running.


Thanks!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Without going and setting up iSCSI on an older FreeNAS box, mostly I'd be doing what you're doing. But the real interesting question is whether or not /dev/zvol/zPool_01/VM_iSCSI_01 exists, and whether or not the pool reports healthy ("zpool status").
 

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
zpool status shows no known data errors, all the disks are online, and the raidz2 volumes look healthy.

There are also entries called VM_iSCSI_01p1 and VM_iSCSI_02p1 in the targets folder.


zfs list shows my zPool_01, its contents (.system, .system/cores, etc.), zPool_01/VM_iSCSI_01 and _02, and my NFS export.


 

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
I think something terrible happened to the FreeNAS installation, somehow.
What if I re-install and try to import my pool? I can install to a new USB fob, or over the existing installation.
Will that let me get back into the GUI so I can at least restore a config backup (which should be good)?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Definitely don't overwrite the existing boot device; try a new USB device, and try restoring a config backup. It looks like it is probably recoverable, but I just don't have an explicit set of steps for you to try.

Keeping the existing boot device unchanged gives some additional flexibility in the event that there are any problems restoring a config backup. There are instructions around here somewhere on how to go about pulling the config off an old boot device (either the running or backup config).
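For reference, a sketch of what pulling the config off an old 9.x boot stick might look like. The device node (da1s1a) is an assumption, and the database path is based on the FreeNAS 9.x default of keeping the active config at /data/freenas-v1.db on the boot device; adjust both to your layout.

```shell
# Copy the FreeNAS config database out of an (already mounted) old boot device.
# $1 = mountpoint of the old boot device, $2 = destination directory.
copy_freenas_config() {
    src="$1/data/freenas-v1.db"     # default config db location on 9.x (assumption)
    if [ -f "$src" ]; then
        cp "$src" "$2/" && echo "copied: $src"
    else
        echo "not found: $src" >&2
        return 1
    fi
}

# On the live system, something like (device name is a placeholder; run as root):
# mount -r -t ufs /dev/da1s1a /mnt
# copy_freenas_config /mnt /root
# umount /mnt
```

The copied freenas-v1.db can then be restored through the new install's config upload in the GUI.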
 

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
Before I saw this reply, I took an 8 GB Cruzer I had lying around, installed FreeNAS to it, and then headed back into the datacenter. I swapped out the boot devices and was able to log in to the GUI.

To make a long story short: I made some mistakes along the way, but none were game-enders. The Cruzer I made at home ... 32-bit version of 9.2.1.5 (doh!). Bad memory and only 4 GB is why the imports were crashing.

I cleaned the original Cruzer, put 9.3 on it, and booted from that. I logged in, set up the name and IP and all that, and then did an automatic import of the zPool.

After the problematic memory was removed and a 64-bit version of FreeNAS was up and running, the import completed in less than 10 seconds. (I had tried to restore a backup config, but it wouldn't boot afterwards.)

I rebuilt the parts (portals, targets, extents, etc.) for the iSCSI targets, set up NFS, and my VMware hosts, with a little coaxing, saw everything. I got all my VMs up and running again.

There is one issue I still need to resolve: only one of my two hosts can mount the iSCSI devices as a datastore. Host A is all cool in the gang, but host B wants me to format the device. It won't see that there is a signature and an existing VMFS volume and just be happy with it.
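A hedged guess at what host B is doing: since the extents were rebuilt, the LUN may present a device identity that no longer matches the one stamped in the VMFS header, so ESXi treats the volume as an unresolved "snapshot" and refuses to auto-mount it. On ESXi 5.x this can be inspected and force-mounted from the ESXi shell with the stock esxcli verbs (wrapped in a guard here so it only runs where esxcli exists; the datastore label is a placeholder):

```shell
# List VMFS volumes that ESXi considers unresolved "snapshots"; if the
# missing datastore shows up, it can be mounted with its signature kept.
list_snapshot_volumes() {
    if command -v esxcli >/dev/null 2>&1; then
        esxcli storage vmfs snapshot list
    else
        echo "esxcli not found: run this in the ESXi shell on host B"
    fi
}
list_snapshot_volumes

# Then, for the affected datastore (label is a placeholder):
# esxcli storage vmfs snapshot mount -l "iSCSI_DS_01"
# or, to write a fresh signature instead of keeping the old one:
# esxcli storage vmfs snapshot resignature -l "iSCSI_DS_01"
```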

Eh, it's a problem that can be solved.
 