RAID disks disappeared from FreeNAS, need help getting them back


karenlaw

Cadet
Joined
Oct 5, 2018
Messages
2
I am newish to FreeNAS and new to this forum. The FreeNAS and VMware host no longer see the RAID setup; how do I get it visible again?

Setup: an ASUS box holding a direct connect drive, and on the same box 24 drives connected to an Areca RAID controller. The direct connect drive is running VMware 5.5i. Inside VMware, a virtual machine was created, and that virtual machine is running FreeNAS 9.11. Until yesterday, the RAID controller saw the 24 individual drives, and VMware and FreeNAS saw one single storage volume. But yesterday an attempt was made to add a new fibre NIC to the system. Although VMware sees the new card, powering on the FreeNAS virtual machine failed. After the new card was removed, I was able to power on the FreeNAS VM, but the block storage has disappeared from FreeNAS and VMware. It is as if the RAID controller is not connected. How do I make it visible again?

Please help me,

Karen
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am newish to FreeNAS and new to this forum. The FreeNAS and VMware host no longer see the RAID setup; how do I get it visible again?

Setup: an ASUS box holding a direct connect drive, and on the same box 24 drives connected to an Areca RAID controller. The direct connect drive is running VMware 5.5i. Inside VMware, a virtual machine was created, and that virtual machine is running FreeNAS 9.11. Until yesterday, the RAID controller saw the 24 individual drives, and VMware and FreeNAS saw one single storage volume. But yesterday an attempt was made to add a new fibre NIC to the system. Although VMware sees the new card, powering on the FreeNAS virtual machine failed. After the new card was removed, I was able to power on the FreeNAS VM, but the block storage has disappeared from FreeNAS and VMware. It is as if the RAID controller is not connected. How do I make it visible again?

Please help me,

Karen
You built this wrong. I hope you had a backup of the data because it is probably irrecoverably lost.
Sorry.

"Absolutely must virtualize FreeNAS!" ... a guide to not completely losing your data.
https://forums.freenas.org/index.ph...ide-to-not-completely-losing-your-data.12714/
 

karenlaw

Cadet
Joined
Oct 5, 2018
Messages
2
It was an old hardware setup put in place by people who are no longer around. The thing is, after I reinstalled the RAID controller driver, it shows up under Configuration -> Storage -> Devices. Now if I can just add it as new hardware in the VM, it might just work, but I don't know. I am also finding errors when I run voma, and I don't know how to fix them. Please let me know: is there anything I can do?

Karen
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It was an old hardware setup put in place by people who are no longer around. The thing is, after I reinstalled the RAID controller driver, it shows up under Configuration -> Storage -> Devices. Now if I can just add it as new hardware in the VM, it might just work, but I don't know. I am also finding errors when I run voma, and I don't know how to fix them. Please let me know: is there anything I can do?

Karen
I am pretty sure that running voma (vSphere On-disk Metadata Analyzer) is not the thing to do here. If the hardware array controller had a volume that was passed to vSphere, vSphere passed that volume to FreeNAS, and FreeNAS built a ZFS file system on it, and if that volume can be passed back into FreeNAS (that is a lot of ifs), then it is just possible that FreeNAS might be able to recognize the volume and recover. You would not want to allow any other utility to mess with the data, though, because that could corrupt the things that ZFS needs to work, if they aren't already corrupted. This is just so messed up, I don't have any hope it can be recovered.
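If the volume does come back into the VM, a minimal, non-destructive check from the FreeNAS shell would be something along these lines (the pool name "tank" is only a placeholder for whatever the pool was actually called; importing read-only avoids writing anything to a possibly damaged pool):

zpool import                                 # list any pools ZFS can find on the attached devices, without importing them
zpool import -o readonly=on -R /mnt tank     # if the pool shows up, import it read-only so nothing gets written
zpool status tank                            # then check the pool's reported health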
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
... Yikes.

I'm with @Chris Moore on this one - you need to know how the volume was initially passed to the FreeNAS VM (local raw device mapping vs VMDK) and try to restore that link. You said that VMware previously saw the storage; was this as a datastore or just as a raw volume?

My concern is that perhaps that card was old enough to have somehow stored the RAID configuration on itself, and something got garbled in the NVRAM when the fibre HBA was installed.
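One way to find out how the volume was handed to the VM (a sketch only; the datastore path, VM directory, and disk file names below are placeholders) is to look at the VM's files on the ESXi host:

cd /vmfs/volumes/datastore1/freenas          # placeholder path to the FreeNAS VM's directory
grep -i "fileName" freenas.vmx               # lists the disk or mapping file attached to each virtual SCSI slot
vmkfstools -q freenas_1.vmdk                 # for an RDM descriptor this should report the raw device it maps to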
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Areca RAID controller.
Can you supply the EXACT model number of this part?
Although VMware sees the new card, powering on the FreeNAS virtual machine failed.
Have you looked at your logs to find out why it did not boot/power on?
It was an old hardware setup put in place by people who are no longer around.
Probably for the best.
The thing is, after I reinstalled the RAID controller driver
You mean a .vib packaged driver?
Now if I can just add it as new hardware in the VM
As noted, this all depends on how it was set up. If passthrough was used, it would need to be reconfigured. On the FreeNAS VM, do you see any memory reservations configured? That may be a hint that passthrough was used. If not, we need to find out if there are any VMFS volumes on the RAID logical volume.
I am also finding errors when I run voma, and I don't know how to fix them.
Are you running voma on the direct attached disk or the RAID volume?
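For the reservation/passthrough question, a quick check (the datastore path and VM name are placeholders) is to look straight at the .vmx file on the host:

grep -i -E "pciPassthru|sched.mem.min" /vmfs/volumes/datastore1/freenas/freenas.vmx
#   pciPassthru0.present = "TRUE"  would mean the Areca card was passed through to the VM
#   sched.mem.min set to the VM's full RAM is the full memory reservation that passthrough requires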
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
you need to know how the volume was initially passed to the FreeNAS VM (local raw device mapping vs VMDK) and try to restore that link.
It could still be passthrough, as noted above. If it was an RDM, the device would still be listed in the VM config, and there would be a mapping file stored with the VM for that device.

On the ESXi host, please run the following and report the output in [ code ] tags:
esxcli storage vmfs extent list
esxcli storage vmfs snapshot list
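If those come up empty, it is also worth checking whether ESXi can see the Areca adapter and its logical volume at all (standard esxcli namespaces):

esxcli storage core adapter list     # is the Areca driver loaded and presenting an HBA?
esxcli storage core device list      # does ESXi see the RAID logical volume as a disk device?
esxcli storage filesystem list       # which VMFS volumes are currently mounted?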
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
When you pass through a PCIe card to an ESXi VM... it passes it through via its PCIe address. If you then add another PCIe card, and that card causes the other cards' PCIe addresses to shift... you're in for a world of hurt, as it becomes very hard to remove the old PCIe card's passthrough setting, since it now applies to a card that no longer exists.

If this is your problem, you should be able to see it by doing some PCIe trawling in ESXi... I forget the exact command, but it's something like lspci.
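Something along these lines should do that trawling from the ESXi shell (the .vmx path is a placeholder):

lspci | grep -i areca                                              # current PCIe address of the Areca card
esxcli hardware pci list                                           # full PCI device inventory with addresses
grep -i pciPassthru /vmfs/volumes/datastore1/freenas/freenas.vmx   # the address the VM's passthrough config still points at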

Anyway, I suspect that if this is the problem, the easiest solution would be to remove the fibre card, then remove the PCIe cards from your shut-down VMs, then re-add everything, and then re-add your passthrough PCIe cards.

Just an idea.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I am newish to FreeNAS and new to this forum. The FreeNAS and VMware host no longer see the RAID setup; how do I get it visible again?

Setup: an ASUS box holding a direct connect drive, and on the same box 24 drives connected to an Areca RAID controller. The direct connect drive is running VMware 5.5i. Inside VMware, a virtual machine was created, and that virtual machine is running FreeNAS 9.11. Until yesterday, the RAID controller saw the 24 individual drives, and VMware and FreeNAS saw one single storage volume. But yesterday an attempt was made to add a new fibre NIC to the system. Although VMware sees the new card, powering on the FreeNAS virtual machine failed. After the new card was removed, I was able to power on the FreeNAS VM, but the block storage has disappeared from FreeNAS and VMware. It is as if the RAID controller is not connected. How do I make it visible again?

We used to see a boatload of these kinds of dodgy and fragile setups, but invariably only once they had shattered into little broken bits. Trying to support them was so very frustrating for those of us on the forum, because in many cases no one was able to figure out what had gone wrong, that I originally wrote the warning not to do this kind of stuff:

Please do not run FreeNAS in production as a virtual machine -> https://forums.freenas.org/index.ph...nas-in-production-as-a-virtual-machine.12484/

A lot of the problem here is that there are so many permutations of things that MIGHT have happened, and it is incredibly dependent on the exact details. Your message includes a lot of vagueness, some of which may not be important and some of which may be. I am taking this in a different direction than several of the other posters above, based on a feeling that this may actually be the right interpretation:

1) I assume by "direct connect drive" you mean "a server with direct attached storage", because a drive ("the direct connect drive is running VMware 5.5i") cannot run VMware.

2) I assume that the Areca RAID card had created a single volume, because you said "VMware and freeNas see one single storage."

3) I assume that this has been running for quite some time.

4) I take from this that a likely problem is that the Areca volume/datastore has crashed, and you may have had the unhappy experience of the RAID controller recognizing this as part of the reboot.

If all four of these things are correct, this is not a FreeNAS problem. It is a VMware and Areca issue. Your FreeNAS will recover if you can fix the Areca datastore.

My *SUSPICION* is that with a 24 drive RAID volume, you may have experienced some disk failures and the volume on the Areca has crashed. If this is the case, immediately contact Areca support to see what your options are.

FreeNAS cannot do much to protect data in this kind of setup. Most of ZFS's power revolves around its control over the component devices of a ZFS pool and its ability to pick different disks to write data to, so that the loss of a disk (or in some cases two, three, or even more) does not result in data loss or failure. Hiding all your disks behind a hardware RAID controller (and presenting ZFS with a single virtual disk) means that ZFS cannot do these things for you, so any problem with the underlying virtual disk means your ZFS pool is likely to get hosed, just as it would if you had a single physical hard disk and it failed.
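To make that contrast concrete, this is roughly the difference (pool and device names are just placeholders, not this system's layout):

# How ZFS is meant to be given the disks (e.g. through a plain HBA), so it has its own redundancy to repair from:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
# What this setup effectively gave it: one big virtual disk from the Areca, so ZFS has nothing to repair from:
zpool create tank da0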

Load up Areca's configuration utility, being very careful not to make any changes, and see what it says about the health of the system.
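If Areca's command-line tool (cli64) happens to be installed where you can reach the card, these are the sort of read-only status queries it offers (check your card's CLI documentation; exact availability depends on the model and firmware):

cli64 hw info        # controller hardware and firmware status
cli64 rsf info       # RAID set status -- look for degraded or failed sets
cli64 vsf info       # volume set status -- is the volume still present and Normal?
cli64 disk info      # per-disk state -- any failed or missing members?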
 