Help needed... FreeNAS can't find my pool after OS reinstall...

Status
Not open for further replies.

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
I'm a total newbie at FreeNAS installation, and I screwed up the data on my first experiment with a FreeNAS server.

The story is: we have a Dell MD1200 storage array in our office, attached to a Dell R710 server running FreeNAS-8.0.3-BETA1-amd64 (9056), with the OS on a USB stick. The setup was done by other people back in 2012, so we didn't know it was installed on a USB stick until recently.
Something happened and we decided to reinstall the host server (the R710) with Windows Server. We made a new RAID on the R710 for the Windows Server installation using the BIOS (Dell PERC 700), but we left the pool untouched; we can still see the pool and RAID configuration in the BIOS. After the Windows installation we couldn't access the pool: the Windows disk manager showed an unformatted drive of 29,800 GB (the pool size) but with 0% usage. After that we realized FreeNAS was on the USB stick, so we plugged it in and booted from it again. We can still reach FreeNAS by its IP and through the web management console, but FreeNAS can't recognize or auto-mount the pool, and the mount point where the pool used to sit (/mnt/bulk) is gone when we SSH in.

This is the system information from the FreeNAS web management page:
System Information
Hostname FreeNAS.pustekdata.lapan.go.id
FreeNAS Build FreeNAS-8.0.3-BETA1-amd64 (9056)
Platform Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
Memory 24549MB
OS Version FreeBSD 8.2-RELEASE-p4

And this is the output of "zpool status" and "zpool import":

[root@FreeNAS] ~# zpool status
no pools available
[root@FreeNAS] ~# zpool import
pool: bulk
id: 14162367528603640976
state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-5E
config:

bulk        UNAVAIL  insufficient replicas
  mfid2p1   ONLINE

I'm sorry if my story is too long and confusing, but the data inside is very important, so we're desperate and hoping we can still get it back.

Please ask further questions if there is anything you need to know in order to get the data back. Any help is appreciated... thanks in advance.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
It looks like you have a single disk in FreeNAS and are not using any redundancy. Does it have some kind of RAID configured with the drives? What do you mean by 'made a new RAID'? If you reconfigured your RAID, your data might be gone. What does the SMART data say for each drive? Use the command smartctl -a /dev/<drive ID>
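One wrinkle with that command here: the drives sit behind a PERC (LSI MegaRAID) controller, so `smartctl -a /dev/mfidX` typically reports only on the virtual disk, and reaching the physical drives usually needs smartctl's megaraid pass-through. A minimal sketch, assuming the FreeBSD mfi(4) driver with controller node /dev/mfi0 and three drives (both assumptions, not taken from this thread) -- it only prints the commands to run, rather than executing them:

```shell
# Print one smartctl invocation per physical drive behind the controller.
# Assumptions: mfi(4) controller node /dev/mfi0, drives numbered 0..2;
# adjust DRIVES and the device node to match your system.
DRIVES=3
for n in $(seq 0 $((DRIVES - 1))); do
  printf 'smartctl -a -d megaraid,%d /dev/mfi0\n' "$n"
done
```

The `-d megaraid,N` device type tells smartctl to query physical drive N through the RAID controller instead of the virtual disk it presents.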

P.S. There is about a 90% chance your data is gone and not coming back. Using hardware RAID with ZFS is a big no-no and makes life harder than it needs to be.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
We made a new RAID on the R710 for the Windows Server installation using the BIOS (Dell PERC 700), but we left the pool untouched; we can still see the pool and RAID configuration in the BIOS.
Yeah, this needs a lot more clarification...
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
Thank you all for the responses... I'm sorry for the late reply; I live in the GMT+7 timezone and I'm only able to access the FreeNAS machine during work hours.

It looks like you have a single disk in FreeNAS and are not using any redundancy. Does it have some kind of RAID configured with the drives? What do you mean by 'made a new RAID'? If you reconfigured your RAID, your data might be gone. What does the SMART data say for each drive? Use the command smartctl -a /dev/<drive ID>
Yeah, this needs a lot more clarification...

I only reconfigured the RAID on the internal storage of the R710 (the host server), which is handled by the PERC 700 controller. The MD1200 is handled by the PERC 810 controller, which I did not touch.

Is it allowed to upload pictures here? If so, I'll take some pictures of the server's display in case my explanation is not enough.

Because my knowledge is very limited, I don't know which information is important and could give you some ideas.

P.S. There is about a 90% chance your data is gone and not coming back. Using hardware RAID with ZFS is a big no-no and makes life harder than it needs to be.

That's what I'm afraid of, but when I checked using Dell tools (Dell OpenManage) in Windows, it showed that the usage of the MD1200 is still 100%. I hope that is a sign the data is still there. Once again, I will post screenshots if pictures are allowed here.

FYI, the R710 host has six 2 TB drives of internal storage. At first it was configured as RAID 50, 7.5 TB total, but because Windows cannot be installed onto a large partition (more than 4 TB), we reconfigured it as one RAID 1 (a 2 TB volume) plus a RAID 5 across the remaining 2 TB drives.

The MD1200 has two enclosures, each consisting of 21 3 TB drives. It was configured as RAID 50, and we left it untouched when we reconfigured the R710's internal storage... or at least I hope we truly left it untouched...
 

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
Guess the real question is: Do you have a backup?

The answer is "No"; we were doing all this in an attempt to back the data up.

So your conclusion is that the data is already lost? What makes you sure? Is it that changing the RAID on the internal storage via the BIOS also affected the pool?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
What has me worried is this:
FYI, the R710 host has six 2 TB drives of internal storage. At first it was configured as RAID 50, 7.5 TB total, but because Windows cannot be installed onto a large partition (more than 4 TB), we reconfigured it as one RAID 1 (a 2 TB volume) plus a RAID 5 across the remaining 2 TB drives.

So, just guessing, but I'm thinking the original setup (on USB) was using this "single drive" presented by the R710. In wiping it and recreating the RAID, you may have hosed it... TBH, it looks like a bad design from the start, using hardware RAID with FreeNAS/ZFS...

This also leads me to believe that all the drives were possibly presented to FreeNas as a "single drive".
we cant access the pool, the windows hard disk manager showed an unformatted drive with 29800 GB size (the pool size)

The only thing that has me wondering is that you said the pool was ~30 TB in size. For a single MD1200 with 12 drives (they hold 12 as far as I know, not 21) @ 3 TB each in RAID 50, that would be ~27 TB... So two of them plus the R710 @ 7.5 TB = ~61.5 TB... The math is not adding up... Either that or I am just too tired...
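For what it's worth, that ~27 TB figure can be checked directly. The span layout here is an assumption (RAID 50 built from three 4-drive RAID 5 spans, each losing one drive to parity); other span sizes give different usable totals:

```shell
# Usable capacity of one MD1200 (12 x 3 TB) in RAID 50, assuming
# three 4-drive RAID-5 spans: each span loses one drive to parity.
spans=3; drives_per_span=4; drive_tb=3
usable_tb=$(( spans * (drives_per_span - 1) * drive_tb ))
echo "${usable_tb} TB usable"   # 27 TB usable
```

With two 6-drive spans instead, the same arithmetic gives 30 TB usable, which is closer to the ~30 TB Windows reported.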
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Is it possible that some or all of the disks on R710 were part of the pool? Or were they a separate pool? It might be worth putting the BIOS settings for the R710 disks back to what they were and seeing if that helps to recognise the pool. It's not enormously likely though, as the disks will probably have been written to when the new RAID setup for Windows was started.
 

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
What has me worried is this:


So, just guessing, but I'm thinking the original setup (on USB) was using this "single drive" presented by the R710. In wiping it and recreating the RAID, you may have hosed it... TBH, it looks like a bad design from the start, using hardware RAID with FreeNAS/ZFS...

This also leads me to believe that all the drives were possibly presented to FreeNas as a "single drive".


The only thing that has me wondering is that you said the pool was ~30 TB in size. For a single MD1200 with 12 drives (they hold 12 as far as I know, not 21) @ 3 TB each in RAID 50, that would be ~27 TB... So two of them plus the R710 @ 7.5 TB = ~61.5 TB... The math is not adding up... Either that or I am just too tired...

Looks like that was the worst-case scenario.

But could you help me make sure that is really the case, i.e. that the reported size is really the total of the internal storage plus the MD1200? Maybe along the way we can find a way to get the data back, or confirm it's really gone.

By the way, you are right: the MD1200s hold 12 3 TB drives each. In the BIOS I saw the configuration consists of groups of 3 HDDs, and there are only 7 groups; I think the last 3 HDDs act as spares.

Is it possible that some or all of the disks on R710 were part of the pool? Or were they a separate pool? It might be worth putting the BIOS settings for the R710 disks back to what they were and seeing if that helps to recognise the pool. It's not enormously likely though, as the disks will probably have been written to when the new RAID setup for Windows was started.

That's what I want to know. Can you help me investigate it? If that is really the case, then even with my limited knowledge I know I won't get the data back.
 
Last edited:

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
It's pretty hard to figure out what the original pool was created from (at least for me), since actual access to each drive was not presented to FreeNAS. Normally I would expect to see more information about each drive in a pool, but with hardware RAID in between it is difficult to tell. However, I have never run FreeNAS 8.x (just 9.x), so I'm unfamiliar with it (maybe others can chime in).

[root@FreeNAS] ~# zpool status
no pools available
[root@FreeNAS] ~# zpool import
pool: bulk
id: 14162367528603640976
 

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
It's pretty hard to figure out what the original pool was created from (at least for me), since actual access to each drive was not presented to FreeNAS. Normally I would expect to see more information about each drive in a pool, but with hardware RAID in between it is difficult to tell. However, I have never run FreeNAS 8.x (just 9.x), so I'm unfamiliar with it (maybe others can chime in).
Yes... I hope somebody will have some ideas about it.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
If the system was working previously, and the only change to the system was the modification of your internal array and now the pool is coming up as degraded, I can conclude that the pool is destroyed. I'm sorry.
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Beyond trying to reverse the BIOS changes you made, I personally don't have the skills to troubleshoot this further - sorry!
 

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
If the system was working previously, and the only change to the system was the modification of your internal array and now the pool is coming up as degraded, I can conclude that the pool is destroyed. I'm sorry.
Seems like it has come to the worst possible situation....
Beyond trying to reverse the BIOS changes you made, I personally don't have the skills to troubleshoot this further - sorry!
I tried your suggestion and sadly still no good news...

But when I start FreeNAS, I see a "bulk-01" mentioned in the scrolling boot messages... does anybody know how I can find or import that "bulk-01"?
 

jde

Explorer
Joined
Aug 1, 2015
Messages
93
Your data is likely gone. But try running the command zdb and post the output in code tags. That may give us a better idea of how the pool was originally configured, and whether there's any hope for your data.
 

Marendra

Cadet
Joined
Feb 21, 2016
Messages
7
Not if zpool import doesn't list it.

Seems like you need professional, onsite assistance.

More bad news for me....

Your data is likely gone. But try running the command zdb and post the output in code tags. That may give us a better idea of how the pool was originally configured, and whether there's any hope for your data.
Okay, at least I'll get confirmation of the bad news...
This is the output of the command:

[marendra@FreeNAS] /# zdb
bulk
    version=15
    name='bulk'
    state=0
    txg=17480367
    pool_guid=14162367528603640976
    hostid=2678975904
    hostname=''
    vdev_tree
        type='root'
        id=0
        guid=14162367528603640976
        children[0]
            type='disk'
            id=0
            guid=3277950403292594653
            path='/dev/mfid0p1'
            whole_disk=0
            metaslab_array=26
            metaslab_shift=36
            ashift=9
            asize=7999371608064
            is_log=0
            DTL=54
        children[1]
            type='disk'
            id=1
            guid=14773301927844749532
            path='/dev/mfid1p1'
            whole_disk=0
            metaslab_array=23
            metaslab_shift=38
            ashift=9
            asize=31997501374464
            is_log=0
            DTL=53
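The asize fields above are in bytes, and converting them to decimal terabytes shows what each "disk" was (the two values are copied straight from the zdb output above):

```shell
# Convert the two vdev asize values (bytes) from the zdb output to TB.
for asize in 7999371608064 31997501374464; do
  awk -v b="$asize" 'BEGIN { printf "%.1f TB\n", b / 1e12 }'
done
# Prints:
#   8.0 TB
#   32.0 TB
```

That would make children[0] (~8 TB) look like the R710's internal array (nominally 7.5 TB usable, per the earlier post) and children[1] (~32 TB) the MD1200.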
 

jde

Explorer
Joined
Aug 1, 2015
Messages
93
It looks like the two "disks" presented by your two RAID cards were striped together. Since you wiped the "disk" on one of the RAID cards, I'd say your pool is toast.
 