Empty datasets after crash

Status
Not open for further replies.
Joined
Sep 29, 2014
Messages
7
Hi,
my NAS crashed after the USB flash drive was accidentally removed. I rebooted it and couldn't load the OS again (kernel panic).
I swapped the USB flash drive for a new one, upgraded the RAM to 8GB, and installed FreeNAS 9.2.1.7. After the install I got stuck at "GRUB" with the cursor blinking, so I tried upgrading to 9.3-M4 (I know it's not a stable release and I shouldn't do that, but I gave it a shot).
I was still stuck at GRUB, so I then changed the mainboard and was able to boot. I could import the volume, but my 3 datasets are empty (Sites, Backups, Dados).

df -h shows the correct usage size (note that the NAS has 6.2TB of used space):

Filesystem Size Used Avail Capacity Mounted on
freenas-boot/ROOT/default 3.5G 910M 2.6G 26% /
devfs 1.0k 1.0k 0B 100% /dev
tmpfs 32M 5.3M 26M 16% /etc
tmpfs 4.0M 8.0k 4M 0% /mnt
tmpfs 2.6G 32M 2.6G 1% /var
freenas-boot/grub 2.6G 7.8M 2.6G 0% /boot/grub
NAS 8.2T 6.2T 2T 76% /mnt/NAS
NAS/.system 2T 8.6M 2T 0% /mnt/NAS/.system
NAS/.system/cores 2T 10M 2T 0% /mnt/NAS/.system/cores
NAS/.system/rrd 2T 288k 2T 0% /mnt/NAS/.system/rrd
NAS/.system/rrd-2c54c61c5181419f81f040167ddd1b51 2T 288k 2T 0% /mnt/NAS/.system/rrd-2c54c61c5181419f81f040167ddd1b51
NAS/.system/samba4 2T 3.7M 2T 0% /mnt/NAS/.system/samba4
NAS/.system/syslog 2T 1.2M 2T 0% /mnt/NAS/.system/syslog
NAS/.system/syslog-2c54c61c5181419f81f040167ddd1b51 2T 7.6M 2T 0% /mnt/NAS/.system/syslog-2c54c61c5181419f81f040167ddd1b51
NAS/Backups 2T 288k 2T 0% /mnt/NAS/Backups
NAS/Dados 2T 288k 2T 0% /mnt/NAS/Dados
NAS/Sites 2T 288k 2T 0% /mnt/NAS/Sites


My NAS specs:


Build FreeNAS-9.3-M4-f281a32-x64
Platform Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
Memory 7908MB
Motherboard: Gigabyte g41 Combo

Hard Drives (7x2TB RAIDZ2)
ada0 MS77215W0DSX9A 2.0 TB
ada1 Z2R6WZ5AS 2.0 TB
ada2 93B3L8YKS 2.0 TB
ada3 S1E0BJCS 2.0 TB
ada4 WD-WCC4M3092013 2.0 TB
ada5 WD-WCAZA4045659 2.0 TB
ada6 Z2R204UAS 2.0 TB


'zpool status NAS' output:

pool: NAS
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub in progress since Mon Sep 29 13:32:22 2014
846G scanned out of 9.34T at 176M/s, 14h6m to go
0 repaired, 8.85% done
config:

NAME STATE READ WRITE CKSUM
NAS ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/517ac1b4-761b-11e2-bb17-1c6f657ba8fd ONLINE 0 0 0
gptid/157037a4-55d8-11e3-99c6-94de809389b1 ONLINE 0 0 0
gptid/e248be0e-fdd5-11e1-b29f-1c6f657ba8fd ONLINE 0 0 0
gptid/9d1e5c84-3a81-11e4-b88e-94de809389b1 ONLINE 0 0 0
gptid/e39b5fe6-fdd5-11e1-b29f-1c6f657ba8fd ONLINE 0 0 0
gptid/8f5fefd6-4535-11e2-9ac3-1c6f657ba8fd ONLINE 0 0 0
gptid/8fae1082-71e2-11e2-82da-1c6f657ba8fd ONLINE 0 0 0

errors: No known data errors



Any ideas how I could recover my data? I have no snapshots; because of the huge amount of data, I haven't yet gotten another server for replication.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Can you see the files inside the directories, for example:

cd /mnt/NAS/Backups (or one of the other directories)
ls -l

If you saved your old configuration, you could restore it now. Otherwise, you'll have to redo it.

Whatever you do, don't upgrade your pool. Since 9.2.1.8 was just released, I recommend using it instead of 9.3.x. The latter is in alpha stage right now.
 
Joined
Sep 29, 2014
Messages
7
Can you see the files inside the directories, for example:

cd /mnt/NAS/Backups (or one of the other directories)
ls -l

If you saved your old configuration, you could restore it now. Otherwise, you'll have to redo it.

Whatever you do, don't upgrade your pool. Since 9.2.1.8 was just released, I recommend using it instead of 9.3.x. The latter is in alpha stage right now.

Hello,
I can try to downgrade to 9.2.1.8 today (unfortunately I have a feeling it won't solve my problem :( )


[root@servidor] /mnt/NAS/Backups# ls -l
total 33
drwxrwxrwx 2 root wheel 2 Sep 29 11:17 ./
drwxrwxrwx 7 root wheel 17 Sep 29 13:20 ../


It's empty :S
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm confused as all hell by this thread, but here's what I see....


1. You clearly didn't meet the minimum RAM (8GB) since you upgraded. This single mistake has cost many people their data.
2. You aren't using server-grade hardware. Again, this mistake has cost quite a few people their data.
3. Your df -h says that your 3 datasets are 288KB in size. Also df and du aren't good for ZFS, so you can't always trust the numbers.
4. Upgrading to software that's barely alpha was extremely dangerous. That alone could cost you your data because of some unknown bug.

So I don't have high hopes of you getting your data back. But, as df is showing only 288KB used, I think it's safe to say that you have no data anymore. No clue why, as there are quite a few things that could have gone wrong (see #1, 2 and 4), and the chances of understanding precisely what went wrong are kind of slim at this point.

If you have backups it's time to brush the dust off and get them ready. You're going to need them unless you can identify what exactly went wrong and assuming what is wrong can be undone.
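Since df and du can mislead on ZFS, a quick sanity check is to ask ZFS itself how much each dataset holds (a sketch; adjust the pool name if yours differs):

```shell
# Ask ZFS directly for per-dataset usage instead of trusting df/du.
# USED includes descendants and snapshots; REFER is the data the
# dataset itself currently references.
zfs list -r -o name,used,refer,mountpoint NAS
```

If NAS/Backups shows only a few hundred KB of REFER, the dataset itself really is (nearly) empty, whatever df says.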
 
Joined
Sep 29, 2014
Messages
7
I'm confused as all hell by this thread, but here's what I see....


1. You clearly didn't meet the minimum RAM (8GB) since you upgraded. This single mistake has cost many people their data.
2. You aren't using server-grade hardware. Again, this mistake has cost quite a few people their data.
3. Your df -h says that your 3 datasets are 288KB in size. Also df and du aren't good for ZFS, so you can't always trust the numbers.
4. Upgrading to software that's barely alpha was extremely dangerous. That alone could cost you your data because of some unknown bug.

So I don't have high hopes of you getting your data back. But, as df is showing only 288KB used, I think it's safe to say that you have no data anymore. No clue why, as there are quite a few things that could have gone wrong (see #1, 2 and 4), and the chances of understanding precisely what went wrong are kind of slim at this point.

If you have backups it's time to brush the dust off and get them ready. You're going to need them unless you can identify what exactly went wrong and assuming what is wrong can be undone.


Hi,

this NAS server started with cheap hardware and I have been upgrading it over time. I couldn't make the investment I wanted, which is why it has those specs; that's common and we know that.
I know the upgrade wasn't a safe thing to do, but since I couldn't boot into a new install anymore and had some Samba issues from time to time, I tried it anyway :S

Isn't there a way in ZFS to try to reconstruct the partition table or something?
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
I would be curious in seeing the creation date shown for the empty datasets.

zfs get creation NAS/Backups

As well as the recent history:

zpool history NAS | tail -n 40
 
Joined
Sep 29, 2014
Messages
7
I would be curious in seeing the creation date shown for the empty datasets.

zfs get creation NAS/Backups

As well as the recent history:

zpool history NAS | tail -n 40

[root@servidor] ~# zfs get creation NAS/Backups
NAME PROPERTY VALUE SOURCE
NAS/Backups creation Mon Sep 29 8:03 2014 -




[root@servidor] ~# zpool history NAS | tail -n 40
2014-09-29.11:10:32 zfs set aclinherit=passthrough NAS
2014-09-29.11:29:02 zpool export -f NAS
2014-09-29.11:29:51 zpool import -f -R /mnt 15966174468325291823
2014-09-29.11:29:55 zfs inherit -r mountpoint NAS
2014-09-29.11:29:55 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-29.11:29:55 zfs set aclmode=passthrough NAS
2014-09-29.11:30:00 zfs set aclinherit=passthrough NAS
2014-09-29.11:47:04 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 15966174468325291823
2014-09-29.11:47:04 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-29.11:47:05 zfs set aclmode=restricted NAS/Dados
2014-09-29.11:47:10 zfs set aclmode=restricted NAS/Sites
2014-09-29.12:00:20 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 15966174468325291823
2014-09-29.12:00:20 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-29.13:05:41 zpool import -f NAS
2014-09-29.13:06:19 zpool export NAS
2014-09-29.13:09:46 zpool import -f -R /mnt 15966174468325291823
2014-09-29.13:09:49 zfs inherit -r mountpoint NAS
2014-09-29.13:09:49 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-29.13:09:50 zfs set aclmode=passthrough NAS
2014-09-29.13:09:55 zfs set aclinherit=passthrough NAS
2014-09-29.13:20:39 zpool import -f -R /mnt 15966174468325291823
2014-09-29.13:20:42 zfs inherit -r mountpoint NAS
2014-09-29.13:20:42 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-29.13:20:42 zfs set aclmode=passthrough NAS
2014-09-29.13:20:47 zfs set aclinherit=passthrough NAS
2014-09-29.13:32:31 zpool scrub NAS
2014-09-30.02:54:48 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 15966174468325291823
2014-09-30.02:54:48 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-30.04:46:19 zpool import -f -R /mnt 15966174468325291823
2014-09-30.04:47:05 zfs inherit -r mountpoint NAS
2014-09-30.04:47:05 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-30.04:47:08 zfs set aclmode=passthrough NAS
2014-09-30.04:47:15 zfs set aclinherit=passthrough NAS
2014-09-30.04:47:32 zfs rename -f NAS/.system/syslog NAS/.system/syslog-adb946163d914f088dc14617dbc0bec3
2014-09-30.04:47:42 zfs rename -f NAS/.system/rrd NAS/.system/rrd-adb946163d914f088dc14617dbc0bec3
2014-09-30.06:54:54 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 15966174468325291823
2014-09-30.06:54:54 zpool set cachefile=/data/zfs/zpool.cache NAS
2014-09-30.11:45:22 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 15966174468325291823
2014-09-30.11:45:22 zpool set cachefile=/data/zfs/zpool.cache NAS




For the bigger output (400 lines), check pastebin: http://pastebin.com/sV0ByUV6

Do those renames at around 8 AM on day 29 represent a way to get my data back? All I can think of is that it recreated the datasets, and that's why I see them empty.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
I would try zfs unmount NAS/Backups and then see if /mnt/NAS/Backups has anything in it.

It is possible you never had three datasets; you had three subdirectories in the NAS dataset, and then you created three new datasets with the same name as those subdirectories, which has hidden them underneath.
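The shadowing described above can be reproduced on a scratch pool (illustration only, do not run this on the production pool; the pool name `scratch` and the backing device are hypothetical). On FreeBSD, ZFS will happily mount a dataset over a non-empty directory, hiding its contents:

```shell
# A dataset mounted over a non-empty directory hides that directory's
# contents until the dataset is unmounted again.
zpool create scratch /dev/md0          # hypothetical scratch device
mkdir /scratch/Backups                 # plain directory in the parent dataset
touch /scratch/Backups/important.txt   # put a file in it
zfs create scratch/Backups             # new dataset mounts over the directory
ls /scratch/Backups                    # appears empty: the file is hidden
zfs unmount scratch/Backups            # drop the dataset's mount
ls /scratch/Backups                    # important.txt is visible again
```

This is why `zfs unmount NAS/Backups` is a safe, reversible test: it only removes the mount, it deletes nothing.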
 
Joined
Sep 29, 2014
Messages
7
I would try zfs unmount NAS/Backups and then see if /mnt/NAS/Backups has anything in it.

It is possible you never had three datasets; you had three subdirectories in the NAS dataset, and then you created three new datasets with the same name as those subdirectories, which has hidden them underneath.

First of all, a huge thanks to you, you saved my day :)
After unmounting, I was able to access the data. The strange thing is that they were real datasets (I remember always seeing those datasets in the WebGUI with their corresponding options, like changing permissions).

I will build a new NAS and migrate everything, but since I have to wait for the parts to arrive, in the meantime I'd like to solve this without taking risks. I think the safest way is to just keep the datasets unmounted and keep the NAS up until I migrate everything next week or so, don't you agree?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Very nice rs225.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Well, I'm glad that was it.

If you did have the datasets previously, (but they don't seem to appear in the last 400 lines of your zpool history) then they could have been on some other mountpoint, or were themselves hidden underneath the subdirectories. The order of mounts can change what you see, and maybe that changed for some reason. Or it could have been some confusion between something in the GUI and what was actually done in the pool.

I would unmount your other two missing items, and I would take a snapshot of NAS. zfs snapshot NAS@first
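Put together, the suggestion above amounts to something like this (a sketch; run from the FreeNAS shell):

```shell
zfs unmount NAS/Dados        # reveal the directories hidden underneath
zfs unmount NAS/Sites
ls -l /mnt/NAS/Dados         # the original data should now be visible
ls -l /mnt/NAS/Sites
zfs snapshot NAS@first       # snapshot the parent dataset holding the data
zfs list -t snapshot         # confirm the snapshot exists
```

Snapshotting the parent NAS dataset is what matters here, since the recovered data lives in plain directories inside it, not in the child datasets.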
 
Joined
Sep 29, 2014
Messages
7
Well, I'm glad that was it.

If you did have the datasets previously, (but they don't seem to appear in the last 400 lines of your zpool history) then they could have been on some other mountpoint, or were themselves hidden underneath the subdirectories. The order of mounts can change what you see, and maybe that changed for some reason. Or it could have been some confusion between something in the GUI and what was actually done in the pool.

I would unmount your other two missing items, and I would take a snapshot of NAS. zfs snapshot NAS@first

I unmounted the other two and did some backups and a snapshot. I will now wait for the parts to build a new NAS and move everything. Thanks again.
 