Lost data after upgrading from 11.1 to 11.2-RC2

Status
Not open for further replies.

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
Hi there,

I'm opening my own thread to explain my issue.

On Friday I saw that a new version of FreeNAS was out - 11.2 STABLE.
I switched to the 11.2 STABLE train and launched the upgrade --- everything worked well and my DATA was still there.

After the reboot, the dashboard told me to upgrade the POOL, so I launched the pool upgrade too --- again everything worked well and my DATA was still there.

2 hours later I received multiple emails like:

Code:
Quota exceed on dataset RaidZ/Apps.

Used 83.66% (314.39 GB of 375.81 GB)


Code:
Refquota exceed on dataset RaidZ/Apps.

Used 83.66% (314.39 GB of 375.81 GB)



And 1 hour after those emails, I received:

Code:
New alerts:

* The boot volume state is UNKNOWN:

* The volume RaidZ state is UNKNOWN:


Alerts:

* The boot volume state is UNKNOWN:

* The volume RaidZ state is UNKNOWN:



But I only saw all this yesterday, and tried to work out what had happened.

I logged into the dashboard and saw that FreeNAS was not working correctly.
I went to the console and saw this on the system:

Code:
ELF interpreter /libexec/ld-elf.so.1 not found


OK, so I decided to reboot...and got the same thing.
I decided to boot back into the last 11.1 version ... it works well...but I see no DATA in my RaidZ folder...just one folder contains data (Videos/Archives), but not all my data is in that folder, and every other folder is empty.

My system is:
Code:
Asrock RACK E3C226D2I
Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz (4 cores)
16 GB ECC
4x 3 TB in RAIDZ1 ZFS


zpool status:
Code:
  pool: RaidZ
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:57:30 with 0 errors on Sat Dec  1 19:09:54 2018
config:

		NAME											STATE	 READ WRITE CKSUM
		RaidZ										   ONLINE	   0	 0	 0
		  raidz1-0									  ONLINE	   0	 0	 0
			gptid/9136db28-a413-11e4-8f11-40167e28e436  ONLINE	   0	 0	 0
			gptid/91970860-a413-11e4-8f11-40167e28e436  ONLINE	   0	 0	 0
			gptid/91f86bc7-a413-11e4-8f11-40167e28e436  ONLINE	   0	 0	 0
			gptid/9254558d-a413-11e4-8f11-40167e28e436  ONLINE	   0	 0	 0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

		NAME		STATE	 READ WRITE CKSUM
		freenas-boot  ONLINE	   0	 0	 0
		  ada4p2	ONLINE	   0	 0	 0


zpool list:
Code:
NAME		   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
Backup			-	  -	  -		-		 -	  -	  -	  -  UNAVAIL  -
RaidZ		 10.9T   565G  10.3T		-		 -	 4%	 5%  1.00x  ONLINE  /mnt
freenas-boot  55.5G   759M  54.8G		-		 -	  -	 1%  1.00x  ONLINE  -


zfs list:
Code:
NAME															  USED  AVAIL  REFER  MOUNTPOINT
RaidZ															 420G  7.24T   151K  /mnt/RaidZ
RaidZ/.system													 182M  7.24T   151K  legacy
RaidZ/.system-f9d0f984										   26.1M  7.24T  26.1M  /mnt/RaidZ/.system-f9d0f984
RaidZ/.system/configs-76c11d7f8a944b3d8e42fe35420dbaa3		   49.5M  7.24T  49.5M  legacy
RaidZ/.system/configs-83e186d6839a4480b1a3e80d1bf343a2			128K  7.24T   128K  legacy
RaidZ/.system/configs-edf296877cd84f6a8f36da9b6b928c04		   51.9M  7.24T  51.9M  legacy
RaidZ/.system/cores											  19.9M  7.24T  19.9M  legacy
RaidZ/.system/rrd-76c11d7f8a944b3d8e42fe35420dbaa3			   30.8M  7.24T  30.8M  legacy
RaidZ/.system/rrd-83e186d6839a4480b1a3e80d1bf343a2			   11.0M  7.24T  11.0M  legacy
RaidZ/.system/rrd-edf296877cd84f6a8f36da9b6b928c04				140K  7.24T   140K  legacy
RaidZ/.system/samba4											  825K  7.24T   825K  legacy
RaidZ/.system/syslog-76c11d7f8a944b3d8e42fe35420dbaa3			2.15M  7.24T  2.15M  legacy
RaidZ/.system/syslog-83e186d6839a4480b1a3e80d1bf343a2			3.91M  7.24T  3.91M  legacy
RaidZ/.system/syslog-edf296877cd84f6a8f36da9b6b928c04			11.3M  7.24T  11.3M  legacy
RaidZ/.system/webui											   128K  7.24T   128K  legacy
RaidZ/.vm_cache												  12.0G  7.24T   128K  /mnt/RaidZ/.vm_cache
RaidZ/.vm_cache/boot2docker									   384K  7.24T   128K  /mnt/RaidZ/.vm_cache/boot2docker
RaidZ/.vm_cache/boot2docker/initrd								128K  7.24T   128K  /mnt/RaidZ/.vm_cache/boot2docker/initrd
RaidZ/.vm_cache/boot2docker/vmlinuz64							 128K  7.24T   128K  /mnt/RaidZ/.vm_cache/boot2docker/vmlinuz64
RaidZ/.vm_cache/debian-8.4.0									 12.0G  7.24T   128K  /mnt/RaidZ/.vm_cache/debian-8.4.0
RaidZ/.vm_cache/debian-8.4.0/os								  12.0G  7.24T   128K  /mnt/RaidZ/.vm_cache/debian-8.4.0/os
RaidZ/.vm_cache/debian-8.4.0/os/os							   12.0G  7.25T  1.71G  -
RaidZ/Apps														174K   350G   174K  /mnt/RaidZ/Apps
RaidZ/Backup													  494K  7.24T   140K  /mnt/RaidZ/Backup
RaidZ/Backup/Time-Machine										 180K   300G   180K  /mnt/RaidZ/Backup/Time-Machine
RaidZ/Backup/Windows											  174K  1024G   174K  /mnt/RaidZ/Backup/Windows
RaidZ/Musiques													140K   600G   140K  /mnt/RaidZ/Musiques
RaidZ/Photos													  140K   100G   140K  /mnt/RaidZ/Photos
RaidZ/Public													  174K  20.0G   174K  /mnt/RaidZ/Public
RaidZ/Videos													  398G  4.61T   140K  /mnt/RaidZ/Videos
RaidZ/Videos/Archives											 398G   626G   398G  /mnt/RaidZ/Videos/Archives
RaidZ/iocage													 4.28M  7.24T  3.53M  /mnt/RaidZ/iocage
RaidZ/iocage/download											 128K  7.24T   128K  /mnt/RaidZ/iocage/download
RaidZ/iocage/images											   128K  7.24T   128K  /mnt/RaidZ/iocage/images
RaidZ/iocage/jails												128K  7.24T   128K  /mnt/RaidZ/iocage/jails
RaidZ/iocage/log												  128K  7.24T   128K  /mnt/RaidZ/iocage/log
RaidZ/iocage/releases											 128K  7.24T   128K  /mnt/RaidZ/iocage/releases
RaidZ/iocage/templates											128K  7.24T   128K  /mnt/RaidZ/iocage/templates
RaidZ/jails													  9.69G  7.24T   174K  /mnt/RaidZ/jails
RaidZ/jails/.warden-template-pluginjail-11.0-x64				  590M  7.24T  3.49M  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180221091223   590M  7.24T  3.49M  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180221091223
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180528030421   590M  7.24T  3.49M  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180528030421
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180822184910   590M  7.24T  3.49M  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180822184910
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423   590M  7.24T  3.49M  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423
RaidZ/jails/.warden-template-standard-11.0-x64				   2.27G  7.24T  3.51M  /mnt/RaidZ/jails/.warden-template-standard-11.0-x64
RaidZ/jails/.warden-template-standard-11.0-x64-20180301121900	2.27G  7.24T  3.51M  /mnt/RaidZ/jails/.warden-template-standard-11.0-x64-20180301121900
RaidZ/jails/.warden-template-standard-11.0-x64-20180327203621	2.26G  7.24T  3.51M  /mnt/RaidZ/jails/.warden-template-standard-11.0-x64-20180327203621
RaidZ/jails/plexmediaserver_1									 378K  7.24T  3.53M  /mnt/RaidZ/jails/plexmediaserver_1
RaidZ/vm														  128K  7.24T   128K  /mnt/RaidZ/vm
freenas-boot													  758M  53.0G	64K  none
freenas-boot/ROOT												 758M  53.0G	29K  none
freenas-boot/ROOT/Initial-Install								   1K  53.0G   756M  legacy
freenas-boot/ROOT/default										 758M  53.0G   756M  legacy



zpool history:
Code:
2018-09-01.21:00:22 zpool scrub RaidZ
2018-09-10.04:33:50 zpool import -o cachefile=none -R /mnt -f RaidZ
2018-09-10.04:33:50 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-09-10.04:34:24 zfs set mountpoint=none RaidZ/jails/.warden-template-pluginjail
2018-09-10.04:34:24 zfs rename -f RaidZ/jails/.warden-template-pluginjail RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423
2018-09-10.04:34:29 zfs set mountpoint=/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423 RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423
2018-10-13.08:43:11 zpool import -o cachefile=none -R /mnt -f RaidZ
2018-10-13.08:43:11 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-10-13.21:00:19 zpool scrub RaidZ
2018-10-15.06:00:38 zpool import -o cachefile=none -R /mnt -f RaidZ
2018-10-15.06:00:38 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-11-17.21:00:13 zpool scrub RaidZ
2018-11-30.18:04:18 zpool import -o cachefile=none -R /mnt -f RaidZ
2018-11-30.18:04:18 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-11-30.18:11:21 zpool upgrade RaidZ
2018-11-30.18:15:06 <iocage> zfs mount RaidZ/iocage
2018-11-30.18:15:06 <iocage> zfs mount RaidZ/iocage/download
2018-11-30.18:15:07 <iocage> zfs mount RaidZ/iocage/images
2018-11-30.18:15:07 <iocage> zfs mount RaidZ/iocage/jails
2018-11-30.18:15:08 <iocage> zfs mount RaidZ/iocage/log
2018-11-30.18:15:08 <iocage> zfs mount RaidZ/iocage/releases
2018-11-30.18:15:09 <iocage> zfs mount RaidZ/iocage/templates
2018-12-01.17:37:59 zpool import -o cachefile=none -R /mnt -f RaidZ
2018-12-01.17:37:59 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-01.17:50:39 zpool import -o cachefile=none -R /mnt -f RaidZ
2018-12-01.17:50:39 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-01.18:12:42 zpool scrub RaidZ
2018-12-01.18:44:29 zpool import -f -R /mnt 11460063650862632770
2018-12-01.18:44:29 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-01.18:49:57 zfs set aclmode=passthrough RaidZ
2018-12-01.18:50:00 zfs set aclinherit=passthrough RaidZ
2018-12-01.18:50:27  zfs set mountpoint=legacy RaidZ/.system
2018-12-01.18:50:29  zfs set mountpoint=legacy RaidZ/.system/cores
2018-12-01.18:50:32  zfs set mountpoint=legacy RaidZ/.system/samba4
2018-12-01.18:50:49  zfs set mountpoint=legacy RaidZ/.system/webui
2018-12-01.19:14:49 zpool reopen RaidZ
2018-12-01.19:23:41 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 11460063650862632770
2018-12-01.19:23:41 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-01.20:12:08 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 11460063650862632770
2018-12-01.20:12:08 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-01.20:55:42 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 11460063650862632770
2018-12-01.20:55:42 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-01.21:17:37 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 11460063650862632770
2018-12-01.21:17:37 zpool set cachefile=/data/zfs/zpool.cache RaidZ
2018-12-02.06:19:50 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 11460063650862632770
2018-12-02.06:19:50 zpool set cachefile=/data/zfs/zpool.cache RaidZ



I don't know how...my data is LOST...all I did was launch the FreeNAS update and the pool upgrade.... :(
I think my data is really lost...I just want to confirm with you whether that is true...is there any chance to restore my pool? Otherwise I'll wipe it and restore my OLD backup :(

Regards,
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
OK, so I decided to reboot...and got the same thing.
I decided to boot back into the last 11.1 version ... it works well...but I see no DATA in my RaidZ folder...just one folder contains data (Videos/Archives), but not all my data is in that folder, and every other folder is empty.
Do you have snapshots setup on your pool?
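
For reference, here is how existing snapshots could be listed from the shell (an untested sketch; only the pool name RaidZ is taken from this thread). If snapshots exist, the data could simply be rolled back or cloned:

Code:
# list every snapshot on the RaidZ pool, with size and creation date
zfs list -t snapshot -r -o name,used,creation RaidZ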
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
RaidZ/Apps                 174K   350G   174K  /mnt/RaidZ/Apps
RaidZ/Backup               494K  7.24T   140K  /mnt/RaidZ/Backup
RaidZ/Backup/Time-Machine  180K   300G   180K  /mnt/RaidZ/Backup/Time-Machine
RaidZ/Backup/Windows       174K  1024G   174K  /mnt/RaidZ/Backup/Windows
RaidZ/Musiques             140K   600G   140K  /mnt/RaidZ/Musiques
RaidZ/Photos               140K   100G   140K  /mnt/RaidZ/Photos
RaidZ/Public               174K  20.0G   174K  /mnt/RaidZ/Public
RaidZ/Videos               398G  4.61T   140K  /mnt/RaidZ/Videos

The datasets are still there and contain data; see the USED column. What does zfs mount say?
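
As a sketch of where those numbers come from (a standard ZFS command; the pool name is taken from this thread), zfs list -o space splits USED into its components, which makes it obvious whether the space sits in the dataset itself, in snapshots, or in children:

Code:
# USEDDS    = data in the dataset itself
# USEDSNAP  = space held only by snapshots
# USEDCHILD = space used by child datasets
zfs list -o space -r RaidZ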
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Intel(R) Core(TM) i5-4670 CPU @ 3.40GHz (4 cores)
16 GB ECC
i5s do not support ECC.

And 1 hour after those emails, I received:

Code:
New alerts:

* The boot volume state is UNKNOWN:

* The volume RaidZ state is UNKNOWN:


Alerts:

* The boot volume state is UNKNOWN:

* The volume RaidZ state is UNKNOWN:


But I only saw all this yesterday, and tried to work out what had happened.

I logged into the dashboard and saw that FreeNAS was not working correctly.
I went to the console and saw this on the system:

Code:
ELF interpreter /libexec/ld-elf.so.1 not found
This is the part that I don't understand. This sequence of events makes little sense to me.

In any case, there is no indication of data loss, so you probably have folders with the same name as the datasets, which got mounted on top of them, hiding them.
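
The mounted-over-folder theory is easy to test (a sketch; RaidZ/Apps is just one example dataset from this thread): unmount one dataset and look at what the underlying directory on the parent filesystem contains:

Code:
# unmount the dataset so the directory underneath becomes visible
zfs umount RaidZ/Apps
# if files show up now, a plain folder was hiding under the mountpoint
ls -la /mnt/RaidZ/Apps
# remount when done
zfs mount RaidZ/Apps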
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Since the OP upgraded the pool (edit: and is now trying to boot 11.1-something), the caveat below may be in effect...
2.5.1. Caveats
Be aware of these caveats before attempting an upgrade to 11.2:
  • Warning: upgrading the ZFS pool can make it impossible to go back to a previous version. For this reason, the update process does not automatically upgrade the ZFS pool, though the Alert system shows when newer feature flags are available for a pool. Unless a new feature flag is needed, it is safe to leave the pool at the current version and uncheck the alert. If the pool is upgraded, it will not be possible to boot into a previous version that does not support the newer feature flags

Can it be ruled out easily?
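
One way to check it (a sketch; both commands are standard ZFS, only the pool name comes from this thread) is to compare the pool's feature flags against what the booted system supports:

Code:
# show which feature flags are enabled or active on the pool
zpool get all RaidZ | grep feature@
# list the feature flags the running system supports; with no
# arguments, zpool upgrade also reports pools on older formats
zpool upgrade -v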

Sent from my mobile phone
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
The datasets are still there and contain data; see the USED column. What does zfs mount say?

Hi, the USED size is not correct; before the upgrade I had 7 TB more in USED...and now it is only 560 GB....

zfs mount:

Code:
root@freenas:~ # zfs mount
freenas-boot/ROOT/default       /
freenas-boot/grub               /boot/grub
RaidZ                           /mnt/RaidZ
RaidZ/.vm_cache                 /mnt/RaidZ/.vm_cache
RaidZ/.vm_cache/boot2docker     /mnt/RaidZ/.vm_cache/boot2docker
RaidZ/.vm_cache/boot2docker/initrd  /mnt/RaidZ/.vm_cache/boot2docker/initrd
RaidZ/.vm_cache/boot2docker/vmlinuz64  /mnt/RaidZ/.vm_cache/boot2docker/vmlinuz64
RaidZ/.vm_cache/debian-8.4.0    /mnt/RaidZ/.vm_cache/debian-8.4.0
RaidZ/.vm_cache/debian-8.4.0/os  /mnt/RaidZ/.vm_cache/debian-8.4.0/os
RaidZ/Apps                      /mnt/RaidZ/Apps
RaidZ/Backup                    /mnt/RaidZ/Backup
RaidZ/Backup/Time-Machine       /mnt/RaidZ/Backup/Time-Machine
RaidZ/Backup/Windows            /mnt/RaidZ/Backup/Windows
RaidZ/Musiques                  /mnt/RaidZ/Musiques
RaidZ/Photos                    /mnt/RaidZ/Photos
RaidZ/Public                    /mnt/RaidZ/Public
RaidZ/Videos                    /mnt/RaidZ/Videos
RaidZ/Videos/Archives           /mnt/RaidZ/Videos/Archives
RaidZ/iocage                    /mnt/RaidZ/iocage
RaidZ/iocage/download           /mnt/RaidZ/iocage/download
RaidZ/iocage/images             /mnt/RaidZ/iocage/images
RaidZ/iocage/jails              /mnt/RaidZ/iocage/jails
RaidZ/iocage/log                /mnt/RaidZ/iocage/log
RaidZ/iocage/releases           /mnt/RaidZ/iocage/releases
RaidZ/iocage/templates          /mnt/RaidZ/iocage/templates
RaidZ/jails                     /mnt/RaidZ/jails
RaidZ/jails/.warden-template-pluginjail-11.0-x64  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180221091223  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180221091223
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180528030421  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180528030421
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180822184910  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180822184910
RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423  /mnt/RaidZ/jails/.warden-template-pluginjail-11.0-x64-20180910073423
RaidZ/jails/.warden-template-standard-11.0-x64  /mnt/RaidZ/jails/.warden-template-standard-11.0-x64
RaidZ/jails/.warden-template-standard-11.0-x64-20180301121900  /mnt/RaidZ/jails/.warden-template-standard-11.0-x64-20180301121900
RaidZ/jails/.warden-template-standard-11.0-x64-20180327203621  /mnt/RaidZ/jails/.warden-template-standard-11.0-x64-20180327203621
RaidZ/jails/plexmediaserver_1   /mnt/RaidZ/jails/plexmediaserver_1
RaidZ/vm                        /mnt/RaidZ/vm
RaidZ/.system                   /var/db/system
RaidZ/.system/cores             /var/db/system/cores
RaidZ/.system/samba4            /var/db/system/samba4
RaidZ/.system/syslog-a421eaccddb44c098d96b72146b5211d  /var/db/system/syslog-a421eaccddb44c098d96b72146b5211d
RaidZ/.system/rrd-a421eaccddb44c098d96b72146b5211d  /var/db/system/rrd-a421eaccddb44c098d96b72146b5211d
RaidZ/.system/configs-a421eaccddb44c098d96b72146b5211d  /var/db/system/configs-a421eaccddb44c098d96b72146b5211d


i5s do not support ECC.


This is the part that I don't understand. This sequence of events makes little sense to me.

In any case, there is no indication of data loss, so you probably have folders with the same name as the datasets, which got mounted on top of them, hiding them.

About the i5, I know...I will change it soon...my previous CPU was a G3220 and it supports ECC...my mistake, but I don't think the problem is the CPU.

I don't know how to find the hidden data....USED doesn't show my TOTAL size from before the upgrade....I repeat: before the upgrade I had 7 TB and more of data...now I have only 565 GB.... :(
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
Since the OP upgraded the pool, the caveat below may be in effect...


Can it be ruled out easily?

Sent from my mobile phone

OK, but...doesn't 11.2-RC2 support the new pool version?

How can I verify the version of my pool, and maybe install FreeBSD to check whether my ZFS version and DATA are safe?

Regards,
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Can you use cd and ls to navigate the mounted folders? ZFS reports "used" space differently in different places.
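
For example (a sketch; the paths come from this thread), three common tools can give three different numbers for the same dataset because of compression, snapshots, and reservations:

Code:
du -sh /mnt/RaidZ/Apps                        # file-level view
df -h /mnt/RaidZ/Apps                         # filesystem-level view
zfs get used,referenced,available RaidZ/Apps  # ZFS's own accounting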
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
OK, but...doesn't 11.2-RC2 support the new pool version?

How can I verify the version of my pool, and maybe install FreeBSD to check whether my ZFS version and DATA are safe?

Regards,
Just slow down. Your data is probably fine. With ZFS, the pool version does not matter much, just the feature flags in use.
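
To see what that means in practice (a sketch; the pool name comes from this thread):

Code:
# feature-flag pools report version "-" here; the number no longer matters
zpool get version RaidZ
# what matters for booting an older release is which flags are "active"
# rather than merely "enabled"
zpool get all RaidZ | grep feature@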
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You're down 7 TB? That does sound like a problem. But that would have left your pool pretty full. Were you at 90+% full?
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
Can you use cd and ls to navigate the mounted folders? ZFS reports "used" space differently in different places.

Yep, cd and ls...and df -h to look inside the folders, but:

Code:
root@freenas:/mnt/RaidZ # cd Apps/
root@freenas:/mnt/RaidZ/Apps # ls
root@freenas:/mnt/RaidZ/Apps # ls -lah
total 48
drwxrwx---+  2 1001  1002      2B Nov 30 19:26 .
drwxr-xr-x  14 root  wheel    14B Dec  2 15:17 ..
root@freenas:/mnt/RaidZ/Apps # du -h
 36K    .


Empty....

Just slow down. Your data is probably fine. With ZFS, the pool version does not matter much, just the feature flags in use.

I don't understand, what do you mean? :(

You're down 7 TB? That does sound like a problem. But that would have left your pool pretty full. Were you at 90+% full?

I was close to 90%, yes, but is that an issue?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I was close to 90%, yes, but is that an issue?
That's always a huge problem, but it shouldn't cause data loss.

So, I have close to no idea what happened here, but it was serious.
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
That's always a huge problem, but it shouldn't cause data loss.

So, I have close to no idea what happened here, but it was serious.

I had removed things precisely so I would not be at 85%+, near 90% full.

But yes....it was serious and I'm very SAD...

My backup is about 1 year old....(my other backup disk failed last month...) and I never had time to replace it... (fucking work...)
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
So do you think my data is lost?
Or do you think it is still here but...hidden, or just a problem with FreeNAS 11.2-RC2 and the upgraded zpool version?
Thanks for your help.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I find it very likely that the data is gone, probably deleted by regular filesystem tools and not by ZFS operations. You might want to go through the logs to see if anything interesting pops up, but I don't have much hope for your data.

Sorry to be the bearer of bad news. You might want to file a bug report and see if the devs have any ideas to track this one down.
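
If you do dig through the logs, two places worth checking (a sketch; the syslog path is inferred from the zfs mount output above and may differ) are the pool's internal history and the syslog that lives on the pool itself:

Code:
# internal (-i), long-format (-l) history records far more events
# than the plain zpool history shown earlier in the thread
zpool history -il RaidZ | less
# the system dataset keeps syslog on the pool
grep -i -e error -e panic /var/db/system/syslog-*/log/messages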
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
I find it very likely that the data is gone, probably deleted by regular filesystem tools and not by ZFS operations. You might want to go through the logs to see if anything interesting pops up, but I don't have much hope for your data.

Sorry to be the bearer of bad news. You might want to file a bug report and see if the devs have any ideas to track this one down.

OK, OK :( very bad news...but that's life...when your backup is not up to date, lol....

My own fault :(
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
Any other advice, or expert advice, for my issue?
I haven't touched my ZPOOL so far...in the hope of retrieving my DATA :(
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There's not much that can be done now, beyond figuring out what exactly happened.
 

Goned

Explorer
Joined
Jan 26, 2015
Messages
78
OK, you can close this thread...my data is lost --- I will reinstall FreeNAS STABLE and take some backups now.
 