Lost Data in Upgrade to 11.2

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
Are your logs intact? Does every file in /var/log magically start right after the update?
Sadly, yes, on the FreeNAS machine itself.

However, I did set it up to log to a remote syslog server, so there is still something on the Linux machine. I've got the output of grep pocket * (pocket being the machine name) saved, along with the raw syslog, daemon, and messages files for the time frame of the update. More than happy to upload them somewhere.
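
For reference, this is roughly what I ran on the syslog box to pull those out; just a sketch, since the exact log file names depend on the distro:
Code:
# Collect every remote-syslog line for the FreeNAS host (hostname: pocket).
# File names follow my Debian-style layout; adjust for your own setup.
cd /var/log
grep pocket syslog* daemon* messages* > /tmp/pocket-upgrade-logs.txt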

zpool history from the last scrub to current:
Code:
boot pool (freenas-boot):

2018-12-10.03:45:06 zpool scrub freenas-boot
2018-12-16.04:41:32 zfs snapshot -r freenas-boot/ROOT/11.1-U1@2018-12-16-04:41:32
2018-12-16.04:41:32 zfs clone -o canmount=off -o beadm:keep=False -o mountpoint=/ freenas-boot/ROOT/11.1-U1@2018-12-16-04:41:32 freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:41:38 zfs set beadm:nickname=11.2-RELEASE freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:41:52  zfs set sync=disabled freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:15  zfs inherit  freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:15 zfs set canmount=noauto freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:15 zfs set mountpoint=/tmp/BE-11.2-RELEASE.odZ4Lbgc freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:16 zfs set mountpoint=/ freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:16 zpool set bootfs=freenas-boot/ROOT/11.2-RELEASE freenas-boot
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/11.1-U1
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/9.10.2-U6
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/Initial-Install
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/default
2018-12-16.04:44:18 zfs promote freenas-boot/ROOT/11.2-RELEASE

data pool (storage01):

2018-12-09.00:00:39 zpool scrub storage01
2018-12-16.04:50:36 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 14557875918316947105
2018-12-16.04:50:36 zpool set cachefile=/data/zfs/zpool.cache storage01
2018-12-16.04:51:32 <iocage> zfs set org.freebsd.ioc:active=yes storage01
2018-12-16.18:12:50 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 14557875918316947105
2018-12-16.18:12:50 zpool set cachefile=/data/zfs/zpool.cache storage01
2018-12-17.01:46:06 zpool export storage01
2018-12-17.01:49:12 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 14557875918316947105
2018-12-17.01:49:12 zpool set cachefile=/data/zfs/zpool.cache storage01
2018-12-17.02:18:48 zpool export -f storage01
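
For reference, that's just zpool history on each pool, trimmed by hand to the scrub-to-present range:
Code:
# zpool history prints the pool's full command log; the relevant
# date range above was cut out manually. Pool names as used above.
zpool history freenas-boot
zpool history storage01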


Do you mean from 11.1 to 11.2?
What kind of services were you running? I had a jail running Syncthing and an SMB share.
No jails; I was running multiple SMB shares, multiple NFS exports, iSCSI via a file extent, and AFP, with rsync turned on.

It almost seems that the SMB shares were the ones primarily affected, but that's just speculation on my part right now.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
However, I did set it up to log to a remote syslog server
Holy crap, that is exactly what we needed right now! I'll try to look into it first thing in the morning.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Can you post the raw files on the bug tracker? I don't see any smoking guns in the logs you posted. Also, what's the boot device?
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
Can you post the raw files on the bug tracker? I don't see any smoking guns in the logs you posted. Also, what's the boot device?
I should be able to do that later tonight after I get home from the office.

Boot drive is an SSD.
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
Can you post the raw files on the bug tracker? I don't see any smoking guns in the logs you posted. Also, what's the boot device?
Ok, attached to the ticket. Hopefully there's something useful in there once you sort through all the extra cruft (I did say raw logs from the syslog server :) )
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Holy crap, that is exactly what we needed right now! I'll try to look into it first thing in the morning.
Did this reveal anything?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Not to my eyes. The logs I've seen are absolutely inconspicuous; if you told me they came from a fully functional system, I'd easily believe you. Maybe the devs will spot something out of place, but I'd imagine the middleware log would have been more useful.

@Justin The Cynical, could you try the following:
  1. Roll back to 11.1
  2. Add some data, inconsequential or not.
  3. Snapshot everything. Especially the system dataset.
  4. Do the first part of the upgrade.
  5. Do not allow FreeNAS to boot the new environment. Instead, boot into a fresh environment and snapshot the system dataset again.
  6. Allow FreeNAS to finish the upgrade.
  7. Again, do not allow FreeNAS to boot into the new environment. Repeat step 5 and see if your data is present or if there's been deletion.
  8. Finally allow the upgrade to complete.
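
For the snapshot steps, I'm thinking something along these lines; a rough sketch, with pool names taken from your earlier history output and arbitrary snapshot names:
Code:
# Step 3: recursive snapshots of the boot pool and the data pool.
# -r also covers the .system dataset if it lives on storage01 (an
# assumption here); snapshot names are placeholders.
zfs snapshot -r freenas-boot@pre-upgrade
zfs snapshot -r storage01@pre-upgrade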
 

SynbiosVyse

Dabbler
Joined
May 9, 2016
Messages
17
What's the output of zfs list on your system? Did you previously have snapshots of the data you're missing?

Here's the zfs list output:

Code:
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT                                        
fivereds                                                    783M  7.81T   199K  /mnt/fivereds                                      
fivereds/.system                                           72.4M  7.81T  1.03M  legacy                                            
fivereds/.system/configs-4a7da323489644858925be23483a2e70   156K  7.81T   156K  legacy                                            
fivereds/.system/configs-66311c036e824820af44b2dbf4c55f10   156K  7.81T   156K  legacy                                            
fivereds/.system/cores                                     48.6M  7.81T  48.6M  legacy                                            
fivereds/.system/rrd-4a7da323489644858925be23483a2e70      10.6M  7.81T  10.6M  legacy                                            
fivereds/.system/rrd-66311c036e824820af44b2dbf4c55f10      10.6M  7.81T  10.6M  legacy                                            
fivereds/.system/samba4                                     440K  7.81T   440K  legacy                                            
fivereds/.system/syslog-4a7da323489644858925be23483a2e70    263K  7.81T   263K  legacy                                            
fivereds/.system/syslog-66311c036e824820af44b2dbf4c55f10    383K  7.81T   383K  legacy                                            
fivereds/.system/webui                                      156K  7.81T   156K  legacy                                            
fivereds/.vm_cache                                          625K  7.81T   156K  /mnt/fivereds/.vm_cache                            
fivereds/.vm_cache/boot2docker                              469K  7.81T   156K  /mnt/fivereds/.vm_cache/boot2docker                
fivereds/.vm_cache/boot2docker/initrd                       156K  7.81T   156K  /mnt/fivereds/.vm_cache/boot2docker/initrd        
fivereds/.vm_cache/boot2docker/vmlinuz64                    156K  7.81T   156K  /mnt/fivereds/.vm_cache/boot2docker/vmlinuz64      
fivereds/data                                               170K  7.81T   170K  /mnt/fivereds/data                                
fivereds/iocage                                            6.13M  7.81T  5.22M  /mnt/fivereds/iocage                              
fivereds/iocage/download                                    156K  7.81T   156K  /mnt/fivereds/iocage/download                      
fivereds/iocage/images                                      156K  7.81T   156K  /mnt/fivereds/iocage/images                        
fivereds/iocage/jails                                       156K  7.81T   156K  /mnt/fivereds/iocage/jails                        
fivereds/iocage/log                                         156K  7.81T   156K  /mnt/fivereds/iocage/log                          
fivereds/iocage/releases                                    156K  7.81T   156K  /mnt/fivereds/iocage/releases                      
fivereds/iocage/templates                                   156K  7.81T   156K  /mnt/fivereds/iocage/templates                    
fivereds/jails                                              649M  7.81T   156K  /mnt/fivereds/jails                                
fivereds/jails/.warden-template-pluginjail-11.0-x64         649M  7.81T  3.73M  /mnt/fivereds/jails/.warden-template-pluginjail-11.0-x64                                                                                                                        
fivereds/jails/syncthing_1                                  383K  7.81T  3.73M  /mnt/fivereds/jails/syncthing_1                    
fivereds/rj3                                                206K  7.81T   206K  /mnt/fivereds/rj3                                  
fivereds/syncthing                                          156K  7.81T   156K  /mnt/fivereds/syncthing                            
freenas-boot                                               10.7G  16.9G    64K  none                                              
freenas-boot/ROOT                                          10.7G  16.9G    29K  none                                              
freenas-boot/ROOT/11.0-U1                                   185K  16.9G   734M  /                                                  
freenas-boot/ROOT/11.0-U2                                   172K  16.9G   737M  /                                                  
freenas-boot/ROOT/11.0-U3                                   166K  16.9G   725M  /                                                  
freenas-boot/ROOT/11.0-U4                                   179K  16.9G   727M  /                                                  
freenas-boot/ROOT/11.1-RELEASE                              252K  16.9G   825M  /                                                  
freenas-boot/ROOT/11.1-U1                                   381K  16.9G   825M  /                                                  
freenas-boot/ROOT/11.1-U2                                   367K  16.9G   832M  /                                                  
freenas-boot/ROOT/11.1-U3                                   378K  16.9G   832M  /                                                  
freenas-boot/ROOT/11.1-U4                                   265K  16.9G   836M  /                                                  
freenas-boot/ROOT/11.1-U5                                   562K  16.9G   838M  /                                                  
freenas-boot/ROOT/11.1-U6                                  9.26G  16.9G   838M  /                                                  
freenas-boot/ROOT/11.2-RELEASE                              687M  16.9G   687M  /                                                  
freenas-boot/ROOT/Initial-Install                             1K  16.9G   734M  legacy                                            
freenas-boot/ROOT/Wizard-2017-07-09_18-09-11                  1K  16.9G   734M  legacy                                            
freenas-boot/ROOT/default                                   140K  16.9G   734M  legacy                                            
freenas-boot/ROOT/default-20181216-222736                   760M  16.9G   760M  legacy                                            
freenas-boot/grub                                          6.85M  16.9G  6.85M  legacy 
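
For the snapshot half of the question: plain zfs list doesn't include snapshots, so checking for those would take something like:
Code:
# List snapshots rather than just filesystems.
zfs list -t snapshot
# or everything at once:
zfs list -t all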
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
Not to my eyes. Absolutely inconspicuous logs, those I've seen. If you told me they were from a fully-functional system, I'd easily believe you. Maybe the devs will see something out of place, but the middleware log would have been more useful, I'd imagine.

@Justin The Cynical, could you try the following:
  1. Roll back to 11.1
  2. Add some data, inconsequential or not.
  3. Snapshot everything. Especially the system dataset.
  4. Do the first part of the upgrade.
  5. Do not allow FreeNAS to boot the new environment. Instead, boot into a fresh environment and snapshot the system dataset again.
  6. Allow FreeNAS to finish the upgrade.
  7. Again, do not allow FreeNAS to boot into the new environment. Repeat step 5 and see if your data is present or if there's been deletion.
  8. Finally allow the upgrade to complete.
OK, I don't think I'm going to be able to try this tonight, but I want to make sure I'm thinking of the same procedure as you. :)

  1. Rollback via 11.2 GUI, reimport my pool
  2. Add data to the current empty datasets
  3. Snapshot the pool and boot
  4. Switch the train to 11.2-STABLE, download
  5. Reboot into a 'fresh environment'. OK, so are you thinking a shell on a FreeNAS install CD/USB, or would single-user mode be enough? (I'm leaning toward the CD/USB myself)
  6. Snapshot via the command line the boot and pool
  7. Reboot, let it do the first part of the upgrade and reboot into the CD/USB bit and snapshot from the command line again. See if the data is still there
  8. Reboot as normal into 11.2, see if the data is still there
Sound about right for what you were thinking?
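
For steps 5-7, I'm picturing something like this from the rescue environment; a sketch, assuming the data pool imports cleanly (the boot pool is trickier, since the rescue install has its own freenas-boot, so I'd snapshot that from the SSD install before rebooting):
Code:
# From the CD/USB shell: import the data pool without mounting datasets,
# snapshot it recursively, then export it again. Names are placeholders.
zpool import -N -f storage01
zfs snapshot -r storage01@mid-upgrade
zpool export storage01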
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I imagine single-user mode would be enough, but I'd personally use a different install in regular multi-user mode.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It does sound familiar.
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
I imagine single-user mode would be enough, but I'd personally use a different install in regular multi-user mode.
So I was able to do this tonight:
  1. Copied 1.24 GB to each of the affected filesystems via SMB
  2. Took a snapshot of boot and storage
  3. Rebooted to the USB stick install (11.1-U6, upgraded from 9.x) in multi-user mode; files and snapshots are intact
  4. Made snapshot #2, rebooted to the SSD install
  5. Ran the initial upgrade, rebooted, booted to the USB stick
  6. Data present; made a snapshot; rebooted to the SSD install
  7. Finished the install; data and snapshots are still present
Unfortunately (?), the nuking of filesystems doesn't appear to have happened this time, which could be down to it being seemingly random.

I'm going to leave it running overnight to see what happens, as I could have sworn that after the initial boot everything looked fine and I didn't find the missing data until later.
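
For the "data present" checks above, I just mean a quick look along these lines; a sketch, with placeholder names:
Code:
# Confirm the test snapshots exist and the copied files are visible.
zfs list -t snapshot -r storage01
ls -l /mnt/storage01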
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Just issuing all the delete operations could take a while, so that's a possibility.
 

JDCynical

Contributor
Joined
Aug 18, 2014
Messages
141
Just issuing all the delete operations could take a while, so that's a possibility.
Unfortunately (?), I'm seeing no issues on the second attempt, and the data I copied over is still there.

Sadly, it looks like I'm one of the unlucky ones: most of the data I had on there is gone, with no hint as to what caused it during the original upgrade process. :confused:
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
This leaves the investigation pretty much at square one, unfortunately.
 

jchan94

Explorer
Joined
Jul 30, 2015
Messages
55
Just wanted to jump in here, because it looks like I've been affected by this too, but I have snapshots.

My boot drive running 9.11 had crashed, so I figured I'd upgrade in the process.

After upgrading to 11.2, I imported the pool, and upgraded it.

The dataset that had snapshots is still intact, but my other ones are nuked. When I try to read those datasets via the shell, there's no data there.
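
For anyone in the same boat, a quick sanity check before writing the data off; a sketch, with tank as a placeholder pool name:
Code:
# See whether any snapshots of the nuked datasets survive:
zfs list -t snapshot -r tank
# And check the pool's command history for destroy/rollback operations
# around the upgrade date:
zpool history tank | tail -n 50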
 