Right, there's that, too. It also would have taken the snapshots with it, which hasn't happened.
"Do I understand correctly that you did not have snapshots?"
Unfortunately, that's correct.
Has anyone who has lost all data in these upgrades actually been able to recover everything via the snapshots? The fact that FreeNAS is wiping data on upgrades is troublesome, to say the least--more so since it doesn't seem that the cause has been isolated. But it boggles my mind that people don't set up snapshots on their servers--they're quick, they're simple, and they shouldn't take a great deal of storage space if you're managing things properly.
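If you want to check how much space snapshots are actually taking on your own system, something like this should do it (tank is a placeholder for your pool name):
# list every snapshot under the pool and the space each one holds
zfs list -t snapshot -o name,used,referenced -r tank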
"it doesn't seem that the cause has been isolated"
As a precaution, would it help to detach all volumes before attempting an update from legacy FreeNAS to 11.2? If it would, it might be sensible to issue an advisory to that effect.
"Has anyone who has lost all data in these upgrades actually been able to recover everything via the snapshots?"
Yes, and I think that's been noted in this thread--anyone who had snapshots has been able to roll back to those.
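For anyone finding this thread later, the rollback itself is a one-liner per dataset; the dataset and snapshot names below are placeholders:
# roll the dataset back to the named snapshot
# (add -r only if newer snapshots exist and you're willing to destroy them)
zfs rollback tank/mydata@pre-upgrade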
"would it help to detach all volumes before attempting an update from legacy FreeNAS to 11.2?"
I'd expect it would, but I don't know where the downloaded data gets stored. OTOH, "do a recursive snapshot of your entire pool" seems like good advice.
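A recursive snapshot of the whole pool only takes a moment; a sketch, with tank and the snapshot label as placeholders:
# one atomic snapshot of the pool and every dataset beneath it
zfs snapshot -r tank@pre-11.2-upgrade
# confirm it was created everywhere
zfs list -t snapshot | grep pre-11.2-upgrade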
"I don't know where the downloaded data gets stored."
System dataset, it turns out. That will probably mean "boot device", if the main pool is gone.
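If anyone wants to confirm where their system dataset lives, it shows up as a .system child on whichever pool holds it:
# the system dataset and its children appear as .system datasets
zfs list | grep '\.system'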
"Has anyone who has lost all data in these upgrades actually been able to recover everything via the snapshots?"
Yes, I was able to recover from snapshots. Snapshot data was completely OK (in my case just not up-to-date, but yeah, that's a lesson about doing updates).
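If you'd rather check what a snapshot holds before rolling back, every dataset exposes its snapshots under a hidden read-only directory; the paths below are placeholders:
# browse a snapshot's contents read-only
ls /mnt/tank/mydata/.zfs/snapshot/pre-upgrade/
# copy individual files back without a full rollback
cp /mnt/tank/mydata/.zfs/snapshot/pre-upgrade/important.doc /mnt/tank/mydata/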
Just asking a dumb question--or, more to the point, wondering whether the (local) snapshots are vanishing as well.
I'll throw my hat into the ring. I upgraded last night and everything seemed to be OK. This morning when I sat down to tinker, there was a screen saying "Connecting to NAS.... Make sure the NAS system is powered on..." I rebooted, and on my server console it would just hang, saying it couldn't find /etc/netcli.sh. Over the course of several hours I tried to fall back to 11.1-U7, and ended up having to install a fresh 11.2 via an ISO. Once that was complete I imported the volumes, but nothing is in them. I have 2 pools, and both show space being used.
If the zpools are showing space being used, then it might be a different problem. Post the output of:
df -h
zfs list
zpool history
The general consensus is that the deletions come from a POSIX command (e.g., "rm -rf /"). It's likely your snapshots are still taking up space on the zpool because they contain 100% of your data. If you restore your snapshots you should find the data, but keep it in the back of your mind that you may need to restore from backups.
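A non-destructive way to verify that before rolling anything back is to clone the snapshot into a throwaway dataset; names below are placeholders:
# expose the snapshot as a writable dataset without touching the original
zfs clone tank/mydata@pre-upgrade tank/recovered
ls /mnt/tank/recovered
# clean up once you've confirmed the data is there
zfs destroy tank/recovered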
We have yet to encounter a person who has everything mirrored/backed up well enough that the issue can be safely duplicated.
I'm at work and can't post the actual data output, but when I do a zfs list, the used space reports back as 0% for each folder. When I do a df -h (I think that was the command), it displays the space as being used and the sizes are what I would expect. I could try the process again, if that would help?

What you should be looking at is
zfs list -o space -r poolname
, where poolname is obviously the name of your pool. The space shortcut lists the most relevant space-usage data.

"I could try the process again, if that would help?"
Possibly; nobody's been able to reproduce this with any semblance of repeatability. I would like to ask that you capture the console output and, if possible, set up a syslog server to try and preserve the logs. The devs also have some requests in the bug ticket for this.
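For the syslog part, on a plain FreeBSD box that's a one-line forward rule; the collector address below is a placeholder, and I believe FreeNAS exposes the same setting in the System > Advanced screen:
# /etc/syslog.conf -- forward everything to a remote collector
*.*    @192.0.2.10
# restart syslogd to pick up the change
service syslogd restart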
"zfs list -o space -r poolname"
root@freenas[~]# zfs list -o space -r mediapool
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
mediapool 3.26T 7.28T 0 166K 0 7.28T
mediapool/.bhyve_containers 3.26T 141K 0 141K 0 0
mediapool/.system 3.26T 61.8M 0 166K 0 61.7M
mediapool/.system-1eaf0a35 3.26T 1.03G 0 1.03G 0 0
mediapool/.system/configs-810048d7feed436fae88f4409435135f 3.26T 4.62M 0 4.62M 0 0
mediapool/.system/configs-9d613bc4d69d4caa9ab03b2439285b53 3.26T 141K 0 141K 0 0
mediapool/.system/configs-c2484c3f88124e51b79845e4fb993a70 3.26T 268K 0 268K 0 0
mediapool/.system/cores 3.26T 12.4M 0 12.4M 0 0
mediapool/.system/rrd-810048d7feed436fae88f4409435135f 3.26T 153K 0 153K 0 0
mediapool/.system/rrd-9d613bc4d69d4caa9ab03b2439285b53 3.26T 12.8M 0 12.8M 0 0
mediapool/.system/rrd-c2484c3f88124e51b79845e4fb993a70 3.26T 16.8M 0 16.8M 0 0
mediapool/.system/samba4 3.26T 390K 0 390K 0 0
mediapool/.system/syslog-810048d7feed436fae88f4409435135f 3.26T 13.2M 0 13.2M 0 0
mediapool/.system/syslog-9d613bc4d69d4caa9ab03b2439285b53 3.26T 454K 0 454K 0 0
mediapool/.system/syslog-c2484c3f88124e51b79845e4fb993a70 3.26T 377K 0 377K 0 0
mediapool/.system/webui 3.26T 141K 0 141K 0 0
mediapool/HomeDir 3.26T 141K 0 141K 0 0
mediapool/iocage 3.26T 4.82M 0 4.00M 0 844K
mediapool/iocage/download 3.26T 141K 0 141K 0 0
mediapool/iocage/images 3.26T 141K 0 141K 0 0
mediapool/iocage/jails 3.26T 141K 0 141K 0 0
mediapool/iocage/log 3.26T 141K 0 141K 0 0
mediapool/iocage/releases 3.26T 141K 0 141K 0 0
mediapool/iocage/templates 3.26T 141K 0 141K 0 0
mediapool/jails 3.26T 626M 0 153K 0 626M
mediapool/jails/.warden-template-pluginjail-11.0-x64 3.26T 625M 621M 3.68M 0 0
mediapool/jails/nextcloud_1 3.26T 396K 0 396K 0 0
mediapool/jails/plexmediaserver_1 3.26T 435K 0 435K 0 0
mediapool/shared_3d 3.26T 122G 122G 179K 0 0
mediapool/shared_data 3.26T 378G 378G 153K 0 0
mediapool/shared_movies 3.26T 5.47T 5.47T 243K 0 0
mediapool/shared_music 3.26T 78.4G 78.4G 230K 0 0
mediapool/shared_pictures 3.26T 179G 179G 141K 0 0
mediapool/shared_tv 3.26T 1.06T 1.06T 192K 0 0
root@freenas[~]#
root@freenas[~]# zfs list -o space -r personalpool
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
personalpool 6.88T 163G 0 26.6K 0 163G
personalpool/.system 6.88T 4.43M 0 32.0K 0 4.40M
personalpool/.system/configs-c2484c3f88124e51b79845e4fb993a70 6.88T 29.3K 0 29.3K 0 0
personalpool/.system/cores 6.88T 310K 0 310K 0 0
personalpool/.system/rrd-c2484c3f88124e51b79845e4fb993a70 6.88T 3.91M 0 3.91M 0 0
personalpool/.system/samba4 6.88T 66.6K 0 66.6K 0 0
personalpool/.system/syslog-c2484c3f88124e51b79845e4fb993a70 6.88T 65.3K 0 65.3K 0 0
personalpool/.system/webui 6.88T 29.3K 0 29.3K 0 0
personalpool/.vm_cache 6.88T 117K 0 29.3K 0 87.9K
personalpool/.vm_cache/boot2docker 6.88T 87.9K 0 29.3K 0 58.6K
personalpool/.vm_cache/boot2docker/initrd 6.88T 29.3K 0 29.3K 0 0
personalpool/.vm_cache/boot2docker/vmlinuz64 6.88T 29.3K 0 29.3K 0 0
personalpool/DBCloud 6.88T 22.3G 22.3G 30.6K 0 0
personalpool/images 6.88T 29.3K 0 29.3K 0 0
personalpool/personal 6.88T 120G 120G 34.6K 0 0
personalpool/vm-storage 6.88T 20.3G 0 29.3K 0 20.3G
personalpool/vm-storage/DBixHDD 6.89T 20.3G 0 3.51G 16.8G 0
root@freenas[~]#
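Reading those outputs: USEDSNAP on the shared_* and personal datasets is essentially equal to USED, which suggests the data is still sitting in snapshots rather than gone. A sketch of the next step (the snapshot name below is a placeholder until you list yours):
# see which snapshots are holding that space
zfs list -t snapshot -r mediapool
# then roll each affected dataset back to the snapshot that still has the data
zfs rollback mediapool/shared_movies@auto-20190101.0000-2w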