JDCynical
Contributor
Quote:
Are your logs intact? Does every file in /var/log magically start right after the update?

Sadly, yes, on the FreeNAS machine itself.
However, I did set it up to log to a remote syslog server, so there is still something on the Linux machine. I've got the output of grep pocket * (pocket being the machine name) saved, along with the raw syslog, daemon, and messages files for the time frame in which I did the update. I'm more than happy to upload them somewhere.
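If anyone else wants to pull the same thing off their own syslog server, it was roughly this (filenames assumed from a Debian-style /var/log layout, so adjust for your distro):

Code:
# On the remote syslog server: pull everything the FreeNAS box
# ("pocket") logged. Filenames vary by distro; these match Debian.
cd /var/log
grep pocket syslog daemon.log messages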
zpool history from the last scrub to current:
Code:
boot drive:
2018-12-10.03:45:06 zpool scrub freenas-boot
2018-12-16.04:41:32 zfs snapshot -r freenas-boot/ROOT/11.1-U1@2018-12-16-04:41:32
2018-12-16.04:41:32 zfs clone -o canmount=off -o beadm:keep=False -o mountpoint=/ freenas-boot/ROOT/11.1-U1@2018-12-16-04:41:32 freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:41:38 zfs set beadm:nickname=11.2-RELEASE freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:41:52 zfs set sync=disabled freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:15 zfs inherit freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:15 zfs set canmount=noauto freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:15 zfs set mountpoint=/tmp/BE-11.2-RELEASE.odZ4Lbgc freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:16 zfs set mountpoint=/ freenas-boot/ROOT/11.2-RELEASE
2018-12-16.04:44:16 zpool set bootfs=freenas-boot/ROOT/11.2-RELEASE freenas-boot
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/11.1-U1
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/9.10.2-U6
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/Initial-Install
2018-12-16.04:44:16 zfs set canmount=noauto freenas-boot/ROOT/default
2018-12-16.04:44:18 zfs promote freenas-boot/ROOT/11.2-RELEASE

pool:
2018-12-09.00:00:39 zpool scrub storage01
2018-12-16.04:50:36 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 14557875918316947105
2018-12-16.04:50:36 zpool set cachefile=/data/zfs/zpool.cache storage01
2018-12-16.04:51:32 <iocage> zfs set org.freebsd.ioc:active=yes storage01
2018-12-16.18:12:50 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 14557875918316947105
2018-12-16.18:12:50 zpool set cachefile=/data/zfs/zpool.cache storage01
2018-12-17.01:46:06 zpool export storage01
2018-12-17.01:49:12 zpool import -c /data/zfs/zpool.cache.saved -o cachefile=none -R /mnt -f 14557875918316947105
2018-12-17.01:49:12 zpool set cachefile=/data/zfs/zpool.cache storage01
2018-12-17.02:18:48 zpool export -f storage01
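That came straight from zpool history, so anyone who wants to check their own system can run roughly the same thing, with the pool names adjusted:

Code:
# Print every command ZFS has recorded against each pool, oldest first;
# pipe through tail or grep to narrow it down to the update window.
zpool history freenas-boot
zpool history storage01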
Quote:
Do you mean from 11.1 to 11.2? What kind of services were you running? I had a jail running Syncthing and a SMB share.

No jails here; I was running multiple SMB shares, multiple NFS shares, iSCSI via a file extent, and AFP, and rsync was turned on.
It almost seems that the SMB shares were the ones primarily affected, but that's just speculation on my part at this point.