Encrypted vol appears as EMPTY directories after replication.. collectd/statvfs errors


Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Hi chaps, anyone know why this might be?

After replication (which reports up to date), my encrypted 8 TB volume appears EMPTY! All the directories are there, but they are empty. The first time it was a heart-stopper. If the volume is detached and re-imported with key and passphrase, all files, snapshots etc. are present and correct. But until I re-import the volume my logs are filled with

Oct 30 02:23:15 nas collectd[16336]: statvfs(/mnt/backup/jails/.warden-template-pluginjail) failed: No such file or directory
Oct 30 02:23:15 nas collectd[16336]: statvfs(/mnt/backup/jails/.warden-template-standard) failed: No such file or directory
Oct 30 02:23:15 nas collectd[16336]: statvfs(/mnt/backup/jails/customplugin_1) failed: No such file or directory
etc. etc., forever...

I can't find anything else in any of the logs.

zfs and zpool report everything online, normal and mounted. The backup disks show all OK in the FreeNAS web interface.

Mounting over NFS, or looking in a shell, shows no files present in the directories.
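
A quick way to compare what ZFS believes is mounted with what is actually visible on disk; the dataset and path names below are taken from the log excerpts in this thread, so adjust as needed:

Code:
# Does ZFS think the backup datasets are mounted, and where?
zfs list -r -o name,mounted,mountpoint backup

# What is actually visible at the mountpoint right now?
ls -la /mnt/backup/jails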

FreeNAS-9.10.1-U2
AMD FX4130 Quad-Core Processor
GIGABYTE 990FX
Memory: 16081M
5*2TB RED Z1, 3*2TB Purple stripe, 1* 8TB USB3
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Try updating to the latest FreeNAS. U2 had some issues.

 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Which is empty - push or pull?
Where are you seeing those errors - push or pull?
Are you snapshotting each push dataset individually, or do you have the Recursive option enabled?
Are you replicating into the main dataset on Pull (bad), or into a sub-dataset (good)?
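
To illustrate the last question, the usual recommendation is to receive into a dedicated child dataset on the pull side rather than straight onto the pool's root dataset; a minimal sketch, with made-up names:

Code:
# One-off on the pull side: create a dedicated target dataset
zfs create backup/replica

# Replication then lands underneath it, e.g.
#   tank/data@auto-...  ->  backup/replica/data
# rather than directly on top of the pool root "backup".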
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Yes, sorry, I didn't do such a great job of describing the problem. It's a local replication to 127.0.0.1, and I'm trying to push the entire contents recursively (media libraries, iSCSI vols etc.) to a backup USB3 encrypted ZFS drive. I am attempting to push into the main dataset on the backup drive. Everything looks good and the GUI says up to date, but on examining the backup drive it has the directory structure but no files anywhere. I detached, then imported the volume with key + passphrase, and as if by magic the files are back? Soon after replication the errors from collectd and statvfs start taking over the logs. I can't find anything in the logs between the replication and the errors:


Nov 21 02:28:42 nas autorepl.py: [common.pipesubr:66] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs destroy -d
'backup/iscsi@auto-20161116.0100-4d'"
Nov 21 02:28:48 nas autorepl.py: [common.pipesubr:66] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs destroy -d 'backup@auto-20161023.0100-4w'"
Nov 21 02:28:48 nas autorepl.py: [common.pipesubr:66] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs destroy -d 'backup@auto-20161117.0100-4d'"
Nov 21 02:28:48 nas autorepl.py: [common.pipesubr:66] Popen()ing: /usr/bin/ssh -i /data/ssh/replication -o BatchMode=yes -o StrictHostKeyChecking=yes -o ConnectTimeout=7 -p 22 localhost "zfs destroy -d 'backup@auto-20161116.0100-4d'"
Nov 21 02:28:53 nas collectd[21011]: statvfs(/mnt/backup/backup/workstation) failed: No such file or directory
Nov 21 02:28:53 nas collectd[21011]: statvfs(/mnt/backup/jails/.warden-template-VirtualBox-4.3.12) failed: No such file or directory
Nov 21 02:28:53 nas collectd[21011]: statvfs(/mnt/backup/jails/.warden-template-pluginjail) failed: No such file or directory
Nov 21 02:28:53 nas collectd[21011]: statvfs(/mnt/backup/jails/.warden-template-standard) failed: No such file or directory
Nov 21 02:28:53 nas collectd[21011]: statvfs(/mnt/backup/jails/customplugin_1) failed: No such file or directory
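
For context, the local recursive push into the root dataset of the backup pool described above is roughly equivalent to the following; snapshot and pool names here are only examples, the real task is driven by autorepl.py:

Code:
# Recursive snapshot of the source pool, then a full recursive send
# received into the root dataset of the backup pool (names are examples)
zfs snapshot -r raid@auto-20161121.0100-2w
zfs send -R raid@auto-20161121.0100-2w | zfs receive -dF backup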

I have just updated to U4, thanks to SweetAndLow for the info (I didn't realise I was out of date already :)).

I have re-imported; snapshots and replication run overnight, so I'll try again tomorrow.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You keep saying the 'files are back'; what files?

 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
I updated and it went well.

I unlocked and mounted the backup volume, which holds about 5 TB: a mix of zvols exported over iSCSI to VMs, and various datasets mostly exported over NFS.

Files were all present and correct, up to date as of yesterday's replication job.

The snapshot of the main RAIDZ then ran at 1am, and the replication of the main RAIDZ to the USB3 drive at 2am completed.

As soon as replication had finished (reported up to date), the collectd errors started flooding the logs:

Nov 22 08:01:17 nas collectd[12135]: statvfs(/mnt/backup/raid) failed: No such file or directory

On examining the backup volume (which has received the snapshots) via NFS, SSH or locally, the entire volume is nothing but empty directories.

I detached and re-attached the backup volume.

All the files that should be on the backup volume are back, present and correct, as though nothing in the world is wrong.

The previously replicated files plus the newly replicated ones are all present and correct.

But until I detach/re-attach, it is as though the backup target volume becomes empty of all files soon after replication has completed...?

zfs, zpool and FreeNAS all look OK: mounted, unlocked, clean etc.

BUT collectd, SSH and NFS all see empty directories, and the logs fill with collectd errors.

I have been trying to solve this since the 30th of last month, reading up and searching all sorts of related material.

I tried leaving the backup volume for another replication run the next night without re-attaching (a whole day's logs full of collectd/statvfs errors); the replication went ahead as normal and reported up to date, and again if I re-attach everything is fine?

I have been unable to locate anything in any log other than the collectd messages.

It deletes old snaps on the target, then, puff, like magic, an empty directory tree.
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
FREENAS is DOING something very strange... I just noticed that every night, at the same time as the files disappear when replication completes, my XENSERVERS generate multipath errors. Hmm. I have an iSCSI volume exported, but not from the volume that vanishes.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Well, you didn't answer my questions, but from the sound of it your data is being successfully replicated using ZFS and then the "pull" filesystems are being unmounted. This happened to me a while ago on 9.3.1, I've seen it hit a couple of users on 9.10.1, and I think there were a couple of bug reports about this. I would search there and see if it looks familiar.
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Yes, that would be about right... FreeNAS, ZFS and mount all think it's mounted, but on examining the "pull" filesystem it is as though it is no longer mounted. However, the next night I successfully replicate to an "unmounted" filesystem?

Thanks for the help. I will explore the bug reports you mentioned. Nice to know it is not just me.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
However, the next night I successfully replicate to an "unmounted" filesystem?
ZFS replication works at a level below the filesystem mounting, so the datasets don't need to be mounted for replication to happen.
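
A minimal sketch of why that works, using example snapshot and pool names: a stream can be received with -u so the target dataset is never mounted, and later incremental sends into the still-unmounted dataset still succeed.

Code:
# Initial receive without mounting the target (-u)
zfs send -R raid@snap1 | zfs receive -duF backup

# A later incremental send into the still-unmounted dataset works fine
zfs send -R -i raid@snap1 raid@snap2 | zfs receive -duF backup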
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
Brilliant :smile: thanks so much. The very simple answer to all my problems: unmount the thing! I don't need or really want it mounted anyway. I read a lot of the threads you guided me to and found them very informative, but I could not seem to find any kind of fix or indeed the root cause. It started for me with 9.10-U2 and is the same with U4. Anyhow, HAPPY NOW :smile:, so thanks again for your patience.
 

Florence

Dabbler
Joined
Oct 27, 2016
Messages
11
AHAHAHAHAH! Would you believe it... two nights in a row. I didn't believe it myself the first time: FreeNAS remounts it. Twice I've unmounted it; FreeNAS tries to remount it after replication time, then spends the rest of the night complaining with collectd stat errors... NO WAY. Any way to stop this mad behaviour?

Unmounting does not solve my problem: FreeNAS tries to remount.
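
One thing that might be worth experimenting with (an assumption on my part, not something confirmed in this thread) is marking the backup dataset so it is skipped by automatic mounting; note that a recursive receive can overwrite properties, so this may not stick:

Code:
# Illustrative: keep the backup pool's root dataset out of "zfs mount -a"
zfs set canmount=noauto backup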
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Until the issue gets resolved, you could create a script and run it as a cron job to unmount it after replication.
 
Joined
Feb 28, 2017
Messages
5
To others who may have this issue and Google it: to prevent the logs from filling up after every replication, I followed the suggestion above and created a cron job. I have two replication jobs that kick off at 2:00am and 5:00am respectively. They can take anywhere from a couple of minutes up to 30 minutes to complete, depending on how much data changed in the last 24 hours. So I added a cron job to run at 3:00am and 6:00am each morning to execute the script /root/statvfs-error-workaround. Here is the extremely simple script I used:

Code:
[root@elisha] ~# cat statvfs-error-workaround
#!/bin/bash
#
# Script to stop statvfs errors from filling up the logs on Elisha
#
# Written by DMM 02/28/2017

# Unmount the datasets that collectd keeps complaining about
umount /mnt/ElishaStorage/ElijahBackup/WingStorage
umount /mnt/ElishaStorage/ElijahBackup/TimeMachineBackups/Backups
umount /mnt/ElishaStorage/ElijahBackup/TimeMachineBackups

# Give the unmounts a moment to settle, then remount every ZFS filesystem
sleep 3
zfs mount -a
logger "/root/statvfs-error-workaround run"


Obviously you will have to modify the umount lines above with the exact paths reported in your particular statvfs errors.
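
For completeness, the matching crontab entries would look something like this (in the FreeNAS GUI this is added as a Cron Job under Tasks; the times mirror the schedule described above):

Code:
# Run the workaround an hour after each replication job kicks off
0 3 * * * /root/statvfs-error-workaround
0 6 * * * /root/statvfs-error-workaround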

:D
 