Mount pool and recover data.

Status
Not open for further replies.

panz

Guru
Joined
May 24, 2013
Messages
556
Situation (see attached JPEG "Active Volumes"): I need to recover some data from the «backup» pool.

As you can see, FreeNAS can't see the pool's content, but zfs list shows that the data is there.

My problem is that I can't share the backup pool via CIFS, because FreeNAS doesn't "see" any content.

I tried to go to the CLI, but I can use neither "ls" nor "cp".

I tried to mount the pool and got the message "pool already mounted". If I unmount and then remount the pool, FreeNAS still can't see (or share) it.

Any suggestions?

(it all started with bug #5293: https://bugs.freenas.org/issues/5293)

Active Volumes.jpg
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you can't "ls" to the directory from the CLI then that part of the pool is corrupt and unusable.
 

panz

Guru
Joined
May 24, 2013
Messages
556
If you can't "ls" to the directory from the CLI then that part of the pool is corrupt and unusable.

Cyberjock, it turned out to be so simple... I don't know how to explain this properly, so I need some help!
(and then I want to explain it to the developers...)

When I met Josh in our WebEx conference he asked for a shell (a PuTTY SSH session) and ran

Code:
zfs list


and discovered that the data was still there. He couldn't ls into the datasets either.

I showed him that I couldn't

Code:
diff -qr /mnt/storage/data /mnt/backup/data


to double check my backups so he tried to mount the pool [the backup pool, a.k.a. the destination pool (the pull Volume), let's remember this because it's important] but he received the message that the pool was already mounted.

He said that the data was there, with all the snapshots, so we assumed that I had done something wrong somewhere else. I thanked him for the kind help and said goodbye.

But the problem wasn't solved for me, so I wanted to study it further because, yes, the data is there, but if I can't access it, it's useless.

Now the fun part of the story.

When I created the "backup" pool I thought that replication would take care of the whole "cloning" of my storage pool to the backup pool.

I was terribly WRONG.

Last night I did a replication "by hand" with zfs send/recv and discovered a simple thing:

DESTINATION POOL HAS DIFFERENT PERMISSIONS!

When I created the storage pool I set up the datasets (data, media, etc.) with Windows ACLs (from the GUI) because of CIFS sharing.

BUT I DIDN'T SET UP ANYTHING ON THE BACKUP POOL, assuming that the Replication task would handle that for me.

This generated all the problems described in my #5293 bug!

So, I simply used the GUI to setup Windows ACLs on the backup pool (the destination Volume) and BINGO!

THE REPLICATION IS NOW WORKING AS EXPECTED

and the problem (Bug #5293) DISAPPEARED! :cool:
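For reference, a one-off "by hand" replication like the one described above is roughly of this shape (the snapshot name is illustrative, not taken from the actual logs):

```shell
# Take a snapshot of the source dataset, then send it to the backup pool.
# -F on receive rolls the destination back so it matches the incoming stream.
zfs snapshot storage/data@manual-test
zfs send storage/data@manual-test | zfs receive -F backup/data
```

Note that zfs send/recv copies the datasets' own ZFS properties and file permissions, but it does not touch the FreeNAS-side share/ACL configuration of the destination pool, which is exactly what bit me here.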
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ok, you've confused me even more. See if this clears things up:

1. When you mount a pool you are technically mounting the pool AND its datasets. Each dataset is its own file system in some ways, but not in others.
2. Some of the information about a dataset is external to it (i.e., you can run zfs list on a dataset without mounting it), but some is only available if the dataset can be mounted (the files themselves, for example). So don't assume that because zfs list works, your files are, or will be, accessible. That depends on the dataset.
3. If you are mounting the pool and a dataset isn't mounting, you have problems with that dataset's metadata. If you've kept using the pool, you likely won't ever be able to get to that data again.

So it goes back to my initial thought: if the dataset doesn't mount, you likely have some kind of damage to the dataset that is making it unmountable.
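A quick way to see the distinction described above, using the dataset names from this thread:

```shell
# zfs list reads pool metadata and works even for unmounted datasets...
zfs list backup/data

# ...but the files themselves are only reachable when the dataset is
# actually mounted; this prints "yes" or "no":
zfs get -H -o value mounted backup/data
```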
 

panz

Guru
Joined
May 24, 2013
Messages
556
The data was all there: after my "discovery" I could finally run ls, cp and everything else I needed to copy it off the backup pool.

To keep it short: when I first set up my machine (this was the same day that v. 9.2.1.3 came out) I created 2 pools:

1. storage
2. backup

Then I concentrated my efforts on setting up the storage pool (creating all the datasets, setting the permissions, sharing via CIFS, etc.).

I left the backup pool untouched.

After all my data was migrated from my Windows server to the FreeNAS server, I set up the Periodic Snapshot & Replication tasks. While doing this I didn't touch the destination (= the backup) pool, because I thought that Replication would do all the work for me.

Only now am I aware that this didn't happen.

Then, the #5293 bug.

I'm posting now the Putty log of our investigation (Saturday, the 28th of June):
(note: see all "No such file or directory" for the "ls" attempts on the backup pool!)

Code:
login as: root
Authenticating with public key "rsa-key-20131229"
Last login: Sat Jun 28 01:13:48 2014 from 192.168.1.35
FreeBSD 9.2-RELEASE-p9 (FREENAS.amd64) #0 r262572+91ebf13: Thu Jun 26 22:08:53 PDT 2014
 
        FreeNAS (c) 2009-2014, The FreeNAS Development Team
        All rights reserved.
        FreeNAS is released under the modified BSD license.
 
        For more information, documentation, help or support, go here:
        http://freenas.org
Welcome to FreeNAS
[root@freenas] ~# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
backup                                    1.20T  9.45T  432K  /mnt/backup
backup/.system                            54.7M  9.45T  352K  /mnt/backup/.system
backup/.system/cores                        480K  9.45T  288K  /mnt/backup/.system/cores
backup/.system/rrd                          480K  9.45T  288K  /mnt/backup/.system/rrd
backup/.system/samba4                      951K  9.45T  759K  /mnt/backup/.system/samba4
backup/.system/syslog                      52.2M  9.45T  52.0M  /mnt/backup/.system/syslog
backup/K50IN                                895M  9.45T  894M  /mnt/backup/K50IN
backup/clonezilla                          186G  9.45T  186G  /mnt/backup/clonezilla
backup/completed_downloads                  711K  9.45T  376K  /mnt/backup/completed_downloads
backup/data                                44.8G  9.45T  43.5G  /mnt/backup/data
backup/jails                              5.01G  9.45T  543K  /mnt/backup/jails
backup/jails/.warden-template-pluginjail    889M  9.45T  874M  /mnt/backup/jails/.warden-template-pluginjail
backup/jails/plexmediaserver_1            4.14G  9.45T  4.87G  /mnt/backup/jails/plexmediaserver_1
backup/media                                995G  9.45T  995G  /mnt/backup/media
backup/scambio_file                        53.0M  9.45T  52.6M  /mnt/backup/scambio_file
backup/scripts                            7.73M  9.45T  7.37M  /mnt/backup/scripts
storage                                    1.20T  9.45T  432K  /mnt/storage
storage/.system                            54.7M  9.45T  352K  /mnt/storage/.system
storage/.system/cores                      464K  9.45T  288K  /mnt/storage/.system/cores
storage/.system/rrd                        464K  9.45T  288K  /mnt/storage/.system/rrd
storage/.system/samba4                      983K  9.45T  807K  /mnt/storage/.system/samba4
storage/.system/syslog                    52.2M  9.45T  52.0M  /mnt/storage/.system/syslog
storage/K50IN                              895M  9.45T  894M  /mnt/storage/K50IN
storage/clonezilla                          186G  9.45T  186G  /mnt/storage/clonezilla
storage/completed_downloads                855K  9.45T  376K  /mnt/storage/completed_downloads
storage/data                              44.8G  9.45T  43.4G  /mnt/storage/data
storage/jails                              5.03G  9.45T  543K  /mnt/storage/jails
storage/jails/.warden-template-pluginjail  890M  9.45T  874M  /mnt/storage/jails/.warden-template-pluginjail
storage/jails/plexmediaserver_1            4.16G  9.45T  4.87G  /mnt/storage/jails/plexmediaserver_1
storage/media                              995G  9.45T  995G  /mnt/storage/media
storage/scambio_file                      53.2M  9.45T  320K  /mnt/storage/scambio_file
storage/scripts                            8.09M  9.45T  7.48M  /mnt/storage/scripts
[root@freenas] ~# zfs list -t snapshot| grep storage | grep data
storage/data@auto-20140621.1730-2w                              1.17G      -  44.0G  -
storage/data@auto-20140622.0030-2w                              1.42M      -  43.5G  -
storage/data@auto-20140622.1251-2w                              1.47M      -  43.5G  -
storage/data@auto-20140627.1536-2w                              22.6M      -  43.5G  -
[root@freenas] ~# zfs list -t snapshot | grep backup | grep data
backup/data@auto-20140621.1730-2w                                1.17G      -  44.0G  -
backup/data@auto-20140622.0030-2w                                1.37M      -  43.5G  -
backup/data@auto-20140622.1251-2w                                1.44M      -  43.5G  -
backup/data@auto-20140627.1536-2w                                    0      -  43.5G  -
[root@freenas] ~# date
Sat Jun 28 01:33:06 CEST 2014
[root@freenas] ~# ls /mnt/storage/data
./                          archivio/          ebooks/            updates_collection/
../                apps/              asus/              master/            zpark/
.windows            apps_acquistate/    documenti/          parked/
Asrock_desktop/           documenti ufficio/  scambio_file/
[root@freenas] ~# ls /mnt/backup/data
./  ../
[root@freenas] ~# ls /mnt/backup/data/.zfs
ls: /mnt/backup/data/.zfs: No such file or directory
[root@freenas] ~# zfs list
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
backup                                    1.20T  9.45T  432K  /mnt/backup
backup/.system                            54.8M  9.45T  352K  /mnt/backup/.system
backup/.system/cores                        496K  9.45T  288K  /mnt/backup/.system/cores
backup/.system/rrd                          496K  9.45T  288K  /mnt/backup/.system/rrd
backup/.system/samba4                      967K  9.45T  759K  /mnt/backup/.system/samba4
backup/.system/syslog                      52.2M  9.45T  52.0M  /mnt/backup/.system/syslog
backup/K50IN                                895M  9.45T  894M  /mnt/backup/K50IN
backup/clonezilla                          186G  9.45T  186G  /mnt/backup/clonezilla
backup/completed_downloads                  871K  9.45T  376K  /mnt/backup/completed_downloads
backup/data                                44.8G  9.45T  43.4G  /mnt/backup/data
backup/jails                              5.03G  9.45T  543K  /mnt/backup/jails
backup/jails/.warden-template-pluginjail    889M  9.45T  874M  /mnt/backup/jails/.warden-template-pluginjail
backup/jails/plexmediaserver_1            4.16G  9.45T  4.87G  /mnt/backup/jails/plexmediaserver_1
backup/media                                995G  9.45T  995G  /mnt/backup/media
backup/scambio_file                        53.2M  9.45T  320K  /mnt/backup/scambio_file
backup/scripts                            8.01M  9.45T  7.45M  /mnt/backup/scripts
storage                                    1.20T  9.45T  432K  /mnt/storage
storage/.system                            54.7M  9.45T  352K  /mnt/storage/.system
storage/.system/cores                      464K  9.45T  288K  /mnt/storage/.system/cores
storage/.system/rrd                        464K  9.45T  288K  /mnt/storage/.system/rrd
storage/.system/samba4                      983K  9.45T  807K  /mnt/storage/.system/samba4
storage/.system/syslog                    52.2M  9.45T  52.0M  /mnt/storage/.system/syslog
storage/K50IN                              895M  9.45T  894M  /mnt/storage/K50IN
storage/clonezilla                          186G  9.45T  186G  /mnt/storage/clonezilla
storage/completed_downloads                855K  9.45T  376K  /mnt/storage/completed_downloads
storage/data                              44.8G  9.45T  43.4G  /mnt/storage/data
storage/jails                              5.03G  9.45T  543K  /mnt/storage/jails
storage/jails/.warden-template-pluginjail  890M  9.45T  874M  /mnt/storage/jails/.warden-template-pluginjail
storage/jails/plexmediaserver_1            4.16G  9.45T  4.87G  /mnt/storage/jails/plexmediaserver_1
storage/media                              995G  9.45T  995G  /mnt/storage/media
storage/scambio_file                      53.2M  9.45T  320K  /mnt/storage/scambio_file
storage/scripts                            8.09M  9.45T  7.48M  /mnt/storage/scripts
[root@freenas] ~# ls /mnt/backup/data/.zfs/snapshot
ls: /mnt/backup/data/.zfs/snapshot: No such file or directory
[root@freenas] ~# ls /mnt/backup/data/.zfs/snapshots
ls: /mnt/backup/data/.zfs/snapshots: No such file or directory
[root@freenas] ~# zfs get all backup/data | less
NAME        PROPERTY              VALUE                  SOURCE
backup/data  type                  filesystem            -
backup/data  creation              Fri Jun 20 21:48 2014  -
backup/data  used                  44.8G                  -
backup/data  available            9.45T                  -
backup/data  referenced            43.4G                  -
backup/data  compressratio        1.06x                  -
backup/data  mounted              yes                    -
backup/data  quota                none                  default
backup/data  reservation          none                  default
backup/data  recordsize            128K                  default
backup/data  mountpoint            /mnt/backup/data      default
backup/data  sharenfs              off                    default
backup/data  checksum              on                    default
backup/data  compression          lz4                    inherited from backup
backup/data  atime                on                    default
backup/data  devices              on                    default
backup/data  exec                  on                    default
backup/data  setuid                on                    default
backup/data  readonly              off                    default
backup/data  jailed                off                    default
backup/data  snapdir              hidden                default
backup/data  aclmode              passthrough            inherited from backup
backup/data  aclinherit            passthrough            inherited from backup
backup/data  canmount              on                    default
backup/data  xattr                off                    temporary
backup/data  copies                1                      default
backup/data  version              5                      -
backup/data  utf8only              off                    -
backup/data  normalization        none                  -
backup/data  casesensitivity      sensitive              -
backup/data  vscan                off                    default
backup/data  nbmand                off                    default
backup/data  sharesmb              off                    default
backup/data  refquota              none                  default
backup/data  refreservation        none                  default
backup/data  primarycache          all                    default
backup/data  secondarycache        all                    default
backup/data  usedbysnapshots      1.43G                  -
backup/data  usedbydataset        43.4G                  -
backup/data  usedbychildren        0                      -
backup/data  usedbyrefreservation  0                      -
backup/data  logbias              latency                default
backup/data  dedup                off                    default
backup/data  mlslabel                                    -
backup/data  sync                  standard              default
backup/data  refcompressratio      1.06x                  -
backup/data  written              0                      -
backup/data  logicalused          45.7G                  -
backup/data  logicalreferenced    44.5G                  -
backup/data  volmode              default                default
[root@freenas] ~# zfs mount backup/data
cannot mount 'backup/data': filesystem already mounted
[root@freenas] ~# zfs unmount backup/data
[root@freenas] ~# zfs mount backup/data
[root@freenas] ~# ls /mnt/backup/data/.zfs/snapshots
ls: /mnt/backup/data/.zfs/snapshots: No such file or directory
[root@freenas] ~# ls /mnt/backup/data/
./                        archivio/          ebooks/            updates_collection/
../                apps/              asus/              master/            zpark/
.windows            apps_acquistate/    documenti/          parked/
Asrock_desktop/            documenti ufficio/  scambio_file/
[root@freenas] ~# ls /mnt/backup/data/.zfs/snapshots
ls: /mnt/backup/data/.zfs/snapshots: No such file or directory
[root@freenas] ~# ls /mnt/backup/data/.zfs/
./        ../      shares/  snapshot/
[root@freenas] ~# ls /mnt/backup/data/.zfs/snapshot
./                    auto-20140621.1730-2w/ auto-20140622.1251-2w/ auto-20140628.0135-2w/
../                    auto-20140622.0030-2w/ auto-20140627.1536-2w/
[root@freenas] ~#
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Is there another OS that can mount the pool readonly and see what is visible?
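If going that route, a read-only import on another ZFS-capable system (a FreeBSD live environment, for instance) would look something like this; the pool name is from this thread, the altroot path is illustrative:

```shell
# Import the pool read-only under an alternate root, so nothing on it
# can be modified while inspecting the data:
zpool import -o readonly=on -R /mnt backup
```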
 

panz

Guru
Joined
May 24, 2013
Messages
556
ZFS can send/receive with the destination pool set to readonly: I just tried

Code:
zfs set readonly=on /mnt/backup


and the replication still receives correctly.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
You cannot do
zfs set readonly=on /mnt/backup
OK, you can type it..., but it fails and does not change anything (zfs set takes a dataset name, not a mountpoint path). See my example below:
Code:
[root@freenas ~]# zfs get readonly ZFStest
NAME      PROPERTY  VALUE  SOURCE
ZFStest   readonly  off    local
[root@freenas ~]# zfs set readonly=on /mnt/ZFStest
cannot open '/mnt/ZFStest': invalid dataset name
[root@freenas ~]# zfs get readonly ZFStest
NAME      PROPERTY  VALUE  SOURCE
ZFStest   readonly  off    local
[root@freenas ~]# zfs set readonly=on ZFStest
[root@freenas ~]# zfs get readonly ZFStest
NAME      PROPERTY  VALUE  SOURCE
ZFStest   readonly  on      local
[root@freenas ~]#
So it is possible that you were not actually writing to a readonly dataset <<== this is how I had read your post: that you were writing to a readonly dataset using zfs receive

Please, just ignore me if I am confused
 

David E

Contributor
Joined
Nov 1, 2013
Messages
119
I ran into a similar problem which had me crazily confused. On my backup system I could ls -la within a replicated child dataset and it would show empty, despite there being tons of data in the source dataset; meanwhile replication was happily proceeding and not throwing any errors at all. It turned out I had to unmount and remount the dataset, and then the data showed up. Very weird.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I replicated my entire array to my new array using the CLI with the following command; I just needed to create a recursive snapshot of the pool first:

Code:
zfs send -vvR source@last_snapshot | zfs receive -F -d destination

According to Oracle there seem to be two major flavors of the replication process: the -r and -R options. I'm not too clear about the difference, except that -R seems to retain the structure (descendant datasets and properties) that the source dataset came from.

I was able to transfer in excess of 5TB of data in about 6-7 hours.

The -vv option outputs each dataset processed and its size. The last step of the replication process updates all the datasets to make them active within the system; otherwise they would not expose their contents.

I had to restart FreeNAS, otherwise I was not able to get all the datasets listed.

Once done, I was able to compare the presence of all the snapshots on both source and destination. All dataset permissions seem to have been updated on the destination datasets as well.

Doing the replication over PuTTY I had to log in as root, and while I created some shell scripts and output files to help check the replication steps, those files were not visible once I exited root; the dataset I was working in would look empty. Going to the FreeNAS web interface, I was then able to reassign the dataset to my user name, which made the entire content available again.

Before I tried the above command, I went a different route and wasn't too successful. I was actually replicating the main pool using:

Code:
zfs send -vv source@first_snapshot | zfs receive -F destination@first_snapshot


But only the datasets next in the tree were replicated; some that had child datasets of their own would be created, but no snapshots would be replicated.

To correct for it I had to run

Code:
zfs send -vv source/dataset1@first_snapshot | zfs receive -F destination/dataset1@first_snapshot


for every dataset, and then run an incremental replication:

Code:
zfs send -vv -i source@first_snapshot source@last_snapshot | zfs receive destination@first_snapshot
zfs send -vv -i source/dataset1@first_snapshot source/dataset1@last_snapshot | zfs receive destination/dataset1@first_snapshot

The major issue with this sequence is that it is easy to get confused, and it is extremely tedious. I didn't get everything right, and replicating a single dataset could easily take hours to finalize.

Also, as an experiment, I killed the replication task after a few snapshots had gone through and checked dataset availability: the dataset may not show in the FreeNAS storage tree, but the snapshots would still be there, as they appear when running the "zfs list -t snapshot" command. Cloning would work, and the contents seemed accessible as well. As I said, this method was a bit unclear and confusing.

However, it is clear that replicating the pool recursively without the -R option on the top command will not replicate the contents of the underlying datasets: it will just create the datasets, but not copy the snapshots into them.

Overall, before stating that a replication was successful, comparing the lists of snapshots is the safest way to go.

Also, when using the -R option, the "error getting the available space" message will show until the last snapshot within that dataset has been committed to the destination pool. It will keep showing even when every snapshot but the last has been committed.
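Condensing the working recursive approach described in this post into one sketch (snapshot names are illustrative):

```shell
# Full recursive replication: -R sends all descendant datasets, their
# snapshots and properties; -d on receive re-creates the source layout.
zfs snapshot -r storage@base
zfs send -R storage@base | zfs receive -F -d backup

# Later, an incremental update between two recursive snapshots:
zfs snapshot -r storage@update
zfs send -R -i storage@base storage@update | zfs receive -F -d backup
```

This avoids the per-dataset send loop entirely, since -R walks the dataset tree in one stream.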
 