trying to snapshot but out of space

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
Hi,

I am trying to migrate from FreeNAS 11.1-U7 to TrueNAS 13.0-U3.1. I have 2 datasets on the old box. I want to use replication to move the data over, but I need to take a snapshot in order to do that. I have read a few other posts where people seem to have similar issues, but I haven't found a way to free up some space (hopefully it's actually possible) so I can take one snapshot and begin replicating. Here is the output from some of the commands I found in other posts to help troubleshoot.

Code:
Apr 20 17:06:03 lsqus008 /autosnap.py: [tools.autosnap:535] Failed to create snapshot 'DATA2/DATA2@auto-20230420.1706-1w': cannot create snapshot 'DATA2/DATA2@auto-20230420.1706-1w': out of space
Apr 20 17:07:04 lsqus008 /autosnap.py: [tools.autosnap:535] Failed to create snapshot 'DATA2/DATA2@auto-20230420.1707-1w': cannot create snapshot 'DATA2/DATA2@auto-20230420.1707-1w': out of space
root@lsqus008:~ # zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
DATA1         1.81T   185M  1.81T         -     0%     0%  1.00x  ONLINE  /mnt
DATA2         5.44T  2.07T  3.37T         -    20%    37%  1.00x  ONLINE  /mnt
freenas-boot    58G  1.57G  56.4G         -      -     2%  1.00x  ONLINE  -
root@lsqus008:~ # zfs list
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
DATA1                                                   1.27T   493G    88K  /mnt/DATA1
DATA1/.system                                            183M   493G    96K  legacy
DATA1/.system/configs-b3147af497d748f98f845f65b942be53   177M   493G   177M  legacy
DATA1/.system/cores                                     4.84M   493G  4.84M  legacy
DATA1/.system/rrd-b3147af497d748f98f845f65b942be53        88K   493G    88K  legacy
DATA1/.system/samba4                                     124K   493G   124K  legacy
DATA1/.system/syslog-b3147af497d748f98f845f65b942be53     88K   493G    88K  legacy
DATA1/DATA1                                             1.27T  1.76T    56K  -
DATA2                                                   2.55T   982G   176K  /mnt/DATA2
DATA2/DATA2                                             2.55T  2.13T  1.38T  -
freenas-boot                                            1.57G  54.6G    64K  none
freenas-boot/ROOT                                       1.56G  54.6G    29K  none
freenas-boot/ROOT/11.1-U7                               1.55G  54.6G   751M  /
freenas-boot/ROOT/Initial-Install                          1K  54.6G   836M  legacy
freenas-boot/ROOT/default                               2.57M  54.6G   838M  legacy
freenas-boot/grub                                       6.96M  54.6G  6.96M  legacy
root@lsqus008:~ # zfs list -o space,quota
NAME                                                    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD  QUOTA
DATA1                                                    493G  1.27T         0     88K              0      1.27T   none
DATA1/.system                                            493G   183M         0     96K              0       183M   none
DATA1/.system/configs-b3147af497d748f98f845f65b942be53   493G   177M         0    177M              0          0   none
DATA1/.system/cores                                      493G  4.84M         0   4.84M              0          0   none
DATA1/.system/rrd-b3147af497d748f98f845f65b942be53       493G    88K         0     88K              0          0   none
DATA1/.system/samba4                                     493G   124K         0    124K              0          0   none
DATA1/.system/syslog-b3147af497d748f98f845f65b942be53    493G    88K         0     88K              0          0   none
DATA1/DATA1                                             1.76T  1.27T         0     56K          1.27T          0      -
DATA2                                                    982G  2.55T         0    176K              0      2.55T   none
DATA2/DATA2                                             2.13T  2.55T         0   1.38T          1.17T          0      -
freenas-boot                                            54.6G  1.57G         0     64K              0      1.57G   none
freenas-boot/ROOT                                       54.6G  1.56G         0     29K              0      1.56G   none
freenas-boot/ROOT/11.1-U7                               54.6G  1.55G      840M    751M              0          0   none
freenas-boot/ROOT/Initial-Install                       54.6G     1K         0      1K              0          0   none
freenas-boot/ROOT/default                               54.6G  2.57M         0   2.57M              0          0   none
freenas-boot/grub                                       54.6G  6.96M         0   6.96M              0          0   none
root@lsqus008:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/11.1-U7@2018-05-04-16:58:20  2.18M      -   836M  -
freenas-boot/ROOT/11.1-U7@2020-02-21-18:31:53  4.67M      -   838M  -
root@lsqus008:~ #
root@lsqus008:~ # zfs list -t volume DATA2/DATA2
NAME          USED  AVAIL  REFER  MOUNTPOINT
DATA2/DATA2  2.55T  2.13T  1.38T  -
root@lsqus008:~ # zfs list -t snapshot -o space
NAME                                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
freenas-boot/ROOT/11.1-U7@2018-05-04-16:58:20      -  2.18M         -       -              -          -
freenas-boot/ROOT/11.1-U7@2020-02-21-18:31:53      -  4.67M         -       -              -          -
root@lsqus008:~ #


DATA2/DATA2 is what I need to snapshot. As far as I understand, there should be enough space. It is a zvol that I am using as an iSCSI target.

Thanks in advance
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It is possible that you have limits set:

Code:
> zfs get all pool/vol
NAME      PROPERTY        VALUE  SOURCE
pool/vol  snapshot_limit  none   default
pool/vol  snapshot_count  none   default
That shows a newly created zVol without any snapshot limit set, nor any snapshots.
 

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
I ran that command for "DATA2/DATA2". It reports that there are no limits or existing snapshots.

Code:
NAME         PROPERTY                 VALUE                    SOURCE
DATA2/DATA2  type                     volume                   -
DATA2/DATA2  creation                 Wed May  9 22:26 2018    -
DATA2/DATA2  used                     2.55T                    -
DATA2/DATA2  available                2.13T                    -
DATA2/DATA2  referenced               1.38T                    -
DATA2/DATA2  compressratio            1.06x                    -
DATA2/DATA2  reservation              none                     default
DATA2/DATA2  volsize                  2.51T                    local
DATA2/DATA2  volblocksize             16K                      -
DATA2/DATA2  checksum                 on                       default
DATA2/DATA2  compression              lz4                      inherited from DATA2
DATA2/DATA2  readonly                 off                      default
DATA2/DATA2  copies                   1                        default
DATA2/DATA2  refreservation           2.55T                    local
DATA2/DATA2  primarycache             all                      default
DATA2/DATA2  secondarycache           all                      default
DATA2/DATA2  usedbysnapshots          0                        -
DATA2/DATA2  usedbydataset            1.38T                    -
DATA2/DATA2  usedbychildren           0                        -
DATA2/DATA2  usedbyrefreservation     1.17T                    -
DATA2/DATA2  logbias                  latency                  default
DATA2/DATA2  dedup                    off                      default
DATA2/DATA2  mlslabel                                          -
DATA2/DATA2  sync                     standard                 default
DATA2/DATA2  refcompressratio         1.06x                    -
DATA2/DATA2  written                  1.38T                    -
DATA2/DATA2  logicalused              1.40T                    -
DATA2/DATA2  logicalreferenced        1.40T                    -
DATA2/DATA2  volmode                  default                  default
DATA2/DATA2  snapshot_limit           none                     default
DATA2/DATA2  snapshot_count           none                     default
DATA2/DATA2  redundant_metadata       all                      default
DATA2/DATA2  org.freenas:description                           local


Any other ideas?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Is iSCSI active during the snapshot attempt? You may need to temporarily stop iSCSI to quiesce the zvol before snapshotting.
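For reference, the GUI Services page is the usual way to stop iSCSI; from the shell on FreeNAS 11 (FreeBSD), the iSCSI target is served by the ctld daemon, so a sketch like this should work (treat the service handling as an assumption, since the middleware normally manages it, and the snapshot name is illustrative):

Code:
# Disconnect initiators first, then stop the iSCSI target daemon
service ctld stop

# Try the snapshot while nothing is writing to the zvol
# (snapshot name is illustrative)
zfs snapshot DATA2/DATA2@migration

# Restart the target afterwards
service ctld start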
 

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
Yes, iSCSI is running. I will disconnect the drive and stop the iSCSI service later this evening and try again.
 

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
Sadly, that didn't work either. I even rebooted the FreeNAS server just in case. It still fails to take a snapshot. Am I right in thinking that there is 2.13 TB of available space? The one bit I don't really understand is the refreservation at 2.55 TB. I see other people talking about it, but I don't know if this is causing a problem or why it's so high.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Remember, ZFS is a copy-on-write filesystem. When you write into a zvol, a new block replaces the old block, and once a snapshot exists the old block has to be kept around too. The refreservation is space guaranteed to the zvol so that every block in it can always be rewritten; for a non-sparse zvol that's roughly the full volsize, which is why yours sits at 2.55 TB. To take a snapshot, the pool has to pin the blocks the zvol currently references and still be able to honor that full 2.55 TB reservation, and there isn't enough free space left to do both. Your only option is to enlarge your pool.
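To put rough numbers on it, from the zfs get output above (a back-of-the-envelope sketch, assuming the usual rule that snapshotting a zvol with a refreservation needs extra free space roughly equal to what the zvol currently references):

Code:
volsize ................ 2.51T  (size presented over iSCSI)
refreservation ......... 2.55T  (space guaranteed for rewrites)
referenced ............. 1.38T  (usedbydataset)
usedbyrefreservation ... 1.17T  (refreservation minus referenced)

A snapshot pins the 1.38T of referenced blocks, so ZFS would have to
reserve roughly another 1.38T to keep honoring the 2.55T refreservation.
Pool DATA2 only has 982G available at the top level, hence "out of space".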
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
I take it you don't have any old snapshots you can remove?

Also, ZFS performance starts to degrade when you get your pools over 80ish percent full. You probably should enlarge your pool already anyway.
 

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
I see. It's going to be a bit more work than I was hoping, but that should be possible: the DATA1 dataset is now redundant, so I can delete it and use its 2 drives to expand the pool.

I don't suppose it's possible to do a replication without the snapshot, if everything is offlined so nothing will change until the replication has completed? Even if that is possible, it might not be an option due to the downtime it would probably require to copy the 1.5 TB on the drives.

One other thought that crossed my mind: is it possible to reduce the zvol size? The actual data on the drives is only 1.5 TB, like I said. I would be nervous about shrinking it though, as this is production, and even though I have backups, restoring could take a while.
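For reference, a zvol's size is the volsize property, and shrinking is the risky direction: anything stored past the new size is discarded immediately, so the filesystem inside (the one the iSCSI client formatted) would have to be shrunk below the new size first. Purely as an illustration, not a recommendation for a production volume:

Code:
# Illustrative only: shrink the zvol to 2T. Data beyond 2T is lost,
# so shrink the client's filesystem/partition below 2T beforehand.
zfs set volsize=2T DATA2/DATA2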
 

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
I take it you don't have any old snapshots you can remove?

Also, ZFS performance starts to degrade when you get your pools over 80ish percent full. You probably should enlarge your pool already anyway.
No, I never took any snapshots before. You are right, but since I am migrating the data to a new box, I'm not too worried about that. I just need a way to move it over, really.
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
OK, what about moving some of your data off the array via USB or something to clear up some space? I'm not sure whether that will actually free enough space, though, given that ZFS is copy-on-write.

Maybe the ZFS gurus can answer that.
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
What about using rsync over SSH to copy the files? Make duplicate datasets on the new box, then use rsync to copy the data one dataset at a time.
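A minimal sketch of that, assuming root SSH access and illustrative paths and hostname (this only applies to regular file datasets, not zvols, as noted below):

Code:
# Copy one dataset at a time; -a preserves permissions, ownership and
# timestamps, -z compresses in transit, --progress reports status.
# Trailing slashes copy the contents rather than the directory itself.
rsync -avz --progress /mnt/DATA1/ root@new-truenas:/mnt/DATA1/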
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
Of course, if you have zVols, that's not gonna work. That's one reason I use NFS instead of iSCSI for my VM storage.
 

kevsworld

Cadet
Joined
Apr 20, 2023
Messages
7
Of course, if you have zVols, that's not gonna work. That's one reason I use NFS instead of iSCSI for my VM storage.
Yeah, that's not an option sadly. I could copy the files using robocopy or suchlike via SMB, as that is what the iSCSI volume is serving, but I was hoping to avoid that if at all possible.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
This is a kludge, but it will copy the zvol's raw contents to another zvol without needing a snapshot.
Use dd to read /dev/zvol/DATA2/DATA2 and write it to STDOUT (leave off the of= argument and dd writes to standard output).
Use netcat to transport the STDOUT stream to another system.
On the other system, create an empty zvol slightly larger than your source zvol. Then run netcat to receive the stream, and pipe that into dd writing to /dev/zvol/path/to/receiving/zvol.
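Put together, a sketch of the transfer (hostname, port, and destination pool/zvol names are illustrative; nc flags vary between netcat implementations, and dd reads the entire volsize, so this moves the full 2.51T, not just the 1.38T of data):

Code:
# On the receiving box: create a zvol at least as large as the source,
# then listen on a port and write the incoming stream into it.
zfs create -V 2.6T tank/DATA2
nc -l 3333 | dd of=/dev/zvol/tank/DATA2 bs=1M

# On the source box, with iSCSI stopped so the volume is quiescent:
dd if=/dev/zvol/DATA2/DATA2 bs=1M | nc new-truenas 3333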
 

2twisty

Contributor
Joined
Mar 18, 2020
Messages
145
Sounds like robocopy is in your future. :( I've had to do that kind of thing before too.
 