Sudden Reduction in Total Disk Space

Status
Not open for further replies.

jlau3450

Dabbler
Joined
Dec 30, 2013
Messages
10
Searching around, I have the same problem as this person. His fix (detach/reimport pool) did not work for me.

FreeNAS-9.2.0-RELEASE-x64 (ab098f4)
6 x 4TB disks in RAID Z2 pool for a total of 16TB, or a de facto 14TB of usable storage.
Recycling Bin is NOT enabled for CIFS share, my only share.
I use btsync, that's it.

PROBLEM:
Upon deleting about 3TB worth of files, my total available storage dropped from 14TB to 11TB, rather than freeing up 3TB of the 14TB total. I also had about 1GB left of my 14TB when I decided to delete the 3TB, and since the total shrank to 11TB, that 1GB has been progressively getting smaller by the minute until it hit 0 bytes, as seen in the screenshots below. (Btsync was stopped and so could not have contributed to this.)
Detaching and reimporting the volume did not help.

Does anyone know what's wrong?


zpool list and df -h results:
[Screenshot: zpool list and df -h output]


Here are the capacity displays from FreeNAS and Windows:
[Screenshot: FreeNAS capacity display]
[Screenshot: Windows capacity display]
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
What about gpart show? Do you have the same amount (likely 3.7TiB) allocated to freebsd-zfs on each drive?
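
A quick way to compare that from the shell, for anyone following along (a sketch; the adaX device name is only an example, use whatever disks your system actually has):

Code:
# list every partition table; each data disk should show a freebsd-zfs
# partition of the same size (about 3.7T in this case)
gpart show
# or narrow it down to a single disk, e.g.:
gpart show ada0 | grep freebsd-zfs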
 

jlau3450

Dabbler
Joined
Dec 30, 2013
Messages
10
What about gpart show? Do you have the same amount (likely 3.7TiB) allocated to freebsd-zfs on each drive?


I do indeed have snapshots enabled, at 2-minute intervals, which I assume accounts for why the storage has been going down. Thanks, good thinking. The bigger problem remains nonetheless.
==============

Here are the metrics for gpart show; it is indeed 3.7TiB per drive.
[Screenshot: gpart show output]
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Do you have .system? What is its size?

zdb -C RAIDZ2 | grep asize
Is it around 24000000000000?
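
For reference, the arithmetic behind that figure (a sketch; RAIDZ2 is assumed to be the pool name, as in the screenshots): asize is the raw allocatable size of the raidz2 vdev, parity included, so 6 disks x ~4,000,000,000,000 bytes comes to roughly 24,000,000,000,000 bytes.

Code:
# print the cached pool configuration and pull out the vdev size/ashift
zdb -C RAIDZ2 | grep -E 'asize|ashift'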
 

jlau3450

Dabbler
Joined
Dec 30, 2013
Messages
10
Hey solaris, check the gpart image; there seems to be an issue with the last entry. We posted within seconds of each other, so here's my post just in case you missed it.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
No. Let me stress that: no delete operation would give you your storage back. The deleted files would now exist only in snapshots, as opposed to being in both the snapshot(s) and the real dataset.

Stop and delete snapshots...
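
Before destroying anything, it can help to see how much space the snapshots are actually pinning down (a sketch; run it against your own pool and datasets):

Code:
# the USEDSNAP column is space referenced only by snapshots; it is what
# gets freed when those snapshots are destroyed
zfs list -o space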
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I think that everything is fine with your gpart output. The last part of your output is the FreeNAS operating system, which resides on a USB memory device.
 

jlau3450

Dabbler
Joined
Dec 30, 2013
Messages
10
I meant that the 2-minute snapshots explain why my 1GB went down to 0 bytes, but that doesn't solve the issue of 14TB dropping to 11TB. I trust that "no delete operation would give you your storage back" referred to the latter, i.e. that deleting snapshots won't get me back from 11TB to 14TB?
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
I have to leave, and I might not be here for another 24 hours. If you can, try to stop and delete the snapshots (make them expire very soon). I think that will give you your space back.
 

jlau3450

Dabbler
Joined
Dec 30, 2013
Messages
10
Thanks, Solaris. After deleting 400 of the 5,700 snapshots, it only seems to be increasing my free space (from 0 bytes to 45 GB), not my total space. Does anyone else have any suggestions?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
df is not compatible with ZFS. Those numbers are bogus and mean nothing. So thinking you only have 11TB is wrong. ;)

df and du are both pretty 'broken' because they do not understand ZFS's snapshots, reservations, etc.
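
To see the numbers ZFS itself reports instead, something like this works (a sketch; the pool name RAIDZ2 and mount point /mnt/RAIDZ2 are assumptions based on the screenshots):

Code:
# pool-level view: raw size and allocation, parity included
zpool list RAIDZ2
# dataset-level view: used/available after redundancy, snapshots, reservations
zfs list -r RAIDZ2
# df only sees the mounted filesystem and knows nothing about the rest
df -h /mnt/RAIDZ2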
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I don't know for your exact scenario, but I'd bet money it has to do with not fully understanding how snapshots, reservations, and quotas work. That's what 99% of the people who think they have free-space problems don't understand.
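
One way to check whether any of those are in play (a sketch; substitute your actual pool/dataset name for RAIDZ2):

Code:
# properties that commonly explain "missing" space
zfs get used,available,referenced,usedbysnapshots,reservation,refreservation,quota,refquota RAIDZ2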
 

jlau3450

Dabbler
Joined
Dec 30, 2013
Messages
10
Is there a way to delete snapshots en masse using the command line? Detaching/reimporting my volume made me lose the snapshot settings entry that was originally set up on the volume, so I can't just make them all expire as Solaris previously suggested.
 

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
Is there a way to delete snapshots en masse using the command line? Detaching/reimporting my volume made me lose the snapshot settings entry that was originally set up on the volume, so I can't just make them all expire as Solaris previously suggested.



http://docs.oracle.com/cd/E19120-01/open.solaris/817-2271/ghzuk/index.html

look at point 9.

You need to change it a bit; I think "@auto-" is what you are after, but it has been some time since I looked at this.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Code:
# count how many snapshots exist before you start
zfs list -H -o name -t snapshot | wc -l
# switch to bash so the loop syntax below works (the FreeNAS root shell is csh)
bash
for std in `zfs list -H -o name -t snapshot | grep @auto`
do
    zfs destroy $std
done
# check what is left, then leave bash
zfs list -H -o name -t snapshot
exit


Assuming that you want to destroy each and every automatic snapshot.

std = snapshot to destroy (the name could be anything; I just chose a meaningful one)
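
If you want to preview things first, you can count the matching snapshots before running the loop, and the same job can be done as a one-liner with xargs (both are sketches; the @auto pattern assumes the default periodic-snapshot naming):

Code:
# how many automatic snapshots would be destroyed
zfs list -H -o name -t snapshot | grep @auto | wc -l
# destroy them one at a time
zfs list -H -o name -t snapshot | grep @auto | xargs -n 1 zfs destroy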
 
Joined
Sep 26, 2015
Messages
1
Hi, I'm observing the same behavior on my FreeNAS system.
Some of you seem to be overlooking the facts and quickly judging our "knowledge" of snapshots and such… The problem was never with the "available space left"; the problem is that the total size of the volume is decreasing, and THAT just doesn't make sense.
Let me describe what I see:
2 years ago, I created a single volume with a single dataset.
Snapshots are NOT enabled. I don't have any jails. Just a simple CIFS share with the recycle bin disabled.
After I originally created my volume and dataset, the volume size was 17.1TB.
I could see that from the mapped drive of the share in Windows Explorer, as well as in the FreeNAS UI, where the volume's available space (17.0) plus used space (0.1) came to 17.1TB.
Now, 2 years later, the mapped drive in Windows shows a total size of 16.5, and the FreeNAS UI for my volume shows 4.5TB available and 12.1 used… for 16.5TB.
How can the TOTAL size of the volume decrease? And why do Windows and the FreeNAS UI see a difference between now and 2 years ago?
In any other software or hardware RAID I have used in my life, I have never seen the total size of an array or volume (or whatever you want to call it) DECREASE. Only the available space fluctuates, depending on what you do with it.
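
For anyone comparing the two views later, the raw pool size and the dataset accounting come from different commands (a sketch; "tank" is a placeholder pool name):

Code:
# raw vdev size, parity included; this SIZE figure should stay constant
zpool list tank
# what the filesystems report: USED + AVAIL is what the FreeNAS UI and a
# Windows mapped drive show, and it can differ from the raw size because
# of parity, reservations, snapshots and metadata
zfs list -o name,used,available,referenced,usedbysnapshots -r tank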
 