/.system ate all my space!

Status
Not open for further replies.

murple

Cadet
Joined
Nov 30, 2014
Messages
3
Hello,

Long time listener, first time caller.

I am running FreeNAS 9.2.1.3.

I have one ZFS pool consisting of two mirrored disks.

A while ago I found out that my free space was basically 0. It seemed that automatic snapshots had eaten up all the space. The disk was extremely slow and there were several thousand snapshots.

I found a command and ran it:
Code:
zfs list -H -o name -t snapshot | xargs -n1 zfs destroy

This was successful and many gigabytes were freed.
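(For reference, a slightly safer variant would be to preview the list before destroying anything. This assumes the pool is simply named "pool", as in the output below; adjust the name for your own pool.)
Code:
# Preview which snapshots would be destroyed (read-only; assumes the pool is named "pool")
zfs list -H -o name -t snapshot -r pool
# If the list looks right, destroy them one at a time
zfs list -H -o name -t snapshot -r pool | xargs -n1 zfs destroy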

Now I have checked back again and free space is at zero again. In the reporting graphs I can see that free space is decreasing while used space is not increasing.

I ran:
Code:
zfs list -o space

and result is:
Code:
NAME                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool                     0  1.78T         0   1.29T              0       506G
pool/.system             0   505G         0    168K              0       505G
pool/.system/cores       0   144K         0    144K              0          0
pool/.system/samba4      0  3.54M         0   3.54M              0          0
pool/.system/syslog      0   505G         0    505G              0          0
pool/jail                0  1.05G         0   1.05G              0          0
pool/plugins             0   303M         0    303M              0          0

Something under /.system (the USEDCHILD column) is hogging 505 GB. I have no idea what this is, and I couldn't find any information about it by Googling either.

I hope you can help me understand this.

Best regards,
murple
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Your syslog seems to be absolutely gigantic. What the hell happened here?

Did you happen to run very frequent snapshots on the system dataset?
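If you want to check, something like this should show how many snapshots exist under the system dataset (assuming it sits at pool/.system, as in your listing):
Code:
# Count the snapshots under the system dataset (assumes the dataset is pool/.system)
zfs list -H -t snapshot -r pool/.system | wc -l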
 

murple

Cadet
Joined
Nov 30, 2014
Messages
3
Yes, I probably set up some stupid snapshot schedule and left it running for a year or two. Am I screwed now?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
You can safely destroy the .system dataset with "zfs destroy -fR pool/.system" and reboot. Then fix your snapshot schedule.
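Roughly like this, assuming your pool is named "pool" as in your output (double-check that nothing you care about lives under .system first):
Code:
# See what is about to be removed (read-only)
zfs list -o space -r pool/.system
# Destroy .system and all of its children, then reboot
zfs destroy -fR pool/.system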
 

murple

Cadet
Joined
Nov 30, 2014
Messages
3
I ran that, and here is the result:
Code:
NAME          AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool           505G  1.29T         0   1.29T              0      1.40G
pool/jail      505G  1.05G         0   1.05G              0          0
pool/plugins   505G   303M         0    303M              0          0

Worked wonders! Thank you very much!
 