Plex jail filling up my drive

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
This is the third time this has happened and I don't understand why. When this happens the dashboard shows my NAS as completely full and I can't save files to it.
It looks like the Plex jail has copied everything in my main file share?? For example, the Music folder is not even monitored by Plex, just the TV and Movies folders.

The first time this happened I ran ncdu and it showed my file system as normal; it did eventually go back to normal and I had no idea why it happened. This time I caught the output below. After seeing it was Plex, I told it to update the jail (no updates found) and then told it to restart the jail. After the restart I was back to normal and the NAS was 55% full, where it should be.

ncdu and df -h appear to be telling me different things, or maybe I'm just not understanding them.

Code:
root@parentstruenas[~]# df -h
Filesystem               Size   Used   Avail  Capacity  Mounted on
boot-pool/ROOT/13.0-U4   214G   1.3G   213G   1%        /
~
parents/busync           14T    13T    1.4T   90%       /mnt/parents/busync
parents/iocage           1.4T   8.6M   1.4T   0%        /mnt/parents/iocage
~

The 7.9 TiB in use for busync (shown by ncdu below) is correct; how the heck is iocage showing the same?

Code:
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
--- /mnt/parents -------------------------------
    7.9 TiB [#############################] /iocage
    7.9 TiB [############################ ] /busync

 Total disk usage:  15.8 TiB   Apparent size:  16.0 TiB   Items: 820509

and

Code:
ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
--- /mnt/parents/iocage/jails/pms/root/mnt --
            /..
    3.1 TiB [#############################] /Backup to server
    2.7 TiB [######################### ] /TV
    2.0 TiB [################## ] /Movies
   40.2 GiB [ ] /Music
    6.0 KiB [ ] .DS_Store

 Total disk usage:   7.9 TiB   Apparent size:   8.0 TiB   Items: 205665
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Look at zfs list first...

zfs list -o name,used parents/iocage
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
It says 8.3 GB right this minute, which still seems high. I will check in again tomorrow, seems like this is happening daily so we will see.
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
Tiny increase overnight. Will check back when things get out of control again.

Code:
root@parentstruenas[~]# zfs list -o name,used parents/iocage
NAME            USED
parents/iocage  8.45G
root@parentstruenas[~]# zfs list -o name,used parents
NAME     USED
parents  7.91T
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Well, 8GB is just about 0.1% of 8TB, so I'm not sure if you need to be worrying about such a small amount in your pool.

The Plex DB will (unless you redirected it elsewhere with a mount) be in your jail filesystem and will grow as new content is added to your library; nothing shocking about that.
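If you want to see how much of that is actually the Plex metadata, a rough check from the host might look like this (just a sketch; the jail name pms comes from this thread, and the plexdata path is an assumption, so adjust it to wherever your install keeps its data):

Code:
# What the jail's root dataset actually holds on disk, as ZFS sees it
zfs list -o name,used,refer parents/iocage/jails/pms/root

# Rough size of the Plex application data, viewed from the host
# (/usr/local/plexdata is an assumption -- your plugin may use a different path)
du -sh /mnt/parents/iocage/jails/pms/root/usr/local/plexdata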
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
OK, now the dashboard says we are 69% full. I think I can see your point about the Plex DB itself, but where is the rest of the disk space going? It's like it's a mirror of the main file share. iocage shouldn't be 4+ TB, should it? The content Plex needs to play is in a read-only mount; is that what you were referring to?

Code:
root@parentstruenas[~]# zfs list -o name,used parents/iocage
NAME            USED
parents/iocage  8.55G

root@parentstruenas[~]# df -h
Filesystem      Size  Used  Avail  Capacity  Mounted on
parents/busync  14T   10T   4.4T   69%       /mnt/parents/busync
parents/iocage  4.4T  8.7M  4.4T   0%        /mnt/parents/iocage

ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
--- /mnt/parents ---
    8.1 TiB [########################] /iocage
    8.1 TiB [####################### ] /busync

ncdu 1.16 ~ Use the arrow keys to navigate, press ? for help
--- /mnt/parents/iocage/jails/pms/root/mnt ---
            /..
    3.1 TiB [########################] /Backup to server
    2.7 TiB [#################### ] /TV
    2.2 TiB [################ ] /Movies
   40.2 GiB [ ] /Music
    6.0 KiB [ ] .DS_Store
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Do you have snapshots enabled? Each snapshot will keep using space for ALL the data that has changed since it was taken.

zfs list -t snapshot | grep /mnt/parents/iocage/jails/pms/

Additionally, Plex can cache things like transcoded media. If it's caching to the jail, the jail will grow.
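Something like this should show how much of the jail dataset is held by snapshots vs live data (just a sketch, dataset names taken from this thread):

Code:
# per-dataset split of space held by snapshots vs the live dataset
zfs list -r -o name,used,usedbysnapshots,usedbydataset parents/iocage/jails/pms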
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
df and ncdu may be following nullfs mounts to add to the totals.

What the disk is really holding is covered by zfs list.
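One way to test that is to list the nullfs mounts and re-run ncdu so it stays on a single filesystem (a sketch; these are standard flags, but double-check on your versions):

Code:
# show the nullfs mounts that df/ncdu may be double counting
mount -t nullfs

# re-check usage without crossing mount points
ncdu -x /mnt/parents/iocage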
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
do you have snapshots enabled? each snapshot will use space for ALL changes.

zfs list -t snapshot | /mnt/parents/iocage/jails/pms/

additionally, PLEX can cache things like trans-coded media. if it's caching to the jail, the jail will grow.

Interesting. But I don't think that's the space use issue.

Code:
root@parentstruenas[~]# zfs list -t snapshot | /mnt/parents/iocage/jails/pms/
zsh: permission denied: /mnt/parents/iocage/jails/pms/
root@parentstruenas[~]# sudo zfs list -t snapshot | /mnt/parents/iocage/jails/pms/
zsh: permission denied: /mnt/parents/iocage/jails/pms/
Sorry, user root is not allowed to execute '/usr/local/sbin/zfs list -t snapshot' as root on parentstruenas.local.

Inside the jail it's "command not found" and no datasets are available.
 

Attachments

  • snapshots.png

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
df and ncdu may be following nullfs mounts to add to the totals.

What the disk is really holding is covered by zfs list.
That's what I was wondering: this isn't REAL disk usage, just an accounting error.

BUT the reason I'm here in the first place is that the NAS sent me 80, 90, and 100% full notifications, and then my rsync tasks that push backups to this NAS started failing. So honestly it's really acting as if the drive is full.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
zfs list -t snapshot | /mnt/parents/iocage/jails/pms/
This needs to be different in order to work... you're working with zfs which doesn't know anything about mounted paths...

zfs list -t snapshot | grep parents/iocage/jails/pms

On top of that, you might just want to do something clearer and get the output for the main filesystem with this:

zfs list | grep parents/iocage

That should list out each of the jails' root filesystems and show you if there's anything there that's of concern.

You can also widen the snapshot list if that hasn't shown you where the problem really is:

zfs list -t snapshot | grep parents/iocage
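There's also the space shorthand if you want the whole breakdown (snapshots vs dataset vs children) in one go; same information, just a convenience:

Code:
# per-dataset breakdown: avail, used, usedsnap, usedds, usedrefreserv, usedchild
zfs list -r -o space parents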
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
zfs list -t snapshot | parents/iocage/jails/pms
Shouldn't that read zfs list -t snapshot | grep parents/iocage/jails/pms?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
My bad, I forgot the grep. Corrected the original post.
zfs list -t snapshot | grep /mnt/parents/iocage/jails/pms/
I believe there is a way to do that within the zfs list command, but I never remember it.
This would not work within a jail.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
zfs list -t snapshot -r pool/path/to/dataset

Depending on the total number of snapshots on your system this might be orders of magnitude faster than using grep.
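For the dataset in this thread that would be, for example:

Code:
zfs list -t snapshot -r parents/iocage/jails/pms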
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
At the moment I think we are looking for a little over 2.5 TB. The NAS that is sending data to this one is only using 8.19 TB, and that includes some things that don't get sent to this remote backup. So this remote NAS should always have less than 8.19 TB, but somehow it has inflated to 10.9 TB right now.

Code:
root@parentstruenas[~]# zfs list -o name,used parents
NAME     USED
parents  10.9T

The data above came from this NAS:

Code:
root@BayberryFreeNAS:~ # zfs list -o name,used Main
NAME  USED
Main  8.19T

The grep one returned nothing, so I took out the pipe and came up with this.

Code:
root@parentstruenas[~]# zfs list -t snapshot | grep /mnt/parents/iocage/jails/pms/
root@parentstruenas[~]# zfs list -t snapshot
NAME                                                                           USED  AVAIL  REFER  MOUNTPOINT
boot-pool/ROOT/13.0-U4@2023-02-10-18:32:53                                    1.84M      -  1.29G  -
boot-pool/ROOT/13.0-U4@2023-03-18-05:31:25                                    1.93M      -  1.29G  -
parents/.system/samba4@wbc-1676154798                                          176K      -   389K  -
parents/.system/samba4@wbc-1676238683                                          144K      -   373K  -
parents/.system/samba4@wbc-1676588044                                          149K      -   389K  -
parents/.system/samba4@wbc-1676740468                                          149K      -   389K  -
parents/.system/samba4@update--2023-03-18-12-32--13.0-U3.1                     139K      -   389K  -
parents/.system/samba4@wbc-1679142867                                          123K      -   389K  -
parents/iocage/jails/pms@ioc_update_13.1-RELEASE-p7_2023-05-05_17-29-53        346K      -   400K  -
parents/iocage/jails/pms@ioc_update_13.1-RELEASE-p7_2023-05-06_17-20-15        357K      -   432K  -
parents/iocage/jails/pms/root@ioc_update_13.1-RELEASE-p7_2023-05-05_17-29-53   540M      -  2.93G  -
parents/iocage/jails/pms/root@ioc_update_13.1-RELEASE-p7_2023-05-06_17-20-15   540M      -  3.19G  -



Code:
root@parentstruenas[~]# zfs list | grep parents/iocage
parents/iocage                             8.60G  3.52T  8.73M  /mnt/parents/iocage
parents/iocage/download                     835M  3.52T   128K  /mnt/parents/iocage/download
parents/iocage/download/13.0-RELEASE        401M  3.52T   401M  /mnt/parents/iocage/download/13.0-RELEASE
parents/iocage/download/13.1-RELEASE        434M  3.52T   434M  /mnt/parents/iocage/download/13.1-RELEASE
parents/iocage/images                       128K  3.52T   128K  /mnt/parents/iocage/images
parents/iocage/jails                       4.44G  3.52T   128K  /mnt/parents/iocage/jails
parents/iocage/jails/pms                   4.44G  3.52T   421K  /mnt/parents/iocage/jails/pms
parents/iocage/jails/pms/root              4.44G  3.52T  3.21G  /mnt/parents/iocage/jails/pms/root
parents/iocage/log                          133K  3.52T   133K  /mnt/parents/iocage/log
parents/iocage/releases                    3.34G  3.52T   128K  /mnt/parents/iocage/releases
parents/iocage/releases/13.0-RELEASE       1.63G  3.52T   128K  /mnt/parents/iocage/releases/13.0-RELEASE
parents/iocage/releases/13.0-RELEASE/root  1.63G  3.52T  1.63G  /mnt/parents/iocage/releases/13.0-RELEASE/root
parents/iocage/releases/13.1-RELEASE       1.71G  3.52T   128K  /mnt/parents/iocage/releases/13.1-RELEASE
parents/iocage/releases/13.1-RELEASE/root  1.71G  3.52T  1.71G  /mnt/parents/iocage/releases/13.1-RELEASE/root
parents/iocage/templates                    128K  3.52T   128K  /mnt/parents/iocage/templates

root@parentstruenas[~]# df -h
Filesystem                                 Size   Used  Avail  Capacity  Mounted on
boot-pool/ROOT/13.0-U4                     214G   1.3G   213G        1%  /
devfs                                      1.0K   1.0K     0B      100%  /dev
tmpfs                                       32M    10M    22M       32%  /etc
tmpfs                                      4.0M   8.0K   4.0M        0%  /mnt
tmpfs                                      2.6G    27M   2.6G        1%  /var
fdescfs                                    1.0K   1.0K     0B      100%  /dev/fd
parents                                    3.5T   128K   3.5T        0%  /mnt/parents
parents/busync                              14T    11T   3.5T       76%  /mnt/parents/busync
parents/iocage                             3.5T   8.7M   3.5T        0%  /mnt/parents/iocage
parents/iocage/images                      3.5T   128K   3.5T        0%  /mnt/parents/iocage/images
parents/iocage/templates                   3.5T   128K   3.5T        0%  /mnt/parents/iocage/templates
parents/iocage/jails                       3.5T   128K   3.5T        0%  /mnt/parents/iocage/jails
parents/iocage/download                    3.5T   128K   3.5T        0%  /mnt/parents/iocage/download
parents/iocage/releases                    3.5T   128K   3.5T        0%  /mnt/parents/iocage/releases
parents/iocage/log                         3.5T   134K   3.5T        0%  /mnt/parents/iocage/log
parents/iocage/download/13.1-RELEASE       3.5T   434M   3.5T        0%  /mnt/parents/iocage/download/13.1-RELEASE
parents/iocage/jails/pms                   3.5T   421K   3.5T        0%  /mnt/parents/iocage/jails/pms
parents/iocage/download/13.0-RELEASE       3.5T   401M   3.5T        0%  /mnt/parents/iocage/download/13.0-RELEASE
parents/iocage/releases/13.0-RELEASE       3.5T   128K   3.5T        0%  /mnt/parents/iocage/releases/13.0-RELEASE
parents/iocage/releases/13.1-RELEASE       3.5T   128K   3.5T        0%  /mnt/parents/iocage/releases/13.1-RELEASE
parents/iocage/jails/pms/root              3.5T   3.2G   3.5T        0%  /mnt/parents/iocage/jails/pms/root
parents/iocage/releases/13.1-RELEASE/root  3.5T   1.7G   3.5T        0%  /mnt/parents/iocage/releases/13.1-RELEASE/root
parents/iocage/releases/13.0-RELEASE/root  3.5T   1.6G   3.5T        0%  /mnt/parents/iocage/releases/13.0-RELEASE/root
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
honestly it's really acting as if the drive is full.
OK, so we need to see proof of that:

zpool list

What I can see from the above is that 11 of the 14 TB on the "parents" pool is consumed by the busync dataset. The iocage and child datasets seem to be insignificant in comparison.
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
These two NASes contain the same files, minus some things that don't get sent to the remote backup. Unfortunately, right this second the problem isn't bad; it would make more sense to do this again when it says it's full, and I will do that.

Main is RAIDZ2; backup2 and the parents remote backup are RAIDZ1.

Code:
root@BayberryFreeNAS:~ # zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Main            29T  16.9T  12.1T        -         -     0%    58%  1.00x    ONLINE  /mnt
backup2       21.8T  12.3T  9.46T        -         -     0%    56%  1.00x    ONLINE  /mnt
freenas-boot   118G  17.1G   101G        -         -      -    14%  1.00x    ONLINE  -
root@BayberryFreeNAS:~ #


Code:
root@parentstruenas[~]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool   222G  2.59G   219G        -         -     0%     1%  1.00x    ONLINE  -
parents    21.8T  16.3T  5.48T        -         -     0%    74%  1.00x    ONLINE  /mnt
root@parentstruenas[~]#


At any rate, I've worked with FreeNAS/TrueNAS for years at home and loved it, but it was always just a NAS to me and I never asked anything more from it. I use Plex at home, but it's in a VM with a file-share mount to the NAS. When I set up this remote backup target I deployed Plex in the jail because it was quick and easy, and it did work. But then these alerts started rolling in.

[Attached image: qFNvuzP.png, alert notification history]
 
Last edited by a moderator:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
So the most recent one at the top seems to agree with the zpool list output (i.e. alert cleared as it's under 80% now.)

I confirm that 74% used is a good calculation based on the amount allocated and the total size.

I think you need to use the tools from the posts above when the used % goes above 80 again, so we can see where the additional data is appearing.
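If it helps, something rough like this run over SSH when the alert fires would capture everything in one go (just a sketch using the pool name from this thread):

Code:
# one-shot capture of the numbers we care about, run while the pool is over 80%
{
  date
  zpool list parents
  zfs list -r -o space parents
  zfs list -t snapshot -r parents
  df -h | grep parents
} > /root/pool_full_$(date +%Y%m%d_%H%M).txt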
 

Maxburn

Explorer
Joined
Oct 26, 2018
Messages
60
This morning at 5 AM it was 87% full; I got the 80% alert at 11 last night. About 54% full is normal for this NAS...

I know I have an rsync task running right now, sending data to the folder parents/busync/TV, but again the source dataset is smaller than what this NAS is reporting.

So there looks to be something big going on that doesn't show up under ncdu but does show in zfs list. BUT I'm having a problem looking into deeper directories with zfs list.


Code:
zpool list
NAME     SIZE   ALLOC   FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
parents  21.8T  18.7T   3.08T        -         -    1%  85%  1.00x  ONLINE  /mnt


Code:
root@parentstruenas[~]# zfs list -t snapshot
NAME                                                                           USED  AVAIL  REFER  MOUNTPOINT
boot-pool/ROOT/13.0-U4@2023-02-10-18:32:53                                    1.84M      -  1.29G  -
boot-pool/ROOT/13.0-U4@2023-03-18-05:31:25                                    1.93M      -  1.29G  -
parents/.system/samba4@wbc-1676154798                                          176K      -   389K  -
parents/.system/samba4@wbc-1676238683                                          144K      -   373K  -
parents/.system/samba4@wbc-1676588044                                          149K      -   389K  -
parents/.system/samba4@wbc-1676740468                                          149K      -   389K  -
parents/.system/samba4@update--2023-03-18-12-32--13.0-U3.1                     139K      -   389K  -
parents/.system/samba4@wbc-1679142867                                          123K      -   389K  -
parents/iocage/jails/pms@ioc_update_13.1-RELEASE-p7_2023-05-05_17-29-53        346K      -   400K  -
parents/iocage/jails/pms@ioc_update_13.1-RELEASE-p7_2023-05-06_17-20-15        357K      -   432K  -
parents/iocage/jails/pms/root@ioc_update_13.1-RELEASE-p7_2023-05-05_17-29-53   540M      -  2.93G  -
parents/iocage/jails/pms/root@ioc_update_13.1-RELEASE-p7_2023-05-06_17-20-15   543M      -  3.19G  -
root@parentstruenas[~]#

The backup sync dataset is way bigger than it should be, which is a surprise to me.

Code:
root@parentstruenas[/mnt/parents]# zfs list -o name,used parents/busync
NAME            USED
parents/busync  12.5T
root@parentstruenas[/mnt/parents]# zfs list -o name,used parents/iocage
NAME            USED
parents/iocage  8.62G


Why can I not walk directories with zfs?

Code:
root@parentstruenas[~]# zfs list -o name,used parents/busync
NAME            USED
parents/busync  12.5T
root@parentstruenas[~]# zfs list -o name,used parents/busync/Movies
cannot open 'parents/busync/Movies': dataset does not exist
root@parentstruenas[~]# zfs list parents/busync/TV
cannot open 'parents/busync/TV': dataset does not exist
root@parentstruenas[/mnt/parents/busync]# ls -l
total 61
-rwxrwxr-x+  1 rw_user  rw_user  10244 Apr 15 09:27 .DS_Store
drwxrwxr-x+ 10 rw_user  rw_user     11 May  9 00:01 Backup to server
drwxrwxr-x+  4 rw_user  rw_user      5 May  8 02:00 Movies
drwxrwxr-x+  4 rw_user  rw_user      5 May  6 19:00 Music
drwxrwxr-x+ 44 rw_user  rw_user     45 May 12 20:00 TV
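I'm guessing that's because Movies and TV are just plain directories inside the busync dataset rather than child datasets, so zfs list can't see them. Something like du would have to do the per-directory breakdown instead (a sketch):

Code:
# zfs list only tracks datasets; per-directory usage needs du
du -d 1 -h /mnt/parents/busync | sort -h

# and check whether busync itself is holding space in snapshots
zfs list -o name,used,usedbysnapshots,usedbydataset parents/busync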


This is the source dataset;

Code:
--- /mnt/Main
    8.2 TiB [########################] /Main
   10.4 MiB [ ] /iocage
e  512.0 B  [ ] /jails
 Total disk usage:   8.2 TiB   Apparent size:   8.3 TiB   Items: 234806

--- /mnt/Main/Main
    3.1 TiB [########################] /Backup to server
    2.8 TiB [##################### ] /TV
    2.2 TiB [################ ] /Movies
   54.5 GiB [ ] /Temp
   40.5 GiB [ ] /Music

This is the NAS that is "too full"

Code:
--- /mnt/parents/busync
    3.1 TiB [########################] /Backup to server
    2.8 TiB [##################### ] /TV
    2.2 TiB [################ ] /Movies
   40.2 GiB [ ] /Music
 