jawa
Cadet · Joined: May 27, 2016 · Messages: 2
Build: FreeNAS-9.10-STABLE-201605021851 (35c85f7)
Platform: AMD FX(tm)-6300 Six-Core Processor
Memory: 16306MB
Drives: 2x4TB WD Red, 2x4TB SG NAS, 2x3TB WD Red
Problem: I woke up this morning to find that nearly all of my free space (~3TB) had suddenly been used up, and I'm having difficulty determining what is using it. I've been running this server for several years now as a home media and backup server. I've had to replace a few failed drives before without any trouble, but I've never had to do any deep digging on a problem, so I'm a little new to this. Any help is much appreciated.
Here is some basic info:
Code:
zpool list
NAME          SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
NAS1          9.97T  9.59T  391G   -         53%   96%  1.00x  ONLINE  /mnt
freenas-boot  14.6G  1.52G  13.1G  -         -     10%  1.00x  ONLINE  -
Code:
zfs list
NAME                                                         USED   AVAIL  REFER  MOUNTPOINT
NAS1                                                         9.59T  71.8G  2.93T  /mnt/NAS1
NAS1/.system                                                 149M   71.8G  74.3M  legacy
NAS1/.system/configs-5414c6b27866452c89effc1a7ff30cfb        6.66M  71.8G  6.66M  legacy
NAS1/.system/configs-f1ae6c68bbe041c7bb38cadeec088781        144K   71.8G  144K   legacy
NAS1/.system/cores                                           33.1M  71.8G  33.1M  legacy
NAS1/.system/rrd-5414c6b27866452c89effc1a7ff30cfb            144K   71.8G  144K   legacy
NAS1/.system/rrd-f1ae6c68bbe041c7bb38cadeec088781            144K   71.8G  144K   legacy
NAS1/.system/samba4                                          2.90M  71.8G  2.90M  legacy
NAS1/.system/syslog-5414c6b27866452c89effc1a7ff30cfb         5.42M  71.8G  5.42M  legacy
NAS1/.system/syslog-f1ae6c68bbe041c7bb38cadeec088781         25.9M  71.8G  25.9M  legacy
NAS1/Backup                                                  2.62T  71.8G  2.62T  /mnt/NAS1/Backup
NAS1/Media                                                   4.00T  71.8G  4.00T  /mnt/NAS1/Media
NAS1/jails                                                   35.2G  71.8G  944K   /mnt/NAS1/jails
NAS1/jails_2                                                 144K   71.8G  144K   /mnt/NAS1/jails_2
freenas-boot                                                 1.52G  12.7G  31K    none
freenas-boot/ROOT                                            1.49G  12.7G  25K    none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201604150515            38K    12.7G  524M   /
freenas-boot/ROOT/FreeNAS-d55ab9177fa7bbcd849b9f0687646c3d   1.49G  12.7G  490M   /
freenas-boot/ROOT/Initial-Install                            1K     12.7G  508M   legacy
freenas-boot/ROOT/default                                    33K    12.7G  509M   legacy
freenas-boot/grub                                            19.9M  12.7G  6.33M  legacy
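(For anyone digging into the same issue: the plain `zfs list` above only shows dataset usage, and snapshots don't appear in the default view, so a snapshot or replication task written around scrub time wouldn't show up here. A sketch of commands that break the usage down further, assuming the pool name NAS1 from the output above:)

```shell
# Show per-dataset space split into live data, snapshots, and children
# (USEDSNAP is the column to watch if snapshots are eating the space)
zfs list -o space -r NAS1

# List every snapshot with its size, sorted smallest to largest
zfs list -t snapshot -o name,used,referenced -s used -r NAS1
```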
Running another scrub right now... (also: I updated from 9.3 to 9.10 a few weeks back and hadn't upgraded the pool yet.)
Code:
zpool status
  pool: NAS1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(7) for details.
  scan: scrub in progress since Fri May 27 14:30:47 2016
        807G scanned out of 9.59T at 188M/s, 13h39m to go
        0 repaired, 8.22% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS1                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/c76aa507-a71d-11e3-9787-6805ca213737  ONLINE       0     0     0
            gptid/c7bb4187-a71d-11e3-9787-6805ca213737  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/181ed5f7-13c3-11e6-b217-6805ca213737  ONLINE       0     0     0
            gptid/ce3994d3-ae84-11e5-b049-6805ca213737  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/7b576e99-fcfb-11e4-8bae-6805ca213737  ONLINE       0     0     0
            gptid/7c1e0009-fcfb-11e4-8bae-6805ca213737  ONLINE       0     0     0

errors: No known data errors
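(Since the usage spike lines up with the scheduled scrub, the pool's own command log may show whether something else, e.g. a snapshot or clone, was created in the same window. A sketch, again assuming the pool name NAS1:)

```shell
# Print the timestamped log of administrative commands run against the
# pool (snapshot/clone creation, scrubs, etc.); recent entries are last
zpool history NAS1 | tail -n 50
```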
The only telling thing I've found is that the space suddenly started being consumed right around the time my weekly scheduled scrub ran (Wednesdays at 4 AM).
Here is the disk space report showing the incremental decrease in available space starting about the time the scrub started:

Any ideas? Any further information I can provide or commands I can run to try and identify what happened?