No space left on device

Status
Not open for further replies.

evelrin

Cadet
Joined
Jan 6, 2014
Messages
7
This is my first time using FreeNAS.
I installed FreeNAS-9.1.1-RELEASE-x64 (a752d35) and created 2 ZFS volumes on it, then configured NFS and CIFS on each volume. When used space reached 100%, I couldn't remove any file, either from Windows or over SSH using rm. In the console I got a "No space left on device" error.
The problem was resolved with dd if=/dev/null of='file path' and then rm.
What do I have to do in the web interface to avoid this problem in the future?
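That dd workaround can be sketched as a generic, self-contained shell sequence. The path and file size below are made up for illustration; the truncate-then-remove trick itself works on any filesystem, and on a full ZFS dataset the point is that a plain rm alone would fail:

```shell
# Create a throwaway file standing in for the file that cannot be removed.
dd if=/dev/zero of=/tmp/stuckfile bs=65536 count=16 2>/dev/null

# Truncate it to zero bytes; unlike rm, this frees the data blocks
# without needing any new space for the file contents.
dd if=/dev/null of=/tmp/stuckfile 2>/dev/null

# The zero-byte file can now be removed normally.
rm /tmp/stuckfile
```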
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Add more storage to your server and/or prevent the pool from growing past 90%.

Monitor your FreeNAS daily report for disk usage.
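A minimal sketch of such a daily check, where the function name, pool name, and 90% threshold are all assumptions; on a real box the capacity string would come from zpool list:

```shell
# warn_if_full CAPACITY THRESHOLD
# CAPACITY is a string like "91%", as printed by:
#   zpool list -H -o capacity tank
warn_if_full() {
    cap=${1%\%}                      # strip the trailing %
    if [ "$cap" -ge "$2" ]; then
        echo "WARNING: pool is ${cap}% full"
    fi
}

# Hard-coded example value; on FreeNAS you would instead run:
#   warn_if_full "$(zpool list -H -o capacity tank)" 90
warn_if_full "91%" 90                # prints: WARNING: pool is 91% full
```

Dropped into a cron job, a check like this mails you before the pool ever reaches the point where deletes start failing.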
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Like gpsguy said, you should NOT be letting pools get above 90% full. Bad, nasty things can happen, up to and including killing your pool.
 

evelrin

Cadet
Joined
Jan 6, 2014
Messages
7
Thanks all for your help, but this isn't a good solution for me. I already have an infrastructure with a vSphere cluster and a SAN, and I receive plenty of reports from those systems. I'm looking for an inexpensive product to use instead of a tape library. When usage grows, it's simpler for me to delete some files, but I want to do that easily, for example from Windows, while keeping ZFS and some of the functionality FreeNAS offers. Also, I only have 4 slots for disks. I plan to use the system for archive copies of online backups.
Maybe some quota rules can help me, or is FreeNAS not for me?
 

aamsid

Cadet
Joined
Jun 14, 2013
Messages
1
I was searching for a way to delete files from a full ZFS volume in FreeNAS; a file would delete and then reappear. I found the solution on a different site and want to share it to help others. The first thing is to use the shell to get to the mount point, then to the directory the file needs to be deleted from, then overwrite the file with a null file, and then actually delete it. Once one file was deleted, I was able to delete others through Windows.

Here are the commands, cut and pasted:

"How can I delete files in my home directory? I get: rm: cannot remove file: Disk quota exceeded


On ZFS, the filesystem that carries our home directories, you may find yourself unable to delete files when the disk quota is full:
bfguser@bwui:~> cp testfile1 testfile2
cp: cannot create regular file `testfile2': Disk quota exceeded

Unfortunately, you cannot remove a file using the 'rm' command. E.g.:
bfguser@bwui:~> rm testfile1
rm: cannot remove file `testfile1': Disk quota exceeded

Workaround:
The trick is to copy /dev/null to the file you want to delete:
bfguser@bwui:~> ls -lah testfile1
-rw-r--r-- 1 bfguser bfggroup 16M 2009-03-23 10:44 testfile1

bfguser@bwui:~> cp /dev/null testfile1
bfguser@bwui:~> ls -lah testfile1
-rw-r--r-- 1 bfguser bfggroup 0 2009-03-23 11:41 testfile1

bfguser@bwui:~> rm testfile1
bfguser@bwui:~> ls -lah testfile1
/bin/ls: testfile1: No such file or directory

Explanation:
ZFS is a copy-on-write filesystem, so a file deletion transiently takes slightly more space on disk before a file is actually deleted. It has to write the metadata involved with the file deletion before it removes the allocation for the file being deleted. This is how ZFS is able to always be consistent on disk, even in the event of a crash."
 

evelrin

Cadet
Joined
Jan 6, 2014
Messages
7
Thanks all for your help.
As I said, I don't want to do anything from the console; I'm just looking for a simple way to delete unnecessary data. It doesn't matter whether it's quotas or anything else, as long as I don't have to go through many steps every time a disk is full or a quota is exceeded.
 

scurrier

Patron
Joined
Jan 2, 2014
Messages
297
Since you searched the forum before asking this question, I'm sure you found THIS thread.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah.. sometimes you have to use the CLI. Especially when you do things like let the pool get 100% full. Frankly, if your pool is more than 80% full you should be buying bigger drives or more drives already. In short, make your pool bigger or be ready to learn the CLI. That's the reality of it. :(
 

evelrin

Cadet
Joined
Jan 6, 2014
Messages
7
Yeah.. sometimes you have to use the CLI. Especially when you do things like let the pool get 100% full. Frankly, if your pool is more than 80% full you should be buying bigger drives or more drives already. In short, make your pool bigger or be ready to learn the CLI. That's the reality of it. :(

I said this is my first use of FreeNAS, but I didn't say it's my first use of Unix. The CLI is not hard for me, but sometimes it is inconvenient. Also, as I said, I already have a SAN with HP and IBM storage; I just need a simple solution.
Thanks all for the help. For now I use my backup software to monitor free space and delete old files.
 

aeon

Cadet
Joined
Jan 23, 2014
Messages
9
Yes, you can set a dataset quota to prevent your users from completely filling the pool. You can find the details in the documentation: http://doc.freenas.org/index.php/Volumes#Creating_ZFS_Datasets

That doesn't work.
I created a dataset with a 500 MB quota (unlimited quota for children, no reservation) on an empty 250 GB HDD and hit the same issue: after the dataset was 100% full, deleting files became impossible, even with more than 230 GB free on the rest of the drive.

This issue is worse than having no user accounts and permissions at all: any standard user can take down your storage simply by copying and pasting a big file (a movie, for example) a couple of times.
 

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
That doesn't work.
I created a dataset with a 500 MB quota (unlimited quota for children, no reservation) on an empty 250 GB HDD and hit the same issue: after the dataset was 100% full, deleting files became impossible, even with more than 230 GB free on the rest of the drive.

This issue is worse than having no user accounts and permissions at all: any standard user can take down your storage simply by copying and pasting a big file (a movie, for example) a couple of times.



Where are you going with this?

If I understand what you wrote, you created a dataset with a 500 MB quota, filled it, and then wondered why you hit the deletion issue on a transactional (COW) filesystem?

How much free space was available on the rest of the drive is IRRELEVANT once you put a quota on. If it weren't, what would be the point of a quota you can exceed?

Running out of disk space is a problem on ANY filesystem, which is why you monitor and control usage (reservations/quotas).
 

aeon

Cadet
Joined
Jan 23, 2014
Messages
9
I put that quota on, and I want that quota! But if one user can lock up my server with a simple copy-paste that exceeds the quota limit, what is the point of the quota?
It is a simple disk-space limit that can be hit 10 times a day by irresponsible users, and a manual intervention every time, using the shell and "echo > file_to_delete_path", is not a good solution.

I understand that the ZFS filesystem needs some space in order to delete files, but this should be handled internally, using reserved space, a buffer, or something similar.
I thought the "reserved quota" and "child quota" settings were for this kind of issue, but they seem to have other purposes.

I need an (automatic) solution so that users can delete files from a completely full dataset.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I tried to reproduce the situation, but I was unable to. I tried datasets with quota and refquota, I completely filled the dataset (available = 0) and I was still able to delete files. Move/rename was not possible, but delete worked. Which version of FreeNAS are you using (my test was with 9.2.0)?
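For anyone who wants to repeat this, a configuration sketch of the kind of test described above; the pool and dataset names are assumptions, and these are standard ZFS administration commands run from the FreeNAS shell:

```shell
# Create a test dataset and cap it with both styles of limit.
# quota counts the dataset plus its descendants and snapshots;
# refquota counts only the data the dataset itself references.
zfs create tank/test
zfs set quota=500m tank/test
zfs set refquota=500m tank/test

# Inspect the limits and the remaining space before filling it up.
zfs get quota,refquota,available tank/test
```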
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I tried to reproduce the situation, but I was unable to. I tried datasets with quota and refquota, I completely filled the dataset (available = 0) and I was still able to delete files. Move/rename was not possible, but delete worked. Which version of FreeNAS are you using (my test was with 9.2.0)?

Now you see why I've deliberately avoided spending time on this whole thread. If you are filling pools to 100% capacity you are failing as an administrator. When a pool hits 80% of its limit you should already have more drives or a new server on order.
 

aeon

Cadet
Joined
Jan 23, 2014
Messages
9
Now you see why I've deliberately avoided spending time on this whole thread. If you are filling pools to 100% capacity you are failing as an administrator. When a pool hits 80% of its limit you should already have more drives or a new server on order.


If you as an admin can accept that a normal user can lock up the file server with a simple copy-paste, I'm afraid for the quality of your work. :(
For me that is unacceptable.

I tried to reproduce the situation, but I was unable to. I tried datasets with quota and refquota, I completely filled the dataset (available = 0) and I was still able to delete files. Move/rename was not possible, but delete worked. Which version of FreeNAS are you using (my test was with 9.2.0)?


Some more info I should add: before this happened, I rolled back a snapshot (~300 MB) on that 500 MB test dataset (CIFS, Windows 7 workstation).
The data restored OK, and afterwards I tried to fill up the rest of the space (I copied a folder with many small files until the space ran out and I got "not enough space") in order to try the rollback again.
That is where the issue started: I wasn't able to delete files from the share!

FreeNAS v9.2.0
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I think a constructive outcome here would be to identify a way to prevent this issue. Is there a way to limit a dataset with a quota that stops allowing file creation and editing at some capacity, but still leaves enough room for deletions?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I think a constructive outcome here would be to identify a way to prevent this issue. Is there a way to limit a dataset with a quota that stops allowing file creation and editing at some capacity, but still leaves enough room for deletions?

If you create a dataset that is 99% of the pool's size and then store every single file in that dataset you could avoid this problem.

But, you are doing yourself more of a disservice if you think that's the solution. ZFS has no defrag, and once you go past 80% or so performance begins to bottom out and fragmentation rates increase rapidly. You'll eventually always have a crappy-performing pool no matter what you do because of the fragmentation and you'll be VERY unhappy with ZFS.

If you really wanted to be smart, you would set the dataset quota to 80% of the pool's size and leave it like that. Of course, that last 20% will never be used, and people will be shocked when they can't save their file to the server even though the server doesn't appear full.
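Working out that 80% figure is simple arithmetic. A sketch, with the pool size hard-coded as an assumption; on a live system it would come from zpool list -Hp -o size:

```shell
# Example: a pool reporting 250059350016 bytes (~250 GB, an assumed value).
POOL_BYTES=250059350016

# 80% of the pool, using shell integer arithmetic (truncates any remainder).
QUOTA_BYTES=$((POOL_BYTES * 80 / 100))

echo "$QUOTA_BYTES"    # prints 200047480012
# The result would then be applied with something like:
#   zfs set quota=${QUOTA_BYTES} tank/dataset
```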
 