SOLVED Disk full - Can't delete any files. Please Help!

Status
Not open for further replies.

PeterB

Dabbler
Joined
Mar 25, 2014
Messages
14
I will be placing this in my own KB, though I don't understand exactly why the file is deleted (in my case) by issuing the command

echo > /mnt/DATA/backupfolder/thisfileorthatfile

It's unclear to me why rm will not kill the file; perhaps it is because the system does not have any free space to work with (on a full volume) to mark the file as deleted, or to permit some kind of recovery?
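
For my KB, the rough sequence (using the example path from above, which is specific to my setup) would be:

echo > /mnt/DATA/backupfolder/thisfileorthatfile    # overwrite the file with an empty one, which frees its blocks even on a full volume
rm /mnt/DATA/backupfolder/thisfileorthatfile        # with a little space free again, the rm should now succeed

I have only confirmed the echo step myself; the follow-up rm is my assumption about how you would then clean up the now-empty file.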

I do have one off-topic question, and will post it elsewhere if it does not get picked up here: trying to SSH into the FreeNAS box using PuTTY, it does not accept the root password. I have enabled SSH in the services list, and I do get the prompt (on a Windows box, after accepting the host key); it accepts the username, but rejects the password. Really weird.

Using the terminal through the FreeNAS web console does let me run the commands. However, entering a command and hitting the 'Enter' key does nothing; I have to press Ctrl-M (carriage return, in good old ASCII) to send the command. Why is that?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Root login via ssh with a password is disabled by default as a security measure. The preferred alternatives are (1) ssh in as another user and su to root, (2) ssh in as another user and sudo whatever you need to, or (3) use ssh keys for a root login. However, you can configure the ssh service to allow root login with a password; it's a checkbox in the services configuration.
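
A quick sketch of options (1) and (2), assuming you've already created a non-root user in the GUI (the username and hostname here are made up):

ssh peter@freenas.local    # log in as the unprivileged user
su -                       # then enter the root password (the user must be in the wheel group)
# or, if the user has been granted sudo rights:
sudo zpool status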
 

PeterB

Dabbler
Joined
Mar 25, 2014
Messages
14
Dan, I found the option to allow root login with a password in the services area; I had read another post and followed along. No reboot needed, all good now. I prefer PuTTY to the web console.

Thanks!
 

Tofu

Cadet
Joined
Oct 19, 2014
Messages
2
Hi All,
Would going into the volume options and setting the "Quota for this volume" to about 80% of the total pool size avoid this situation?

[Screenshot: volume options dialog showing the "Quota for this volume" field]


Thanks,
Vu
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No. It means once your dataset hits the quota you'll be in the same situation.

The fix is to manage your disk space and not get that full.
 

Tofu

Cadet
Joined
Oct 19, 2014
Messages
2
Hi Cyberjock,
Does this mean that this problem affects every dataset you create?
e.g.
If you have a 5TB pool, and create a dataset to share that has a 4TB quota, then proceed to fill the 4TB, you would still run into this problem?

Thanks,
Vu
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yes.

Once a ZFS file system hits 100% full, you have problems.

A "file system" here means either the zpool or a dataset, so both will suffer the same fate.
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
No. It means once your dataset hits the quota you'll be in the same situation.

The fix is to manage your disk space and not get that full.
Aw crap! That's not how I viewed this situation either. I thought that by setting a quota I was reserving free space on my disks. In fact, I'm still not convinced by your answer.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, if you want a reservation you set it with the "reservation" property. But that doesn't stop you from filling it to 100%.

Quotas set an artificial limit to what the given dataset can store. Once it hits 100% you cannot add more transactions since transactions take up space. And since a transaction has to be written to delete a file, you are in the same spot as a pool that hits 100% full.
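
Roughly, the difference looks like this (dataset names are made up):

zfs set reservation=500G tank/scratch    # guarantees tank/scratch at least 500G, but does not cap its growth
zfs set quota=4T tank/share              # caps tank/share at 4T; hitting that cap behaves just like a full pool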

No, I am quite sure that I am correct. I've seen this problem many times over the years.

I do have to say, you are hilarious for even claiming I don't know what I'm talking about. The outcome of this problem is easy to determine.

If the file system is full, then a transaction cannot be written. Then you have a problem.

If the file system is *not* full, then there is no problem.

Now, as for why the echo > /some/file/here works and an "rm" doesn't, I don't fully understand that. Someone explained it to me once, but I forget what it was. I did explain it somewhere in the forums a while back. But it doesn't change the end result: full filesystems just don't work.
 

macxs

Dabbler
Joined
Nov 7, 2013
Messages
21
But that's not a good solution. Why can't the FS reserve at least a few KB for its next transaction? Or why can't I reserve space so that the filesystem won't accidentally go dead? When the problem is known, why are there no mechanisms to prevent it?
In fact this means that you cannot offer network storage from FreeNAS to users or services, especially when you cannot estimate the final size.

Bye! Marco
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
But that's not a good solution. Why can't the FS reserve at least a few KB for its next transaction? Or why can't I reserve space so that the filesystem won't accidentally go dead? When the problem is known, why are there no mechanisms to prevent it?
In fact this means that you cannot offer network storage from FreeNAS to users or services, especially when you cannot estimate the final size.

Bye! Marco

There are mechanisms to prevent this.

You get an e-mail warning you if your pool is over 80% full. Since performance tanks around that mark, it's a silly idea to fill it further.
 

macxs

Dabbler
Joined
Nov 7, 2013
Messages
21
Hi Eric,

There are mechanisms to prevent this.

You get an e-mail warning you if your pool is over 80% full. Since performance tanks around that mark, it's a silly idea to fill it further.

No, this is not a mechanism to prevent this. It is a notification.

Imagine a backup scenario, or a person saving some large files on a share: this notification is useless, as you cannot react in time, especially when you are sleeping :smile:. So a user, or an unusually large or unpredicted backup set, bricks the storage dataset and requires manual intervention from an administrator.

So this is a known bug. Why isn't it fixed? Shouldn't a file system remain operational even in a state that is not its best use case?

Please stop instructing me not to fill a ZFS volume/dataset over 80%; I know it is a bad idea and I do not intend to do it. But sometimes you cannot foresee everything, so bad things happen.

So would it be possible to file a bug?


Bye Marco
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, this is not a mechanism to prevent this. It is a notification.

I'm not sure he meant that the notification was the mechanism.

It isn't really a bug, it's a general issue for at least some CoW filesystems. It gets a bit complicated because you always need to be able to allocate new space prior to freeing the old, which means that each potential strategy to protect against this has at least some issues.

The usual solution is to avoid filling the pool entirely, but if you can't guarantee that isn't going to happen, you can try one of the various fixes like creating a dataset with a space reservation of 10% of your pool. You will still run into issues if you fill the pool, but at least you won't need to determine which files to sacrifice to the truncation gods.
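
A rough sketch of that reservation trick, with made-up names and a pool of roughly 5TB assumed (adjust to taste):

zfs create tank/reserved                   # an empty dataset nobody writes to
zfs set reservation=500G tank/reserved     # roughly 10% of the pool held back
# if the pool ever does fill up, drop the reservation to get usable space back:
zfs set reservation=none tank/reserved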

The best mechanism to prevent your pool from filling is to make sure you don't fill your pool. Putting a backup on the pool? Make sure there's free space BEFORE you start, don't just start throwing data at the thing and pray that it fits. It /can/ be done. ;-) ZFS supports a rich set of space management features including datasets, reservations, and quotas.
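
For example, before kicking off a backup, something along these lines (pool and dataset names are just examples) shows what you actually have to work with:

zpool list -o name,size,allocated,free,capacity tank
zfs list -o name,used,available,quota tank/backups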
 

macxs

Dabbler
Joined
Nov 7, 2013
Messages
21
Thanks for your answer!

It isn't really a bug, it's a general issue for at least some CoW filesystems. It gets a bit complicated because you always need to be able to allocate new space prior to freeing the old, which means that each potential strategy to protect against this has at least some issues.
For me this is a bug, at least a design bug, because a valid transaction leads to an inoperable state.
Either ZFS should reserve space for its transactions, or it should exclude delete transactions from counting against the quota. I know this is a ZFS thing; maybe this is just the wrong place to discuss it.

And, yes,
It /can/ be done
, but
  • ZFS is always presented as a robust solution. Shouldn't a robust FS keep working in any situation (even if performance is low)?
  • FreeNAS is presented as a low-cost solution. And yes, it fits and is good (I love it). But when there is not much space to spare (hardware limitations for whatever reason: cost, free HDD slots, etc.), it is hard to predict space requirements.

Bye Marco
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I would think the proper way to handle this "bug" would be for ZFS to check before beginning a transaction. Once available space drops below some threshold, every transaction would first check whether there is enough room both to complete the requested transaction and to still write a later transaction that deletes blocks and frees space; if there isn't, the transaction is aborted. Of course, the specifics of how much needs to be reserved and what threshold triggers this additional check are left as an exercise for the reader.

Can this issue be cleared by replacing the drives with larger ones and resilvering?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, I agree at one level that it is a nonobvious and counterintuitive thing to happen, but on the flip side it's just another aspect of letting space run out. You don't complain that FreeNAS corrupted your backup because the disk filled and it got truncated at some point. Why don't you? You understand that when space runs out, bad things happen. I do hope that OpenZFS comes up with a slightly more comprehensive solution at some point, but I suspect that space reservation is going to be the primary component of that fix, and we can do that already.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
  • ZFS is always presented as a robust solution. Shouldn't a robust FS keep working in any situation (even if performance is low)?
  • FreeNAS is presented as a low-cost solution. And yes, it fits and is good (I love it). But when there is not much space to spare (hardware limitations for whatever reason: cost, free HDD slots, etc.), it is hard to predict space requirements.
I disagree with both of those statements. Well, at least in part.

First, ZFS is robust, but that doesn't mean it needs to work in *every* situation. And it certainly doesn't mean it won't have performance constraints.

Second, I don't think I'd describe FreeNAS as "low-cost". Certainly the software can't get much cheaper (I don't expect iX to send me money to use it, though I do have a shirt and hat from them... thanks much). The hardware requirements are anything but. You certainly don't need extremely high-end equipment, but you won't end up very happy trying to use bargain-basement tech or that spare laptop that's been gathering dust. I would agree that FreeNAS can cover needs that previously would have required high-cost solutions, and in that sense it is low-cost.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would think the proper way to handle this "bug" would be for ZFS to check before beginning a transaction. Once available space drops below some threshold, every transaction would first check whether there is enough room both to complete the requested transaction and to still write a later transaction that deletes blocks and frees space; if there isn't, the transaction is aborted. Of course, the specifics of how much needs to be reserved and what threshold triggers this additional check are left as an exercise for the reader.

Can this issue be cleared by replacing the drives with larger ones and resilvering?

The real problem is that you can fill things to the point where some metadata block that needs to be updated is larger than the free space available on the disk. ZFS is a copy-on-write system, so if it cannot write out the updated metadata, you're hosed, even if that metadata would be part of an operation that ultimately removes a file and frees up space. I'm simplifying this just a bit.
 