[SOLVED] No space left on datasets

trexman

Dabbler
Joined
Mar 26, 2015
Messages
17
Hi,
there was a problem in our cleanup routine, so one dataset kept filling up until it was completely full.

(attached screenshot: freenas.jpg)


The problem now is that I can't delete anything because of the error:
No space left on device

I can still access the files.

A reboot of FreeNAS doesn't help.
What can I do to solve this problem?
We are using FreeNAS 9.3

Thanks for your help.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Copy off all the content, recreate the dataset(s), copy content back.

When a dataset or pool is 100% full, you have no other option.
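A rough sketch of that process from the shell (pool, dataset, and target paths below are placeholders, and rsync is just one way to move the data):

Code:
# copy the data to some other storage
rsync -a /mnt/mypool/mydataset/ /mnt/other_storage/mydataset/

# destroy and recreate the dataset (-r also removes any children and snapshots)
zfs destroy -r mypool/mydataset
zfs create mypool/mydataset

# copy the data back
rsync -a /mnt/other_storage/mydataset/ /mnt/mypool/mydataset/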
 
Joined
Oct 7, 2016
Messages
29
Looks like you have some datasets with free space, and so some space left in the pool.
If those datasets have reservations, you could try reducing the reservation on one of them.
That would give some space back to the dataset that is full and might enable you to remove some files from it.
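For example, something like this could be used to check and shrink a reservation (dataset names below are placeholders; pick one that actually carries a reservation):

Code:
# see which datasets/zvols carry reservations
zfs get -r reservation,refreservation mypool

# shrink or clear the reservation on one of them to free up pool space
zfs set refreservation=none mypool/some-zvol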

Paul
 

trexman

Dabbler
Joined
Mar 26, 2015
Messages
17
Thanks for your answer.

The space of the datasets is "shared". Wouldn't it help if I copy all data of one dataset to different storage and then delete that dataset, so that the others have free space again?
That way I would only have to do your solution once...

Or is a dataset completely broken after it reaches 0 B free?

(my post overlapped with Paul's answer :) )
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Trexman,

Know that you are now at a critical moment with a significant risk of losing everything. Should your pool hit 100%, you may become unable to access it.

As Paul said, some of your space is free but reserved for other datasets. You should use this space to get your FreeNAS working again by allowing the full datasets to use it.

Once that is done, to free up space in FreeNAS you should first delete snapshots. Deleting files alone will not free any space as long as snapshots still point to those files.

After deleting snapshots, your next step will be to increase your pool capacity so that usage drops below 50%, or better, below 25%. For that, you can deploy a new FreeNAS and do a ZFS send / receive from this one to the new one.
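A minimal sketch of such a replication, assuming a new system reachable as "newnas" with a pool called "newpool" (all names here are placeholders; zfs send works from a snapshot, so a temporary one is taken first):

Code:
# on the current FreeNAS: snapshot the dataset and stream it to the new box
zfs snapshot mypool/mydataset@migrate
zfs send mypool/mydataset@migrate | ssh newnas zfs receive newpool/mydataset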

Another option is to add vDevs to your pool. Be sure to keep the same vDev architecture if you do that (ex: add mirrors if your vDevs are mirrors, add a 6-disk RaidZ2 if that is what your vDevs are, ...).
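For example, if the pool were built from mirrors, adding one more mirror vDev could look like this (pool and device names are placeholders):

Code:
# add another mirror vDev to an existing pool of mirrors
zpool add mypool mirror da4 da5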

The last option to increase your pool is auto-expansion: replace each drive with a larger one, one at a time, with a complete resilver between each. Once the last one is replaced, the pool will expand to the new capacity.
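A sketch of that route, again with placeholder names (wait for each resilver to finish before replacing the next drive):

Code:
# let the pool grow automatically once every member has been replaced by a larger one
zpool set autoexpand=on mypool

# replace one drive at a time and watch the resilver
zpool replace mypool da1 da5
zpool status mypool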

But as of now, you are at very high risk with a pool loaded this heavily.
 
Joined
Oct 7, 2016
Messages
29
You only have a problem if the pool itself has no space left, but your pool still has 2.5 TiB free.
It's just that this free space is reserved and cannot be used by the datasets that are now 'full'.
Reducing reservations might be a solution, or deleting a snapshot if you have those.
Can you access the command line interface and paste the output of 'zfs list -r -o space EonStore_SAN2' and 'zfs list -t snapshot' ?

Paul
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
The pool is shown as 84% used. Anything over 80% is entering the danger zone. Considering that growing a pool can take a very long time, it is far better to start sooner rather than later.

Also, increasing a pool by only 10% is not really worth it. When you seriously need more space, as you do above 80%, you should at least double the pool. That often translates to new hardware, a ton of disks, ... so one needs time to absorb the cost.

The amount of free space is not the key indicator. The % of usage is...
 
Joined
Oct 7, 2016
Messages
29
AFAIK it's just that performance may be impacted when a pool is over 80%. Data integrity should not be in danger just because of this.
The performance impact may be severe, and resilvers may take a very long time if you ever need to replace a disk in a pool this full (too long to be running with reduced redundancy and risking another disk failing).
But you should not lose any data just because the pool has over 80% usage.
 

trexman

Dabbler
Joined
Mar 26, 2015
Messages
17
Maybe I should explain a little bit more about the whole pool:
- Backup-woaz, Temp_IMAPDump and woaz_archiv are datasets which are used as CIFS shares
- iscsi-* are ZVOLs which are used for iSCSI targets

Also, the FreeNAS has "no built-in HDDs". It is an old setup based on an EonStor SAN.
So the FreeNAS only sees one big 18 TB disk.

Once that is done, to free up space in FreeNAS you should first delete snapshots. Deleting files alone will not free any space as long as snapshots still point to those files.

We don't use snapshots. So there should be nothing to do, right?

Can you access the command line interface and paste the output of 'zfs list -r -o space EonStore_SAN2' and 'zfs list -t snapshot' ?


Code:
# zfs list -r -o space EonStore_SAN2
NAME                                 AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
EonStore_SAN2                            0  15.7T         0     96K              0      15.7T
EonStore_SAN2/Backup-woaz                0  8.18T         0   8.18T              0          0
EonStore_SAN2/Temp_IMAPDump              0   749G         0    749G              0          0
EonStore_SAN2/iscsi-asterix-test      411G  1.85T         0   1.45T           411G          0
EonStore_SAN2/iscsi-datevbak-woaz     337G   508G         0    171G           337G          0
EonStore_SAN2/iscsi-hupdbbak-woaz     348G   508G         0    160G           348G          0
EonStore_SAN2/iscsi-mailarchiv-woaz   488G  2.03T         0   1.55T           488G          0
EonStore_SAN2/iscsi-rsnapshot-woaz    473G  1.73T         0   1.27T           473G          0
EonStore_SAN2/woaz_archiv                0   234G         0    234G              0          0


Code:
# zfs list -t snapshot
NAME                                                            USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/default@2016-02-29-08:27:54                  2.89M      -   513M  -
freenas-boot/ROOT/default@2016-03-09-16:52:41                  1.12M      -   514M  -
freenas-boot/ROOT/default@2016-03-09-17:49:05                  1.15M      -   514M  -
freenas-boot/ROOT/default@2016-03-24-18:46:43                  3.03M      -   516M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602031011  25.5K      -  6.79M  -
freenas-boot/grub@Pre-Upgrade-Wizard-2016-03-09_17:49:05         26K      -  6.79M  -
freenas-boot/grub@Pre-Upgrade-Wizard-2016-03-24_18:46:43          1K      -  6.79M  -


I'm going to move the smallest dataset to different storage and then delete it from here.
After that I'll have a look at this again.
 

trexman

Dabbler
Joined
Mar 26, 2015
Messages
17
OK, deleting one small dataset solved the problem.
Now I can write to and delete from all the other datasets again too.

Thanks all for your help.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
AFAIK it's just that performance may be impacted when a pool is over 80%. Data integrity should not be in danger just because of this.

Performance is clearly impacted at this level, but if you do some research in the forum, you will see that some people lost their pool once it reached 100%. That mark can lose you your pool and your data.

The 80% mark by itself will not have this effect, but when you reach it, it is clear evidence that you are heading toward the 100% mark. As explained, returning the pool to a lower ratio requires doubling it or more, which takes time, preparation, money, ... If you wait until 98%, you will not have the time needed to avoid the 100% mark. Here, at 84%, the situation is clearly one that must be addressed.

Being over 80% is terrible for performance, but it is also a high risk for availability.
 

trexman

Dabbler
Joined
Mar 26, 2015
Messages
17
I have one final question: is the solution for this to define a quota or reserved space for all datasets?
I see that you can set up a value on a parent dataset that is inherited.

What is the best practice to say: I want to keep at least 100 GiB of free space across all datasets?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Trex,

The first step of the process is the planning phase. You measure how much data you have, estimate how much you will add per year, and give yourself a solid 5 years. You double that once to account for snapshots and unexpected needs, and double it again to target the 50% mark. That gives you the minimum usable size for your pool.
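With purely illustrative numbers: 2 TiB of data today plus 1 TiB per year over 5 years is 7 TiB; doubled for snapshots and surprises gives 14 TiB; doubled again to stay near the 50% mark gives 28 TiB of usable pool space.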

Once online, you monitor the server and disk usage. As soon as you see that you will go over the estimated size, you start thinking about it. How far are you from your mark? What options are available? How much time do you have? ...

To ensure some space remains free no matter what, setting quotas for all datasets and their children is a way to make sure even an unmonitored server will not hit the 100% mark.

Still, it is better to monitor the server on a regular basis, but such a safety net is surely another good option.
 

trexman

Dabbler
Joined
Mar 26, 2015
Messages
17
Hi Heracles,

I agree with what you say about planning and the calculation, but we missed this "planning phase".
To be honest, this FreeNAS is mostly backup storage, so it wouldn't be nice if we lost anything, but it wouldn't be "deadly".

So let me ask more directly: what do I have to change or configure so that the storage stops before it "hits the 100% mark", as you say?

I still don't understand the disk usage and how I can reserve e.g. 100 GiB of free space.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
@trexman , you can assign a quota to a dataset, which would allow you to monitor the space but still give you the ability to relax that quota when things become critical, without having to worry about deleting existing data. Regardless, you will need to increase the size of your pool to make this possible.
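For example (the size here is only an example and must stay above what the dataset already uses):

Code:
# cap a dataset so it cannot fill the pool completely
zfs set quota=9T EonStore_SAN2/Backup-woaz

# relax the quota later if it becomes critical
zfs set quota=10T EonStore_SAN2/Backup-woaz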
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Trex,

One way is to create a new dataset, call it EMPTY, and do not put anything in it. Considering you have about 475G of free space as of now, which is about 15%, I would save at least one third of that space to cap the pool at around 95%. So configure the EMPTY dataset with a reservation of about 150 - 175G. That way, this space will not be offered to any other dataset.
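A minimal sketch of that idea, using the pool name from earlier in the thread and the reservation size suggested above:

Code:
# create an empty placeholder dataset and reserve space nothing else can claim
zfs create EonStore_SAN2/EMPTY
zfs set reservation=150G EonStore_SAN2/EMPTY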

Still, this only reduces the risk; the pool still needs more space. If your pool is made of vDevs created as mirrors, you can create a new mirror and add it to the pool. Be sure NOT to add a single-drive vDev to your pool; you would ruin your redundancy should you do that. Also, adding a vDev of a different type than the ones you already have is possible but not recommended.

Be careful with that server and good luck in your effort to expand the pool,
 