NAS DOWN - Please help.

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
If you're sure you've offloaded dataset2, then you can zfs destroy -rf Vol1/dataset2. This should free up a good chunk of space, and allow you to set the mountpoint option for dataset1.

Please don't use USB for VDEVs. USB controllers can't really handle the usage patterns of ZFS. They're really only capable of dealing with FAT32 partitions.

Hi

Will the command zfs destroy -rf Vol1/dataset2 also destroy the mount point under /Vol ?

I'm only asking because, if it does destroy the mountpoint, I may not be able to create it again, since there is no free space (assuming the worst case, where destroying the dataset does not free up any space).

I am positive I have saved all of the dataset2 data.

many thanks
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Will the command zfs destroy -rf Vol1/dataset2 also destroy the mount point under /Vol ?

Yes. If you're concerned this may not free up space on Vol1, you can run zfs remap Vol1 to kick off a scan for free space, which should be available after the dataset2 destroy. Then try zfs get mountpoint Vol1/dataset1. This should be set to /mnt/Vol1/dataset1. If it is, then try zfs mount Vol1/dataset1.
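Roughly, the sequence would be (a sketch using the dataset names above; double-check the names against zfs list before destroying anything):

# confirm dataset2 is the one to remove
zfs list -r Vol1

# destroy it, then kick off the remap scan
zfs destroy -rf Vol1/dataset2
zfs remap Vol1

# check the mountpoint property; if it reads /mnt/Vol1/dataset1, mount it
zfs get mountpoint Vol1/dataset1
zfs mount Vol1/dataset1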
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Hi

I have now destroyed dataset2 (26 GB).

However, I got an error when trying to run the remap command.

I know that file deletes happen asynchronously. Should I wait a while to see if the space is reclaimed, or is there anything else I can do?

Do I need to "upgrade" the volume first?

Thanks.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yes, please run zfs upgrade Vol1 and then the remap.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Sorry, I meant zpool upgrade Vol1.
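For reference, the full sequence would look like this (pool name as above):

# list pools that can be upgraded, then upgrade Vol1
zpool upgrade
zpool upgrade Vol1

# re-run the remap once the upgrade completes
zfs remap Vol1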
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Hi, no problem. I appreciate all the help.

The upgrade worked and I have started the remap.

No change to the free space yet, though, so I can't mount dataset1.

Many Thx
 


Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
That's the wrong mountpoint. Try zfs set mountpoint="/mnt/Vol1/dataset1" Vol1/dataset1. Then try the mount again.
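Assuming the paths above, that is:

zfs set mountpoint=/mnt/Vol1/dataset1 Vol1/dataset1
zfs mount Vol1/dataset1

# verify it is mounted where expected
zfs get mounted,mountpoint Vol1/dataset1
df -h /mnt/Vol1/dataset1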
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
You Sir are a genius :)

At least I can now copy all of the data off onto the new NAS.

If you have any ideas about reducing the space used (since just removing files does not seem to help), it would be appreciated.

Many Thanks.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
If you have any ideas about reducing the space used (since just removing files does not seem to help), it would be appreciated.

Unfortunately, no. This pool is too far gone to recover space. All that can be done is to destroy it once you've recovered all the files safely off to another location.
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Hi

OK. I hope my stupidity is of help to someone in the future who may be reading this post. I will spend the next few days copying over the data and then provide an update.

Once again many thanks.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
You Sir are a genius :)

At least I can now copy all of the data off onto the new NAS.

If you have any ideas about reducing the space used (since just removing files does not seem to help), it would be appreciated.

Many Thanks.
Personally, I would try to isolate the snapshots that hold the deleted files or folders.
When snapshots exist and you try to recover space by deleting files, you only get the space back once you destroy every snapshot that still references those files.
Even if one snapshot referring to a file is destroyed, other snapshots can still hold that space, so it is a matter of finding which snapshot holds what.
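A quick way to see where snapshot space is going (the snapshot name at the end is only an example):

# list snapshots with how much space each holds exclusively
zfs list -t snapshot -r -o name,used,referenced -s used Vol1

# per-dataset breakdown: how much usage is snapshots vs. live data
zfs list -o space -r Vol1

# once an offender is identified, destroy it to release its space
zfs destroy Vol1/dataset1@some-old-snapshot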
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Hi

I don't believe I have any snapshots of this volume, as I had to install FreeNAS 11 after the FreeNAS 9 USB stick became corrupted, and I had not set up any snapshots.

However, big news.

I deleted a few large files using rm and could see the amount of used space going down, but the free space was still zero.

I ran the remap command again, checked again, and I NOW HAVE SOME FREE SPACE :)

I will now copy the data off and then delete to get down to the 80% level.

This forum is awesome - Thank you all so much.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Hi

I don't believe I have any snapshots of this volume, as I had to install FreeNAS 11 after the FreeNAS 9 USB stick became corrupted, and I had not set up any snapshots.

However, big news.

I deleted a few large files using rm and could see the amount of used space going down, but the free space was still zero.

I ran the remap command again, checked again, and I NOW HAVE SOME FREE SPACE :)

I will now copy the data off and then delete to get down to the 80% level.

This forum is awesome - Thank you all so much.
Snapshots are part of the volume.
Out of curiosity, what does the following command return?
zfs list -t snapshot -r Vol1

From your original post, it seems you have "Compression" set to "OFF". I would suggest enabling LZ4 compression, as it should help with the space issue to some extent.
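Note that enabling compression only affects newly written blocks; existing data stays uncompressed until it is rewritten. Roughly:

zfs set compression=lz4 Vol1

# confirm the setting and watch the achieved ratio over time
zfs get compression,compressratio Vol1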
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Snapshots are part of the volume.
Out of curiosity, what does the following command return?
zfs list -t snapshot -r Vol1

From your original post, it seems you have "Compression" set to "OFF". I would suggest enabling LZ4 compression, as it should help with the space issue to some extent.

Hi

It says "no datasets available" (and I did not delete any snapshots while working through this problem).

 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Hi

It says "no datasets available" (and I did not delete any snapshots while working through this problem).

I can confirm that running the command on a volume or dataset that doesn't contain snapshots returns the "no datasets available" message.
So, to summarize, you don't have any snapshots.
This also means deleting files will allow the allocated space to be recovered. When space is recovered by deleting snapshots, it sometimes takes a while (several minutes) for the space to be entirely freed. Maybe the same type of behavior applies here.
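If you want to watch the space come back, something along these lines works (pool name as above):

# pool-level view: allocated vs. free and percent used
zpool list -o name,size,allocated,free,capacity Vol1

# dataset-level breakdown of where the usage sits
zfs list -o space -r Vol1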
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I think you should still destroy and recreate your pool once you've saved your data. Notice that your pool Vol1 is mounted at /Vol1, not /mnt/Vol1. Unfortunately, this can only be fixed by destroying and recreating the pool.
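You can see this for yourself with a couple of read-only checks (nothing destructive):

# the pool's datasets should normally sit under /mnt
zfs get -r mountpoint Vol1

# FreeNAS normally imports pools with altroot set to /mnt
zpool get altroot Vol1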
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
I think you should still destroy and recreate your pool once you've saved your data. Notice that your pool Vol1 is mounted at /Vol1, not /mnt/Vol1. Unfortunately, this can only be fixed by destroying and recreating the pool.

Noted. I won't reboot for now, until I've cleared down and copied off the data.

I have now trimmed the volume usage down to 89%, and have a popup in the GUI reminding me that the optimum is to stay below 80%.

Is this still the case on large volumes (18 TB RAIDZ2, 12 TB usable)?

Many Thx.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
The warning is because performance gets increasingly worse around 80% full (not to say it doesn’t degrade earlier, it does) as ZFS starts running out of room for sequential writes. You may get a little further with very large pools, and, really, once you are at 80% you’re so full you want to do something about it. I doubled my pool space when I hit 75%.
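Alongside the percent-used figure, the pool's fragmentation number is worth watching, since it tends to climb as a pool fills and is part of why writes slow down:

zpool list -o name,size,allocated,free,capacity,fragmentation Vol1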
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
The warning is because performance gets increasingly worse around 80% full (not to say it doesn’t degrade earlier, it does) as ZFS starts running out of room for sequential writes. You may get a little further with very large pools, and, really, once you are at 80% you’re so full you want to do something about it. I doubled my pool space when I hit 75%.

WOW, noted. I will keep clearing down for now. Are you running RAIDZ2?

Using 75% as the limit means I get an effective capacity of about 50% of the total raw space of the drives (on the basis that I have 6 x 3 TB RED CMR drives), because I lose 2 drives to redundancy and can only fill the remaining 4 to 75%. I'm not complaining, just noting it for anyone reading this.
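Spelled out with those numbers (ignoring filesystem overhead and TB/TiB differences):

6 x 3 TB = 18 TB raw
RAIDZ2 gives up 2 drives to parity: 4 x 3 TB = 12 TB usable
Filling to 75%: 12 TB x 0.75 = 9 TB of practical capacity
9 TB / 18 TB = 50% of the raw space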

Many Thanks,
 