NAS DOWN - Please help.

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Hi

My FreeNAS 9 NAS (HP MicroServer, 6 drives) has been rock solid for years, but yesterday I shut it down to move it, and on rebooting I found corruption on the boot USB drive: the /usr folder is gone.

All of my data drives look OK from the BIOS. I would appreciate any advice on whether I can simply reinstall FreeNAS on a new USB stick; I believe it should be able to find and import my ZFS volume. If this is the case, should I be reinstalling FreeNAS 9 or going to the current version?

Any help would be appreciated - Many Thx
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
whether I can simply reinstall FreeNAS on a new USB stick; I believe it should be able to find and import my ZFS volume
That should work.

should I be reinstalling FreeNAS 9 or going to the current version?
If you have jails, it will be a little more complicated, but if it was just serving files, a build to the latest version should be pretty straightforward.
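
If the GUI auto-import doesn't find the pool for some reason, a quick sanity check from a shell would look something like the below (the pool name is just a placeholder; substitute whatever yours is called). The actual import is best done through the GUI so the middleware knows about the pool.

Code:
# list pools that are visible and available for import
zpool import

# importing from the shell would then be (GUI import is preferred on FreeNAS)
zpool import tank
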
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Thank you.

Some days nothing goes right :)

The volume has two datasets (dataset1 and dataset2) and was 99% full before the crash. I was in the middle of building a second NAS when this happened. The server also had only 4GB of RAM, but it was only serving an SMB share and nothing else. I have ordered another 8GB DIMM to bring the total to 16GB.

So today I first upgraded the RAM to 12GB (I had two 8GB DIMMs for the new NAS, but one did not work). I then reinstalled the latest version of FreeNAS and reimported the volume successfully.

However, I believe I damaged one of the SATA cables, as one drive is reporting errors, so for the moment I have a degraded volume (I'm running RAID-Z2, so no data loss). I'm just awaiting a new cable to fix that.

[screenshot]


The remaining problem is that on reboot dataset1 was not mounted, because the mount point could not be created since the volume is full. I have spent some time today removing data from dataset2, but deleting files does not seem to reclaim any space. Does anyone know why this might be?

[screenshot]


Once I can mount the main dataset, I can remove data to get down to the 80% usage limit recommended for FreeNAS.
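
For anyone following along, the usual way to break down where the space has gone would be something like this (Vol1 stands in for the pool name here):

Code:
# per-dataset breakdown of space used by data, snapshots and reservations
zfs list -o space -r Vol1

# raw pool capacity and allocation
zpool list Vol1
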

Any help would be appreciated.

Thank you.
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Bump :)

I have made some progress, but I still have the issue that the volume is 100% full, and although I can delete some data from the dataset, the free space does not increase by even a single block.

[screenshot]


I don't believe I have any snapshots

[screenshot]


or recycle bins

Any help would be appreciated.

Many Thanks.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Sadly I have rebooted multiple times but no improvement.
Next time, at 90% full... Stop filling...
Nothing I can help you with now, but it's kind of asking for trouble if you fill a ZFS volume to 99%...
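
For what it's worth, once the pool is usable again, a dataset quota is one way to make sure it never gets this full; something along these lines (the size and pool name are just examples, pick a value well below pool capacity):

Code:
# cap the dataset so ZFS always keeps some headroom
zfs set quota=3T Vol1/dataset1
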
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
the Volume is 100% full
Oof. If you're truly 100% you're in a pickle.

Try truncating and then deleting a single file via:

Code:
# cat /dev/null > /file/to/delete
# rm /file/to/delete


Do you have any snapshots showing via zfs list -t snapshot in the command line? (Outside of the boot pool of course.)
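
For reference, the full form limited to the data pool would be something along these lines (Vol1 is a stand-in for your pool name):

Code:
# list all snapshots on the data pool only
zfs list -t snapshot -r Vol1
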
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Since your pool is full, there's really only one remaining roll of the dice: unmount your pool, and see if you can mount it read-only. This may allow you to access your datasets to evacuate data off the pool.

Code:
zpool export -f Vol1
zpool import -o readonly="on" Vol1


ZFS is really unhappy being that full, as there's no space for copy-on-write housekeeping for things like atime.
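
Once it's imported read-only, getting the data off is plain file-level copying; a rough sketch with rsync (the destination path is just an example for wherever your other media is mounted):

Code:
# copy a dataset's contents off to other media, preserving attributes
rsync -avh --progress /mnt/Vol1/dataset2/ /mnt/backupdisk/dataset2/
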
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Hi

Truncating the file to zero does not seem to work, but removing it does (though it does not change the free space).

Oof. If you're truly 100% you're in a pickle.

Try truncating and then deleting a single file via:

Code:
# cat /dev/null > /file/to/delete
# rm /file/to/delete


Do you have any snapshots showing via zfs list -t snapshot in the command line? (Outside of the boot pool of course.)

I don't have any snapshots apart from the boot pool.

[screenshot]
 


lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Since your pool is full, there's really only one remaining roll of the dice: unmount your pool, and see if you can mount it read-only. This may allow you to access your datasets to evacuate data off the pool.

Code:
zpool export -f Vol1
zpool import -o readonly="on" Vol1


ZFS is really unhappy being that full, as there's no space for copy-on-write housekeeping for things like atime.
Hi

I mounted the volume read-only successfully, but of course I can then no longer remove files from the dataset.

[screenshot]
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Deleting files isn't possible in a read-only mount. Did dataset1 mount? That was the point of trying a read-only mount. The only thing you can do now is get files off to other media.

Once your data is safely offloaded, you can destroy the pool, recreate it, and put data back below the 80% limit.
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
I have also replaced the failed SATA cable, so the volume is no longer degraded, and ensured that 16GB of RAM is available.

So my issue now is not that I cannot remove files, but that the space from the removed files is not reclaimed.

I did notice that I get lots of errors when starting the server, as follows, which I assume are related to the volume being full:

[screenshot]


[screenshot]


[screenshot]


Thank you for all the help so far.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
@HoneyBadger, that won't work, as there's no more space for copy-on-write delete. Evacuating this pool is the only option left before blowing it away.
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Deleting files isn't possible in a read-only mount. Did dataset1 mount? That was the point of trying a read-only mount. The only thing you can do now is get files off to other media.

Once your data is safely offloaded, you can destroy the pool, recreate it, and put data back below the 80% limit.

Hi, dataset1 did not mount because the mount point did not exist. Dataset2 did mount.

I exported the volume again and tried to import it, but now it's taking a long time!

[screenshot]


It still has not completed.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
@HoneyBadger, that won't work, as there's no more space for copy-on-write delete. Evacuating this pool is the only option left before blowing it away.
Another option would be adding a vdev to the pool (even if it's just a mirror of 250GB hard drives) and moving the data after that...
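
A sketch of what that would look like (device names are examples only; note that with RAIDZ vdevs already in the pool you won't be able to remove the added vdev again later, so treat it as permanent or as a stepping stone before rebuilding):

Code:
# add a small mirror vdev to buy some free space
zpool add Vol1 mirror ada6 ada7
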
 

lessdrama

Dabbler
Joined
Aug 3, 2020
Messages
21
Another option would be adding a vdev to the pool (even if it's just a mirror of 250GB hard drives) and moving the data after that...

Hi, I have now backed up dataset2 onto separate media, so I can do anything I want with that dataset, including removing it if needed (although I'm not sure that will actually free up the space either).

I don't have any free SATA ports left. Would adding a vdev of two drives connected via USB be acceptable?

Finally, 99% of the data is consumed by dataset1, which I cannot mount, as the mount point does not exist.

Would it be possible to unmount dataset2 and mount dataset1 on its mount point? Then I could copy everything to the new NAS that I was setting up.

[screenshot]


I have not lost any data (yet), but I do feel like I'm running out of options.

(Yes, I have the data in other places, but I would prefer not to have to rebuild it if possible.)

Many thanks.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
If you're sure you've offloaded dataset2, then you can zfs destroy -rf Vol1/dataset2. This should free up a good chunk of space, and allow you to set the mountpoint option for dataset1.
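
Roughly, assuming the default mount layout under /mnt/Vol1, the sequence would be something like:

Code:
# destroy dataset2 only after confirming the backup is good
zfs destroy -rf Vol1/dataset2

# then give dataset1 its mountpoint back and mount it
zfs set mountpoint=/mnt/Vol1/dataset1 Vol1/dataset1
zfs mount Vol1/dataset1
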

Please don't use USB for VDEVs. USB controllers can't really handle the usage patterns of ZFS. They're really only capable of dealing with FAT32 partitions.
 