
Wipe Empty Space

#1 (joined Feb 8, 2012)
I'm running FreeNAS 8, ZFS, 2x1TB mirror RAID. I understand that it's not possible to securely delete files and plan to use encryption going forward as a quasi-replacement.

Is there a way to wipe free space on my disks so I can remove traces of historical data?

As a work-around, I suppose I could just write a dummy file over and over until the disk is full, then erase it?
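If anyone does try that, a rough sketch might look like this (the mount point /mnt/tank and the file name are hypothetical placeholders, and the destructive commands are left commented out). Note that on ZFS this gives no guarantee: copy-on-write allocation and reserved space mean some freed blocks may never be touched.

```shell
# Hypothetical mount point; adjust to your pool. This is a sketch, not a
# guaranteed wipe: ZFS's copy-on-write allocation and reserved space mean
# some previously freed blocks may never be overwritten.
FILL=/mnt/tank/fillfile
# Write zeros until the pool refuses further writes, then delete the file:
# dd if=/dev/zero of="$FILL" bs=1M || true
# sync
# rm -f "$FILL"
echo "would fill: $FILL"
```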
 
#2 (joined Mar 25, 2012)
In the GUI if you view disks there is a button you can click to "wipe disk".

Edit: ignore this if you have data. I just realized the "free space" part. :p
 
#3 (joined Feb 8, 2012)
In the GUI if you view disks there is a button you can click to "wipe disk".

Edit: ignore this if you have data. I just realized the "free space" part. :p
I do have data on the disk. Another work-around would presumably be to detach one disk, repartition and format it, then re-add it to the RAID, and then repeat with the other disk.
 
#4 (joined Apr 19, 2015)
Old thread about wiping ZFS empty space, but:

Has anybody thought about creating a new ZFS dataset (on the same volume, if you have the space, or on a new temporary volume), taking a snapshot, using snapshot send/recv to duplicate the dataset, and then, after verifying all data is duplicated, deleting the original dataset?

If using the same volume, you now have a new dataset starting with fresh empty space. If using a temporary volume to move the dataset, you would then recreate your dataset on the original volume, send/recv back to it, and get rid of the temporary volume.

This should effectively get rid of any empty space containing deleted data, and you now have a new dataset with all the original data but fresh, blank empty space.
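The steps above could be sketched as follows, with hypothetical names (pool tank, original dataset data, copy data_new); the zfs commands themselves are left commented out since they act on a live pool:

```shell
# Hypothetical names; substitute your own pool and dataset.
POOL=tank
SRC=data
DST=data_new
SNAP="${POOL}/${SRC}@migrate"
echo "$SNAP"
# zfs snapshot -r "$SNAP"
# zfs send -R "$SNAP" | zfs recv "${POOL}/${DST}"
# ...verify the copy is complete, then:
# zfs destroy -r "${POOL}/${SRC}"
```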

Anybody have thoughts on this? I have notes for a procedure to do this, if anybody wants them.

Also, a prior message suggested dismounting a disk, formatting it, and resilvering it back into the RAID, but that would NOT work: the ZFS RAID has all the information, and the newly resilvered disk would just be right back where it was when you started.
 
#5 (joined Nov 6, 2013)
Datasets are not HDDs, so you can't clear the free space on the disk. This whole thread is dumb; no one would ever do this. Just use encryption if you care that much.
 
#6 (joined Apr 19, 2015)
The title of this thread is “Wipe Empty Space”, and it is NOT referring to disks; it could be called “Wipe Dataset Empty Space” for that matter. It is OK for someone without encryption to ask about the empty space and its relationship to deleted data, and whether something can or cannot be done about it.

Even the original poster of this thread mentions they plan to use encryption moving forward, but wanted to know what could be done with the existing dataset.

A ZFS dataset is very complicated because ZFS allows for snapshots, copy-on-write, multiple storage pools, policy-based allocation, clustering, remote replication, and so on.

ZFS is a copy-on-write filesystem, so even if you were to write random data to the pool, it won't overwrite the old deleted data unless you were to write data down to the last byte / inode. New versions of FreeNAS keep 3% of space free to prevent you from being locked out of the volume (in old FreeNAS versions you could fill 100% of a volume and then lose access to manage it). So even if you wrote data down to the last 3% of the dataset, you still would not wipe all the empty space, and you might dork your volume.

Therefore, the dataset's empty space has many relationships to the function of the system, making management of the empty space almost impossible. Unless the developers wanted a dataset empty-space management feature (which they do NOT), the dataset's empty space is like the unknown universe, to the end user at least.

To me, the question was about whether something can be done so that the historical deleted data cannot be recovered from the dataset's empty space, much like the hundreds of utilities for MS Windows (even open-source projects) that write over empty space in operating systems like Windows. (Of course, Windows has direct disk access and is nothing like a ZFS dataset, not even close.) Heck, even FreeNAS has wipe utilities for the actual disks, but again, this thread is about the empty space in a dataset and its relationship to any previously deleted data.

I got to this thread by accident, as I was following a discussion on deleted datasets and whether they could be recovered. From all the threads, it appeared that if a dataset is deleted and the system runs for a certain period of time, the deleted dataset and its data could not be recovered. There were people trying very complicated procedures to recover data: rolling back the pool's txg (using scripts), using an older uberblock, and attempting to roll back a previous transaction log. Every attempt to recover the deleted dataset or its data seemed to fail. I only mention this because, if someone actually copied their data to a new dataset and deleted the old dataset, it is more than likely that any historical deleted data (in the original dataset) could NOT be recovered. I am not saying this writes over all the empty space, just that the old deleted data appears to be unrecoverable.
 

#7: Arwen (FreeNAS Expert, joined May 17, 2014)
...
Also, a prior message suggested dismounting a disk, formatting it, and resilvering it back into the RAID, but that would NOT work: the ZFS RAID has all the information, and the newly resilvered disk would just be right back where it was when you started.
Uh, why would this NOT work?

ZFS only restores data that is IN USE, so the newly resilvered disk has exactly ZERO empty-space data from before. Keep in mind that most "format" processes either work at such a low level that all the bits are rewritten empty, or, after the format, a media test (aka write pattern, verify pattern) is performed.

If this is for security reasons, @SweetAndLow's suggestion of encryption is better.

Otherwise, simply keep writing to the pool. It's likely the old data will eventually be overwritten. Further, ZFS is such an uncommon filesystem that recovery by casual hackers or data thieves is unlikely.
 
#8 (joined Apr 19, 2015)
Arwen:

YOU ARE PROBABLY CORRECT, e.g. removing a disk from the ZFS array, wiping it, and then resilvering it back: the empty space should then in fact be empty. (By "wipe" I mean that every block on the disk is written over, possibly more than once, before resilvering. Wiping a disk is a whole thread by itself: random writes, DoD standards, disk type, etc.)

On ZFS and resilvering: the FreeNAS literature (the online manual) mentions that ZFS only copies the blocks that are in use during a resilver, as opposed to most hardware RAIDs, which have to copy every block. That makes ZFS faster, and the resilver is also interruptible, with ZFS resuming where it left off should an interruption occur. Great benefits for ZFS, for sure, and they support your reasons for asking why removing / wiping / resilvering would not work. Again, I would say you are probably, or even certainly, correct.

I was thrown off by discussions of the development of ZFS itself (new feature flags, commands, etc.) and whether that development affects any part of the resilver process in ways the general notes would not state, specifically what gets resilvered, block-wise, with the latest features. If the general notes on ZFS are taken at face value, then your statement is correct: a resilvered disk would have fresh empty space and would only bring back the blocks that are in use.

I even read recently that the ZFS developers want to add a feature to allow recovering deleted datasets. I am not sure how they would even start such a project, but there are discussions of it as a future enhancement to ZFS. The function would probably have to be enabled before the dataset was deleted, kind of like a dataset snapshot, if you will. Just saying, ZFS is very complicated.

My suggestion here, for making the empty space unrecoverable on an existing NON-encrypted dataset, was to create a new dataset, use ZFS send/receive or rsync to copy the data, and then delete the original dataset. This does not write over the empty space, but it makes any kind of recovery of the original deleted dataset almost impossible, and, as you mention, the longer the system runs, the more the empty space gets written over. (I could tell by that statement that you know a lot about this subject.)

From what I have read, if you delete a file from a ZFS dataset, the file most likely can be recovered, as opposed to making a new dataset, moving the data, and then deleting the original dataset, which makes recovery near impossible. At least, I have not read of anybody able to recover data from a deleted dataset, and I have read some pretty complicated attempts at doing so. (Fortunately, I have never needed such a method, but there have been users who accidentally deleted a dataset they wanted back and spent much time trying to recover it. Most of us have redundant backups, etc.)

Also, I am not sure, but I would guess that ZFS send/recv likewise only copies in-use data to the new dataset. ZFS send/receive is the fastest way I have found to transfer data between datasets.

RE: Encryption. Yes, even the original poster of this thread knows that encryption is the best method for protecting data (the poster even says so), and we all agree. But the poster wanted to know the status of the empty space and the historical deleted data, and how to ensure it cannot be recovered on a NON-encrypted dataset. I myself have been curious about this for some time.

Heck, data erasure / clearing / wiping has been a serious issue for people who store data. For operating systems that write directly to disk blocks, there are hundreds of utilities, and the DoD even lists many random-write methods acceptable for disposing of hard drives. SSD drives are another animal. Of course, ZFS datasets are totally different, and you cannot just write over the physical blocks directly, which is the reason for this thread. I realize most people do not care, but when you get into the weeds, this subject can be quite challenging.

In closing, per your comment:
Detaching, wiping, and resilvering each disk will probably leave all the empty space truly empty, if ZFS performs as stated in the FreeNAS manual, meaning the resilver only brings back blocks that are in use, reducing the time to rebuild the vdev and thereby leaving the empty space truly empty.

It is a bit risky to be detaching disks and resilvering, and I am not encouraging anybody to do this. You do not want to lose your array should another disk or two fail while doing such a procedure.

This is a conversation about the status of the empty space in a NON-encrypted ZFS dataset where files have been deleted: can those files be recovered, and what can be done so that the deleted files in the empty space cannot be recovered? In my opinion, a very complicated subject.
 

#10: garm (FreeNAS Expert, joined Aug 19, 2017)
even if you were to write random data to the pool it won't overwrite the old deleted data
Of course it will, maybe not with crypto-level certainty, but any released block is available to new transaction groups.
 
#11 (joined Apr 19, 2015)
A single pass is enough to erase a disk: https://en.m.wikipedia.org/wiki/Data_erasure#Number_of_overwrites_needed

Yes, for modern platter drives (not flash or SSD), roughly 2014 or later, a single-pass overwrite is widely considered enough to permanently erase the hard drive (modern drives have a "verify pass" that scans all sectors against what should be there). Special software is used to meet certain standards that may require verification, and even more than one write, but recent studies do show that a single-pass write on modern drives is a secure erase.
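The single-pass idea can be demonstrated safely on a scratch file instead of a real device (for an actual disk you would point dd at the device node, which is destructive; everything here is purely illustrative):

```shell
# Fill a 1 MiB scratch file with random data, overwrite it with one pass
# of zeros, then count any non-zero bytes left (expect 0). For a real
# drive you would target the device node instead, destroying its data.
f=$(mktemp)
head -c 1048576 /dev/urandom > "$f"
dd if=/dev/zero of="$f" bs=1048576 count=1 conv=notrunc 2>/dev/null
leftover=$(tr -d '\0' < "$f" | wc -c | tr -d ' ')
rm -f "$f"
echo "non-zero bytes left: $leftover"
```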
 
#12 (joined Apr 19, 2015)
Of course it will, maybe not with crypto-level certainty, but any released block is available to new transaction groups.
You left out the whole paragraph where I do mention that the empty space (e.g. released blocks) is available to be overwritten, but that you would have to overwrite all free blocks to ensure the released block where the deleted data used to be no longer contains the old deleted data. Specifically, I mentioned “unless you were to write data down to the last byte / inode”. I meant writing new data, after you deleted data, to every block, down to the last freely released block / inode, i.e. 100% of the pool, which is of course impossible, and nobody would do it.

FreeNAS now prevents writing to more than 97% of the blocks, so you do not lose access or dork your volume; the old versions of FreeNAS would let you write to 100% of the blocks, and then you would lose control of your pool. So even if you executed a script to write to the freed blocks, you could only overwrite 97% of them, and this is not a procedure anybody should attempt, as the outcome is unknown and not recommended. Even if you wrote fresh data over 97% of the blocks, 3% would still remain that could be freed blocks containing deleted data.
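As a back-of-the-envelope sketch of that limit, using a hypothetical 1 TB pool and the 3% reserve described above:

```shell
# Hypothetical 1 TB (decimal) pool with a 3% reserve: even a best-case
# free-space fill can only ever touch 97% of the capacity.
POOL_BYTES=$((1000 * 1000 * 1000 * 1000))
RESERVE_PCT=3
WRITABLE=$(( POOL_BYTES * (100 - RESERVE_PCT) / 100 ))
echo "writable bytes: $WRITABLE"
```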

I was trying to reference the status of the dataset's empty space (released blocks) in a NON-encrypted ZFS dataset where files have been deleted (no snapshot involved): can the deleted data be recovered from the freed blocks, and what can be done so that the deleted files in the freed blocks cannot be recovered? Stated another way: if one does not overwrite the pool after deleting a file, and there is no new write activity, is it possible to recover the deleted data from the freed / released blocks in ZFS?

There are threads with procedures for recovering deleted data from the freed blocks, i.e. very complicated procedures and scripts: rolling back the pool's txg (using scripts), using an older uberblock, and attempting to roll back a previous transaction log. (I have not attempted such a recovery myself, but it is what got me curious about this thread and about what the status of a freed block in a NON-encrypted ZFS dataset really is.) I guess I consider data very important and want to know: if I delete my data on a NON-encrypted ZFS dataset, can it be recovered? If the data can be recovered, then what can be done to make it unrecoverable?
 

#13: Ericloewe (Moderator, joined Feb 15, 2014)
What's the concern here? Someone with access to your server? You have bigger problems. Someone rummaging through your trash picking up a hard drive? Use full-disk encryption.
 

#14: garm (FreeNAS Expert, joined Aug 19, 2017)
I guess I consider data very important
So do I; only I don't consider deleting data to in any way make it unrecoverable. I make data unrecoverable with file-level encryption. You delete data to make space for new data; those are two very different workflows. Yet you haven't proven your point: is recovery probable if an attacker manages to get a single drive from a ZFS RAID pool?
 
#15 (joined Apr 19, 2015)
So do I; only I don't consider deleting data to in any way make it unrecoverable. I make data unrecoverable with file-level encryption. You delete data to make space for new data; those are two very different workflows. Yet you haven't proven your point: is recovery probable if an attacker manages to get a single drive from a ZFS RAID pool?
Here is what the question is (not the question you are asking, and nothing about physical disks):
Take a NON-encrypted ZFS dataset and delete some data (this releases the blocks for overwriting by new transaction groups). Now do not write any new information to the dataset or volume. Can the deleted data be recovered, and if so, is there anything that can be done to ensure the released blocks are erased? (ZFS datasets are not physical disks, and we are not discussing physical disks.)

This thread has nothing to do with data recovery from a single drive removed from a ZFS RAID pool (a truly impossible task; nothing being discussed here relates to it, nor do I think such a thing possible in any way). You would have to read this thread's details to understand why we talked about detaching and reattaching a drive from the pool: it was about making previously released ZFS dataset block space empty. That is what Arwen was referring to and what I was responding to. (We were not discussing recovering data from a physical disk of the array, and nobody is asking whether data can be recovered from a removed disk of an array.)

This thread is about the empty space / released blocks of a NON-encrypted ZFS dataset (ZFS does not expose physical disk blocks directly, which is part of why it is so efficient and special), and about whether the released block in fact contains the deleted data until the block is overwritten (the block is free to be written over, but until that happens it may still contain the deleted data).

The original poster (not me, but I have been curious about this) wanted to know whether data in released ZFS dataset blocks can be recovered, or whether that space needs to be wiped (we're not talking about physical disks; we are talking about ZFS datasets and their blocks). Your responses are taking this thread to ideas not being discussed, which is probably why you think it does not make sense.

Others keep talking about full disk encryption, and that goes without saying; we all know that. Look at the first post in this thread: the poster states they are moving their data to full disk encryption, but wanted to know whether, after deleting the original data on the NON-encrypted dataset, the deleted data can be recovered from the original dataset (this references the blocks freed for overwriting after the data was deleted from the dataset).

Below is another way to state this:
What is the status of the dataset's empty space (released blocks) in a NON-encrypted ZFS dataset where files have been deleted (no snapshot involved)? Can the deleted data be recovered from the freed blocks, and what can be done so that the deleted files in the freed blocks cannot be recovered? Stated another way: if one does not overwrite the pool after deleting a file, and there is no new write activity, is it possible to recover the deleted data from the freed / released blocks of a ZFS dataset, and if so, can anything be done to make the released blocks truly empty?
 

#16: garm (FreeNAS Expert, joined Aug 19, 2017)
This thread has nothing to do with data recovery from a single drive removed from a ZFS RAID pool
Not true.. the OP asks to wipe free space on disk..
Others keep talking about full disk encryption
Ya, but I didn't..

what can be done so that the deleted files, in the freed blocks, cannot be recovered?
Encrypt the file..
is it possible to recover the deleted data
As the blocks are intact, I would say it's possible, just not probable.. an attacker needs access to your entire system, and for most users that means they are in custody and have bigger headaches in life..
 

#17: Arwen (FreeNAS Expert, joined May 17, 2014)
...
It is a bit risky to be detaching disks and resilvering, and I am not encouraging anybody to do this. You do not want to lose your array should another disk or two fail while doing such a procedure.
...
If someone is doing this on a regular basis, then they should have a free disk slot and use ZFS replace with a new disk. This is a new type of replace that old hardware RAID did not have, and one I had wanted since late last century, because I WAS bitten by a RAID-5 replacement that found another bad disk during re-sync. (Later firmware of that hardware disk array started to include weekly scrubs of the RAID-5 disks.)

Here is the ZFS replace command I refer to:
Code:
zpool replace [-f] [-o property=value] pool device [new_device]

If you remove the old disk first, then you do lose a layer of redundancy. But if you install the replacement disk first, there is no loss of redundancy.

Basically, ZFS will use the source disk for any blocks it can, and use any available method (RAID-Zx parity, mirror disk(s), or extra ZFS copies) for blocks that are unrecoverable on the source disk. This maintains as complete redundancy as possible during the disk replacement, meaning that if you are replacing a known-good disk with a recently wiped disk, the only potential loss of data is from finding a previously unknown bad block. But that is what scrubs are for: finding the bad data BEFORE you lose redundancy.
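For reference, the replace-first flow might look like this; the pool and device names are placeholders, and the zpool commands are left commented out since they act on real hardware:

```shell
# Placeholder names: pool "tank", outgoing disk da3, freshly wiped da4.
POOL=tank
OLD=da3
NEW=da4
# With the new disk installed alongside the old one:
#   zpool replace "$POOL" "$OLD" "$NEW"
#   zpool status "$POOL"   # watch the resilver; the old disk detaches when done
echo "zpool replace $POOL $OLD $NEW"
```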
 

#18: Arwen (FreeNAS Expert, joined May 17, 2014)
I have considered requesting a new ZFS feature on this subject: basically, a settable dataset / zvol parameter for over-writing data that is deleted, potentially multiple times. Turning the feature "=on" would mean 1 over-write; "=5" would over-write deleted data 5 times. I'd have it as a dataset / zvol feature (but with a feature flag turned on at the pool level). This allows datasets to have different uses: some with the feature off, others with it on, and some with it set to 5 or 10.

However, I have since dropped the idea for multiple reasons:
  • SSDs use new blocks for writes, so it would be completely unneeded for SSDs; any recycled block would be wiped by the firmware before re-use (barring firmware bugs, of course...).
  • Native ZFS dataset / zvol encryption removes the need.
  • Some hard disks (like my Seagate Archive shingled drive) move data around so much during normal writes that deleted data can get so fragmented it's practically useless.
Of course, un-encrypted normal HDDs are still a target for that feature, and in some ways it would not be that hard: at present, ZFS includes something called async delete, which could be modified to include an over-write step. But I won't make that feature request, as I personally don't need it.
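At the file level, the intent of such a feature could be sketched like this on a scratch file. On a copy-on-write filesystem like ZFS, the zeros may land on new blocks rather than the old ones, which is exactly why the feature would need to live inside ZFS itself:

```shell
# Overwrite a file in place with zeros before unlinking it. On ZFS the
# overwrite may be allocated to NEW blocks (copy-on-write), so this only
# illustrates the intent of the proposed feature, not a real secure delete.
f=$(mktemp)
printf 'secret data' > "$f"
size=$(wc -c < "$f" | tr -d ' ')
dd if=/dev/zero of="$f" bs="$size" count=1 conv=notrunc 2>/dev/null
remaining=$(tr -d '\0' < "$f" | wc -c | tr -d ' ')
rm -f "$f"
echo "non-zero bytes before unlink: $remaining"
```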
 