Sergej31231 · Dabbler · Joined Apr 26, 2017 · Messages: 14
The way it's supposed to work is that individual blocks have checksums; each block with the same checksum is stored on disk only once.
Block sizes in ZFS are, I think, 128 KB by default, configurable up to 1 MB depending on dataset/zpool settings.
Thus duplicating a file shouldn't use any extra space. Truncating the end of one of those files shouldn't use more than one extra block of space, and modifying one block in one of those files likewise shouldn't cost more than one extra block.
Of course, used space will go up, but available space will not go down (by much).
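The block-level accounting described above can be sketched in Python. This is a simplified model, not ZFS itself: it assumes a fixed 128 KiB block size and SHA-256 checksums (ZFS records are variable-sized in practice), and just counts how many unique blocks would survive dedup.

```python
import hashlib
import os

BLOCK_SIZE = 128 * 1024  # assume a fixed 128 KiB recordsize for simplicity


def block_checksums(data: bytes) -> list:
    """Split data into fixed-size blocks and checksum each one."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def unique_blocks(*files) -> int:
    """Blocks actually stored once dedup collapses identical checksums."""
    seen = set()
    for data in files:
        seen.update(block_checksums(data))
    return len(seen)


# 17 MiB of random data -> 136 distinct 128 KiB blocks
file1 = os.urandom(17 * 1024 * 1024)

# an exact copy adds zero new blocks under dedup
copy = bytes(file1)

# flipping one byte changes exactly one block's checksum
modified = file1[:500_000] + bytes([file1[500_000] ^ 0xFF]) + file1[500_001:]
```

With this model, `unique_blocks(file1, copy)` stays at 136, while `unique_blocks(file1, modified)` is 137: duplicating the file is free, and a one-byte edit costs exactly one extra block.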
Other than that, I have no experience with dedup and can't tell you how good or bad the dedup-ratio numbers are.
And then those blocks get compressed, leading to variable block sizes.
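The variable-size effect of compression can be shown with a quick sketch using zlib as a stand-in for ZFS's compressors: two blocks of the same logical size end up occupying very different amounts of space depending on their content.

```python
import os
import zlib

BLOCK_SIZE = 128 * 1024

# two 128 KiB blocks with very different compressibility
compressible = b"A" * BLOCK_SIZE      # highly repetitive -> shrinks to almost nothing
incompressible = os.urandom(BLOCK_SIZE)  # random -> barely compresses at all

sizes = [len(zlib.compress(b)) for b in (compressible, incompressible)]
```

The repetitive block compresses to well under 1 KB, while the random block stays close to its original 128 KiB, so the on-disk block sizes vary even though the logical block size is fixed.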
OK, I tried an experiment as follows:
file1 = 17.136 MB
file2 = exact copy of file1 + "abcd" characters appended
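That experiment can be modeled the same way: with a fixed 128 KiB block size (an assumption, since the real recordsize depends on the dataset), appending four bytes to a ~17.136 MB file only changes the partially filled last block, so the two files should share all blocks but one.

```python
import hashlib
import os

BLOCK = 128 * 1024  # assumed fixed block size


def checksums(data: bytes) -> list:
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


file1 = os.urandom(17_136_000)   # ~17.136 MB, like the experiment above
file2 = file1 + b"abcd"          # exact copy with "abcd" appended

c1, c2 = checksums(file1), checksums(file2)
unique_total = len(set(c1) | set(c2))
```

All blocks except the last are byte-identical, so `unique_total` comes out to one more than either file's block count: dedup stores the shared blocks once and only the modified tail block twice.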
Now you seem to be right :)