Chin-Fah HEOH
Dabbler
Joined: Dec 14, 2016
Messages: 22
Hello
Two years ago, at the OpenZFS Summit, Matt Ahrens presented this. Video --> https://www.youtube.com/watch?v=PYxFDBgxFS8. He mentioned that his employer, Delphix, had not implemented dedupe, but that he would like interested parties to take up his idea of using a log instead of the dedupe hash table.
I was wondering whether any company or person has taken up Matt's idea and fixed the large memory footprint required by ZFS dedupe. Roughly 5GB of RAM per 1TB of deduped data is not very good. Imagine trying to dedupe 300TB usable: that would mean about 1.5TB of RAM!
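For what it's worth, here is the back-of-the-envelope arithmetic behind that number as a quick Python sketch. The 5GB-per-TB figure is the commonly quoted rule of thumb, not an exact ZFS constant; real DDT size depends on record size and how much of the pool actually dedupes:

```python
# Back-of-the-envelope DDT RAM estimate, using the common rule of
# thumb of ~5 GB of dedup table per 1 TB of pool data. This is an
# assumption for illustration, not a value reported by ZFS itself.
GB_PER_TB = 5

def ddt_ram_gb(pool_tb: float) -> float:
    """Rough RAM (in GB) needed to keep the dedup table resident."""
    return pool_tb * GB_PER_TB

for tb in (1, 10, 300):
    gb = ddt_ram_gb(tb)
    print(f"{tb:>4} TB pool -> ~{gb:,.0f} GB RAM (~{gb / 1024:.2f} TB)")

# 300 TB -> ~1,500 GB, i.e. roughly 1.5 TB of RAM just for the DDT.
```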
If you have more info, please share.
Thank you
/CF