inman.turbo
Contributor
- Joined
- Aug 27, 2019
- Messages
- 149
Hi everyone, I am new here and I don't usually do this forum thing. I would like to do more, but I can't ever seem to find the time. However, I have no choice in this case, because I believe this is the only place I can find the answer I need.
I am setting up a small group of VMs on a 14th-gen Dell production system running Red Hat with QEMU-KVM. One of the VMs has FreeNAS installed. FreeNAS controls 18432 MiB of RAM, an LSI HBA, and a single ~2 TB SATA DC SSD storage pool. Most of my VMs are qcow2 images on a FreeNAS NFS share on a single dataset (sync=always, inherits lz4, 2.28x compression). Others are on an LVM on iSCSI controlled by the host from another SAN (their storage is out of my control). A couple, though, are on their own iSCSI block devices from my FreeNAS.
Now I have a ticket for a VM that will host web applications over several domains. The ticket is for a 20 GB zvol (iSCSI extent). The image is supposed to be dd'd over to the target by the end of the day. The current true size of the image is less than 5 GB, and it is likely the VM will never even use half of the 20 GB. Each domain will have the exact same web application with small differences: all the same except images, company names, logos, etc., and a different database (or the same database with different data).
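For what it's worth, the copy itself is the easy part; something along these lines is what I'd run, assuming the image is a raw file and the zvol shows up as a block device on the host (web.img and tank/webvm are just placeholder names, not the real ones):

```shell
# Copy the raw VM image onto the zvol's block device.
# web.img and /dev/zvol/tank/webvm are placeholder names.
dd if=web.img of=/dev/zvol/tank/webvm bs=1M status=progress conv=fsync
```

If it were being copied over the iSCSI LUN from the initiator side instead, the `of=` target would just be the mapped /dev/sdX device rather than the zvol path.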
Now, I have worked with this client before on his app and dev servers. His software guys do a lot of copying and moving around of the same files, and he is always asking for better read and write performance for copying around and uploading small bits of the same tiny text files repeatedly. I have offered to help him at the file and application level to restructure a few things and maybe use symlinks to save him some of the trouble. He isn't interested, though, because he wants the flexibility of being able to change something in any one place without it having any effect on another. Also, there may be some permission and/or copyright issues with not having everything segregated at the file-system level in the production arena.
Is block-level deduplication a good idea for this small zvol, for performance? I think maybe, since it is so small, but I'm scared to death of deduplication. If there was ever a problem I would never be able to get more RAM from upstream, and I would end up having to restore the whole pool from backup on a dark and lonely weekend night.
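In case it helps anyone answering: before enabling anything, my plan was to simulate dedup on the pool and eyeball the projected DDT cost, roughly like this (tank and tank/webvm are placeholder names for my pool and the new zvol):

```shell
# Simulate deduplication across the pool and print a block
# histogram plus the projected dedup ratio; this is read-only
# and does not change any data on the pool.
zdb -S tank

# Dedup is a per-dataset/zvol property, not pool-wide, so only
# the new zvol would carry the DDT overhead if I enabled it:
zfs set dedup=on tank/webvm
```

My understanding is that each unique block costs on the order of a few hundred bytes of DDT entry in RAM, so on a ~2 TB pool the table can get big fast if dedup were ever set on more than this one small zvol; that RAM ceiling is exactly what worries me.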
Thank you for your help.