ThomasRens
Cadet
- Joined
- Oct 14, 2015
- Messages
- 1
Hello,
I've been wandering the internet for a while now, looking for the ideal ZFS tuning/configuration for my Supermicro server setup. I'm not expecting you to spit out the perfect setup, but some tips are welcome!
Hardware Specs:
Supermicro custom server:
- C600/X79 series chipset
- 24 x Intel® Xeon® CPU E5-2620 v2 @ 2.10GHz (2 Sockets)
- LSI 9207-8i hba card
- 11 x 2TB SAS 7.200
- 3 x INTEL SSDSC2BB12
- 128 GB memory 1600 MHz
This server will be used for virtualization of multiple Virtual Private Servers. I expect it will run various VMs with Linux distributions (CentOS, Debian, and so on), but a Windows Server environment is not ruled out.
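Since the volblocksize question comes up later in the post: volblocksize only applies to zvols, and it is fixed at creation time. If the VM disks end up on zvols rather than file-backed images, creating one might look like the sketch below (the dataset name, size, and 8K block size are all hypothetical, not values from this setup):

```shell
# Hypothetical example: a 50 GiB zvol for one Linux guest.
# volblocksize cannot be changed after creation, so it has to be chosen
# up front; 8K here is an illustrative starting point for VM images,
# not a recommendation from this post.
zfs create -V 50G \
    -o volblocksize=8K \
    -o compression=lz4 \
    datapool/vms/centos01
```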
Current ZFS Configuration:
At the moment we have two pools:
- rpool (mirror): two SSD disks for the OS.
- datapool (raidz3, with SLOG and L2ARC as two partitions of an Intel SSD):
- 11 SAS disks (total size 20T)
- Logs: SSD partition 1 (sdc1, 50G)
- Cache: SSD partition 2 (sdc2, 61.8G)
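As a sanity check on the raidz3 layout above: 3 of the 11 disks' worth of space goes to parity, so the rough numbers (ignoring metadata, padding, and slop overhead) work out as follows. The "20T" figure quoted above is presumably the raw size expressed in TiB, since 22 TB is approximately 20 TiB.

```shell
# Rough capacity estimate for an 11-disk raidz3 of 2 TB drives.
# Real-world usable space will be lower still due to metadata,
# padding, and the pool's slop reservation.
DISKS=11
PARITY=3
DISK_TB=2
echo "raw:    $(( DISKS * DISK_TB )) TB"
echo "usable: $(( (DISKS - PARITY) * DISK_TB )) TB"
```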
OUTPUTS
Pool Config:
Code:
  pool: datapool
 state: ONLINE
  scan: none requested
config:

        NAME                                  STATE     READ WRITE CKSUM
        datapool                              ONLINE       0     0     0
          raidz3-0                            ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxx                       ONLINE       0     0     0
            scsi-xxxxxxc                      ONLINE       0     0     0
        logs
          scsi-SATA_INTEL_SSDSC2BBxx-part2    ONLINE       0     0     0
        cache
          scsi-SATA_INTEL_SSDSC2BBxx-part2    ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                    STATE     READ WRITE CKSUM
        rpool                                   ONLINE       0     0     0
          mirror-0                              ONLINE       0     0     0
            ata-INTEL_SSDSC2BB120G6_xxxx-part2  ONLINE       0     0     0
            ata-INTEL_SSDSC2BB120G6_xxxx-part2  ONLINE       0     0     0

errors: No known data errors
Datapool specs:
Code:
NAME      PROPERTY              VALUE                  SOURCE
datapool  type                  filesystem             -
datapool  creation              xx                     -
datapool  used                  8.40G                  -
datapool  available             14.3T                  -
datapool  referenced            2.70G                  -
datapool  compressratio         1.46x                  -
datapool  mounted               yes                    -
datapool  quota                 none                   default
datapool  reservation           none                   default
datapool  recordsize            128K                   default
datapool  mountpoint            /datapool              default
datapool  sharenfs              off                    default
datapool  checksum              on                     default
datapool  compression           lz4                    local
datapool  atime                 on                     default
datapool  devices               on                     default
datapool  exec                  on                     default
datapool  setuid                on                     default
datapool  readonly              off                    default
datapool  zoned                 off                    default
datapool  snapdir               hidden                 default
datapool  aclinherit            restricted             default
datapool  canmount              on                     default
datapool  xattr                 on                     default
datapool  copies                1                      default
datapool  version               5                      -
datapool  utf8only              off                    -
datapool  normalization         none                   -
datapool  casesensitivity       sensitive              -
datapool  vscan                 off                    default
datapool  nbmand                off                    default
datapool  sharesmb              off                    default
datapool  refquota              none                   default
datapool  refreservation        none                   default
datapool  primarycache          all                    default
datapool  secondarycache        all                    default
datapool  usedbysnapshots       0                      -
datapool  usedbydataset         2.70G                  -
datapool  usedbychildren        5.70G                  -
datapool  usedbyrefreservation  0                      -
datapool  logbias               latency                default
datapool  dedup                 off                    default
datapool  mlslabel              none                   default
datapool  sync                  standard               default
datapool  refcompressratio      1.04x                  -
datapool  written               2.70G                  -
datapool  logicalused           11.5G                  -
datapool  logicalreferenced     2.79G                  -
datapool  snapdev               hidden                 default
datapool  acltype               off                    default
datapool  context               none                   default
datapool  fscontext             none                   default
datapool  defcontext            none                   default
datapool  rootcontext           none                   default
datapool  relatime              off                    default
datapool  redundant_metadata    all                    default
datapool  overlay               off                    default
I did some configuration for ZFS as well with the options that can be found in:
/sys/module/zfs/parameters/
I set zfs_arc_min to 1 GB and zfs_arc_max to 12 GB (because we are running VMs and expect them to consume lots of memory). I know there are many, many options available, but I'm not sure what to do with them for my use case. The recordsize is 128K, but volblocksize is not defined in any way (VALUE -). Compression is set to lz4 (yes, this will consume CPU, but we can handle that). And yes, I've read the evil tuning guide, the Oracle pages, and so on...
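For reference, zfs_arc_min and zfs_arc_max take byte values, and anything echoed into /sys/module/zfs/parameters/ is lost on reboot. A sketch of generating a persistent modprobe line (assuming the 1 GB / 12 GB figures mean GiB):

```shell
# zfs_arc_min / zfs_arc_max take byte values; compute them from GiB
# (assumption: the 1 GB / 12 GB figures above mean binary gigabytes).
ARC_MIN=$(( 1 * 1024 * 1024 * 1024 ))    # 1 GiB
ARC_MAX=$(( 12 * 1024 * 1024 * 1024 ))   # 12 GiB
echo "options zfs zfs_arc_min=${ARC_MIN} zfs_arc_max=${ARC_MAX}"
```

The echoed line can go into /etc/modprobe.d/zfs.conf so the limits survive a reboot (on some distros the initramfs may also need regenerating for it to apply at boot).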
Thanks for taking the time to read this long, long post; I hope you can help me out with some tuning tips.
Kind regards,
Rens