jenksdrummer
Patron
- Joined
- Jun 7, 2011
- Messages
- 250
I have two separate boxes that I've been bouncing data between, experimenting with dataset record size as well as small-block sizing. This round used a 128K record size with 32K small-block I/O; my aim is to get about as close as I can to 1:1 between the special and normal classes (at 64K the special class came out larger than the normal class, so I think I have this mix about as good as I can get it short of getting gritty with each dataset!)
On Box B, I initially copied the data over to a dataset with a 1M record size and a 64K small-block threshold, then ran a local replication job into a fresh dataset configured at 128K/32K and deleted the original datasets. Happy with the results, I nuked the pool on Box A and copied the data back, expecting similar numbers.
* Box A has 6 SATA SSDs in RAIDZ2 with a 1TB mirrored NVMe metadata VDEV and a 1TB NVMe cache VDEV
* Box B has 12 SATA HDDs as striped mirrors with a 1TB mirrored NVMe metadata VDEV and a 1TB NVMe cache VDEV
Based on what I am seeing below, Box A compresses the data the same as Box B (1.11 in both cases), but then isn't allocating the way I'd expect — bp allocated comes out to roughly 35% over the logical size. Not sure why?
root@san03[/]# zdb -LbbbA -U /data/zfs/zpool.cache zpool01_r6_ssd
Traversing all blocks ...
9.87T completed (99944MB/s) estimated time remaining: 0hr 00min 00sec
bp count: 62544060
ganged count: 0
bp logical: 8028262426112 avg: 128361
bp physical: 7208492167168 avg: 115254 compression: 1.11
bp allocated: 10858532061184 avg: 173614 compression: 0.74
bp deduped: 0 ref>1: 0 deduplication: 1.00
Normal class: 10800596434944 used: 45.07%
Special class 57935646720 used: 5.81%
Embedded log class 0 used: 0.00%
root@san01[~]# zdb -LbbbA -U /data/zfs/zpool.cache zpool01_r10
Traversing all blocks ...
6.51T completed (129315MB/s) estimated time remaining: 0hr 00min 00sec
bp count: 62546054
ganged count: 0
bp logical: 8028424893440 avg: 128360
bp physical: 7208521473536 avg: 115251 compression: 1.11
bp allocated: 7214844813312 avg: 115352 compression: 1.11
bp deduped: 0 ref>1: 0 deduplication: 1.00
Normal class: 7156845871104 used: 11.97%
Special class 57993809920 used: 5.82%
Embedded log class 0 used: 0.00%
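For what it's worth, one plausible source of the gap on Box A is RAIDZ allocation overhead: on a 6-wide RAIDZ2 with 4K sectors, every block carries 2 parity sectors per stripe row of up to 4 data sectors, and the total is padded up to a multiple of parity+1 sectors, so a 128K record can be allocated at up to 1.5x its physical size. A minimal sketch of that size calculation, modeled on OpenZFS's vdev_raidz_asize() and assuming ashift=12 and Box A's 6-wide Z2 layout:

```python
# Sketch of RAIDZ allocation math, modeled on OpenZFS vdev_raidz_asize().
# Assumptions: 4K sectors (ashift=12), width/parity taken from Box A's layout.

def raidz_asize(psize, width=6, parity=2, ashift=12):
    """Bytes actually allocated on a RAIDZ vdev for `psize` physical bytes."""
    sector = 1 << ashift
    d = (psize - 1) // sector + 1        # data sectors, rounded up to sector size
    rows = -(-d // (width - parity))     # ceil: stripe rows needed for the data
    asize = d + parity * rows            # parity sectors added per stripe row
    pad = -asize % (parity + 1)          # pad to a multiple of parity+1 sectors
    return (asize + pad) * sector

# Box A's average physical block (115254 bytes) on the 6-wide Z2:
print(raidz_asize(115254))   # 184320 bytes allocated for ~115K physical
# A full, incompressible 128K record: 1.5x the physical size
print(raidz_asize(131072))   # 196608
```

That puts the expected allocated/physical ratio in the same ballpark as the zdb numbers from san03, while on Box B's mirrors asize equals psize (the second copy is accounted at the vdev level), which would explain why allocated ≈ physical there.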