Transfer rate between internal pool

ShimadaRiku

Contributor
Joined
Aug 28, 2015
Messages
104
* Intel Xeon E3-1231 v3
* ASRock E3C224D4I-14S
* 32GB ECC RAM
* FreeNAS-9.3-STABLE-201605170422, virtualized under ESXi 6.5
* Onboard LSI 2308 controller pass-through

* Pool1: mirrored 4x4TB WD Green
* Pool2: mirrored 4x8TB WD White



The 8TB White drives are rated at ~180 MB/s sequential writes, so two mirror vdevs striped should be good for roughly 360 MB/s, but only on sequential workloads. The 4x 4TB Green drives should be able to handle well over 400 MB/s of reads.
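
For a sanity check on those raw numbers, the sequential speed of the individual disks can be measured from the FreeNAS shell; the device names below (da0 etc.) are placeholders for whatever the LSI 2308 actually presents on this box:

# list the disks behind the passed-through controller
camcontrol devlist

# quick, read-only sequential benchmark of one member disk
diskinfo -t /dev/da0

# or a simple 4 GiB streaming read
dd if=/dev/da0 of=/dev/null bs=1M count=4096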

Doing a zfs send/receive from Pool1 to Pool2, zpool iostat shows a transfer rate of around 150-190 MB/s. Is this normal, or should I be getting higher rates?
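
For reference, the per-vdev and per-disk breakdown during the transfer is visible with something like the following (pool names are the ones from this thread, refresh interval is arbitrary):

# per-vdev / per-disk throughput on both pools, refreshed every 5 seconds
zpool iostat -v Pool1 5
zpool iostat -v Pool2 5

# per-disk busy % and latency (FreeBSD gstat)
gstat -p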

Where is my bottleneck? CPU usage is capping out at 28%, so could it be single-thread/core limited? Could the ESXi controller pass-through be hindering performance? Or fragmented/non-sequential reads and writes?
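
One way to narrow it down is to put a throughput meter in the middle of the pipe and watch the cores while it runs; a rough sketch, assuming a recursive snapshot named @xfer and that pv has been installed (neither is part of the original setup, they are only illustrative):

# snapshot to send; the name @xfer is only an example
zfs snapshot -r Pool1/dataset@xfer

# send through pv so the pipe rate can be compared against zpool iostat
zfs send -R Pool1/dataset@xfer | pv | zfs receive -F Pool2/dataset

# per-CPU and per-thread view to spot a single pegged core
top -SHP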

edit:

Looking at the disk reporting GUI, only two of the 4 drives in Pool1 are being used for reads. Why does it not use all 4? My theory is that some data was not striped over all the drives prior to expanding from 2x4TB to 4x4TB.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Looking at the disk reporting GUI, only two of the 4 drives in Pool1 are being used for reads. Why does it not use all 4? My theory is that some data was not striped over all the drives prior to expanding from 2x4TB to 4x4TB.
The data never gets moved, so if you initially had only two drives, all of that data exists only on those two drives. Later, once you added two more drives, new data would have been more likely to go to the mirror with more free space, but it would have been spread across both mirrors to some degree. The limiting factor in this process is mechanical drive speed, so just let it run and it will finish as quickly as it can.
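
A quick way to confirm this on the original pool is to look at the per-vdev allocation; the first mirror should show far less free space than the one that was added later:

# per-vdev capacity / allocation for Pool1
zpool list -v Pool1

Rewriting the data (for example, sending it to Pool2 and restoring it afterwards) is the usual way to spread it back across all vdevs, since ZFS never rebalances existing blocks on its own.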
 