Ulrich Jorgensen
Dabbler
- Joined
- Dec 1, 2016
- Messages
- 12
Hi there everyone,
We have been trying a lot of different combinations to get good storage performance for our ESXi cluster of 4 nodes, but we seem to struggle to find the right way.
I have run multiple FreeNAS boxes for several years, and I have read numerous articles on ZFS performance and recommendations, but I cannot get it right.
We have a 10Gbit/s network with jumbo frames activated.
Our test FreeNAS is a SuperMicro box with a 6-core Xeon and 32GB RAM.
There are 6x Samsung SM863 960GB SSDs in striped mirrors.
There is 1 Intel P3700 NVMe 800GB as SLOG device.
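For reference, a pool with that layout would be built roughly like this (the device names here are hypothetical; check yours with camcontrol devlist and nvmecontrol devlist):

```shell
# Three mirrored pairs of SM863s, striped together (hypothetical device names)
zpool create TEST \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5
# Add the P3700 as a separate log (SLOG) device
zpool add TEST log nvd0
# Verify the layout
zpool status TEST
```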
Internal speed test gives me:
% /usr/bin/time -h dd if=/dev/zero of=/mnt/TEST/tmp bs=2048k count=10k
10240+0 records in
10240+0 records out
21474836480 bytes transferred in 7.738622 secs (2775020811 bytes/sec)
7.74s real 0.02 user 7.71s sys
% /usr/bin/time -h dd if=/mnt/TEST/tmp of=/dev/zero bs=2048k count=10k
10240+0 records in
10240+0 record out
21474836480 bytes transferred in 3.948149 secs (5439215903 bytes/sec)
3.94s real 0.00 user 3.94s sys
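One caveat with that test: if compression (e.g. lz4) is enabled on the dataset, a stream of zeroes compresses to almost nothing, so dd from /dev/zero can report far more than the disks actually sustain. Something like fio with incompressible buffers gives a more honest local baseline (a sketch, assuming fio is installed from pkg; the path and sizes are just for illustration):

```shell
# Sequential write with incompressible data, 2M blocks, 20GiB total;
# --refill_buffers defeats compression, --end_fsync flushes at the end
fio --name=seqwrite --filename=/mnt/TEST/fio.tmp \
    --rw=write --bs=2M --size=20g \
    --ioengine=psync --refill_buffers --end_fsync=1
# Sequential read of the same file
fio --name=seqread --filename=/mnt/TEST/fio.tmp \
    --rw=read --bs=2M --size=20g --ioengine=psync
```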
When I run the same test over iSCSI I only get 467MB/s write and 625MB/s read.
With NFS I get 775MB/s write and 779MB/s read.
These numbers are with sync=disabled; with the recommended sync=always, they are 20-30% lower (I forgot to write them down ;-))
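For anyone who wants to reproduce the sync comparison, the property is set per dataset or zvol (the dataset name here is just my test pool):

```shell
# Check the current setting
zfs get sync TEST
# Disable sync writes (fast, but unsafe for VM storage on power loss)
zfs set sync=disabled TEST
# Force every write through the SLOG (the recommended setting for ESXi)
zfs set sync=always TEST
```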
Testing the network with iperf I get around 9 Gbit/s, which should make it possible to climb above the numbers we are seeing.
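That iperf run was the usual client/server pair, something like this (the server IP is just an example):

```shell
# On the FreeNAS box:
iperf -s
# On a test client: 10-second run with 4 parallel streams
iperf -c 192.168.1.10 -t 10 -P 4
```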
I will set up LAG today, but as we are not saturating the 10 Gbit/s link, I do not expect to see much improvement there.
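The LAG I plan to try would be an LACP lagg on the FreeBSD side, roughly like this (interface names and the IP are hypothetical; the FreeNAS GUI does the equivalent):

```shell
# LACP aggregate of two 10G ports (ix0/ix1 are example interfaces)
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
ifconfig lagg0 inet 192.168.1.10/24 mtu 9000
```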
I really hate to see all these MB/s lost to iSCSI and/or NFS. Does anyone have a similar experience and found a solution?