Eds89 (Contributor, joined Sep 16, 2017, 122 messages)
Hi all,
I have an ESXi and FreeNAS all-in-one box on which I seem to be hitting an odd bottleneck on SSDs, and I'm after some help diagnosing it.
My ESXi box runs a FreeNAS VM whose storage lives on a 960 Pro NVMe SSD. There is a virtual storage network between the FreeNAS VM and ESXi.
The VMXNET adapter in FreeNAS reports 10 Gigabit, and the MTU is set to 9000.
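One thing worth ruling out on that virtual network is whether jumbo frames actually pass end to end. A quick check (the IP addresses are placeholders for my storage-network addresses; 8972 is 9000 minus the 28 bytes of IP+ICMP headers):

```shell
# From the ESXi shell: ping the FreeNAS storage IP with don't-fragment set.
vmkping -d -s 8972 192.168.10.2

# From FreeNAS (FreeBSD ping): -D sets don't-fragment, -s sets payload size.
ping -D -s 8972 192.168.10.1
```

If either direction fails while a normal ping works, the MTU is not 9000 all the way through.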
I have a four-disk mirrored set of 2TB Hitachi drives holding an iSCSI zvol that is attached to ESXi; I store my VMs there.
Even with sync disabled on this volume, responsiveness is still poor, so I am moving the VMs to SSDs instead.
I have now set up 3x Samsung 860 500GB SSDs in RAIDZ, disabled sync, and created a new iSCSI zvol.
I attached this to ESXi and moved a VM onto it.
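In case it helps anyone checking the same setup, this is roughly how to confirm the layout and that sync really is off (the pool and zvol names here are placeholders, substitute your own):

```shell
# Confirm the vdev layout: should show one raidz1 vdev containing the 3 SSDs.
zpool status ssdpool

# sync should read "disabled"; volblocksize matters for iSCSI performance too.
zfs get sync,volblocksize ssdpool/iscsi-vm

# Compression can inflate benchmark numbers on compressible test data.
zfs get compression ssdpool/iscsi-vm
```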
Testing with CrystalDiskMark (1GB test file) on a VM on the SSD volume and on one on the HDD volume yields much the same results, with the HDD volume actually performing better in some cases:
about 600 MB/s read, 500 MB/s write sequential
550/300 MB/s for 512K
15/12 MB/s for 4K
130/100 MB/s for 4K QD32
Given that sync is disabled, I would expect better results on the SSD volume, so I'm not sure whether there is an issue here.
Given writes of about 400 MB/s for each SSD, I would expect more like 800 MB/s sequential writes, if not more.
Whilst responsiveness of the VMs is vastly improved, the sequential write speeds still bother me.
Am I getting expected figures (and if so, why so low), or is there potentially something holding me back?
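As a back-of-envelope check on that expectation (the 400 MB/s per-SSD figure is my rough assumption, not a measured spec):

```python
# Rough RAIDZ1 sequential-write ceiling: data disks x per-disk write speed.
disks = 3            # SSDs in the RAIDZ vdev
parity = 1           # RAIDZ1 dedicates one disk's worth of capacity to parity
per_disk_mb_s = 400  # assumed sequential write speed of one 860 500GB

data_disks = disks - parity          # 2 disks carry data in parallel
ceiling_mb_s = data_disks * per_disk_mb_s
print(ceiling_mb_s)  # 800
```

Real-world throughput will land below this ceiling (parity computation, record sizing, iSCSI overhead), but 500 MB/s is far enough under it to make me suspicious.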
If more detail on the exact hardware or configuration settings is needed, please let me know and I will update the post.
Cheers
Eds