SSD Array Performance

Rand

Guru
Joined
Dec 30, 2013
Messages
906
First,
I am not sure how well your ATTO runs represent actual performance here, since you run them with a 256 MB test size, which might very well fit in one cache or another. Yes, I saw the Direct I/O box, but I'm not sure that's sufficient to rule out FreeNAS's in-memory caching (ARC) plus the individual disks' write caches.
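One way to take the caches out of the equation is to write a file several times larger than RAM directly on the pool. A minimal sketch, assuming a pool named tank (pool/dataset names and the size are illustrative only; compression is disabled so /dev/zero data isn't compressed away):

    # Scratch dataset with compression off, so zeros aren't compressed to nothing
    zfs create tank/bench
    zfs set compression=off tank/bench

    # 128 GiB sequential write (FreeBSD dd: bs=1m) -- pick a count well beyond your RAM/ARC size
    dd if=/dev/zero of=/mnt/tank/bench/test.bin bs=1m count=131072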

Other than that, I still see several questions here (which I can't answer):

1. Why is even a dd test (which is as bare-metal as it gets, dependent only on drivers and hardware) unable to reach the maximum speed of the drives? The SAS writes especially: that's 10 vdevs, which means each disk only contributes about 340 MB/s to the total.

From my previous tests I am sure a single vdev would reach higher values than that, and the more vdevs you add, the less benefit you get from each new vdev.
If you have time (and no data on the drives) you could test a 20-disk stripe to see what limit you hit there (see the sketch below).
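If you do try that, a minimal sketch, assuming the disks show up as da0 through da19 (device names are hypothetical, and zpool create -f will destroy whatever is on them):

    # Plain 20-disk stripe, no redundancy -- adjust device names to your system
    zpool create -f testpool da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
        da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
    zfs set compression=off testpool

    # Same beyond-RAM sequential run as above
    dd if=/dev/zero of=/testpool/test.bin bs=1m count=131072

    zpool destroy testpool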

2. The next loss is of course network (and protocol). It's quite interesting to see that Windows iSCSI is performing significantly better than ESXi. You could run a test VM in bhyve locally on the FreeNAS box (provided you have enough CPU) to see how that performs with NFS/iSCSI, with the physical network out of the picture.

3. How does the async performance translate to actual sync performance? What's your SLOG, if any? Or are you fine running async in the end? (See the sync-write sketch below.)
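A quick way to get a sync number for comparison is to force synchronous writes on a scratch dataset; a minimal sketch (dataset name hypothetical, and remember to revert the setting afterwards):

    # Every write now goes through the ZIL (and the SLOG, if one is attached)
    zfs set sync=always tank/bench
    dd if=/dev/zero of=/mnt/tank/bench/sync.bin bs=128k count=16384

    # Back to default behaviour
    zfs set sync=standard tank/bench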

In the end, of course, all that matters is whether the solution you have satisfies your requirements. If SAS3 via iSCSI is enough, then it's probably not worth spending even more time on this without getting answers.
 

velocity08

Dabbler
Joined
Nov 29, 2019
Messages
33
Hi All

Hopefully this thread sheds some light on NVMe performance on ZFS.


According to the link, ZFS at this point in time doesn't handle NVMe performance very well (short answer: the drives are too fast, from my understanding) and requires some specific tuning.
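For what it's worth, the tuning discussed in that context usually revolves around the per-vdev I/O queue depths. A hypothetical starting point on FreeBSD (these sysctls exist, but the values below are purely illustrative, not recommendations):

    # Raise per-vdev queue depths -- NVMe can sustain far deeper queues than the defaults assume
    sysctl vfs.zfs.vdev.async_write_max_active=32
    sysctl vfs.zfs.vdev.async_read_max_active=32
    sysctl vfs.zfs.vdev.sync_write_max_active=32
    sysctl vfs.zfs.vdev.sync_read_max_active=32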

It may be why you are seeing the same performance from SATA drives compared to NVMe drives in your testing.

Not sure if this applies to FreeBSD ZFS but it may be related.

""Cheers
G
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Yes, I had seen that, and at some point it must have been true that more vdevs = more performance, but with today's fast drives (NVMe and fast SAS3) it does not seem to hold true. That's why the info you provided was so enlightening: it offers an explanation for why this might be an issue :)
My goal (in short, so as not to usurp this thread) is to get at least 500 MB/s sync write speed at QD1/T1 (= single user) for an ESXi datastore (ideally via NFS). With pmem it looks like it should be doable, but the issue of scaling (for large numbers of SATA SSDs, smaller numbers of faster SAS SSDs, and NVMe) is still irritating, and not limited to FreeNAS btw; I tested other OSes too (results are on STH since this is a FreeNAS forum).
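For anyone wanting to reproduce a QD1/T1 sync write test like that, a minimal fio sketch, assuming fio is installed (path and size are illustrative):

    # Single job, queue depth 1, synchronous (O_SYNC) sequential writes
    fio --name=qd1-sync-write --filename=/mnt/tank/bench/fio.bin \
        --rw=write --bs=128k --ioengine=psync --sync=1 \
        --iodepth=1 --numjobs=1 --size=10g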
 