All-Flash NVMe Pool. Slow Read.

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
This post contains NO ADVICE
(it's just my attempt at commiserating with the OP, who's encountering problems similar to mine)

Isn't it amazing that we have all these problems trying to use Dell's SERVERS ... when, if I connect 4 drives to my janky HighPoint NVMe HBA ... not even with very good drives (PM983s) ... I got over 10GB/s on an i7-8700k, which doesn't even have enough lanes left for the HBA after the boot M.2, my NIC, and GPU ... lol. And it was really quite easy to configure. I just didn't have ECC ... and thought,

"shouldn't I get something with a whole bunch of PCIe lanes for my dream NVMe array ..?"

I bought an R7415, which uses an EPYC with 128 PCIe lanes ... only to find out (not in any of their regular manuals, but in a separate, general manual covering both the EPYC and Intel Dells, containing only this information) that a unit sold with no way of connecting drives EXCEPT via NVMe ... on a CPU with 128 lanes ... routes only 32 lanes!!! to the 24-drive NVMe backplane!!! lol. Yeah. Even the R7425, which has 256 PCIe lanes, is hardly better ... as is the R7515 (which has 128 PCIe 4.0 lanes on the CPU). It's not until the R7525 that they actually connect 96 lanes to the 24-bay NVMe backplane. I'm not sure if those lanes are somehow PCIe 3.0 (as they appear not to give a crap if they undermine performance for no good reason) ... but I should probably assume they DO use PCIe 3.0.
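
For context, here's the rough back-of-the-envelope math behind my complaint (just a sketch, assuming ~985 MB/s of usable bandwidth per PCIe 3.0 lane and a ~3GB/s-class drive; the exact numbers aren't the point):

```python
# Back-of-the-envelope: how much a lane-starved backplane caps 24 NVMe drives.
# Assumption: ~985 MB/s usable per PCIe 3.0 lane, ~3000 MB/s per drive (rough numbers).
PCIE3_LANE_MBPS = 985

def backplane_ceiling(lanes_to_backplane, drive_count=24, drive_mbps=3000):
    """Aggregate ceiling and per-drive share (MB/s) when all drives read at once."""
    aggregate = lanes_to_backplane * PCIE3_LANE_MBPS
    per_drive = min(aggregate / drive_count, drive_mbps)
    return aggregate, per_drive

for lanes in (32, 96):
    agg, per = backplane_ceiling(lanes)
    print(f"{lanes} lanes: ~{agg / 1000:.1f} GB/s aggregate, ~{per:.0f} MB/s per drive")
```

With 32 lanes shared by 24 drives, each drive gets roughly 1.3 GB/s at best when they're all busy; with 96 lanes, the drives themselves become the limit.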

What's annoying..? As stupid as Dell is (and this is only one aspect of their idiocy), it seems like HPE & Lenovo are no better ... and if I wanted to (likely) waste more time, I'd probably find the same kind of BS with SuperMicro, Gigabyte, and Asus systems.

What I really wanted anyway was just an ECC system with enough lanes for 16 NVMe drives, serving fewer than 5 users at the same time (some video, or high-MB/s work, but nothing that requires an exceedingly powerful CPU) ... preferably one that's QUIET and doesn't use a ton of energy. Obviously this is asking too much.

While (like you) I'm incredibly grateful that people read about my troubles, try to help me get past steps, and even suggest ideas ... there's also a trend of people REPEATEDLY pretending that anything short of a Xeon SP Platinum is "the reason" I can't get more than, say, 600MB/s. lol. Maybe I should just rush out and buy a $10,000 CPU before anyone looks at the fact that the CPU is only at 3% while moving 600MB/s (and sits at 1.4% even when the drives are doing nothing) ... because maybe an Epyc just isn't powerful enough to exceed what my E5-2400 v3, which has gotten over 1.2GB/s, can do.

After replacing the CPU (and most likely the system), I should just mirror my very expensive NVMe drives ... again, money's just no object. :smile:
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
This post contains NO ADVICE
(it's just my attempt at commiserating with the OP, who's encountering problems similar to mine)
I'm not sure this is comparable to your thread at all. This was a back-and-forth fact-finding mission to figure out where the OP's bottleneck was. And, IMO, it was fairly productive and useful. Your posts haven't asked any questions; you are just complaining about the performance of your system and throwing money at the problem. Work with me in your own thread, please, I'm trying to help.
 

Linuchan

Dabbler
Joined
Jun 4, 2023
Messages
27
This post contains NO ADVICE
(it's just my attempt at commiserating with the OP, who's encountering problems similar to mine)
There is still a gap between the performance I want and what I'm getting, and I will update the post soon.
Before tackling the performance issue, this thread identified other problems first and gathered feedback on what to consider when using high-speed networking.
I don't think your claim of 'NO ADVICE' is appropriate.

To be honest, I think ZFS is still best suited to HDD-based arrays.
There are too many tuning factors that work against SATA, SAS, and NVMe SSDs.
That's why Intel's VROC, GRAID's SupremeRAID, and other SDS products are attracting attention as all-flash array solutions.
GRAID's SupremeRAID in particular is impressive.

Anyway, my goal is to use ZFS as much as possible for convenience, and if this works out, I think it would be interesting and helpful not only to me but also to others who want to build all-flash pools with ZFS.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
ZFS will also protect your data, whereas those other solutions absolutely will not.
So let's not compare apples and oranges; it's not the same thing.

We're happy to start this back up again when you are ready.
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I'm not sure this is comparable to your thread at all. This was a back-and-forth fact-finding mission to figure out where the OP's bottleneck was. And, IMO, it was fairly productive and useful. Your posts haven't asked any questions; you are just complaining about the performance of your system and throwing money at the problem. Work with me in your own thread, please, I'm trying to help.

Before tackling the performance issue, this thread identified other problems first and gathered feedback on what to consider when using high-speed networking.
I don't think your claim of 'NO ADVICE' is appropriate.

LOL ... I see I've allowed some confusion here.
And since multiple people misunderstood my comment in the same way, it's incumbent on me to clarify (and apologize):

This post contains NO ADVICE = MY POST CONTAINS NO SUGGESTIONS! lol

(it's just my attempt at commiserating with the OP, who's encountering problems similar to mine)

Sorry -- obviously everyone thought I was denigrating their remarks. It was a self-deprecating preface that I was offering no solutions.

There is still a gap between the performance I want and what I'm getting, and I will update the post soon.
Exactly. A trend I've dealt with too: hardware deemed suitable after over a year of discussions...
Only to spend over 3 months after purchasing with SUB-mechanical array performance
(I hear the problem is my expectations ... which is why it seems worth warning people).
Apparently it's unreasonable to expect an SSD's specs to be indicative of how it should perform in a pool.

Eg: A zvol on 8x 7200rpm drives in RAIDz2,
comprised of HDs with a max average throughput of ~160MB/s each,
will frequently get ~600MB/s, less frequently 800MB/s, and occasionally hit 1200MB/s.

Eg:
At ~600MB/s it's getting ~50% of each drive's individual performance, on average.
At ~800MB/s it's getting ~66% of each drive's individual performance, on occasion.
At the ~1200MB/s it hit a few times, it was yielding ~90% of each drive's individual performance in the aggregate.
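
A quick sanity check of those percentages (rough arithmetic only; I'm using all 8 drives as the denominator and ignoring RAIDz2 parity overhead):

```python
# Sanity check: pool throughput as a fraction of the combined raw throughput
# of all 8 drives at ~160 MB/s each (parity overhead ignored).
DRIVES = 8
DRIVE_MBPS = 160
raw_total = DRIVES * DRIVE_MBPS  # ~1280 MB/s

for pool_mbps in (600, 800, 1200):
    print(f"{pool_mbps} MB/s is ~{pool_mbps / raw_total:.0%} of the drives' aggregate")
# prints roughly 47%, 62%, and 94%
```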

Yet, with several systems now, I've seen NVMe AND regular SATA SSDs fall miles short of this, getting as little as ~5% of the drives' performance.
And when I report as much..? People act like I don't know how drives should perform. Maybe I want the wrong things.
Though I'm testing the same things ... apparently I should just want something other than what mediated and motivated spending the money. lol
Hey, nobody coached me on how to wipe my butt, or how many squares can safely be used with "indoor plumbing." (It's ALL new to me).



Anyway, what are your READ speeds..? What's your write performance..?
I assume you have a 10GbE network ... so with all SSDs...
What's the performance of the individual drives comprising the array..?
What are the array's details (RAIDz1, z2, etc.)..?
What model SSD are you using?
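
If it helps while you gather numbers: here's a bare-bones way I'd time a sequential read straight off the pool, to separate pool throughput from network throughput (just a sketch, not a real benchmark; the file path is hypothetical, and most people here would reach for fio with direct I/O instead):

```python
# Minimal sequential-read timer. Point it at a large file on the pool
# (ideally bigger than RAM so the ZFS ARC can't serve it from cache).
import time

PATH = "/mnt/tank/testfile"  # hypothetical path; substitute a real file on your pool
CHUNK = 1 << 20              # 1 MiB reads

total = 0
start = time.monotonic()
with open(PATH, "rb", buffering=0) as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start
print(f"Read {total / 1e6:.0f} MB in {elapsed:.1f} s -> {total / 1e6 / elapsed:.0f} MB/s")
```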
 