This post contains NO ADVICE
(it's just my attempt at commiserating with the OP, who's encountering problems similar to mine)
Isn't it amazing that we have all these problems trying to use Dell's SERVERS ... when, if I connect 4 drives to my janky HighPoint NVMe HBA ... not even with very good drives (PM983s) ... I get over 10GB/s via an i7-8700k, which doesn't even have enough lanes for the HBA after the boot M.2, my NIC, and GPU ... lol. And it was really quite easy to configure. I just didn't have ECC ... and thought,
"shouldn't I get something with a whole bunch of PCIe lanes for my dream NVMe array ..?"
I bought an R7415, which uses an EPYC with 128 PCIe lanes ... only to find out (not in any of the regular manuals, but in a separate general one covering all the EPYC and Intel Dells) ... that a unit sold with NO way of connecting drives EXCEPT via NVMe ... on a CPU with 128 lanes ... routes only 32 lanes!!! to the 24-drive NVMe backplane!!! lol. Yeah. Even the R7425, which has 256 PCIe lanes, is hardly better ... as is the R7515 (which has 128 PCIe 4.0 lanes on the CPU). It's not until the R7525 that they actually connect 96 lanes to the 24-drive NVMe backplane. I'm not sure whether those somehow run at PCIe 3.0 (they don't seem to care if they undermine performance for no good reason) ... but I should probably assume they DO use PCIe 3.0.
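For back-of-the-envelope purposes, here's a tiny Python sketch of what that lane split means for the backplane. The per-lane rates are my assumptions (~0.985 GB/s usable per PCIe 3.0 lane, ~1.969 GB/s per 4.0 lane), and the 32-lane / 96-lane splits are just the figures quoted above, not verified against Dell's block diagrams:

```python
# Rough oversubscription math for a 24-bay NVMe backplane.
# Per-lane throughput figures are approximate usable rates, not exact specs;
# the lane counts are the ones claimed above for the R7415 vs. R7525.

GBPS_PER_LANE = {"gen3": 0.985, "gen4": 1.969}  # approx. usable GB/s per lane

def backplane_budget(lanes: int, drives: int, gen: str = "gen3") -> None:
    total = lanes * GBPS_PER_LANE[gen]
    per_drive = total / drives
    print(f"{lanes:3d} {gen} lanes -> ~{total:6.1f} GB/s total, "
          f"~{per_drive:4.2f} GB/s per drive across {drives} drives")

backplane_budget(32, 24, "gen3")   # R7415-style uplink: ~31.5 GB/s, ~1.3 GB/s per drive
backplane_budget(96, 24, "gen4")   # R7525-style uplink: ~189 GB/s,  ~7.9 GB/s per drive
```

So even granting the assumed per-lane numbers, with all 24 bays busy the 32-lane version leaves each drive a fraction of what a single decent NVMe drive can sustain on its own.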
What's annoying? As stupid as Dell is (and this is only one aspect of their idiocy), it seems like HPE & Lenovo are no better ... and if I wanted to waste even more time, I'd probably find the same kind of BS with SuperMicro, Gigabyte, and Asus systems.
What I really wanted anyway was just an ECC system with enough lanes for 16 NVMe drives serving fewer than 5 users at a time (some video, or other high-MB/s work, but nothing that requires an exceedingly powerful CPU) ... preferably one that's QUIET and doesn't use a ton of energy. Obviously this is asking too much.
While (like you) I'm incredibly grateful people read about my troubles and try to help me get past roadblocks or even suggest ideas ... there's also a trend of people REPEATEDLY pretending that anything short of a Xeon SP Platinum is "the reason" I can't get more than, say, 600MB/s. lol. Maybe I should just rush out and buy a $10,000 CPU before they look at the fact that the CPU sits at only 3% while moving 600MB/s (and at 1.4% even when the drives are doing nothing) ... because maybe an EPYC just isn't powerful enough to exceed the 1.2GB/s-plus my E5-2400 v3 has managed.
After replacing the CPU (and, most likely, the system), I should just mirror my very expensive NVMe drives ... again, money's no object.