SSD Array Performance

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
@Poached_Eggs ... I know I said it, and now there are at least two other people telling you: it is the SAS controller that is slowing you down. Are you going to get one? They are not very expensive, and I already linked you to a product.
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Yes - I've come up with some ideas:

I went ahead and ordered a 9207-8e - I want to give this a shot with just a single tray. If its numbers are good, I'll drop the Optane/SLOG and use that slot for an additional card...

If things are still hinky, I'm going to grab an LSI SAS3 8e card with new cable interconnects - then perhaps upgrade to a SAS3 backplane on the 2U CSE chassis - then maybe move on to a D3700 chassis (depending on pricing).

I'm going to re-investigate my numbers with 12x 4TB IronWolf on a 9211-8i too - I get similar numbers on that array to what I did with 24 & 48 SSDs, hence why I'm thinking there may be a limit on the HP motherboard itself. I still have more testing to do - now it's just waiting for all my stuff to come in.
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
New development while testing (I've taken the HP server out of the equation and am using the Supermicro):

[attached screenshot: write throughput graph]

WTF are these bumps? Some sort of caching issue?

STATS:
LSI SAS9207-8e
FreeNAS-11.2-U3 (Clean/Fresh install)
Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (32 cores)
Memory: 256 GiB
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
WTF are these bumps? Some sort of caching issue?
This is where you're able to briefly free up some space in the "write buffer" and absorb the data in RAM at line speed (10Gbps) - but then your SSD pool is still only able to sustain roughly the 615MB/s you're seeing.
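If you want to watch that behaviour directly, a rough way to do it on FreeNAS 11.x (a sketch - "tank" below is a placeholder pool name) is to look at the ZFS dirty-data limits and watch the pool drain while a copy is running:

Code:
# how much incoming write data ZFS will buffer in RAM before throttling
sysctl vfs.zfs.dirty_data_max
sysctl vfs.zfs.dirty_data_max_percent
# how often a transaction group is forced out to the pool
sysctl vfs.zfs.txg.timeout
# watch the pool's actual sustained write rate, one-second samples
zpool iostat -v tank 1

The spikes line up with that RAM buffer refilling at line speed; the flat stretches are the pool's real sustained rate.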
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
This is where you're able to briefly free up some space in the "write buffer" and absorb the data in RAM at line speed (10Gbps) - but then your SSD pool is still only able to sustain roughly the 615MB/s you're seeing.

So I'm still back to square one? Using the new 9207-8e, I'm still having the same issue on 2 different servers with very different specs - this tells me it's got to be the Supermicro tray?

I've been doing research to try to understand expander cards and how they do and don't work. The Supermicro backplane is a SAS2-EL1 - so my understanding is that even though it has 3 SAS connectors, one is for cascading and 2 are for redundancy? So even with 24x SSD, I'm limited to the bandwidth of a single x4 link?

But that should still give me more than 650MB/s of throughput?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So I'm still back to square one? Using the new 9207-8e, I'm still having the same issue on 2 different servers with very different specs - this tells me it's got to be the Supermicro tray?
Is that just one bank of disks or is that both sets of 24?
The Supermicro backplane is a SAS2-EL1 - so my understanding is that even though it has 3 SAS connectors, one is for cascading and 2 are for redundancy? So even with 24x SSD, I'm limited to the bandwidth of a single x4 link?
Do you have two cables from the SAS controller to the SAS enclosure? No, it is not just redundancy: two cables from one controller to one enclosure does double your bandwidth. It is only for redundancy if you are running one cable from each of two different controllers.
If you have a single cable from the SAS controller to each of the SAS enclosures, then you have limited yourself to the bandwidth of four lanes.
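If it helps, there are a couple of ways to peek at what the controller and expander actually see from the OS side (a sketch - "ses0" is just a guess at the enclosure device name on your box, and I'm assuming the sas2ircu utility is present, which I believe FreeNAS ships):

Code:
# devices and enclosures as CAM sees them
camcontrol devlist
# the expander's PHYs and what is attached to each one
camcontrol smpphylist ses0
# controllers/enclosures/drives as the LSI firmware reports them
sas2ircu LIST
sas2ircu 0 DISPLAY

Counting how many expander PHYs report a link back to the HBA tells you whether the second cable is actually being used as part of a wide port.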
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Is that just one bank of disks or is that both sets of 24?

Do you have two cables from the SAS controller to the SAS enclosure? No, it is not just redundancy: two cables from one controller to one enclosure does double your bandwidth. It is only for redundancy if you are running one cable from each of two different controllers.
If you have a single cable from the SAS controller to each of the SAS enclosures, then you have limited yourself to the bandwidth of four lanes.

Once again, I appreciate your help with this - I'm doing something very wrong, and probably something simple, but I just can't work out what it is.

Originally I had 1 cable between the HBA and the tray, and now I have attached 2. The difference was negligible. I did run bonnie++ in a jail for numbers, but it's more of a what-you-see versus what-you-feel difference.
[attached screenshot: bonnie++ results]


This is how I have it connected: (2 bottom SAS connectors)
[attached photo: backplane connections]


2 cables go to the back via this tandem rear plate. I connect the cables from these 2 ports on the rear to the 2 ports on the rear of the HBA in the server.
[attached photo: rear-plate ports]


Now per the manual here:
https://www.supermicro.com/manuals/other/BPN-SAS2-216EL.pdf

Chapter 3 reads as if, with a single expander (like I have), the top port is the only port available to go to an HBA. This chassis originally had a mobo in it, in which a 9271-8i was installed and connected to the 2 bottom ports, while the top was fed to the external port.

(EDIT - I found another forum thread hitting the same issue as mine, saying this backplane is horrible for SSDs)
https://forums.servethehome.com/ind...ay-supermicro-cse-216-sas2-chassis-225.11185/


This is my first foray into expanders. On my HP, with 12x 4TB IronWolf connected via a 2-port 9211-8i, I was also limited to about 650MB/s read/write with a RAIDZ2 setup, and I just accepted that speed. A colleague said he was able to max out 10GbE throughput with a similar setup of 12x spinners - that started me doubting my understanding and knowledge.

My understanding(s) - please correct me if my assumptions are wrong. I am using SATA3 SSD drives - I don't recall if there is a hit/limit for SATA3 drives in a SAS2 backplane.

With a 4i or 4e connector (4x SAS2 lanes @ 600 MB/s each), that should be a theoretical 2400 MB/s before overhead, etc.

With a 24x SSD stripe I should get 10,000+ MB/s (provided sufficient hardware/bandwidth).

So, using an 8e card (2x 4-lane SAS2 connectors), I should get 4800 MB/s on a PCIe 3.0 x8 card (assuming the ~7.88 GB/s PCIe 3.0 x8 limit).
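Sanity-checking my own numbers (assuming roughly 500 MB/s per SATA3 SSD and 600 MB/s per SAS2 lane, before any overhead):

Code:
# what 24 striped SATA3 SSDs could feed, in MB/s
echo $(( 24 * 500 ))   # 12000
# one x4 SAS2 cable vs. two x4 cables from the same HBA
echo $(( 4 * 600 ))    # 2400
echo $(( 8 * 600 ))    # 4800
# PCIe 3.0 x8 is roughly 7880 MB/s, so the SAS2 links are the narrower pipe

Either way, even a single x4 link should sit well above the ~650MB/s I keep hitting.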
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Yes. This is a limitation on the way the backplane of the drive shelf is made.
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Yes. This is a limitation on the way the backplane of the drive shelf is made.

Do you think getting a D3700 chassis (I've got decent pricing on these) and transplanting these drives into it will get me a huge improvement? I'm not sure if those chassis will double up like you suggest using dual ports.

What's the best bang for the buck?
1: D3700 with SAS3 (with SATA3 drives) - connecting to 1 or 2 SAS3 HBAs per tray in the server? (not even sure which SAS3 HBA to use)

2: Change the backplane in the Supermicro 216 (so many options, not sure what's best)
2a: BPN-SAS2-216A - with 3x dual extensions to the rear of the case - to feed 3x HBA cards?
2b: BPN-SAS3-216A - with 3x dual extensions to the rear of the case - to feed 3x HBA cards?
2c: BPN-SAS3-216EL - with 1x dual extension feeding 2x PCIe 3.0 cards?

Just looking at pricing for the Supermicro options, the HP tray seems to be on par or even cheaper (assuming I get shelves for $300, and I already have the disk caddies).

Suggestions?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
On full SSD, I remember that SLOG or ARC is not needed, as 48 SSDs will outperform 1 NVMe (in general)

Not true; the purpose of SLOG isn't to "outperform" the pool. It's to guarantee writes.
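If you want to see whether sync writes (the thing an SLOG actually helps with) are even in play for your workload, a quick check - "Data" below is just a placeholder pool name:

Code:
# sync=standard/always/disabled controls whether writes must hit stable storage first
zfs get sync,logbias Data
# a dedicated SLOG device, if present, is listed under "logs" in the pool layout
zpool status Data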

The LSI SAS 9271 is a ROC (RAID on Chip), not a true HBA (LSISAS2208 Dual-Core RAID on Chip (ROC))

Neither the 9271 nor any SAS2208-based controller is an HBA. It isn't clear whether you're calling the LSISAS2208 an HBA. I think you aren't, but that's only because I know it isn't, and some people might misinterpret the way you put that.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Chapter 3 reads as if, with a single expander (like I have), the top port is the only port available to go to an HBA. This chassis originally had a mobo in it, in which a 9271-8i was installed and connected to the 2 bottom ports, while the top was fed to the external port.
Maybe I'm reading it wrong, but section 3-9 of that PDF describes a different layout for single backplane vs. multiple cascaded ones, even with a single HBA. If you have both trays active externally it seems to imply that the HBA should be connected to the bottom port, then the middle one cascaded to the bottom port of the next array. The top one is left unused in that setup.
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
I just ran into something interesting - I ran the 24x SSD via a 9271-8i. I set this controller to RAID 1 and just imported da0 (the controller's volume) into FreeNAS.

On a fresh install, my numbers are right where they should be for read and write over 10GbE. But why with a 9271-8i, and none of the other cards in HBA mode? Now, what's weird: to do bonnie++ testing I've been installing jails and running it through them. What I found is that my system read/write drops to HALF of what I was getting JUST by having the jail running on the same storage. Only by deleting the jail and restarting the box do I get my speeds back up...


[attached screenshot]
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I just ran into something interesting - I ran the 24x SSD via a 9271-8i. I set this controller to RAID 1 and just imported da0 (the controller's volume) into FreeNAS.

On a fresh install, my numbers are right where they should be for read and write over 10GbE. But why with a 9271-8i, and none of the other cards in HBA mode? Now, what's weird: to do bonnie++ testing I've been installing jails and running it through them. What I found is that my system read/write drops to HALF of what I was getting JUST by having the jail running on the same storage. Only by deleting the jail and restarting the box do I get my speeds back up...


[attached screenshot]
With a jail running you lose any hardware offloading from the NIC. That probably causes increased CPU load and slower speeds.
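Easy enough to check while a transfer is running (a sketch - "ix0" is just an example interface name, substitute whatever your 10GbE NIC shows up as):

Code:
# hardware offload flags (TXCSUM/RXCSUM/TSO4/LRO) show up in the options= line
ifconfig ix0
# per-CPU load; a single core pegged at 100% can point at the network path
top -P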
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Sorry, I forgot about that. It is the vnet driver that allows the jail to share the same NIC as the storage. I ran into that before. If you have a jail sharing the NIC, it will cause all kinds of problems with networking. Do you really need a jail?
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
So you are saying that with the 9207-8e you are not getting near 800MB/sec? Interesting; unfortunately I do not have more than 2 SSDs at the moment to test with.

Can you try the following:

Put 12 SSDs in the HP - just insert the disks into the SAS/SATA ports; they will stay there in "the air" with no caddy.
Put an LSI 9207-8e in the HP and connect it to the backplane of the server.
Do a RAID 10 on the 12 SSDs and test the speed again.

I'm guessing you have these cables:

[attached photo: SAS cables]




I just ran into something interesting - I ran the 24x SSD via a 9271-8i. I set this controller to RAID 1 and just imported da0 (the controller's volume) into FreeNAS.

On a fresh install, my numbers are right where they should be for read and write over 10GbE. But why with a 9271-8i, and none of the other cards in HBA mode? Now, what's weird: to do bonnie++ testing I've been installing jails and running it through them. What I found is that my system read/write drops to HALF of what I was getting JUST by having the jail running on the same storage. Only by deleting the jail and restarting the box do I get my speeds back up...


[attached screenshot]
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Sorry, I forgot about that. It is the vnet driver that allows the jail to share the same NIC as the storage. I ran into that before. If you have a jail sharing the NIC, it will cause all kinds of problems with networking. Do you really need a jail?
No - I just believed it was best practice to install anything in jails.


BUT - I may have just figured something else out: when the Windows 10 desktop resumes from sleep, the speed drops back to 600MB/s - after a reboot, the speed is back up to par.
So you are saying that with the 9207-8e you are not getting near 800MB/sec? Interesting; unfortunately I do not have more than 2 SSDs at the moment to test with.

Can you try the following:

Put 12 SSDs in the HP - just insert the disks into the SAS/SATA ports; they will stay there in "the air" with no caddy.
Put an LSI 9207-8e in the HP and connect it to the backplane of the server.
Do a RAID 10 on the 12 SSDs and test the speed again.

I have multiple adapters here and have been trying everything to get anywhere.

My CURRENT test rigs:
HP DL380e G8 w/ 12x IronWolf 4TB via 9211-8i
CSE-216 JBOD w/ BPN-SAS2-EL1 via external SAS to a 9207-8e (in the HP server) - originally a 9201-16e

AND
CSE-836, 2x E5-2670 w/ 256GB RAM and onboard 10GbE. I have swapped the 9201-16e and 9207-8e in this chassis to the external SAS on the CSE-216 JBOD chassis (same result).

Also

CSE-216 w/ BPN-SAS2-EL1 w/ 9271-8i (its own separate system, dual proc, 256GB RAM)
(This currently gets me 1000/1000/4000 W/RW/R via bonnie++ - the best I've been able to get so far)

I have tried various connection methods using parts I have around, and I tend to hit the same limitation. I have ordered a 9207-8i (to replace the PCIe 2.0 9211-8i) and am waiting for it to arrive.


The isolated CSE-216 system with a ROC (9271) is the only one on which I can get reasonably close to 10GbE line speed for read & write.

At this time I'm thinking I have a backplane limitation issue - the EL1 boards have a SINGLE port for all 24 slots - but even with that limitation I should be getting more.


I was about to buy a D3700 SAS3 shelf and a SAS3 PCIe 3.0 card, but it would max out with 12x SSD. It seems I may need the correct SAS3 backplane for the Supermicro, with 2x SAS3 controller cards per shelf, to get max throughput - this is an expensive undertaking for a home lab.








[attached screenshot]
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
FWIW - I'm quiet because I've got a f-ton of shelves/cards/cables on order - so hopefully within a couple of weeks I'll have some answers.
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Hope you figure it out - 48 SSDs are a rare combo :smile:
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Something interesting - I got a 9207-8i, a PCIe 3.0 card. I moved all my IronWolf drives into my 16-bay Supermicro and was getting the same speed with the PCIe 3.0 card in both. Someone suggested running gstat...

[attached screenshot: gstat output]


Notice that I have 5 of these drives on the low side? Are these drives just being lazy? This is set up as a single RAIDZ2 vdev.

Waiting on a used Supermicro server with a BPN-SAS3-216A-N4 - I'll throw in 3x 9340-8i with 24 SSDs and see what I get.
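Possibly useful for comparing the slow drives against the rest (a sketch - "Data" is the pool from the dd test below, and "da5" is just an example device name):

Code:
# physical providers only, one-second samples - look for disks pinned near 100% busy
gstat -p -I 1s
# per-disk throughput within the pool while the test runs
zpool iostat -v Data 1
# SMART data for a suspect disk
smartctl -a /dev/da5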

FWIW (12x IronWolf 4TB RAIDZ2 in a single vdev):


Code:
root@freenas[/mnt/Data]# dd if=/dev/zero of=/mnt/Data/testfile bs=1M count=10k

10240+0 records in
10240+0 records out
10737418240 bytes transferred in 5.179950 secs (2072880635 bytes/sec)
root@freenas[/mnt/Data]# dd of=/dev/null if=/mnt/Data/testfile bs=1M count=10k
10240+0 records in
10240+0 records out
10737418240 bytes transferred in 1.870061 secs (5741746921 bytes/sec)
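One caveat on those dd numbers (assuming the pool still has the default lz4 compression): /dev/zero is perfectly compressible, and a 10 GiB test file will usually fit entirely in ARC on these boxes, so the write barely touches the disks and the read-back comes mostly from RAM. A rough way to make the write side more honest - "bench" is a made-up dataset name:

Code:
# scratch dataset with compression off, so the zeros actually hit the disks
zfs create -o compression=off Data/bench
dd if=/dev/zero of=/mnt/Data/bench/testfile bs=1M count=10k
# the read-back is still largely an ARC test unless the file is bigger than RAM
dd of=/dev/null if=/mnt/Data/bench/testfile bs=1M count=10k
zfs destroy -r Data/bench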
 

tomaash

Cadet
Joined
Aug 18, 2018
Messages
2
Edit - because I can't read.
SAS expanders don't like SATA, especially when a lot of IOPS are in play. It may indeed be worth testing with more recent expander tech.
 