SSD Array Performance

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
I've been messing with FreeNAS on and off for almost a year. After going down the "inverted triangle of doom" rabbit hole, I shied away from a GlusterFS 2-node replica + arbiter (the cost just wasn't justified for the massive duplication effort, which was just plain stupid for a home lab).

I was 95% settled on a primary & backup FreeNAS setup - 2x HP DL380 G8, Optane 900P SLOG, 12x 4TB IronWolf. I could get 500-600 MB/s up/down, and with the SLOG for my VMs the numbers were even better over NFS.

Of course, what falls in my lap a week or two before I finalize my plans? 2x JBOD trays of 24x 400GB Intel S3700 SSDs - so of course it's back to the drawing board.

----------------------------------------
My current config: each tray is 4x RAIDZ2 vdevs of 6 drives. So I was of course hoping that with 48 SSDs my read/write over 10Gb would max out...

My writes are no better than with my 12x 4TB IronWolfs. My reads via SMB from FreeNAS to the desktop are 950+ MB/s, but writes are still 550ish. NFS read/write (without SLOG) is 550 MB/s read / 650 MB/s write. With SLOG, writes go to 850 MB/s, but reads stay around 550.

Are my expectations unreasonable for this setup? Is my hardware not capable? A configuration error on my side? I've tried with/without autotune, and tried jumbo frames all the way through.

iperf numbers are good in all directions. The desktop has an X550-T and uses striped NVMe plus an OCZ Z-Drive for the copies to/from (benchmarks at 2.86 GB/s read & write).

(Would have LOVED to use these OCZ Raid-Z 3.2 PCIe cards for NFS storage, but it seems FreeBSD doesn't recognize them.)


Any suggestions are appreciated - even telling me to sell the SSD arrays and just keep the spinners.

Server - HP DL380e G8 12+2
1x E5-2407 v2 @ 2.40GHz
96GB ECC RAM
Internal: LSI 9211-8i
External: LSI 9201-16e
Intel Optane SSD 900P
NIC: 560SFP+

2x JBOD:
Super Micro: BPN-SAS2-216EB w/ SAS2-216EL1
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
What falls in my lap a week or two before I finalize my plans - 2x JBOD trays of 24x 400GB Intel S3700 SSDs
This is supposed to be a safe-for-work board so I'll have to ask you to stop posting so much hardware porn. ;)

Let's try a shotgun approach for some ideas:

Can you tell us anything about the type of data and the access patterns? Recordsize of the datasets, occupancy numbers (brand new, no data?)

Have you tried configuring some drives as mirrors? Ensured you've got the latest LSI IT firmware on your HBAs?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
With SLOG, writes go to 850 MB/s, but reads stay around 550.
SLOG is for write but you don't need that. Read is boosted by having data in either ARC or L2ARC. ARC (Adaptive Replacement Cache) is in RAM, and you have a fair amount of RAM, so what you might try is adding an L2ARC. Here is a Very Good video that talks about the benefits and some tweaks to get it working better.
https://www.youtube.com/watch?v=oDbGj4YJXDw
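Before adding anything, it can be worth checking how the ARC is actually doing. A rough sketch from the FreeNAS shell (these are the stock FreeBSD ZFS counters; the pool and device names in the last line are just examples):

Code:
# check current ARC size and hit/miss counters
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
# if an L2ARC turns out to be worth it, attaching one is a single command, e.g.:
# zpool add <poolname> cache <device>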
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My current config: each tray is 4x RAIDZ2 vdevs of 6 drives. So I was of course hoping that with 48 SSDs my read/write over 10Gb would max out...
You made that a separate pool, YES?
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
This is supposed to be a safe-for-work board so I'll have to ask you to stop posting so much hardware porn. ;)

Let's try a shotgun approach for some ideas:

Can you tell us anything about the type of data and the access patterns? Recordsize of the datasets, occupancy numbers (brand new, no data?)

Have you tried configuring some drives as mirrors? Ensured you've got the latest LSI IT firmware on your HBAs?

First - the SSDs are 5+ years old - I'm playing with FIRE - lol.

Data - nothing heavy at all. I just like moving large ISOs and movies between desktops and the server, so the 48x SSD and 12-spinner FreeNAS pools are empty except for my testing here and there.

As soon as I hit post I realized I haven't tried mirrors - I wasn't impressed with the difference on the 12 spinners given the loss of space, so I kind of forgot to try.

LSI IT firmware - haven't touched them. I read a lot about having the perfect version (the not-too-old but not-too-new one), so I never looked into it. (But I bought them IT-mode flashed from eBay, so I'm thinking semi-recent.)


 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
SLOG is for write but you don't need that. Read is boosted by having data in either ARC or L2ARC. ARC (Adaptive Replacement Cache) is in RAM, and you have a fair amount of RAM, so what you might try is adding an L2ARC. Here is a Very Good video that talks about the benefits and some tweaks to get it working better.
https://www.youtube.com/watch?v=oDbGj4YJXDw

I was thinking that with a pure-SSD pool an L2ARC wouldn't be necessary... I was thinking of partitioning 100GB from the Optane for L2ARC - just because.
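Roughly what I had in mind, as an untested sketch (assuming the Optane shows up as nvd0 with free space after its two existing partitions, and that my pool ends up named SSDArray):

Code:
# carve ~100 GB from the Optane's free space and attach it as L2ARC
gpart add -t freebsd-zfs -s 100G -l l2arc0 nvd0    # becomes nvd0p3 if p1/p2 already exist
zpool add SSDArray cache nvd0p3
zpool status SSDArray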
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
First - the SSDs are 5+ years old - I'm playing with FIRE - lol.
It's all down to how many writes they've had. I took an HGST MLC drive north of 5PB before it finally got replaced.

It had no read or write errors logged during that time.

For firmware it's worth checking. Both of those are SAS2008 so the correct firmware is P20.00.07.00 I believe. But for an all-flash setup of that scale they might be bordering on too slow of an ASIC.
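A quick sketch for checking what they're running now (sas2flash should be available from the FreeNAS shell; the controller index is an example):

Code:
# list all SAS2 controllers with their firmware/BIOS versions
sas2flash -listall
# more detail on a single controller (index taken from the list above)
sas2flash -c 0 -list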

If there's no HDD pools in the system, set the tunable vfs.zfs.metaslab.lba_weighting_enabled: 0 to tell ZFS not to worry about "where on the disk" it's writing - with flash it doesn't matter. But if you have an HDD pool in the system its performance will be hurt (possibly significantly) by that.
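As a rough sketch, you can flip it live and then make it persistent as a sysctl-type tunable in the GUI:

Code:
# disable LBA weighting for an all-flash system (set back to 1 if an HDD pool is ever added)
sysctl vfs.zfs.metaslab.lba_weighting_enabled=0
# confirm
sysctl vfs.zfs.metaslab.lba_weighting_enabled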

Give mirrors a shot.
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
It's all down to how many writes they've had. I took an HGST MLC drive north of 5PB before it finally got replaced.

It had no read or write errors logged during that time.

For firmware it's worth checking. Both of those are SAS2008 so the correct firmware is P20.00.07.00 I believe. But for an all-flash setup of that scale they might be bordering on too slow of an ASIC.

If there's no HDD pools in the system, set the tunable vfs.zfs.metaslab.lba_weighting_enabled: 0 to tell ZFS not to worry about "where on the disk" it's writing - with flash it doesn't matter. But if you have an HDD pool in the system its performance will be hurt (possibly significantly) by that.

Give mirrors a shot.

Much appreciated!

Looks like one of the controllers was at version 19 - I went ahead and updated it to 20. The 8i is for the 3.5" spinners, which I have pulled at the moment (deciding if I really need them vs. the 48x SSDs vs. the power consumption).

gpart was giving me a hell of a time with 2 of the SSDs - it kept saying the device was busy. I tried sysctl kern.geom.debugflags=16 (no change after reboot), so I'm giving the whole array a DBAN nuke now - 17 hours and counting.

Code:
root@freenas[~]# gpart show
=>       40  547002208  nvd0  GPT  (261G)
         40  125829120     1  freebsd-zfs  (60G)
  125829160  125829120     2  freebsd-zfs  (60G)
  251658280  295343968        - free -  (141G)

=>       40  234441568  ada0  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064  234422272     2  freebsd-zfs  (112G)
  234423336      18272        - free -  (8.9M)

=>       40  234441568  ada1  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064  234422272     2  freebsd-zfs  (112G)
  234423336      18272        - free -  (8.9M)

=>       40  781422688  da33  GPT  (373G)
         40         88        - free -  (44K)
        128    4194304     1  freebsd-swap  (2.0G)
    4194432  777228296        - free -  (371G)

=>    32  524256  da48  MBR  (256M)
      32      31        - free -  (16K)
      63  514017     1  !12  (251M)
  514080   10208        - free -  (5.0M)

root@freenas[~]# gpart delete -i 1 da33
gpart: Device busy
root@freenas[~]# gpart delete -i 1 da33
gpart: Device busy



With the external 16e HBA - have a better model in mind?

OK - so for mirrored vdevs - suggestions? 2 drives per mirrored vdev? Or 2 RAIDZ2 vdevs mirrored?

I'll have to play with FreeNAS, as the GUI is a little confusing as to what I want to create... it seems like it would take a little while doing 2-drive mirrored vdevs.


zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl ... etc.? But that doesn't give me such good redundancy, does it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
External: LSI 9201-16e
With the external 16e HBA - have a better model in mind?
The one you have uses PCIe 2.0, so you could go to a newer card that uses PCIe 3.0, but I am not sure that will be a significant change for you. You might look at this, which is a newer chipset and is PCIe 3.0, but I don't know that that is a guarantee of better performance.
https://www.ebay.com/itm/LSI-9206-16e-6Gbps-SAS-HBA-P20-IT-mode-firmware-ZFS-FreeNAS-unRAID-Dell-TFJRW/163274936627
The bad part is that it takes a different cable from the card to the disk shelf than the one you have now.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Giving the whole array a DBAN nuke now - 17 hours and counting

With the external 16e HBA - have a better model in mind?

OK - so for mirrored vdevs - suggestions? 2 drives per mirrored vdev? Or 2 RAIDZ2 vdevs mirrored?

I'll have to play with FreeNAS, as the GUI is a little confusing as to what I want to create... it seems like it would take a little while doing 2-drive mirrored vdevs.


zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl ... etc.? But that doesn't give me such good redundancy, does it?
DBAN nuke or other "overwrite" style erasure isn't good for SSDs. You should boot to Linux (or Windows, I guess) and use the Intel SSD Datacenter Tool (isdct) available here to do an erase/TRIM:
https://downloadcenter.intel.com/download/28639/Intel-SSD-Data-Center-Tool-Intel-SSD-DCT-

For the mirror setup, yes you'd want to create a whole whack of 2-drive mirrors. This sort of reduces redundancy vs a RAIDZ2 since you can only lose one drive per vdev - but you do have a whole lot more vdevs to spread potential failures across. And you could always do snapshots and replicate them back to the spinning disks for additional safety.
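From the command line the layout would look something like the sketch below (the GUI is the supported way to build it on FreeNAS; da0 through da47 and the pool name tank are placeholders):

Code:
#!/bin/sh
# sketch: assemble "mirror daX daY" pairs for 24 two-way mirrors across 48 disks
VDEVS=""
i=0
while [ $i -lt 48 ]; do
  VDEVS="$VDEVS mirror da$i da$((i+1))"
  i=$((i+2))
done
zpool create tank $VDEVS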
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
DBAN nuke or other "overwrite" style erasure isn't good for SSDs. You should boot to Linux (or Windows, I guess) and use the Intel SSD Datacenter Tool (isdct) available here to do an erase/TRIM:
https://downloadcenter.intel.com/download/28639/Intel-SSD-Data-Center-Tool-Intel-SSD-DCT-

For the mirror setup, yes you'd want to create a whole whack of 2-drive mirrors. This sort of reduces redundancy vs a RAIDZ2 since you can only lose one drive per vdev - but you do have a whole lot more vdevs to spread potential failures across. And you could always do snapshots and replicate them back to the spinning disks for additional safety.

Thanks! I rarely ask for help (search is my friend) - it's very nice to have such constructive help!

So I did what you suggested - canceled DBAN, ran the Intel SSD tool, upgraded the firmware on all of them, and then did an erase. (Seems to have solved the drive issues.)

I then created 12 vdevs of mirrored SSDs, did some testing, and then extended the pool to 24 vdevs of mirrored SSDs.

I'm starting to second-guess my testing methods. First, I create an NFS share off the pool, use vCenter to create a Windows Server 2019 VM, install VMware Tools, and then run ATTO Disk Benchmark against the C: drive (since it's actually sitting on the FreeNAS pool).

12x vdevs (no SLOG): 750 MB/s write - 550 MB/s read. Assuming via ESXi this is a sync write over NFS.

24x vdevs (no SLOG): 650 MB/s write - 450 MB/s read over NFS. (I created a new VM to confirm it spread the files across more of the vdevs.)

I know adding a SLOG will help max my writes, but not my reads. I'm a bit baffled that I can get the same performance from my 12x spinners with a SLOG. I just don't have the heart to sell these JBOD trays - I need to find a problem for my solution! lol

Now, I did some SMB tests (confirmed desktop to desktop hits 1 GB/s synchronous, NVMe to NVMe). From the desktop to a FreeNAS SMB folder with the 24x vdevs: 580 MB/s write & 950 MB/s read.

I'm curious if my testing methods are SH** - considering my numbers, RAIDZ2 is almost comparable and I pick up a few more TB with it.

My raw dd numbers from the FreeNAS shell to the pool:
1912 MB/s write - 2870 MB/s read
Code:
[root@freenas /mnt/SSDArray/SMB]# dd if=/dev/zero of=testfile bs=1G count=100
100+0 records in
100+0 records out
107374182400 bytes transferred in 56.139333 secs (1912637297 bytes/sec)
[root@freenas /mnt/SSDArray/SMB]# dd if=testfile of=/dev/zero bs=1G count=100
100+0 records in
100+0 records out
107374182400 bytes transferred in 37.308033 secs (2878044587 bytes/sec)
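One thing I still need to rule out is compression skewing those /dev/zero numbers - a rough sketch of the check (dataset name as in the paste above):

Code:
# FreeNAS datasets default to lz4, which compresses zeros away and can inflate dd results
zfs get compression SSDArray/SMB
# incompressible alternative (urandom is CPU-bound, so treat this as a floor rather than a ceiling)
dd if=/dev/urandom of=/mnt/SSDArray/SMB/randfile bs=1M count=10240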
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thanks! I rarely ask for help (search is my friend) - it's very nice to have such constructive help!
Please ask. We do this for fun and if you don't ask, what will we do with our free time.
I know adding a SLOG will help max my writes, but not my reads. I'm a bit baffled that I can get the same performance from my 12x spinners with a SLOG. I just don't have the heart to sell these JBOD trays - I need to find a problem for my solution! lol
You might want to look at these performance tests with some different SAS controllers:

Comparing HBA IT mode SAS controllers
https://www.youtube.com/watch?v=PeFJtjVvGyc

LSI SAS2308 performance benchmarks
https://www.youtube.com/watch?v=LMPq1B31cmE

LSI SAS2008 with IBM SAS expander performance benchmarks
https://www.youtube.com/watch?v=BvL70tEW3VU

LSI SAS2008 HBA performance benchmarks in 2018
https://www.youtube.com/watch?v=craKJRw4-9c

If I recall, the best performance the tester got was with two SAS controllers. So it is possible that you might get better performance if you put each of the SSD enclosures on a separate controller. I can see how you might be getting a bottleneck in the SAS controller with that older controller. The tester that made these videos didn't get a bottleneck because they were using mechanical disks, but you might be hitting a limit with SSDs. That is part of the reason I suggested a newer controller.

You might also want to try this performance testing tool developed by one of the senior moderators on the site:

solnet-array-test (for drive / array speed) non destructive test
https://forums.freenas.org/index.php?resources/solnet-array-test.1/

Another thing to keep in mind is that to do performance testing and get valid numbers, you need to turn caching off so you are testing actual drive speed vs the speed of RAM.
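For example, a sketch of a throwaway dataset that bypasses the caches for data blocks (using your SSDArray pool as the example name - don't leave these properties set on real data):

Code:
zfs create -o primarycache=metadata -o secondarycache=none SSDArray/bench
# run the benchmark against /mnt/SSDArray/bench, then clean up
zfs destroy SSDArray/bench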

You might also want to read the write-up that @jgreco did on why to use mirrors for block storage but I don't know if that is really applicable to NFS storage of VM images.

Some differences between RAIDZ and mirrors, and why we use mirrors for block storage
https://www.ixsystems.com/community...d-why-we-use-mirrors-for-block-storage.44068/
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You might also want to read the write-up that @jgreco did on why to use mirrors for block storage but I don't know if that is really applicable to NFS storage of VM images.

It's applicable to anything where there are large-scale rewrites of random data blocks. That definitely covers both VM storage and database files, but may include other stuff.

RAIDZ is great at large file archival stuff. Your ISO images, your home movies, stuff like that. It's optimized towards that. If you were going to archive away an old VM, that's fine too.

Mirrors are generally great at every kind of storage, but suffer from space efficiency issues.

On a 12-drive array, with RAIDZ3, you can specify a warm spare disk, have resilience against three failures, and still have eight disks worth of space.

On a 12-drive array, with mirrors, you can only get six disks worth of space MAX, and that's with single-failure resilience, or four disks worth of space with double-failure resilience (four 3-disk vdevs).

I know adding a SLOG will help max my writes, but not my reads. I'm a bit baffled that I can get the same performance from my 12x spinners with a SLOG. I just don't have the heart to sell these JBOD trays - I need to find a problem for my solution! lol

No, a SLOG does not "help max [your] writes." Turning off sync writes will max out your writes. The reason you get the same performance with 12x HDD as with SSD when using a SLOG is that the SLOG is ultimately limiting your performance.

https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

The ultimate write speed of your pool is whatever write speeds are when sync writes are disabled. It is not possible to make the pool go faster than that without redesigning/reconfiguring the pool or eliminating hardware bottlenecks somehow.

Adding a SLOG and turning on sync writes will immediately tank your pool's performance. This is expected. The process of committing sync writes is inherently a performance-lossy process. What the SLOG is doing for you is saving you from the MUCH slower horror of committing ZIL transactions to your main pool.
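If you want to see the gap for yourself, it's a one-property experiment - sketch below, with SSDArray/nfs as a stand-in for whatever dataset backs the NFS export (and understand the data-loss implications before leaving sync disabled on VM storage):

Code:
zfs set sync=disabled SSDArray/nfs   # upper bound: what the pool can actually do
zfs set sync=always SSDArray/nfs     # worst case: every write is forced through the ZIL/SLOG
zfs set sync=standard SSDArray/nfs   # default: honor whatever the client requests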
 

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Please ask. We do this for fun and if you don't ask, what will we do with our free time.

You might want to look at these performance tests with some different SAS controllers:

Comparing HBA IT mode SAS controllers
https://www.youtube.com/watch?v=PeFJtjVvGyc

LSI SAS2308 performance benchmarks
https://www.youtube.com/watch?v=LMPq1B31cmE

LSI SAS2008 with IBM SAS expander performance benchmarks
https://www.youtube.com/watch?v=BvL70tEW3VU

LSI SAS2008 HBA performance benchmarks in 2018
https://www.youtube.com/watch?v=craKJRw4-9c

If I recall, the best performance the tester got was with two SAS controllers. So it is possible that you might get better performance if you put each of the SSD enclosures on a separate controller. I can see how you might be getting a bottleneck in the SAS controller with that older controller. The tester that made these videos didn't get a bottleneck because they were using mechanical disks, but you might be hitting a limit with SSDs. That is part of the reason I suggested a newer controller.

You might also want to try this performance testing tool developed by one of the senior moderators on the site:

solnet-array-test (for drive / array speed) non destructive test
https://forums.freenas.org/index.php?resources/solnet-array-test.1/

Another thing to keep in mind is that to do performance testing and get valid numbers, you need to turn caching off so you are testing actual drive speed vs the speed of RAM.

You might also want to read the write-up that @jgreco did on why to use mirrors for block storage but I don't know if that is really applicable to NFS storage of VM images.

Some differences between RAIDZ and mirrors, and why we use mirrors for block storage
https://www.ixsystems.com/community...d-why-we-use-mirrors-for-block-storage.44068/


Hi - I actually found and watched those videos a few hours before you posted! I've also started a dialog with theartofservers, as I did get my HBA cards from him. He gave me some ideas to try - my results don't seem promising.

Per his suggestion (break the drives into an mdadm stripe to take ZFS out of the equation), I loaded CentOS onto a USB, imported the ZFS pool first, and tested with bonnie++ - the numbers came out similar to what I had been quoting earlier.

So I destroyed the pool and started down this hole. (Thank god for some good scripting posts - what a PITA doing anything with 48 drives is... lol)
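For anyone following along, the gist of the scripting was something like this (paraphrased and untested - the device glob and mount point are placeholders):

Code:
#!/bin/sh
# rough sketch: one big mdadm RAID 5 across the 48 SSDs, then a filesystem for bonnie++
DRIVES=$(ls /dev/sd[b-z] /dev/sda[a-w])   # adjust to your actual 48 device names
mdadm --create /dev/md0 --level=5 --raid-devices=48 $DRIVES
mkfs.xfs /dev/md0
mkdir -p /mnt/raid1
mount /dev/md0 /mnt/raid1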


And my results with a RAID 5, 48-drive SSD array:
w = 657 MB/s, rw = 680 MB/s, r = 4021 MB/s

GARBAGE it seems

Code:
[root@localhost raid1]# bonnie++ -u root -r 1024 -s 16384 -d /mnt/raid1/RAIDFolder/ -f -b -n 1 -c 4

Version  1.97       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost.local 16G           657341  70 680610  66           4021459  99  2461 107
Latency                        34034us   24506us                318us    9654us
Version  1.97       ------Sequential Create------ --------Random Create--------
localhost.localdoma -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                  1   343   4 +++++ +++   588   2   609   4 +++++ +++   551   2
Latency               282ms      77us    5974us    7726us       8us    79555us


Thoughts? HP DL380e G8 w/ E5-2407, 96GB RAM - does the machine just suck? I've got a Supermicro X9DRH-iTF with dual 2690 v2 that I may try - I'm just a huge fan of iLO vs. IPMI.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
HP DL380e G8 w/ E5-2407, 96GB RAM
That is a socket LGA 1356 processor @ 2.4 GHz, if I found the correct info.
I've got a SuperMicro X9DRH-iTF with dual 2690v2 I may try
That is a significantly better system - LGA 2011 @ 3 GHz. I would go with the Supermicro system if it were me. If I had the enclosures you are working with, I would put each of them on a separate controller to see if the controller is the bottleneck.
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Well, with 48 SSDs I was hoping for some astronomical read and write speeds. At 550 MB/s, something is definitely off.
I would try 2 controllers for the shelves (LSI 9207-8e), one for each shelf of SSDs.
RAID 10 (striped mirrors).

Regarding SSDs, I was having low performance with LSI SAS2008-chipset controllers (Dell H200E), and changing to an LSI SAS 9207-8e (SAS2308 chipset) solved my problem. You can find the LSI SAS 9207-8e on eBay at a very good price.

I had 8x 720GB China SSDs in my MD1220, and read was over 1110 MB/s and write was over 1100 MB/s. But the drives failed in 3 weeks, so I trashed the China SSDs and went with Samsung. On my H200E controller (2008 chipset), though, the China SSDs never went above 650-700 MB/s.

With your setup, I would definitely use a separate controller for each shelf. I do the same; in your case that would be 3 cards per shelf, or 1x 16-port card per shelf (LSI SAS 9206-16e).
Do a firmware upgrade on the server, BIOS and iLO.
Do a firmware update on the controllers; if you have the same model of controllers, it is advised to have the same firmware and BIOS on them.

Also, do you have both riser cards in the system? Guessing not, since you have only 1 CPU, but check.

[attached image]


The drives you have are rated at 500 MB/s read and 460 MB/s write, so you should go past 1000 MB/s in any RAID setup (10, Z1, Z2, Z3).


[attached image]



The picture below is with only 2x Samsung QVO 1TB SSDs in a mirror (I have 8 VMs running from the same pool while doing the copy test).

[attached images]
 
Last edited:

Poached_Eggs

Dabbler
Joined
Sep 17, 2018
Messages
31
Well, with 48 SSDs I was hoping for some astronomical read and write speeds. At 550 MB/s, something is definitely off.
I would try 2 controllers for the shelves (LSI 9207-8e), one for each shelf of SSDs.
RAID 10 (striped mirrors).

Regarding SSDs, I was having low performance with LSI SAS2008-chipset controllers (Dell H200E), and changing to an LSI SAS 9207-8e (SAS2308 chipset) solved my problem. You can find the LSI SAS 9207-8e on eBay at a very good price.

I had 8x 720GB China SSDs in my MD1220, and read was over 1110 MB/s and write was over 1100 MB/s. But the drives failed in 3 weeks, so I trashed the China SSDs and went with Samsung. On my H200E controller (2008 chipset), though, the China SSDs never went above 650-700 MB/s.

With your setup, I would definitely use a separate controller for each shelf. I do the same; in your case that would be 3 cards per shelf, or 1x 16-port card per shelf (LSI SAS 9206-16e).
Do a firmware upgrade on the server, BIOS and iLO.
Do a firmware update on the controllers; if you have the same model of controllers, it is advised to have the same firmware and BIOS on them.

Thanks for this info! I've got the stock 9271 HBA that came with the servers - it can't be flashed to IT mode - but I recall getting some big numbers in just a stripe. I was starting to think it was the HBA, but I've got a bit more testing to try.

Everything is up to date on the server and HBAs (trying to rule out the obvious), and I even updated the firmware on the SSD drives themselves.

I have 3x PCIe 3.0 and 1x PCIe 2.0 slots - the 10Gb 560 NIC, the 9211-8i (for the spinners), the Optane 900P for SLOG, and the external HBA card - so you can see I'm out of PCIe slots, and 2x adapters may not fit. Perhaps if I don't need the SLOG it will open a slot for me.

Also, do you have both riser cards in the system? Guessing not, since you have only 1 CPU, but check.
Right, only 1x CPU - as this is storage only, I figured higher clock and lower cores.
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
On full SSD, I remember that SLOG or L2ARC is not needed, as 48 SSDs will outperform 1 NVMe drive (in general).

I would go with a better HBA; as I said, it worked for me, and as I posted above, I get 335 MB/s with 1 mirror of SSDs, not with 24 (48 / 2).

Also note that you will hit more than 75,000 IOPS with a RAID 10 setup from those 48 SSDs, which means your VMs will feel almost instant.
I am sure that your problem is something related to the HBA.

I would do this:

Get an LSI SAS2308-chipset HBA for the shelves, and for the internal disks as well.
Drop the Intel Optane, as you will not need it.
Reinstall FreeNAS and do the test again; set up the 48 SSDs in RAID 10 (striped mirrors).


The LSI SAS 9271 is a RoC (RAID-on-Chip), not a true HBA (LSISAS2208 dual-core RAID on Chip).
https://www.broadcom.com/products/storage/raid-controllers/megaraid-sas-9271-8i#specifications
If you want to put another riser in that HP, you will need to install the second CPU, as the 2nd riser uses PCIe lanes from the second CPU.

As for the shelves, I am not familiar with Supermicro, so my statement in my previous post may be off regarding the number of controllers needed for one shelf. Looking at the backplane, it has 6 SFF-8087 connectors - is that true? Or does it use SAS expanders like HP, Dell, or NetApp shelves?
If it is direct-attach, you could use a SAS expander like the Intel RES2SV240 SAS 2 expander ( https://www.servethehome.com/byo-sas-expander-deal-intel-res2sv240-sas-2-expander/ ).

Supermicro:
[attached image]




Dell:

[attached image]
 


HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@Poached_Eggs It seems like a major bottleneck could be the older SAS2008 based external HBA. It's apparently very easy to exhaust the raw IOPS/processing capability of this SoC (290,000 IOPS) with a bunch of SSDs. You've got some strong and consistent performers for SSDs there (DC S3700) so in my mind it's entirely plausible that they're overwhelming the controller.

The newer SAS2308 supposedly supports more simultaneous interrupts and 600,000 IOPS - the newest SAS3008 is supposed to be "over a million" but will be more costly and require newer cables.

It might be worth trying to pick up a refurbished SAS2308-based unit (HP H221?) to see if this helps alleviate the pressure.
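One rough way to confirm it: watch per-disk busy while a benchmark runs (sketch below). If every da device sits well under 100% busy but the pool still tops out, the limit is upstream of the disks - HBA, expander, or CPU.

Code:
# refresh once per second, physical providers only
gstat -p -I 1s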
 