
The biggest SAS 10k for DELL R510 & H200

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,350
Thanks
2,983
#21
It's a relatively new product, so it's not something I've actually tried before, but the Intel site says they're intended for servers.
They have a 5-year warranty and the mean time to failure rating is good. I'd say they wouldn't be a bad choice for mass storage. You could give them a try if you can pick them up relatively cheap.
 
Joined
Sep 18, 2012
Messages
88
Thanks
3
#22
No SATA device will be fast enough to be an SLOG for all-flash vdevs. You'll need to devote a PCIe slot to an NVMe device like an Intel P3700 or Optane.
I'm planning to put into my DELL R510:
- 8 x Intel SSD D3-S4510 1.92TB, 2.5in SATA 6Gb/s
- 4 x Seagate Iron Wolf 12TB
- 10 Gbit ethernet NIC
- 64 GB RAM

Will an Intel P3700 as SLOG have a big influence on performance? Is it necessary?
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,350
Thanks
2,983
#23
Will an Intel P3700 as SLOG have a big influence on performance? Is it necessary?
It might make things a bit faster for you. Even in an array that is made up of SSDs, you will have double the work on the pool if you don't move the LOG activity onto a separate device, which is what a SLOG is (Separate LOG). So I would expect it to be faster, but I can't say how much faster it would be.
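To put rough numbers on that "double the work" (a simplified model, not a benchmark; the 1000 MB burst is an arbitrary assumption):

```python
# Simplified model of sync-write traffic seen by the pool. Every sync write is
# logged to the ZIL first and then written again with the next transaction group.
sync_mb = 1000                    # assumed burst of sync writes, in MB

pool_without_slog = sync_mb * 2   # ZIL lives on the pool: log write + txg write
pool_with_slog = sync_mb          # ZIL traffic lands on the SLOG device instead

print(pool_without_slog, pool_with_slog)  # pool sees 2000 MB vs 1000 MB
```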
 
Joined
Sep 18, 2012
Messages
88
Thanks
3
#24
@Chris Moore, big thanks for the answers, and for your patience of course.

I had two scenarios in my head
1.
Volume1 - RAIDZ1 8 x SSD 1.9 TB (databases and VMs system disks)
Volume2 - RAIDZ1 4 x 12 TB Seagate Exos (users documents and photos stored on VMs)

Total space: ~49.3 TB

2.
Volume1 - RAIDZ1 12 x SSD (for all)

Total space: ~22.8 TB

Now I know what you will write - HDDs & 10 Gbit Ethernet is a bad idea, and scenario 2 is better.
Maybe I should build a second HDD FreeNAS with big space for the VM disks where users will store documents, and not waste 10 Gbit Ethernet bandwidth...

Do you have any idea for the two SATA bays located inside the R510 chassis?
 

HoneyBadger

Mushroom! Mushroom!
Joined
Feb 6, 2014
Messages
1,888
Thanks
618
#25
Now I know what you will write - HDDs & 10 Gbit Ethernet is a bad idea, and scenario 2 is better.
Maybe I should build a second HDD FreeNAS with big space for the VM disks where users will store documents, and not waste 10 Gbit Ethernet bandwidth...
I'd actually flip this on its head and say that your R510 with the 3.5" LFF bays is the ideal "big space" host, and then search out a server that has 2.5" SFF bays for your high-performance SSD solution, such as a Dell R720xd (or an R730xd if your budget allows, and spec one with the 4x NVMe bays up front for easy SLOG additions!)

For disk config I'm a huge fan of mirror vdevs; I know that you "don't get as much space" as with RAIDZ but you still don't want one stalled drive (doing a TRIM, a page erase, etc) dragging down all the others. In mirrors it will at least only stuff up one of many vdevs rather than "the one and only vdev" - if you stick with a 12-bay server, that means 6 mirrors. If you go with an R-series XD, I'd start with ten drives (20 bays of SAS/SATA, 4 bays of NVMe) and expand by twos as you can.
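Rough numbers for that capacity tradeoff in a 12-bay server (raw parity math only, assuming the 1.92TB drives mentioned earlier; TB/TiB conversion and ZFS overhead ignored):

```python
# Usable capacity of 12 x 1.92 TB SSDs under the two layouts discussed.
drive_tb = 1.92
drives = 12

# Six 2-way mirrors: half the raw capacity, but six independent vdevs.
mirror_usable = (drives // 2) * drive_tb

# A single 12-wide RAIDZ1: one drive of parity, but only one vdev.
raidz1_usable = (drives - 1) * drive_tb

print(f"6 x mirrors : {mirror_usable:.2f} TB usable across 6 vdevs")
print(f"12-wide Z1  : {raidz1_usable:.2f} TB usable in 1 vdev")
```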

If you have to maximize space, your "bulk data" on the spinning drives could be set up as a RAIDZ2 of 6x12TB in the R510 (~48TB usable), with the option to expand into the other six bays for an additional ~48TB.

With regards to the two SATA bays inside the R510, I believe they're actually connected via the expander to the main SAS backplane as drives 13 + 14. Either way, they'd make a swell spot for boot devices. I don't think I'd want something that hard to get at to be a critical workload or vdev member. L2ARC for the spinning drives might be an option; several users have now reported a positive impact on general file browsing by leveraging a metadata-only L2ARC.
 

2nd-in-charge

FreeNAS Aware
Joined
Jan 10, 2017
Messages
94
Thanks
18
#26
NOTE: I suggest RAIDz1 because these are SSDs and SSDs are usually very reliable. I would not suggest RAIDz1 with mechanical disks. With mechanical disks over 1TB in capacity, RAIDz2 is usually considered a minimum.
Volume2 - RAIDZ1 4 x 12 TB Seagate Exos (users documents and photos stored on VMs)
A problem here: RAIDZ1 on 12TB mechanical disks goes against that rule.

How about
Pool1: 6x 3.84TB S4510 in RAIDZ1 (19.2TB)
Pool2: 6x 10TB Exos in RAIDZ2 (40TB)
?
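The parity math behind those numbers (raw capacity only; real usable space will be a fair bit lower after TB/TiB conversion and ZFS overhead):

```python
# Usable space = (drives - parity drives) x drive size, before any overhead.
def raidz_usable(drives, drive_tb, parity):
    return (drives - parity) * drive_tb

print(raidz_usable(6, 3.84, 1))  # Pool1 (RAIDZ1)
print(raidz_usable(6, 10, 2))    # Pool2 (RAIDZ2)
```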

I'd actually flip this on its head and say that your R510 with the 3.5" LFF bays is the ideal "big space" host, and then search out a server that has 2.5" SFF bays for your high-performance SSD solution, such as a Dell R720xd
+1.
Or a Google Search Appliance, if you like yellow :)
Other servers of that generation that would do the job are a 16-bay IBM x3650 M4 or a 16 or 25 bay HP DL380 G8.
I wonder though how many SSDs a SAS2308 card like the H220 can realistically handle via a server SAS backplane.

The R510, with its PCIe 2.0 bus, will not do justice to an SSD array with multiple vdevs.
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,350
Thanks
2,983
#27
I wonder though how many SSDs a SAS2308 card like the H220 can realistically handle via a server SAS backplane.
If I recall correctly, there's a theoretical bottleneck at around 12 SSDs because of the bandwidth of the PCIe 2.0 interface. In practice, you are probably going to be fine with 24 drives because of the way ZFS handles access to the drives, but you might be sacrificing some performance. It will also depend on the SSD selection to some degree, because some drives perform better than others.
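Back-of-the-envelope, assuming ~500 MB/s of usable bandwidth per PCIe 2.0 lane and ~330 MB/s sustained per SATA SSD (the per-drive figure is an assumption; your drives will vary):

```python
# PCIe 2.0 runs at 5 GT/s with 8b/10b encoding, so ~500 MB/s usable per lane.
lanes = 8                   # an x8 HBA such as the SAS2308
hba_mb = lanes * 500        # ~4000 MB/s through the slot

ssd_mb = 330                # assumed sustained throughput per SATA SSD

drives_to_saturate = hba_mb // ssd_mb
print(f"{hba_mb} MB/s slot / {ssd_mb} MB/s per drive ~= {drives_to_saturate} SSDs to saturate")
```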
 

Joined
Sep 18, 2012
Messages
88
Thanks
3
#32
I'm confused. I'm almost sure that the R510 doesn't have PCIe 3.0; in the spec I found this information: "Intel 5500 chipset IOH provides multiple PCI Express* Gen 2 interfaces". So if I want to use SSDs effectively, I should change hardware to something newer, for example a DELL R620. Am I right?
 

2nd-in-charge

FreeNAS Aware
Joined
Jan 10, 2017
Messages
94
Thanks
18
#33
I wouldn't rush into buying a new server. The great thing about FreeNAS is that you can move storage pools from one system to another pretty easily. So you can set up your SSD pool on the R510, and if or when you find that PCIe 2.0 is holding you back (unlikely, as 32Gbps is still plenty for a 10Gbps network), buy a new server and move all your SSDs there. The same applies to your HBA to some extent. Cross-flash the H200 to 9211-8i IT firmware and see how it goes. If it holds you back in terms of IOPS, get an HP H220.

Are the LSI 9205-8i and LSI 9207-8i the same RAID card?
For FreeNAS purposes it is. I don't think there is a retail version of the 9205-8i, so it might be a slightly different board design to suit OEMs like HP.
I have server DELL R510 with 12 bays, does LSI 9205-8i support all 12 disks?
Just like your existing H200, it has 8 ports, but it will talk to 12 (or 14) drives via the SAS expander backplane installed in your server.
 
Joined
Sep 18, 2012
Messages
88
Thanks
3
#34
@2nd-in-charge do you mean the bottleneck will be the 10 Gbps Ethernet? I have two 10 Gbps Ethernet ports...

In the DELL R510 spec I found this info:

3 PCIe G2 slots + 1 storage slot: one x8 slot, two x4 slots, one storage x4 slot

1. Does "storage slot" mean the slot I should use for the RAID card, or can I use the one which is x8?

2. Does x4 mean 4 x 4 Gbit/s? If yes, it is only 16 Gbit/s against 2 x 10 Gbit/s Ethernet...
 

2nd-in-charge

FreeNAS Aware
Joined
Jan 10, 2017
Messages
94
Thanks
18
#35
@2nd-in-charge do you mean the bottleneck will be the 10 Gbps Ethernet? I have two 10 Gbps Ethernet ports...
Even if you manage to evenly balance the load, 20Gbps is still less than 32Gbps.
In the DELL R510 spec I found this info:

3 PCIe G2 slots + 1 storage slot: one x8 slot, two x4 slots, one storage x4 slot
Crikey! Why would Dell make the storage slot x4 when the LSI controllers they use are x8? I'm pretty sure the storage slot on the T710 is x8, and most of the rear PCIe slots are x8.
1. Does "storage slot" mean the slot I should use for the RAID card, or can I use the one which is x8?
Yes, "storage slot" is the internal PCIe slot where your H200 currently is. I would try to move it to slot 3 (the only x8 slot on your server), provided the cables reach. If they do, it's one less thing to worry about when flashing the card, because a cross-flashed H200 will likely lock up your server when installed in the storage slot. There is a way to fix it, but it's pretty involved:
https://www.ixsystems.com/community...ard-found-in-the-internal-storage-slot.74149/

When the card is in the slot, you can check the number of lanes with the lspci -vv command. Here is the relevant part of my output on the T710 (the card is in one of the x8 PCIe slots at the back):

Code:
81:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
        Subsystem: Dell 6Gbps SAS HBA Adapter
    ....
                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns


2. Does x4 mean 4 x 4 Gbit/s? If yes, it is only 16 Gbit/s against 2 x 10 Gbit/s Ethernet...
Yes, it does.
However, depending on how well the load is balanced across the network interfaces, and on network latency, that 16 Gbit/s PCIe link might still be faster than what you actually push over the combined 20 Gbit/s of Ethernet.
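Rough numbers for the x4 storage slot against the two NICs (protocol overhead ignored on both sides):

```python
# PCIe 2.0 x4: four lanes at ~4 Gbit/s usable each (5 GT/s minus 8b/10b encoding).
pcie_x4_gbit = 4 * 4     # 16 Gbit/s through the storage slot
nics_gbit = 2 * 10       # 20 Gbit/s aggregate over dual 10GbE

print(f"PCIe 2.0 x4: {pcie_x4_gbit} Gbit/s vs dual 10GbE: {nics_gbit} Gbit/s")
```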
 
Joined
Sep 18, 2012
Messages
88
Thanks
3
#36
Yes, "storage slot" is the internal PCIe slot where your H200 currently is. I would try to move it to slot 3 (the only x8 slot on your server), provided the cables reach. If they do, it's one less thing to worry about when flashing the card, because a cross-flashed H200 will likely lock up your server when installed in the storage slot. There is a way to fix it, but it's pretty involved.
I want to put an H220 inside, so I suppose it will be the same situation?
 

2nd-in-charge

FreeNAS Aware
Joined
Jan 10, 2017
Messages
94
Thanks
18
#37
Yes, and it might not even be possible. I don't know enough about how the Dell BIOS detects that the card in the storage slot is "valid". If I recall correctly from watching that video in Chris's thread, it has something to do with the card having a "correct" SBR. I don't know if it's possible to create that SBR on any LSI OEM card (like the HP H220) or only on ones that were originally Dell cards.
 
Joined
Sep 18, 2012
Messages
88
Thanks
3
#39
I'm seriously considering changing the R510 for an R720xd (reason: PCIe 2.0 and only one x8 slot). What do you think about this chassis for FreeNAS, the R720xd?
 

2nd-in-charge

FreeNAS Aware
Joined
Jan 10, 2017
Messages
94
Thanks
18
#40
I think it's a good choice, especially if you can find one for a good price.
I don't have one though, so I can't say how hard it would be to boot from the two SSDs at the back, or whether it has the same nuisance with the storage slot as the R710.
 