The biggest SAS 10k for DELL R510 & H200

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
Hi

I have a DELL R510 with an H200. I want to put the biggest possible disks inside. DELL said that this generation of DELL servers supports a maximum of 900 GB SAS. Is that true?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Not even close to true. Dell only suggests the things that they have tested. All it means is that they didn't test anything newer after the server was released, because their whole business is about selling you a whole new system. They don't put any R&D into upgrades other than security updates for the firmware.
Why do you think you want 10k SAS drives?
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
Why do you think you want 10k SAS drives?

I want to dedicate part of the storage to a database iSCSI disk over 10 Gbit Ethernet, so I need fast storage. The second part will be for documents, and I thought about the ST8000NM0085, but it has a 12 Gb/s interface, so I'm worried about the H200.

Maybe someone could recommend the biggest SAS disks for the R510 & H200...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I know that the DELL R510 kind of ties you to the H200, but that is not the best SAS card if you want to get to 10Gb on the network. It actually limits your max IOPS because of the older controller chip. Probably not an issue with spinning disks, though. Your best bang for the buck is 10TB SATA drives. SAS drives are overpriced for this particular use and don't provide any additional speed. I was just doing price comparisons this week as we are ordering 86 of the 12TB Seagate Exos drives to upgrade some servers at work. They are more expensive per TB of storage, but only by about $3 per TB for the application we are working on, and we like having the additional capacity vs the 10TB drives.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I want to dedicate part of the storage to a database iSCSI disk over 10 Gbit Ethernet, so I need fast storage. The second part will be for documents, and I thought about the ST8000NM0085, but it has a 12 Gb/s interface, so I'm worried about the H200.

Maybe someone could recommend the biggest SAS disks for the R510 & H200...

Database iSCSI and documents? There's no way you'll be able to push 10Gbps Ethernet without SSDs or hundreds of HDDs. Seek times will kill you.
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
I know that the DELL R510 kind of ties you to the H200, but that is not the best SAS card if you want to get to 10Gb on the network.
Can you recommend a better controller?

Database iSCSI and documents? There's no way you'll be able to push 10Gbps Ethernet without SSDs or hundreds of HDDs. Seek times will kill you.
Which constellation of disks do you recommend for 12-disk 3.5" FreeNAS storage for databases and documents? We need about 4 TB for the databases; they are not demanding in terms of performance. The FreeNAS box has a 10 Gbit card with 2 ports, so we can use one of them for the databases and the second for documents.
Maybe the ST12000NM0027 is a good idea?
 
Last edited:

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
What is the difference between DELL H200 and HP H220? Both are 6Gb/s. Maybe I should buy an LSI 9300-8i?
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What is the difference between DELL H200 and HP H220?
The HP H220 uses a newer LSI chipset that is more capable and runs on PCI-E 3.0 where the Dell H200 uses an older LSI chipset that has some limitations and only works at PCI-E 2.0 speed. They are different generations. With regard to what @jgreco said, you will not come close to 10Gb network speed on random IO using the limited number of mechanical disks that your server chassis can hold. To get the speed you are interested in, you will need to use SSDs, and SSDs will overwhelm the capability of the H200.
Both are 6Gb/s. Maybe I should buy an LSI 9300-8i?
Both are rated for 6Gb/s controller to disk, but the controller chips are not equal, and that doesn't account for the interface between the controller and the system. There is no advantage in going to a 12Gb SAS controller such as the LSI 9300-8i, because the number of disks and type of disks will be your limiting factor with that controller. It doesn't matter how fast the controller is when you don't have enough disks to do the work.
The fastest mechanical disks that I have read specs on are the Seagate Exos 12TB models, and they have a max speed of about 250MB/s, but that is only for sequential access. If you have random access, like databases and VMs typically need, the seek time adds up fast and the disk can't provide anywhere near that much speed. In the large pools I have at work, the aggregate speed works out to each disk delivering between 100 and 120 MB/s. That is well short of the speed of 10Gb networking, and you should understand that putting a faster controller on slow mechanical disks will not make the disks go faster. It is the mechanical nature of the device that is the limit. If you need speed, you need SSDs, and not just junk / cheap consumer SSDs. You need data-center grade drives or they will not perform well and they will not last long.
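To put rough numbers on that, here is a quick back-of-the-envelope sketch. The per-disk figures are the assumptions from this thread (about 120 MB/s streaming, ~170 random read IOPS), and the 16K I/O size is just an illustrative guess for a database workload, not a measurement of your hardware:
```python
# Back-of-the-envelope: can 12 spinning disks keep a 10GbE link busy?
# Per-disk numbers are assumptions from this thread, not benchmarks.

DISKS = 12
SEQ_MB_S = 120        # realistic per-disk streaming throughput in a pool (MB/s)
RANDOM_IOPS = 170     # Exos-class random read IOPS at queue depth 16
IO_SIZE_KB = 16       # assumed database I/O size (illustrative only)

link_mb_s = 10_000 / 8                                  # 10GbE is ~1250 MB/s
seq_mb_s = DISKS * SEQ_MB_S                             # best case: pure sequential
rand_mb_s = DISKS * RANDOM_IOPS * IO_SIZE_KB / 1024     # random I/O case

print(f"10GbE link:           ~{link_mb_s:.0f} MB/s")
print(f"12 disks, sequential: ~{seq_mb_s:.0f} MB/s")
print(f"12 disks, random I/O: ~{rand_mb_s:.0f} MB/s")
```
Even with those optimistic assumptions, sequential reads only just clear the link speed, and random I/O lands in the tens of MB/s, which is why the disks, not the controller, are the bottleneck.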
I thought about the ST8000NM0085, but it has a 12 Gb/s interface, so
Just because the interface is 12Gb/s does not mean the mechanical parts work faster. Did you look at the spec sheet on that drive?
https://www.seagate.com/www-content...p-3-5-hdd-data-sheetDS1882-2-1606US-en_US.pdf
It uses the same exact mechanical parts as the SATA variant of the drive and they are both rated at a max sustained transfer of 249MB/s.
I don't know how to make that any clearer. There is NO point in buying a faster interface when it is the mechanical portion of the drive that is the problem with speed. To overcome the mechanical speed limit, you would need to throw a massive number of drives at the solution. We have a SAN where I work that is running over 300 drives, partly for the capacity, but mainly for the speed, because when you need both speed and capacity and you don't have the budget for an all-SSD solution, you need a large number of drives, and 12 drives is not a large number of drives.
Maybe the ST12000NM0027 is a good idea?
Still limited by the random IOPS: 170 on read at a queue depth of 16, and that is terrible in comparison to an SSD.
We need about 4 TB for the databases,
What is the TOTAL storage you need, projected out for planned growth over the next five or six years?
The FreeNAS box has a 10 Gbit card with 2 ports
It doesn't matter how many ports you have; the disks are the limitation.

If you were willing to spend for the mechanical drives you were suggesting, it should be no problem to spend for these SSDs:
https://www.ebay.com/itm/New-HP-Intel-DC-S3610-Series-1-6TB-2-5-inch-7mm-SATA-III-MLC-6-0Gb-s-SSD/113687480914

At 1.6TB each, you could put 12 of them in mirrored pairs to get six vdevs of 1.6TB each for around 6TB of usable capacity of super fast storage.
If you want to have more capacity with a little less speed, you could put them in two RAIDz1 vdevs of six drives each and get very fast performance and about 10TB of usable space.

NOTE: I suggest RAIDz1 because these are SSDs and SSDs are usually very reliable. I would not suggest RAIDz1 with mechanical disks. With mechanical disks over 1TB in capacity, RAIDz2 is usually considered a minimum.
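If you want to sanity-check the capacity side of this, here is a rough sketch of the raw pool space for each layout. It ignores ZFS metadata overhead and the free space you should keep in the pool (especially for iSCSI block storage), which is why the usable figures I gave above are noticeably lower than the raw numbers:
```python
# Raw pool capacity for 12 x 1.6TB SSDs under the two layouts discussed above.
# Ignores ZFS overhead and recommended free-space headroom, so usable space
# will be meaningfully lower than these raw figures.

drives, size_tb = 12, 1.6

mirrors_raw = (drives // 2) * size_tb     # 6 x 2-way mirrors  -> 9.6 TB raw
raidz1_raw = 2 * (6 - 1) * size_tb        # 2 x 6-drive RAIDz1 -> 16.0 TB raw

print(f"6 x 2-way mirrors:  {mirrors_raw:.1f} TB raw")
print(f"2 x 6-drive RAIDz1: {raidz1_raw:.1f} TB raw")
```
The relative difference is the point: the mirror layout gives you roughly 60% of the RAIDz1 capacity in exchange for more vdevs and better random IOPS, which is the "more capacity, a little less speed" trade-off described above.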
 
Last edited:

2nd-in-charge

Explorer
Joined
Jan 10, 2017
Messages
94
The HP H220 uses a newer LSI chipset that is more capable and runs on PCI-E 3.0 where the Dell H200 uses an older LSI chipset that has some limitations and only works at PCI-E 2.0 speed. They are different generations.
In the Dell R510 they'll both be at PCI-e 2.0 speed. So there must be a difference in IOPS to justify the upgrade. And I suspect it's only relevant if the OP heeds your advice and buys SSDs for the storage array.

Also, a newbie question: can you mount a 2.5" drive in Dell's 3.5" caddy? Or will the OP need adapter brackets?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Also, a newbie question: can you mount a 2.5" drive in Dell's 3.5" caddy? Or will the OP need adapter brackets?
Yes, but the standard R510 3.5" tray does need an adapter to mount 2.5" drives. This would do the job for that:
https://www.ebay.com/itm/3-5-to-2-5-inch-SATA-Hard-Drive-Disk-Caddy-Tray-Adapter-For-Dell-0G302D-J2R9/132971443476
In the Dell R510 they'll both be at PCI-e 2.0 speed. So there must be a difference in IOPS to justify the upgrade. And I suspect it's only relevant if the OP heeds your advice and buys SSDs for the storage array.
If the R510 only has PCI-E 2.0, that is unfortunate, but the H200 is still IOPS-limited compared to the newer chipset.
 

2nd-in-charge

Explorer
Joined
Jan 10, 2017
Messages
94
The standard R510 3.5" tray does need an adapter to mount 2.5" drives. This would do the job for that:
Thank you. Good to know if we need to put an SSD in our T710 one day..

Both are 6Gb/s. Maybe I should buy an LSI 9300-8i?
Just to give you an idea and to stop you chasing the 12Gbps SAS interface for good: on the PCI-e bus end, a PCI-e 3.0 x8 card would give you 63Gbps. If all of your 12Gbps drives delivered 12Gbps transfer speed, totaling 144Gbps (they won't), the fastest controller in the world would still only pass the data at 63Gbps. It gets worse. That same PCI-e 3.0 x8 card, when plugged into the Dell R510's PCI-e 2.0 slot, will work, but only at PCI-e 2.0 speeds, giving you a whopping 32Gbps on the bus. The good news is that it's still more than needed to feed 2x 10Gbps network interfaces.
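For anyone who wants to check that bus math, here's a rough sketch. The per-lane rates are the usual usable figures after link encoding; real-world throughput will be a bit lower still once protocol overhead is included:
```python
# Approximate usable PCIe bandwidth per slot width (after link encoding overhead).

PCIE2_LANE_GBPS = 4.0    # PCIe 2.0: 5 GT/s, 8b/10b encoding   -> ~4 Gbps per lane
PCIE3_LANE_GBPS = 7.88   # PCIe 3.0: 8 GT/s, 128b/130b encoding -> ~7.9 Gbps per lane
LANES = 8

print(f"PCIe 3.0 x8: ~{PCIE3_LANE_GBPS * LANES:.0f} Gbps")   # ~63 Gbps
print(f"PCIe 2.0 x8: ~{PCIE2_LANE_GBPS * LANES:.0f} Gbps")   # ~32 Gbps

drives = 12
print(f"12 drives at a theoretical 12 Gbps each: {drives * 12} Gbps")  # 144 Gbps
print(f"2 x 10GbE ports: {2 * 10} Gbps")                               # 20 Gbps
```
Even at PCI-e 2.0 x8, the slot still has more bandwidth than two 10GbE ports can use, so the slot itself is not the thing to worry about.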

Just to make it clear, there is no harm in getting 12Gbps SAS drives, but there is no benefit either. They should happily work with a 6Gbps controller. The standards are designed to be backward compatible.

We need about 4 TB for the databases; they are not demanding in terms of performance.
If performance is not that important, and you only need 4TB of storage, wouldn't pretty much any drives suit your needs? There's no point trying to saturate a 10Gbps link if your applications are not going to use it.

Another newbie question: is your H200 in IT mode? Is it cross-flashed to a 9211-8i? Just checking.
And please be aware that the H220/9207-8i might not work in Dell's PCI-e storage slot. Most likely you'll need to put it in one of the rear slots.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm surprised that I can buy a professional 1.6TB SSD for about $250 (unfortunately the disks are available at this price only in the USA; I didn't find a similar price in Europe).
If I decide to buy the mentioned SSDs, will the H220 be enough or will I need an LSI 9300-8i?
Those SSDs were probably decommissioned in favor of something larger and faster. It is amazing the things you can pick up on eBay.
The H220 should be fine, as long as you don't have a really large number of drives. If I recall correctly, the PCIe bus will be the speed limit before the SAS controller.
 

2nd-in-charge

Explorer
Joined
Jan 10, 2017
Messages
94
Can two cables going to the Dell R510 SAS backplane be connected to different controllers?
Rather than replacing the H200, I'd probably look into adding an H220 and connecting half of the SSD drives to it, so that each controller is only passing through half of the IOPS, and together they'd be using 16 PCI-e 2.0 lanes in the R510 rather than 8.
Would that work?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Would that work?
It would still be unbalanced because the chipset in the Dell H200 card is less capable than the chipset in the HP H220. They are different-generation chips, and even though they would both be limited by the PCIe interface, the H200 is even slower than that. If you want to use two cards, it would give you more IOPS, but I would use matching cards.
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
What about the SLOG/ZIL and L2ARC? If I put in 12 1.6TB SSDs, which disks do you recommend for the SLOG/ZIL and L2ARC? The R510 has two additional 2.5" bays which I can connect over SATA...

Does the HP H220 fit in the R510 chassis in place of the H700? Does it support the 12-drive backplane?
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I can't comment on the fit. I don't have access to an R510 to test with. If it has a SAS expander backplane, there should be no trouble connecting it to any SAS controller.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,110
What about the SLOG/ZIL and L2ARC? If I put in 12 1.6TB SSDs, which disks do you recommend for the SLOG/ZIL and L2ARC? The R510 has two additional 2.5" bays which I can connect over SATA...
No SATA device will be fast enough to be an SLOG for all-flash vdevs. You'll need to devote a PCIe slot to an NVMe device like an Intel P3700 or Optane.

L2ARC will also likely be a bad place to spend your money, as anything that misses the main ARC will be going to SSD already. You could use an NVMe L2ARC, but I'd suggest spending the money on RAM instead.
 