HP SmartArray or new HBA?


AgentZero

Dabbler
Joined
Jan 7, 2013
Messages
24
All-

I'm looking to convert an HP DL360G5 into a FreeNAS box. I have a SmartArray P800 controller and an HP SAS Expander in another shelf of 16 disks. After some reading it looks like I have a few options:

- Use the hardware RAID on the P800, but lose lots of ZFS features (not really a fan of this idea)
- Configure P800 for JBOD mode (not possible as far as I can tell)
- Configure each disk connected (24 disks) as a RAID0 volume in the controller to present each disk to FreeNAS individually (Best option?)

So really what I'm wondering is: if I configure each disk as a RAID0 logical volume in the controller, is there any downside aside from the pain of configuring 24 logical volumes?

OR should I just use hardware RAID and be done?

OR should I go buy a different controller with 24 ports that supports JBOD passthrough?
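
If you do go the per-disk RAID0 route, here is a rough sketch of scripting the 24 logical-volume creations instead of clicking through the array configuration screens by hand. This assumes HP's hpacucli CLI and controller slot 0, and the drive addresses are placeholders, so double-check the syntax against your own controller before running anything:

Code:
# Sketch: generate one RAID0 logical drive per physical disk with hpacucli.
# The controller slot and the physical drive addresses below are placeholders;
# check "hpacucli ctrl all show" and "hpacucli ctrl slot=0 pd all show" first.
import subprocess

SLOT = 0  # controller slot; confirm with "hpacucli ctrl all show"

# Placeholder drive addresses -- replace with the ones "pd all show" reports.
drives = ["1I:1:1", "1I:1:2", "1I:1:3"]  # ...and so on for all 24 bays

for d in drives:
    cmd = ["hpacucli", "ctrl", f"slot={SLOT}", "create",
           "type=ld", f"drives={d}", "raid=0"]
    print("would run:", " ".join(cmd))
    # Uncomment once the drive list is verified:
    # subprocess.run(cmd, check=True)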


Some specs:
DL360G5, 24GB RAM
6x 600GB 10K SAS, mirrored, for VM storage
12x 1TB 7.2K in RAIDZ for file storage and backup

Any thoughts are appreciated. Thanks
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
The only real downside is the RAID0 creation at the beginning and the extra step of recreating the volume when a spindle fails. I have done this with my Dell setup and it works fine.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
From my personal experience dealing with controllers that force RAID0 for each drive, you will likely lose the ability to run SMART tests. You may also lose the ability to obtain SMART information automatically using smartctl/smartd. Some controllers will let you obtain some information from your hard drives using the applicable CLI.

Your best bet, if you plan to use this long term, is to find a 24-port controller that can do passthrough. You should definitely check out the CLI for your controller, as some controllers will let you enable passthrough that way.
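
One quick way to find out what you would actually be losing is to ask smartctl for each drive through the controller's cciss passthrough. A minimal sketch, assuming a FreeBSD box where the controller shows up as /dev/ciss0 (the node name and drive count are assumptions; whether full SMART data comes back depends on the controller and firmware):

Code:
# Sketch: see what SMART data is reachable behind a Smart Array controller.
# smartctl's "-d cciss,N" device type is the documented way to address drives
# behind these controllers; the device node (/dev/ciss0 on FreeBSD) and the
# drive count are assumptions for this particular box.
import subprocess

DEVICE = "/dev/ciss0"   # Smart Array node under FreeBSD's ciss(4) driver
NUM_DRIVES = 24         # drives sitting behind the controller

for n in range(NUM_DRIVES):
    result = subprocess.run(
        ["smartctl", "-a", "-d", f"cciss,{n}", DEVICE],
        capture_output=True, text=True,
    )
    # smartctl's exit status is a bitmask, so treat nonzero as "look closer",
    # not necessarily "failed".
    status = "responded" if result.returncode == 0 else f"exit code {result.returncode}"
    print(f"drive cciss,{n}: {status}")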
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The SmartArray controllers are pretty nice up to the point where you want to use them for something like ZFS, at which point they become problematic. It's been a little while since I've been inside one of the HP machines here - one of the wonderful things about them is that they tend to work and be pretty reliable, or break as a whole - but my recollection is that the SmartArray controllers are on some sort of proprietary mezzanine card, but do use a standard SAS cable for each deck of drives. If that's the case, looking at the "gold standard" of an M1015 cross-flashed to IT mode would get you an awesome controller for internal use, and another LSI controller with external ports would work for the shelf...
 

AgentZero

Dabbler
Joined
Jan 7, 2013
Messages
24
Thanks for the input, guys. Initial IOMeter testing shows that drives set up individually as RAID0 volumes, rather than in a hardware RAID of the same level, do provide slightly better performance...but I'm not yet testing the final configuration.

I have a dual NIC card in one of the other PCIe slots, and I would rather not run a 2nd controller for the disk shelf and remove the 2 NICs. SO...

The M1015 would be OK, but I need 8 internal ports...so in theory, I could use one of the internal SAS ports for 4 of the drives, connect the other SAS port to the HP SAS expander, then bring one of the SAS ports from the expander back up for the other 4 internal drives on the host. Lots of cross-cabling...and not really pretty...but it should work.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
The SmartArray controllers are pretty nice up to the point where you want to use them for something like ZFS, at which point they become problematic. It's been a little while since I've been inside one of the HP machines here - one of the wonderful things about them is that they tend to work and be pretty reliable, or break as a whole - but my recollection is that the SmartArray controllers are on some sort of proprietary mezzanine card, but do use a standard SAS cable for each deck of drives. If that's the case, looking at the "gold standard" of an M1015 cross-flashed to IT mode would get you an awesome controller for internal use, and another LSI controller with external ports would work for the shelf...

I found this little guy:
LSI Logic SAS3801E SAS Controller Card SAS-PCI Express x 8
View: http://www.ebay.com/itm/LSI-Logic-SAS3801E-SAS-Controller-Card-SAS-PCI-Express-x-8-/181509925769?pt=US_Computer_Disk_Controllers_RAID_Cards&hash=item2a42d5ab89


Seems everyone on the forum loves these LSI cards; after checking around, I didn't see anyone say this particular one would not work. The listed cards are all the 6G ones, which are more expensive and go beyond what my hardware is going to support (MSA70). Can anyone confirm whether or not that card would work?
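
If it does go in a box, one quick sanity check is to scan the boot messages for the driver attach. A sketch, assuming the SAS3801E (LSI 1068E-based) comes up under FreeBSD's mpt(4) driver; adjust the pattern if yours attaches under a different driver name:

Code:
# Sketch: confirm the LSI HBA attached, by scanning FreeBSD boot messages.
# Assumes the SAS3801E shows up under the mpt(4) driver (e.g. "mpt0: ...");
# adjust the pattern if the card attaches under a different driver name.
import re
import subprocess

dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
hits = [line for line in dmesg.splitlines() if re.search(r"\bmpt\d+", line)]

if hits:
    print("HBA appears to have attached:")
    print("\n".join(hits[:10]))
else:
    print("no mpt lines in dmesg -- the card may not be recognized")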
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If I'm not mistaken that card is limited to 2TB disks.... So that card is probably not something you should go with. ;)
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
If I'm not mistaken that card is limited to 2TB disks.... So that card is probably not something you should go with. ;)

I did see that; yes, I'm not worried about the size limit in my case. I will likely never go larger than 146GB SAS or 1TB SATA per disk. (All refurbs, all the time...)

Other than the size limit, the card/chipset should be supported, the same as the other supported 3Gb LSI cards, no?

Forgot to mention: I thought this card should be supported based on this post:
http://blog.zorinaq.com/?e=10

It lists the card specifically, but it's not specific to FreeNAS, just FreeBSD.
 

DigitalDaz

Cadet
Joined
Sep 5, 2014
Messages
3
The HP servers are great, but just bear this in mind for the G5 series; it may or may not affect whether you spend money on them.

The G5 series is only PCIe 1.1, so you are only ever going to get 4Gb, I think.

Also, if you ever put SATA drives into those backplanes, they will only run at SATA-150, not even SATA2.

This will limit your ZIL/L2ARC if using SSDs.
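
To put rough numbers on that last point (nominal interface rates; the SSD figure is just a typical SATA SSD spec, not a measurement):

Code:
# Rough arithmetic: what a SATA-150 backplane does to an SSD used for ZIL/L2ARC.
sata150_mb_s = 150       # SATA 1.x nominal throughput
typical_ssd_mb_s = 500   # ballpark sequential rate of a modern SATA SSD

fraction = sata150_mb_s / typical_ssd_mb_s
print(f"SSD capped at {sata150_mb_s} MB/s, about {fraction:.0%} of its native rate")
# -> roughly 30%: the backplane, not the SSD, becomes the bottleneck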
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
The HP servers are great, but just bear this in mind for the G5 series; it may or may not affect whether you spend money on them.

The G5 series is only PCIe 1.1, so you are only ever going to get 4Gb, I think.
Also, if you ever put SATA drives into those backplanes, they will only run at SATA-150, not even SATA2.
This will limit your ZIL/L2ARC if using SSDs.

That is an excellent reminder, thanks. So far, using old/cheap server hardware has done really well. It will of course never be an amazing 6G SAS pool, but I want to use it for performance comparison.

After I get educated on the real/proper way to evaluate performance, I'll look forward to comparing this old equipment to some of the more current systems people on here are using, on both performance and cost. Maybe it will help to prove what a few have been saying: that it's better to spend some money on real parts than to be too afraid to spend money and end up without decent performance. Or maybe trying to make older stuff perform as well as newer stuff would end up costing way more than buying some real parts; or maybe not, I don't know yet.
 

DigitalDaz

Cadet
Joined
Sep 5, 2014
Messages
3
IMHO these boxes are now dead for storage use. It's a shame, as they are very good and the build quality is excellent. The bang for the buck is just not there. Bigger SAS drives are still relatively expensive, and the PCIe 1.1 issue will always restrict the speed. The fall in the price of SSDs is what has made me turn away from them.

Where they still come into play for me is with a cheap 4Gb QLogic fiber card: I plug them into my home-brew ZFS SAN, which has 4Gb fiber, and use them as ESXi hosts, primarily just for the CPU grunt, with the resilience of the dual PSUs and so on.

One other area that I haven't tested yet is making them Proxmox cluster nodes using the Ceph distributed storage model, but then you really need 10GbE for replication and you would again hit the PCIe 1.1 issue. I'm sure for many workloads they would still offer excellent performance, though.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
We've got some HP DL365G1's (~2008-2009 era) running as ESXi hosts, dual Opteron 2346 HE (8 cores at 1.8GHz). Even with the HE CPU's these things average 250 watts.

Geekbench 3 results are around 6000 for such a machine.

We've also got some X9SCi-LN4F with E3-1230 running as ESXi hosts. With four 3.5" 7200 RPM HDD's (FreeNAS! Yay!) and running the CPU full out I can get these up to 110, 120 watts MAYBE.

Geekbench 3 results are around 11000 for these.

The build quality of the HP gear is very good but with their recent firmware upgrade policy change we won't be buying their servers.

We're in strange times now. We used to buy drive shelves like the MSA70 in order to build up IOPS capacity for spinny drives; an individual drive might only have 100 IOPS but 24 of them would be 2400 IOPS aggregate! But now it is all screwy. SAS drive prices tend to be out of line: a DAS RAID controller with a quartet of 1TB SSD's is less than $3000. It offers Spaceballs-class "Ludicrous Speed" compared to a shelf of two dozen SAS HDD's ... and fits inside the typical host, reducing power and complexity.

I am not convinced that old gear offers much in the way of value. Heh. If you look at it from a performance per dollar point of view, the E3 nodes get 4x more done per watt AND I get 4 drive NAS storage capabilities included.
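
For what it's worth, the "roughly 4x per watt" figure falls straight out of those numbers; a quick back-of-envelope check (using 115 W as the midpoint of the 110-120 W range):

Code:
# Back-of-envelope performance-per-watt from the figures quoted above.
dl365_score, dl365_watts = 6000, 250   # DL365G1, dual Opteron 2346 HE
e3_score, e3_watts = 11000, 115        # X9SCi-LN4F + E3-1230, midpoint of 110-120 W

dl365_per_watt = dl365_score / dl365_watts   # ~24 Geekbench points per watt
e3_per_watt = e3_score / e3_watts            # ~96 points per watt

print(f"old: {dl365_per_watt:.0f}/W, new: {e3_per_watt:.0f}/W, "
      f"ratio ~{e3_per_watt / dl365_per_watt:.1f}x")   # ~4.0x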
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
jgreco,

For some reason, you always seem to have extremely useful information/insight backed by some form of useful data, and not as much raw hate. How do you do it?

We've got some HP DL365G1's (~2008-2009 era) running as ESXi hosts, dual Opteron 2346 HE (8 cores at 1.8GHz). Even with the HE CPU's these things average 250 watts.
Geekbench 3 results are around 6000 for such a machine.

We've also got some X9SCi-LN4F with E3-1230 running as ESXi hosts. With four 3.5" 7200 RPM HDD's (FreeNAS! Yay!) and running the CPU full out I can get these up to 110, 120 watts MAYBE.
Geekbench 3 results are around 11000 for these.
So I think what this shows is that the power requirements vs. performance are as expected: newer compared to older = way more performance with less power. I love to see direct comparisons with real numbers, though; it puts things in perspective, thanks. I'm wondering how much a Supermicro build costs; I have no experience with them yet...

The build quality of the HP gear is very good but with their recent firmware upgrade policy change we won't be buying their servers.
Like everyone else, I'm pissed about their recent douchebaggery.

We're in strange times now. We used to buy drive shelves like the MSA70 in order to build up IOPS capacity for spinny drives; an individual drive might only have 100 IOPS but 24 of them would be 2400 IOPS aggregate! But now it is all screwy. SAS drive prices tend to be out of line: a DAS RAID controller with a quartet of 1TB SSD's is less than $3000. It offers Spaceballs-class "Ludicrous Speed" compared to a shelf of two dozen SAS HDD's ... and fits inside the typical host, reducing power and complexity.

I am not convinced that old gear offers much in the way of value. Heh. If you look at it from a performance per dollar point of view, the E3 nodes get 4x more done per watt AND I get 4 drive NAS storage capabilities included.

As I have been cruising the forums in the last few weeks, I have not seen anything that directly compares old hardware to new, focusing on cost vs performance.
Do you know if such a comparison exists?
Or can you shed some light on it based on what you've personally purchased?
Maybe with your examples from above?

E.g., from the junk (maybe too harsh a word) I have lying around that I'm using for FreeNAS right now (still waiting on a SAS HBA before testing),
I'll use this as an OLD GEAR example and use prices from eBay (as if you were doing it as a new-to-you purchase):

An HP DL360/380 G5 (1U) / dual-socket E53xx series / ~2.33GHz / 32GB ECC / usually two small internal SAS drives for the OS if needed
(assuming it would bench a little above the G1 series' ~6000 because of the dual quad cores)
costs around $400-$500.

SAS HBA on eBay: ~$35
HP MSA70 (2U, 25 total 2.5" drive slots @ 3Gb speed): ~$200 on eBay
Best drive deal I've seen for the small drives we have: 146GB 10K for ~$40, usually ~$80-100
(we have 10x 146GB 10K SAS and 8x 73GB 10K SAS)

Taking $500 + $400 (MSA70 and drives) + $35,
the total cost to get this now from eBay would be around $1000.
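
Or, as a quick tally in the same form (using the figures above; the $400 is the MSA70-plus-drives lump sum from the line above, so this is ballpark only):

Code:
# Quick tally of the "old gear" build from the eBay figures above.
server = 500            # DL360/380 G5, upper end of the $400-500 range
shelf_and_drives = 400  # MSA70 plus drives, as lumped together above
hba = 35                # used SAS HBA

total = server + shelf_and_drives + hba
print(f"old-gear build: ~${total}")   # $935, i.e. roughly the ~$1000 quoted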

So with all that said, based on your comment
I am not convinced that old gear offers much in the way of value. Heh. If you look at it from a performance per dollar point of view, the E3 nodes get 4x more done per watt AND I get 4 drive NAS storage capabilities included.

I wonder what the cost of a current system offering about the same raw storage space would be, and how the two would compare in raw, untuned performance? (Obviously, as above, newer will blow away older, but at how much of a cost difference? Maybe not as much as people think?)

This discussion would also have done well in this thread, since a 'gotcha' of this guy's MSA70 is the same one jgreco is talking about above.
http://forums.freenas.org/index.php...th-a-hp-dl360-g5-and-msa70s.12617/#post-59061

Like everyone else trying to decide how much money to spend when you're doing this for fun and not for business needs, I think you have to weigh how much money you want to spend against how decent your build is going to be. I wonder, 'Maybe it's just better to buy the Mini and throw drives in it?' or 'Maybe I'll price a Supermicro platform and start adding drives?' As jgreco said, maybe a few 6Gb SAS drives are way better than a bunch of older drives on an older platform, by the time you keep spending money buying older parts.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For some reason, you always seem to have extremely useful information/insight backed by some form of useful data, and not as much raw hate. How do you do it?

Failure on my part I guess.

As I have been cruising the forums in the last few weeks, I have not seen anything that directly compares old hardware to new, focusing on cost vs performance.
Do you know if such a comparison exists?
Or can you shed some light on it based on what you've personally purchased?
Maybe with your examples from above?

Not really. With the switch to virtualization there's a lot less gear flowing through here these days. The closest I can really get is to look at the gear we have, but those DL365's are just being used as ESXi hosts and don't have a bunch of 3.5" hot spinny drives hanging off them.

An HP DL360/380 G5 (1U) / dual-socket E53xx series / ~2.33GHz / 32GB ECC / usually two small internal SAS drives for the OS if needed
(assuming it would bench a little above the G1 series' ~6000 because of the dual quad cores)
costs around $400-$500.

SAS HBA on eBay: ~$35
HP MSA70 (2U, 25 total 2.5" drive slots @ 3Gb speed): ~$200 on eBay
Best drive deal I've seen for the small drives we have: 146GB 10K for ~$40, usually ~$80-100
(we have 10x 146GB 10K SAS and 8x 73GB 10K SAS)

Taking $500 + $400 (MSA70 and drives) + $35,
the total cost to get this now from eBay would be around $1000.

So with all that said, based on your comment


I wonder what the cost of a current system offering about the same raw storage space would be, and how the two would compare in raw, untuned performance? (Obviously, as above, newer will blow away older, but at how much of a cost difference? Maybe not as much as people think?)

This discussion would also have done well in this thread, since a 'gotcha' of this guy's MSA70 is the same one jgreco is talking about above.
http://forums.freenas.org/index.php...th-a-hp-dl360-g5-and-msa70s.12617/#post-59061

Like everyone else trying to decide how much money to spend when you're doing this for fun and not for business needs, I think you have to weigh how much money you want to spend against how decent your build is going to be. I wonder, 'Maybe it's just better to buy the Mini and throw drives in it?' or 'Maybe I'll price a Supermicro platform and start adding drives?' As jgreco said, maybe a few 6Gb SAS drives are way better than a bunch of older drives on an older platform, by the time you keep spending money buying older parts.

It's a rough call. As a business issue, I look at storage as an investment. I want the gear itself to have a five year lifecycle at a minimum. I am fine with maybe upgrading hard drives, especially if they're failing. The WD20EARS we acquired back in 2010(?) are now starting to fail and have been replaced with 3TB or 4TB drives.

The problem with the MSA70-type strategy is that you're unlikely to wind up with a lot of fast storage, but it'll cost a lot of energy anyway. SSD's are nearing $100 for 240/256GB units. HDD's are about $100 for 3TB units. Depending on your goal, a small E3-1230-based system with mainboard ($150), E3 ($220), 16GB RAM ($200), and chassis/PSU ($100) totals in at around $670; add two of either kind of drive and you're at $870 for either fast SSD mirrored storage or large HDD mirrored storage. And it's going to run 60-90 watts, probably. But it probably won't have quite as good quality as the HP gear.
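
Putting that parts list in the same back-of-envelope form as the old-gear tally earlier (prices exactly as quoted above; the two drives are the mirrored pair in the example):

Code:
# The E3-1230 build, priced from the figures in the paragraph above.
parts = {
    "mainboard": 150,
    "E3-1230": 220,
    "16GB RAM": 200,
    "chassis/PSU": 100,
}
base = sum(parts.values())      # $670 before drives
with_drives = base + 2 * 100    # plus two ~$100 drives (SSD or 3TB HDD) = $870

old_gear = 935                  # the ~$1000 eBay build tallied earlier in the thread
print(f"E3 build: ${with_drives} vs. old gear: ~${old_gear}")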

So I don't really have those answers. Here, it is mostly practical to replace gear if we can justify it after maybe 5 years. But while those DL365G1's seem to eat a lot of power at 250W, it pays to remember that introducing them replaced a bunch of older gear that was eating two or three times as much power. So everything is relative. ;-)
 

AgentZero

Dabbler
Joined
Jan 7, 2013
Messages
24
To be completely honest, I've mucked around with HP servers in the lab for some time now. The current iteration is ESX running on 360/380 G6 boxes - the original intention was for the 380G6 to be the FreeNAS box...but as it turns out, when you put an HBA card in the expansion slot and draw some power, the server spins the fans up to near 100%...which is NOT going to work for the lab.

I've since rebuilt a separate FreeNAS box in a chassis with 24 hot-swap bays, a Supermicro motherboard, and two M1015 cards - I can't say enough good things about the Supermicro board. I completely agree with jgreco - I'm not a fan of the HP firmware/support policy. The ESX boxes will be replaced with some more Supermicro white-box gear here soon.

Don't get me wrong - the HP gear is great, and has been completely rock solid, I just don't like the non-open-upgrade policy they have now.

As for the comparison between generations in hardware - there were some great slides from Synergy this year that address just this very subject. I'll try to find them - but basically, the next generation of hardware will give you a major upgrade in performance.
 