NewEgg refurbished HP ProLiant rackmount servers.

KAWood

Cadet
Joined
Sep 9, 2020
Messages
2
I see a ton of refurbished HP ProLiant rackmount products @ around $300 or less. Can these be used for a FreeNAS server?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,398
Yes, but you have to be careful you get one with an LSI-based storage adapter that can be flashed to IT mode.
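For anyone unsure how to check: boot any Linux live USB on the box and look at `lspci`. A minimal sketch follows; the sample output is hypothetical and inlined only so the filter itself can be demonstrated.

```shell
# Hypothetical `lspci` excerpt from a refurb ProLiant, inlined so the
# filter can be shown without the hardware; run plain `lspci` on the
# real machine instead.
sample='00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset
05:00.0 RAID bus controller: Hewlett-Packard Company Smart Array G6 controllers
06:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2'

# The same filter works against real `lspci` output:
echo "$sample" | grep -i -E 'raid|serial attached scsi'
```

A "Smart Array" hit is HP's own RAID silicon; a Broadcom/LSI "Serial Attached SCSI controller" hit is the kind that can typically be flashed to IT mode.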
 

KAWood

Cadet
Joined
Sep 9, 2020
Messages
2
So, how do I figure that out?
And, I like your saying "Never..." Lot of that going around.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,398
Looks like most of the refurb NewEgg servers are using the P410 SmartArray adapter. There's a Linux hpsahba utility to convert these from RAID to HBA mode, so you may need to boot into a Linux live environment to run the utility. On the FreeNAS side, the P410 is supported by the ciss driver, which is built into the kernel. After running hpsahba and rebooting into a FreeNAS boot volume, see if the individual drives show up.
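A sketch of that flow, with hypothetical device names and output (note that later replies in this thread advise against the ciss route entirely):

```shell
# Sketch of the steps described above. hpsahba is the third-party
# tool from github.com/im-0/hpsahba; device names are hypothetical,
# and enabling HBA mode DESTROYS any existing array metadata.
#
#   hpsahba -E /dev/sg0        # from the Linux live environment
#   # ...power-cycle, boot the FreeNAS boot volume...
#   camcontrol devlist         # FreeBSD: list attached disks
#
# Hypothetical `camcontrol devlist` output, inlined so the final
# "did the raw drives show up?" check can be illustrated:
devlist='<HP EG0300FBDBR HPD5>  at scbus0 target 0 lun 0 (pass0,da0)
<HP EG0300FBDBR HPD5>  at scbus0 target 1 lun 0 (pass1,da1)'

echo "$devlist" | grep -c '(pass[0-9]*,da[0-9]'   # count of visible raw drives
```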
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Looks like most of the refurb NewEgg servers are using the P410 SmartArray adapter. There's a Linux hpsahba utility to convert these from RAID to HBA mode, so you may need to boot into a Linux live environment to run the utility. On the FreeNAS side, the P410 is supported by the ciss driver, which is built into the kernel. After running hpsahba and rebooting into a FreeNAS boot volume, see if the individual drives show up.

Oh HELL NO, do NOT try to use a CISS-based card. Please read some of my introductory storage controller posts, especially https://www.ixsystems.com/community...s-and-why-cant-i-use-a-raid-controller.81931/ and https://www.ixsystems.com/community...-sas-sy-a-primer-on-basic-sas-and-sata.26145/ -- the CISS cards are flaky under FreeBSD in normal use and should be considered unsafe for ZFS.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
760
Oh HELL NO, do NOT try to use a CISS-based card. Please read some of my introductory storage controller posts, especially https://www.ixsystems.com/community...s-and-why-cant-i-use-a-raid-controller.81931/ and https://www.ixsystems.com/community...-sas-sy-a-primer-on-basic-sas-and-sata.26145/ -- the CISS cards are flaky under FreeBSD in normal use and should be considered unsafe for ZFS.
I can confirm. I've tried. Bad idea. Nothing wrong with the ProLiants, but buy a separate HBA :)
 
Joined
Jun 15, 2022
Messages
674
Oh HELL NO, do NOT try to use a CISS-based card. Please read some of my introductory storage controller posts, especially https://www.ixsystems.com/community...s-and-why-cant-i-use-a-raid-controller.81931/ and https://www.ixsystems.com/community...-sas-sy-a-primer-on-basic-sas-and-sata.26145/ -- the CISS cards are flaky under FreeBSD in normal use and should be considered unsafe for ZFS.
2022 Information Update Request:
Reason: TrueNAS SCALE uses Linux (vs. FreeBSD previously).
Purpose: Tiny isolated lab, replacing existing server.
Usage: Light, other than resilvering.
Focus: Long-term data integrity.

How well does TrueNAS SCALE work with:

HP ProLiant ML350 G6 server, 48GB ECC RAM
HP Smart Array P410i Controller, 256MB, v5.14 (SAS)
1 SFF box with 8 SAS hot-swap bays
8 HP SAS HDD (350GB 10K RPM x6, 2TB 7.5K RPM x2)

Would the P410 still need to be replaced with an LSI?
+ If not, does the P410 BIOS need to be flashed?
+ Would the 256MB cache need to be turned off?
- If so, which LSI card would work best?

Are there any special considerations for HP HDDs?

Notes:
  • I have experience in FreeBSD and Linux, not TrueNAS SCALE.
  • I read both linked threads, and other threads and articles here and other places. I have a relative idea what the answer most likely is, however it's generally better to ask and learn what I don't know than forge ahead into a swamp.
  • The hardware is "outdated" however more than sufficient for the need. "New" hardware at additional expense would also be under-utilized and become outdated, so it makes sense to "recycle" hardware already available to the lab.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
HP Smart Array P410i Controller, 256MB, v5.14 (SAS)

You still cannot use this. Linux is not some sort of magic fix for brain-dead hardware. Don't believe me? Go to the Proxmox hardware requirements and look for yourself:

Neither ZFS nor Ceph are compatible with a hardware RAID controller. Shared and distributed storage is also possible.

That's not just a TrueNAS thing; Proxmox, Nexenta, Delphix, Joyent, etc. have all warned against it. I am not going to track down cites for all of them. Either trust me or try it. If you try it, and you lose, I'll be happy to cry with you over your lost data over a virtual beer, but you are limited to at most one free virtual beer. :smile:

Would the P410 still need to be replaced with an LSI?

YES.

+ If not, does the P410 BIOS need to be flashed?
+ Would the 256MB cache need to be turned off?

The BIOS is irrelevant. The cache is not acceptable.

- If so, which LSI card would work best?

I would shoot for the cheapest LSI 2008 card you can find. These are usually in the area of thirty bucks on eBay.

If your card looks like the one in this listing, with SFF-8087 on the end, the Dell H310 is recommended.


The H310 looks like this: [image]


The H200s may be cheaper, but the cables come out on top.
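Once an LSI card is installed, it's worth confirming it actually carries IT (plain HBA) firmware rather than IR (RAID) firmware. `sas2flash -listall` from the LSI/Broadcom flasher reports this; the string below is a hypothetical excerpt, since exact formatting varies by tool version.

```shell
# Hypothetical one-line excerpt of `sas2flash -listall` output for a
# SAS2008 card; the real command must be run on the machine itself.
firmware='SAS2008(B2)  20.00.07.00  IT'

case "$firmware" in
  *IT*) echo "IT firmware: fine for ZFS" ;;
  *IR*) echo "IR firmware: reflash to IT before trusting it with ZFS" ;;
esac
```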

Purpose: Tiny isolated lab, replacing existing server.
[...]
Focus: Long-term data integrity.

So I understand the contradictory issues at play. One of the things that happens in IT is that sometimes things are just not usable any longer, or for the purpose you want to use them for. I just brought back into the shop four DL365 G1's, which I suspect have the very P410 card that we're discussing. I'm stuck on whether to put the effort in to convert them to a small Proxmox cluster. I have at least a dozen cards in inventory right now, mixed between H200, H310, M1015, etc. It wouldn't matter if the cluster hosts are 12-14 years old. I got them for half price when they were brand spanking new back then (thanks We Energies!) and you just hate to throw away 8 core, 32GB hypervisors even if they are really slow.

But I tell you what. Your profile lists you as "Milwaukee, WI". Your locality RFC-1480 hostmaster, i.e. me, says hi and wonders generally if you might be in the southwest quadrant of the county, let's say, near West Allis or Greenfield. I might be willing to gift you an H310 on condition that you NOT try using the P410. I normally Freecycle certain things that I wish to dispose of by meeting people at the West Allis PD parking lot where they have a dedicated spot for online sales that's monitored by a camera. Very safe, no risk to you or me. Let me know. I'm going to commit this message before my natural cheapskate goes all Gollum and obsesses about giving up precious Dell PERC cards.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
760
You want an H220 card, which has the same form factor as the H310. It's also PCIe Gen 3.
 
Joined
Jun 15, 2022
Messages
674
I'm glad I asked...

I understand the H310 is Dell and the H220 is HP, and HP usually plays best with HP "stuff," but beyond that, are the differences that the

H310 is PCIe 2.0
H220 is PCIe 3.0

meaning the H220 uses 128b/130b encoding (~1.5% overhead) vs. the H310's 8b/10b encoding (20% overhead)?

I'd think the PCIe 3.0 has about twice the bandwidth of 2.0 while also being more power efficient?

That comes into play for resilvering?
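The arithmetic behind those questions can be sketched quickly (x8 link width assumed, since both HBAs are 8-lane cards; usable GB/s ≈ GT/s per lane × encoding efficiency × lanes ÷ 8 bits):

```shell
# Back-of-the-envelope PCIe bandwidth for an x8 HBA slot.
awk 'BEGIN {
  gen2 = 5.0 * (8/10)    * 8 / 8   # PCIe 2.0 x8: 4.00 GB/s usable
  gen3 = 8.0 * (128/130) * 8 / 8   # PCIe 3.0 x8: ~7.88 GB/s usable
  printf "PCIe 2.0 x8: %.2f GB/s\n", gen2
  printf "PCIe 3.0 x8: %.2f GB/s (%.2fx)\n", gen3, gen3/gen2
}'
```

So roughly twice the bandwidth, yes; but eight spinning disks together stay well under 4 GB/s of aggregate throughput, so either link is unlikely to be the bottleneck when resilvering HDDs.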
 
Last edited:

NickF

Guru
Joined
Jun 12, 2014
Messages
760
The H220 is LSI 2308 based

where the H310 is LSI 2008 based.

The 2308 chipset is also in the non OEM LSI 9207-8i card, rather than the LSI 9211-8i (which is similar to H310).

Really, LSI literally overclocked the 2008 and added PCIe Gen 3. They run hotter and use more power. If you are running only HDDs and no SSDs, it likely won't matter. Just wanted you to have all of the info.
 
Last edited:
Joined
Jun 15, 2022
Messages
674
The system is expected to run only HDDs. The intended use should see 8 GB sequential writes occasionally and random reads across the file system somewhat regularly, so both IOPS and throughput are considerations, given both will happen during resilvering.

The bus is PCIe 3.0 v2, so eventually I'll look into upgrading from the H310 and see if it will be of benefit.

----
Off-topic:

Right now I ran into an issue. The system is running Windows Server 2011.
I unplugged it from the wall and pulled the cover to inspect the cabling (for the upcoming HBA install).
Did not touch anything.
Closed it back up and plugged it into an inherited APC UPS that just had the batteries restored.*
Power-up self-tests successful.
Could not find the boot partition.
--
The boot partition, OS drive C: and data drive D: are on SAS controller channel 1, RAID 1+0, drives 1-4. This is Logical Drive 1.
The backup partition is on SAS controller channel 2, RAID 0, drives 5-6. This is Logical Drive 2.
Spare drives 7&8 are disconnected, 7 is expected to fail, 8 failed. No assignment on the RAID card.
(I realize this whole configuration is not ideal; it evolved over time elsewhere.)
--
The RAID controller seems to have lost the partition data on Logical Drive 1. The HP BIOS reports there is a bootable partition, but SystemRescue cannot see it (it's basically @jgreco's warning about why not to use RAID cards, "in action"). Removing and re-creating the logical drive in the RAID BIOS did not work (the HP BIOS says the OS is there but the bootstrap can't find it, and Linux can't see it because it's a logical volume). All system tests pass.

At this point it looks like the bare-metal backup will have to be restored to get the system running temporarily in order to pull the data off of the RAID 0 backup set and migrate it to a second out-of-system backup. If the in-system backup is lost due to the current issue only one off-system backup exists (and that makes me nervous). I'm out of ideas on how to get the system back up without restoring the boot partition, which hopefully is enough to get it to find and load the OS which hopefully finds Data and Backup partitions.

This morning the inherited APC UPS core went critical, so I tore it down and found the batteries are cooked. Guess I'll be buying replacements today. :frown:

*The UPS will not start on dead batteries, which is how I got it. Rather than buy new batteries and find the UPS is fried the existing batteries were put on a recovery system. The UPS works, but the batteries are End Of Life and need replacement.

[Attachment: SVR1.jpg]
 
Joined
Jun 15, 2022
Messages
674
Off-Topic Update:
HP Smart Array P410i Controller: Known for having "corrupt cache" issues. Pulling the cache usually resolves the issue, though the NVRAM might need to be cleared (possibly several times). Note the cache should first be disabled in the RAID BIOS before pulling the module.

As solid as this system is built, there are reasons why companies dump them (this thing is built like a tank and, judging from the NVRAM service log, breaks just as often). Personally I've had good luck with Dell servers, and not-so-good luck with HP anything, excepting an HP 48SX.

@jgreco's advice hits home again....thank you.

search: HP ProLiant P410i can't boot
 
Last edited:

IronDuke

Dabbler
Joined
Jan 23, 2023
Messages
18
Just a note if anyone‘s starting with a DL360, and you put in a 530 10Gb optical networking card and replace the RAID controller with an H220 HBA, like I did. Make sure you get both chassis brackets with the network card. Mine only came with the full height bracket, so I just put it in the full height slot. Then I got the HBA, and… the 8087 cables won’t reach to the low profile slot. So right now the network card is running bracketless in the low profile slot, and the HBA is in the longer slot. I had to order the low profile brackets, which weren’t particularly expensive (like $15 for 5), but entirely unnecessary if I’d just got a 530 with both brackets in the first place. Probably would have cost the same too, there are many vendors with these on the Bay of e.

One more learned point - if you are going to boot off one of the drives in the array, you need to go into the LSI config utility during the boot sequence (F8 at the appropriate time) and set the boot drive by highlighting it and hitting Option-B (Alt-B for Microsoft victims). I needed to be in Legacy BIOS mode for the LSI config option to appear at all.

All the above valid for G9, but I think G8 and G10 would be pretty much the same.
 