Used 4u AMD System vs. Mixed New/Old Self Build

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
Hi everyone, and thanks for contributing to such a great resource - I’ve learned a tonne in the last couple of months of lurking.

I’m finally pulling the trigger on my first system for my office to clean up my current “solution” of several direct-attached RAID-5 enclosures and a big shelf of redundant naked SATA drives for cold storage. It works, but isn’t pretty. Use case is strictly file storage. Files are all large media files (100-200GB) but used infrequently, and usually an entire file is sequentially read or written (almost no NLE editing that would require lots of non-sequential reads or concurrent writes). Lots of “copy this 100GB file to a workstation” or “transcode this 250GB file sequentially” (over the network, not on the NAS itself). ~3 users.

I lucked into getting my hands on 12x brand new Seagate Exos 10TB SAS drives last month - so I’ve been in the weird position of sitting on a pile of SAS disks and trying to work backwards from that. I’m leaning towards an 11-disk Z3 pool + 1 hot spare (my thinking is that’s a worthwhile tradeoff over 2x 6-drive Z2 pools - slightly more space efficient, and I like having a hot spare on standby when dealing with such large drives).
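For anyone weighing the same layouts, here’s a rough sketch of the raw parity math (ignoring ZFS metadata, slop space, and RAIDZ allocation padding, so real numbers will be a few percent lower - the practical differences are mostly in failure tolerance and resilver behavior rather than raw space):

```python
# Rough usable-capacity math for the two layouts under consideration.
# Ignores ZFS metadata and RAIDZ allocation padding.

DRIVE_TB = 10

def raidz_usable_tb(width, parity, vdevs=1, drive_tb=DRIVE_TB):
    """Raw data capacity (TB) of a pool of `vdevs` RAIDZ vdevs."""
    return vdevs * (width - parity) * drive_tb

z3_pool = raidz_usable_tb(width=11, parity=3)            # 11-wide Z3 (+1 spare)
z2_pools = raidz_usable_tb(width=6, parity=2, vdevs=2)   # 2x 6-wide Z2

print(z3_pool, z2_pools)  # 80 80 -- same raw data capacity either way;
# the Z3 layout survives any 3 drive failures, the 2x Z2 layout only
# 2 per vdev, which is the more meaningful difference at this drive size.
```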

This whole project wasn’t in my limited “equipment upgrades” plans for right now, but the offer on the disks was too good to pass up - so I’m now trying to find the most affordable way to get them into use.

My original thinking was to use a mix of new and used hardware, something like:
- Used Supermicro CSE-826BE 2U Chassis w/ SAS2 Backplane
- Used LSI 6Gbps HBA (would probably buy pre-flashed from Art of Server just to sidestep compatibility and counterfeit headaches).
- New Supermicro X11SCA-F Motherboard
- Core i3-9100 (3.6 GHz, 4 cores)
- 32GB Hynix DDR4 (2x 16GB, from the Supermicro approved list)
- M2 or SSD Boot Disk (I have spares kicking around but haven’t dug them up to check what exactly)
- 10GbE network card (probably a Chelsio - although I keep holding my breath that the Aquantia FreeBSD driver might get the bugs sorted out and make its way into a new build sooner rather than later).

That would also leave at least one PCIe x8 slot free to add a second HBA down the line so I could chain in a 24-disk shelf for future growth.

However, getting my hands on reliable used equipment in Canada is turning out to be a pain. Cross-border shipping from the US is often more expensive than the components themselves, and locally available equipment is scarce and old - so anything I could find would need backplane and PSU replacement, and those parts would need to come from the US... etc, etc. The above setup would be nearly CAD $2,000 once shipping and parts replacements get taken into account.

Another option that I started considering this morning was to just jump to an off-lease 4U Supermicro server, spend less money overall, and be done with it - with the caddies for future expansion built in from the get-go. This eBay listing caught my eye as a possible starting point:
- Supermicro CSE-847E16-R1400UB 4U Chassis (2x SAS2 Backplanes, 1x24 and 1x12)
- LSI 9211-8i HBA (implied pre-flashed)
- Supermicro H8DGU-F Motherboard
- 2x AMD Opteron, 2.6 GHz, 8 cores each (16 cores total)
- 64GB DDR3 (8x 8GB)

The shipping from this supplier is more reasonable, so I could get the whole thing for ~CAD $850 - and the only items I’d need to add are boot drives (which I likely have) and a 10GbE card, so it would be considerably cheaper.
I have very little familiarity with AMD chips - but given that the seller is pretty much listing it as a suggested FreeNAS / Unraid setup, I’d expect that it’d at least run... and even if I had to swap out a bum part or two, it would still likely be a cheaper option overall.

Are there any Pros / Cons between the approaches that people would suggest given their personal experience, that I might not be thinking of? Or anything in that listing that might be a red flag that I’ve missed?

Presumably the 4U would be louder, but it’s in a separate room, and with only 12 drives it would be under a minimal load.
Also - from what I’ve read I know I’d be better served by a single higher-clocked CPU - but I’m assuming this should still be plenty for my needs, and it allows for cheaper DDR3 RAM to boot.

In any case - appreciate any thoughts or comments.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
  • Pro: not 2000$
  • Pro: unixplus, good rep, parts should be all tested. shipping to Canada doesn't cost more than the product. (I don't get why some sellers think it costs 600$ to ship a 400$ 2u chassis if unixplus can ship a 4u chassis for 200$....)
  • Pro: possibly full height PCIE cards.
  • Pro: loads of CPU threads.

  • Con: SMB is, I believe, still single-threaded, and thus you would get (probably drastically) more SMB performance from the 9100 (possibly 2-3x)
  • Con: CPUs will use LOADS of power (115W x2 vs 65W) and might still do less work. probably could work as a space heater...
  • Con: proprietary motherboard/chassis (no/limited upgrades, part of why it's not 2000$)

though you would need an HBA and RAM, check out:
 

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
Thanks so much for that - I had somehow completely missed the proprietary motherboard format (even with the big L-shaped image). Would much rather spend a few extra dollars on something that can take a standard EATX.

That's a great option to consider, particularly with the baked-in quad 10GbE and tonnes of DIMM slots (and much better benchmarks than the i3). And while the SAS3 backplanes are overkill, it certainly leaves a lot of possibilities open if I ever wanted to run other services, rebuild it, or add an SSD pool or something down the line.

Also appreciate knowing that unixplus has a decent rep. Hadn't got around to looking into them yet.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I think i got both of my quanta lb4m/lb6m switches from them, and a bunch of SFPs - everything works fine. they also have a youtube channel with a few educational/showcase vids of some of the hardware they sell (it's a bit sparse on content but there is some)
 

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
Just wanted to close out this thread as I, personally, love it when build question threads have followup.

IMG_8637.jpg
Took a while to get everything together what with the (vague gestures out window), but I now have a lovely new NAS that I'm very excited about (not that I don't LOVE my unmanageable Excel sheets noting what cold storage material is on what shelf piled full of naked drives).

I went ahead and bought the system artlessknave suggested, and upgraded it with 128GB RAM and a pre-flashed LSI 9300-8i. The unit I received was actually slightly better than advertised (slight CPU bump, more RAM than listed, and it also had an LSI HBA that I think is flashable for future expansion). I suspect "here's a nicer computer than what you paid for" isn't something *everyone* gets, but it made for a nice surprise.

Only other change was adding 2x Supermicro DOMs for mirrored boot drives (there's really nowhere in the chassis for an internal SSD, and I could get a passable deal on a pair).

Performance has been great and haven't had any surprises in the last couple weeks of intermittent burn-in and testing.

Anyway - happily rolling this (still unnamed) guy into full-on use today (and already starting to nose around for any deals on a 4U disk shelf). I just wanted to thank everyone here - not just for this particular thread, but for all the threads, manuals, and guides that made studying up and getting this thing together and deployed surprisingly painless, and a nice side distraction during a stressful couple of months.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
kind of a shame you only have the one pic. there are basically zero pics or videos of these chassis available. it's annoying.
I know I'm the one who linked it but damn, nice splurge-and-flex there.
I want to take that one but convert the back drives to a direct-attach backplane, with 4 drives on mobo SATA and 8 drives to the controller [i also splurged and have a 9305-24i] (or, even better but not sure if possible, convert the back to 2.5") - but 1500$ is... out of my current budget for a while :`(.
supermicro's pics are a b*%& to see - do the rails just snap in toolless?
 

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
Yeah, it's funny - against enterprise NAS systems this is such an incredibly thrifty option that it doesn't feel like a big splurge - but you're right: if I ever spent the same amount on a home system for myself, I'd be too stressed to enjoy it.

The key is to think about all the money you *saved* by not buying some other enterprise system for >6x the cost with fewer features. ;-)

Here's some more shots if they help. The only downside with trying to adapt this particular layout is the *very* limited space to cable between the mobo and the backplanes: there's the channel by the PSUs (which is, obviously, full up with power cabling) and a very small gap on the opposite side of the mobo where they route the fan controllers and mini-SAS cables. You could probably get an additional SATA bundle down there for your direct SATA runs - it wouldn't be roomy, but the connectors are right there, so it'd be a short run and wouldn't mess up your airflow.

Can't give you any advice on the possibility of switching to a 2.5" enclosure. The Supermicro cases are all clearly somewhat modular, but I'm not sure how much surgery that would entail - I never bothered to pull up the mobo floor to look at the lower enclosure since everything tested out okay when it arrived. It looks like there might be some rivets attaching the cage rails, so it's probably more involved than just unscrewing the existing enclosure and frankensteining it with another chassis - but hard to say for sure.


IMG_8638.jpg IMG_8639.jpg IMG_8641.jpg IMG_8643.jpg IMG_8642.jpg
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
ya, i totally understand not wanting to take the mobo tray out. I took the tray out of an isilon nl108 i got on the cheap... that was quite a bit of work (similar layout, except only 1u for the mobo for who knows what reason - it's only empty space beneath it other than the PSUs, so, stupid design)

the idea of the conversion would be a single reverse-breakout to SFF-8087/8643 to take 4 mobo SATA ports, and then 2 more SFF-8087/8643 direct from the HBA. it would require an [A] backplane, but since the whole point would be 2.5" bays capable of line-rate SAS3 speeds (and also boot drives off the mobo instead of the HBA)... 3.5" drive trays would be kinda silly. :/.

i assume it's supermicro loud? as in, sounds like a jet engine almost ready to take off?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Converting from 3.5" to 2.5" would mean recreating the chassis almost from scratch. Swapping the backplane is easy, if I understand correctly, at least in the sense that the backplane is a standard 2U backplane.
That said, for booting without SAS (sidenote, it's not painful with LSI SAS3 and only the UEFI extension ROM installed, but I do agree it's needless complexity) there's an option kit that fits newish models and adds two 2.5" bays at the rear, between the PSUs and motherboard. It's pretty snazzy and costs some 40 bucks.


do the rails just snap in toolless?
Mostly, there are optional screws. It does take a team of three people to rack these things: two to lift and one to guide the rails to ensure proper mating.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
2.5" bays at the rear
ya, i've seen that, however, buying any chassis is super expensive (canada - shipping is sometimes MORE than the chassis, and local outlets are...mostly non-existent), and the only ones of those I could ever find are like 3000$
Converting from 3.5" to 2.5" would mean recreating the chassis almost from scratch.
it would depend on how modular the construction is. probably not enough though, but we can all dream right?
 

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
i assume it's supermicro loud? as in, sounds like a jet engine almost ready to take off?

It's better than some 1U and 2U servers I've had to work near in the past, but it's certainly loud. Just measured it from around 6ft away at ~52dB(A) - which is pretty reasonable all things considered. I was actually surprised, because I was expecting it to be higher than that - but the room it's in is pretty quiet, and with server fans the higher-pitched noise is often as big an issue as the actual loudness. I suspect it has the lower-noise PSUs already, but can't recall 100%.

For comparison, I've got a water-cooled mITX desktop going at 100% in my office right now (140mm + 2x 120mm radiator intake fans & 120mm exhaust). It's running folding@home, so CPU and GPU are both maxed and fans are running at ~80%, and it's at 48dB(A) from a similar distance - but it doesn't seem nearly as obnoxious, what with traffic noise through the windows, AC background noise, the office above me stomping around, the lower-pitched desktop fan noise, etc.

So it's roughly "a decent OC desktop going full out" loud under load. I suspect you could bring that down if you needed to with a combination of room temperature, some of the newer SM fan options, and playing around with the cooling profile - but it's never going to be a really quiet option (at least with HDDs and old-school Xeons).
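Worth remembering that those dB(A) readings are on a logarithmic scale, so the 4dB gap between the two machines is bigger than it looks. A quick sketch of the arithmetic:

```python
# dB readings are 10*log10 of sound power, so small dB gaps are
# surprisingly large power ratios.

def db_power_ratio(db_a, db_b):
    """Ratio of sound power between two dB readings."""
    return 10 ** ((db_a - db_b) / 10)

server_db, desktop_db = 52, 48  # readings from ~6 ft, as above

print(round(db_power_ratio(server_db, desktop_db), 2))
# 2.51 -- the server puts out ~2.5x the sound power of the desktop,
# even though perceived annoyance also depends heavily on pitch.
```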
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
higher pitched noise
ya, that's where it's annoying, particularly if there is no fan RPM control at all. that nl108 I mentioned - I replaced the 90mm fans because, good lord, were they loud. 100% fan speed, 100% of the time, no spindown whatsoever....
 

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
do the rails just snap in toolless?

If you're talking about those *specific* unixsurplus listings, they just use cheap used 3rd-party kits that are literally just two pieces of metal (no bearings or sliders like the SM rails). There's an external strip of metal (with a 1-2" adjustment bolt) that screws onto the rack with your standard rack bolts - and then the server piece is mostly tool-less (there are rack-mount clips built into the chassis that clip into holes on the rail). Each server rail has 2 screw holes which aren't load-bearing at all, but are just to fix them so they don't bump off when you're installing or removing. The two pieces just slide and hold together with friction (and there's no stop to keep you from pulling the unit entirely out of the rack when trying to service it). Good enough for my needs - but if I was putting it in a full rack where it had to go higher up, I'd probably shell out for an SM rail kit, just for the stop alone.
 

Techweasel

Cadet
Joined
Jan 16, 2020
Messages
7
ya, that's where it's annoying, particularly if there is no fan RPM control at all. that nl108 I meantioned I replaced the 90mm fans because, good lord, were they loud. 100% fan speed, 100% of the time, no spindown whatsoever....

You can control the fan profiles in either the BIOS or via IPMI, but I don't think there's a *tonne* of options beyond some basic profiles (I didn't really get into it since I've got it in a storage room). It's not "100% all the time" though - there's a noticeable ramp-up when the system is under load.
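For anyone finding this later: on many Supermicro boards you can also flip the BMC fan mode over IPMI. Here's a rough sketch - the `0x30 0x45` raw opcodes below are the community-reported values for X9/X10/X11 generation boards, so treat them as an assumption and check your board's docs before poking raw commands at a BMC:

```python
# Sketch of Supermicro BMC fan-mode control via ipmitool raw commands.
# ASSUMPTION: the 0x30 0x45 OEM opcodes are widely reported for
# X9/X10/X11 boards; other generations may differ or not support them.
import subprocess

FAN_MODES = {"standard": 0x00, "full": 0x01}  # commonly reported values

def fan_mode_cmd(mode: str) -> list:
    """Build the ipmitool argv for setting the fan mode (doesn't run it)."""
    return ["ipmitool", "raw", "0x30", "0x45", "0x01", hex(FAN_MODES[mode])]

def set_fan_mode(mode: str) -> None:
    """Actually issue the command (requires ipmitool and BMC access)."""
    subprocess.run(fan_mode_cmd(mode), check=True)

print(fan_mode_cmd("full"))
# ['ipmitool', 'raw', '0x30', '0x45', '0x01', '0x1']
```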
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
BIOS or via IPMI
the board (x7dbu) in the nl108 (isilon) needs a proprietary add-in card for IPMI... so ya, no fancy things like fan speed control (fans are controlled by the proprietary isilon SAS hardware/backplane/front panel combo... stuff). this is old stuff, but you can't beat 200$ for 36-bay SAS2 (it's going to be a backup server), though I modified it a bit to get an x9scm in instead, which uses like 1/3 the power non-idle while giving like 10x the performance.....
 