Looking for some opinions on various hardware options


boatymcboatface

Dabbler
Joined
Jul 11, 2016
Messages
34
Greetings!

I'm looking for some opinions on various hardware options for my home setup. At present I have a fairly neat kit running FreeNAS Corral:

E3-1275v5 processor
Supermicro X11SSH-LN4F board
64GB ECC memory

There are several challenges on the horizon. First of all, I can't stand leaving something alone when it's running perfectly; after a while I just have to mess with it ;)
Storing the data properly is important, but I'll admit straight away that a big factor in this is simply the fun of tinkering with it.

Secondly, I am moving house soon and will be putting in new cabling, switches, etc., so I'm using that as an opportunity to jump on the 10Gbit bandwagon, something this board does not support. It also has limited expansion slots, which are currently filled with M1015s.

Third, and perhaps most important: my array currently consists of 10 x 8TB drives split into two RAIDZ1 vdevs of 5 disks each (along with a bunch of SSDs on the motherboard ports). Being the data hoarder that I am, this is rapidly filling up.
My main concern is blowing past the recommended guideline of 1GB of memory per 1TB of storage, on top of the 8GB baseline. 64GB is the max this setup will take, it being an E3 and all.
Mind you, this is my home setup, so the load is very light for a pool this large.
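
To put rough numbers on that concern, here's a quick back-of-the-envelope sketch (the 1GB-per-TB figure is only a rule of thumb, and I'm counting raw capacity):

```python
# Back-of-the-envelope numbers for the pool described above.
# The "1 GB RAM per 1 TB of storage" guideline is a rule of thumb, not a hard limit.

drives = 10            # total drives in the pool
drive_tb = 8           # TB per drive
vdevs = 2              # RAIDZ1 vdevs
drives_per_vdev = 5    # 5 disks per vdev, 1 of which is parity

raw_tb = drives * drive_tb                            # 80 TB raw
usable_tb = vdevs * (drives_per_vdev - 1) * drive_tb  # ~64 TB before ZFS overhead

# Guideline: ~8 GB baseline plus ~1 GB per TB of storage.
suggested_ram_gb = 8 + raw_tb                         # ~88 GB, above the 64 GB ceiling of an E3 platform

print(f"raw: {raw_tb} TB, usable: ~{usable_tb} TB, rule-of-thumb RAM: ~{suggested_ram_gb} GB")
```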

Now, I can sell some or all of the mobo/proc/mem combo above to a friend, who is actually looking for a setup like mine, with more modest storage requirements (read: not afflicted by obsessive/compulsive data-hoardery).

Here's a couple of thoughts:

1) Keep proc + mem, get the X11SSH-CTF.
Pros: modest upgrade, onboard SAS3 controller, onboard 10Gbit Ethernet.
Cons: still only 64GB memory.

2) Sell proc, mem, and board and get a Xeon-D board. I suspect the Xeon D-1537 will have roughly similar performance, and the Xeon D-1541 is probably a bit faster (Supermicro and ASRock Rack offerings, respectively, both with onboard SAS and 10Gbit).
Pros: onboard SAS2/3 controller. Onboard 10Gbit. Memory limit goes up to 128GB.
Cons: full upgrade, more costly. No option to upgrade the processor later (though I doubt I'd need to for a while).

3) Anything I didn't think of yet?

I looked at E5 offerings, but I find those are either much slower than the processors above or prohibitively expensive. None of this is small change, but the price goes up dramatically once you look for an E5 with performance similar to the above.

Also, registered memory seems to be insanely expensive. I paid approximately 350 euros for 64GB of proper DDR4 ECC memory last year. A single 32GB DDR4 registered module, four of which would be needed on the Xeon-D boards to take me to 128GB, already seems to be in that price range, meaning 128GB would land in the 1300-ish euro area. That's pretty damn steep.
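
Just to show the arithmetic behind that estimate (the per-module price is my assumption based on the figure above, not an actual quote):

```python
# Rough arithmetic behind the 128 GB price estimate.
module_gb = 32
module_eur = 330        # assumed street price of a single 32 GB DDR4 RDIMM
target_gb = 128

modules_needed = target_gb // module_gb    # 4 modules
total_eur = modules_needed * module_eur    # ~1320 euros

print(f"{modules_needed} x {module_gb} GB RDIMMs ~= {total_eur} euros")
```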

I look forward to some suggestions or other options to consider :)

Many thanks!

BmcBF
 
Joined
Feb 2, 2016
Messages
574
1. For a home user, 64GB is plenty. No jails/VMs, right? You probably wouldn't notice a performance decrease at even 32GB.

2. Swap one M1015 for an LSI 9201-16i. That gives you twice as many disk ports (16 instead of 8) in the same number of slots.

3. Use the reclaimed slot for a 10G NIC.

You didn't mention which case you're using. It sounds like you may be better off with an external storage array as you expand. Toss a card with external ports in the server and you have virtually unlimited expansion.

Cheers,
Matt
 

boatymcboatface

Dabbler
Joined
Jul 11, 2016
Messages
34
Many thanks for the reply and suggestions Matt.

On the VMs, etc.: I do run a small VM to host a few containers, but it requires very little memory (6GB tops at the moment).

However, shortly after posting this, my friend twisted my arm and made me an offer I couldn't refuse on the existing kit. So soon I will be without a motherboard, processor, and memory :)

Previously, I had this machine running ESXi with a virtualized FreeNAS and the controllers passed through. Alongside that I had PFSense running in another VM, which I would actually like to start using again in some shape or form.

So, I now have quite a decent budget to start from scratch (aside from the existing HBAs, which I'm keeping).

I was contemplating this Xeon-D board from Supermicro https://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-7TP4F.cfm
That would set me back just over 1000 euros.

I then realized that for the same money, I could also get an E5-2620v4 processor to put on this board: https://www.supermicro.com/products/motherboard/Xeon/C600/X10DRH-CLN4.cfm

The advantage I see there is that, in my experience, used Xeon processors flood the market after a while and become available very cheap. The TDP on this one is not so bad either; I could buy one new or used now, and if I ever need more horsepower further down the line, I could drop in a second.
This processor is also much faster than the Xeon D-1537 to begin with.
It would also give me 8 DIMM slots at my disposal, and in the future even 16 with a second CPU, which I could slowly populate as the need arises.
While this board lacks onboard 10Gbit, it does have the LSI 3008, which should be great for future expansion needs, and it has 4 excellent 1Gbit Intel NICs onboard that I could use for PFSense.
So my thinking would be to pick this up, install VMware ESXi again, run one PFSense VM using the onboard NICs, one FreeNAS VM with both my HBA cards and the onboard LSI 3008 passed through, and a third VM for the stuff I currently have in containers on FreeNAS.
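
Roughly, the memory headroom I'm weighing looks like this (slot counts as per the boards above; the 32GB RDIMM size is just an assumption for illustration):

```python
# Rough memory-headroom comparison. Slot counts are per the boards discussed above;
# the 32 GB RDIMM module size is an assumption for illustration.

rdimm_gb = 32

platforms = {
    "Xeon-D board (4 DIMM slots)":        4,
    "X10DRH-CLN4, single E5 (8 slots)":   8,
    "X10DRH-CLN4, dual E5 (16 slots)":   16,
}

for name, slots in platforms.items():
    print(f"{name}: up to ~{slots * rdimm_gb} GB with {rdimm_gb} GB RDIMMs")
```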

That leaves a 10Gbit NIC, which I would still need to get at some point, but that shouldn't be hard to find.

I agree with your comment on the enclosure. I think I will pick up a Supermicro 24-disk enclosure further down the line. That should work very nicely with the LSI 3008 above, I reckon.

Any thoughts?

Many thanks,

BmcBF
 
Joined
Feb 2, 2016
Messages
574
pick up a Supermicro 24 disk enclosure further down the line. That should work very nicely with the above 3008

You'll need an HBA with external ports to use an external array. I don't think either of those motherboards has external disk ports. No big deal though. You have plenty of slots, and an HBA with external ports can be had for less than $150.

Cheers,
Matt
 
Joined
Feb 2, 2016
Messages
574
Sorry I said disk enclosure but I mean a proper chassis

It sounds like you really like your data and want to do things the right way. You may be happier with disk enclosures long term. It's easy to daisy chain from one enclosure to another. Instead of continuously buying a larger server chassis each time you want to add more space, you just add another enclosure. The SAS 9207-8E HBA I mentioned above will support 1024 drives hanging from its external ports.

An added benefit of disk enclosures is you don't have all the drive heat to dissipate inside your core server. You can also do some load shedding by turning off entire enclosures. You may have one tray that just holds long-term backups; you may turn that tray on just once a month.

even a single connection of SAS3 should be plenty to drive 24 SATA drives right?

I'm not sure exactly which backplane is used in that chassis, but yes.

Keep in mind that, while you can share a single connection with a lot of drives, those drives share the bandwidth of that single connection. For most use cases, that's not a big deal. But, if you have the choice between a backplane with a single connection or multiple connections, I'd choose one with multiple.
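
As a rough illustration (the link width and drive throughput below are approximations, and protocol overhead eats into the raw numbers):

```python
# Rough bandwidth math for a single SAS3 wide-port connection feeding 24 SATA spinners.
# Numbers are approximations; real-world throughput is lower after protocol overhead.

sas3_lane_gbps = 12        # Gbit/s per SAS3 lane
lanes_per_connection = 4   # one connector is typically a 4-lane wide port
drives = 24
drive_mbps = 200           # assumed sequential throughput of a typical SATA spinner, in MB/s

link_mbps = sas3_lane_gbps * lanes_per_connection * 1000 / 8   # ~6000 MB/s raw per connection
per_drive_share = link_mbps / drives                           # ~250 MB/s if every drive streams at once

print(f"single connection: ~{link_mbps:.0f} MB/s total, ~{per_drive_share:.0f} MB/s per drive")
print(f"at ~{drive_mbps} MB/s per spinner, 24 drives won't quite saturate one connection")
```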

Cheers,
Matt
 