Acquiring a used Supermicro server

rayting

Cadet
Joined
Jul 3, 2019
Messages
2
Hello FreeNAS Community!

Backstory: I've been using an old PowerEdge 2950 that's done the best it could, but I want to replace it now. It has a Xeon E5410 and 4GB of ECC RAM, I got it from a flea market years ago, and it's old enough to lack AES-NI. The noise, heat, and power usage bother me quite a bit, so I've limited it to being a simple file server. I've looked at upgrading components, but basically everything would need to be replaced.

Usage: After reading through the hardware guides/recommendations and searching through the forums, I've settled on a Supermicro 2U or 3U system. My goal is to set it up as a hypervisor and pass through HBAs to a FreeNAS VM, so that I can use the hardware to host other VMs (Ubnt Unifi and UNMS, PiHole, NextCloud, a few personal applications, etc.). Storage-wise, it'd be my usual multimedia and backups like before, with the addition of the VMs and surveillance cam recordings.

First server I found with specs of interest copied below:
  • Chassis: Supermicro 836
  • CPU: 2x Xeon E5-2637 V2
  • Motherboard: X9DR7-LN4F-JBOD
  • RAM: 128GB PC3-14900R
  • HBA: LSI 9200-8E and LSI 9201-16I
While it'd definitely be less noisy, run cooler, and support AES-NI, I have some reservations about it:

1. It seems quite beefy and I have no doubt it'd do the trick, but I kind of wonder if it's overkill for my purposes. A counterpoint is that it leaves room to expand without acquiring more hardware down the line.

2. Is the hardware particularly old or energy intensive? How long should I expect before I need to look at upgrading? The CPU isn't as ancient as my PowerEdge's, but it's from 2013 and I wonder if it'll soon be obsolete, and the TDP is a bit high (130W). I'm already planning on removing the second processor unless it becomes apparent that I'd need it. I understand someone asked about a vaguely similar server in 2018: https://www.ixsystems.com/community/threads/advice-on-used-supermicro-build.69900/ and the hardware guide seems to imply the hardware should still be good. Does that still apply?

3. Are there different generations of the 836 chassis? Is there a risk I'm acquiring a model that's obsolete for whatever reason? Searching for the specific chassis of the server, I found this page on SuperMicro which says the product is a "Discontinued SKU". https://www.supermicro.com/en/products/chassis/3U/836/SC836E26-R1200B

Here's the second server I found given my concerns, at half the cost but with less hardware, of course:
  • Chassis: SuperMicro 826
  • CPU: 2x Xeon E5-2630 V1
  • Motherboard: X9DR3-LN4F+
  • RAM: 48GB
  • HBA: LSI 9211
While it definitely suits my needs for the immediate future, I wonder if I'll need an upgrade a year or two down the line, or whether I'll end up needing more drive bays.

Is this a good/bad choice, any suggestions, or am I being ridiculous? Thank you all, I greatly appreciate your help!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So, yes, there are lots of variants on Supermicro chassis.

For the first unit:

In your case, the absence of a "B" means it is an older variant, the inclusion of the "E2" means it has two integrated LSI SAS expanders, and the "6" means 6Gbps SAS. The "7" in the mainboard model also means it includes an LSI 2208 (full RAID) controller onboard; this won't be useful with FreeNAS, but isn't an impediment.

Removing a CPU means you will lose some of the PCIe slots. Only you can tell if this is a problem. A similar unit here (SC826A, 2x E5-2637 v2, 256GB, 2x Intel 520, 1x LSI 9211, no expanders, 12x WD Red HDDs) idles around 240W and sits around 300W when moderately busy.
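If you want to translate numbers like that into running costs, a quick back-of-the-envelope goes a long way. A minimal sketch, assuming a ~250W average draw and a $0.12/kWh rate (both placeholders, plug in your own figures):

  # Rough yearly energy/cost estimate for an always-on server.
  # The 250 W average and $0.12/kWh rate are assumptions, not measurements from this thread.
  avg_watts = 250        # somewhere between the ~240 W idle and ~300 W busy figures above
  price_per_kwh = 0.12   # local electricity rate in $/kWh

  kwh_per_year = avg_watts / 1000 * 24 * 365
  cost_per_year = kwh_per_year * price_per_kwh
  print(f"~{kwh_per_year:.0f} kWh/year, roughly ${cost_per_year:.0f}/year")

With those assumptions it works out to roughly 2200 kWh and a couple hundred dollars a year, which is why dropping the second CPU or picking lower-TDP parts can pay for itself.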

The second unit is nonobjectionable except I really do not like the "TQ" chassis because of all the individual cabling.

The X9 series is now a bit dated, but is also where there is a huge amount of value if you don't mind things like only-USB2 and moderately higher power usage. The CPU's are dirt cheap, the memory is cheap, the performance is good, and recently when I needed a pair of hypervisors to place 2500 miles away, I picked a pair of SC826A's with X9DR3-LN4F+ as a base, popped a bunch of E5-2637v2's in them, and they've been great. The cost to go to an X10 system with DDR4 is quite a bit higher, and buying new modern systems would have been ... I think I figured triple the price? I don't remember since that was a year and a half ago.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
For your use case, you probably don't need a dual-CPU system -- especially if you are already planning on removing one CPU. Look for boards like the X9SCM-F, which are single-processor boards. Individually, they can be found on eBay for around $45-$50. You can build a decent system based on these and add the components that you want instead of buying a pre-built system. Choose a Pentium (lower power consumption) over a Xeon if CPU grunt is not something that you require.

However, if you do plan to use it for virtualization, Xeons would be best because, first of all, they all support VT-d, which is required for passing through HBAs. With Pentiums, you'd have to confirm support for each model. Check this post for a build under $300. Add a rackmount case with a PSU sized for the number of drives you want and you are golden.
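If you end up on a Linux-based hypervisor such as Proxmox, it's worth sanity-checking that VT-d/IOMMU is actually working before you count on passing through the HBA. A minimal sketch, assuming a Linux host booted with intel_iommu=on:

  # List IOMMU groups and the PCI devices in each. If nothing shows up, PCIe passthrough
  # won't work until VT-d is enabled in the BIOS and intel_iommu=on is on the kernel command line.
  import os

  groups_dir = "/sys/kernel/iommu_groups"
  if not os.path.isdir(groups_dir) or not os.listdir(groups_dir):
      print("No IOMMU groups found -- check the BIOS VT-d setting and kernel parameters.")
  else:
      for group in sorted(os.listdir(groups_dir), key=int):
          devices = os.listdir(os.path.join(groups_dir, group, "devices"))
          print(f"IOMMU group {group}: {', '.join(devices)}")

Ideally the HBA ends up in its own group (or one you can hand over wholesale); everything in a group generally has to be passed through together.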
 

rayting

Cadet
Joined
Jul 3, 2019
Messages
2
Thanks for the valuable information!

The second unit is nonobjectionable except I really do not like the "TQ" chassis because of all the individual cabling.

I looked up TQ and found it mentioned for the backplane. Looking at the manual, I couldn't identify an SFF-8087 port. Is this what you meant with cabling?

The X9 series is now a bit dated, but is also where there is a huge amount of value if you don't mind things like only-USB2 and moderately higher power usage. The CPU's are dirt cheap, the memory is cheap, the performance is good, and recently when I needed a pair of hypervisors to place 2500 miles away, I picked a pair of SC826A's with X9DR3-LN4F+ as a base, popped a bunch of E5-2637v2's in them, and they've been great. The cost to go to an X10 system with DDR4 is quite a bit higher, and buying new modern systems would have been ... I think I figured triple the price? I don't remember since that was a year and a half ago.

That's good to know. I've found it a challenge to balance initial cost, age of the hardware, and power efficiency, but those sound like acceptable tradeoffs for me.

For your use case, you probably don't need a dual-CPU system -- especially if you are already planning on removing one CPU. Look for boards like the X9SCM-F, which are single-processor boards. Individually, they can be found on eBay for around $45-$50. You can build a decent system based on these and add the components that you want instead of buying a pre-built system. Choose a Pentium (lower power consumption) over a Xeon if CPU grunt is not something that you require.

I didn't realize dual-CPU systems are intertwined to such a degree, so I might end up keeping both processors on the board, or taking your suggestion of a single-processor board.

I'd definitely like to custom build my FreeNAS box, and that was my initial plan years ago. However, the cost always seemed to end up on par with or more expensive than a complete system, given that a barebone Supermicro 826 or 836 is roughly $200 or $300 respectively. It's embarrassing to admit, but I'm not familiar enough with all the components of a rack server to trust myself with assembling it all. Flashing the IBM M1015 that replaced my PowerEdge's PERC6 RAID card was already an adventure.

General follow up questions from this:

  1. At this point I'll probably keep both processors, but I'm wondering if there's a way to tell what would be lost when running with a single CPU. I couldn't find it in Supermicro's manuals, and from Google searches it seems both PCIe and RAM slots can be connected to the second CPU and lost when it's removed.
  2. Additionally, since noise is a bit of a concern, how does the noise of the 740W PSU compare with 1200W? I understand it's subjective, but the consensus seems to be that the 1200W units are quite loud. However, one of my friends suggests the 1200W, claiming they're quieter. Any experience with this? I've always found desktop computers to be very quiet, and the PowerEdge 2950 isn't exactly a fair comparison.

At this point it's looking like the second unit might be a better choice. If I keep both processors installed, I'm fairly sure it can handle what I throw at it, and the TDP (95W) is a decent bit less than the other unit. Cables will be a world of pain, but at least I have cable ties handy.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Looking at the manual, I couldn't identify an SFF-8087 port. Is this what you meant with cabling?
TQ backplanes have individual SATA ports -- which means that you will either
  1. have to run multiple SATA cables from the motherboard (or, god forbid, a SATA controller card) to each port on the TQ backplane, or
  2. use a breakout cable that connects one SFF-8087 port to 4 SATA ports, which reduces the cable clutter on the controller side but still leaves the SATA cables on the backplane side.
All in all, the more cables there are, the harder they are to route in the chassis -- leading to more dust buildup, etc.
from Google searches it seems both PCIe and RAM slots can be connected to the second CPU and lost when it's removed.
You googled correctly. On most boards, you will lose PCIe lanes and RAM slots if you remove a CPU. There might be an odd board here and there that doesn't lose anything, but those are pretty hard to find, and even if you do find one, you'd want to verify with the manufacturer to confirm.
Additionally, since noise is a bit of a concern, how does the noise of the 740W PSU compare with 1200W?
While the PSU will make noise, that will be the least of your concerns -- especially for a rackmount unit. The basic fans that usually come installed in these chassis are the main culprits of the noise. The 3 fans in my 2U SuperMicro unit screech like jet engines. So don't worry about the PSU being the noisy one. In any case, I doubt the wattage is directly related to noise. Buy a PSU that can sufficiently support your system plus drives.
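If it helps to put numbers on "sufficiently support", here's a minimal sizing sketch; every figure in it is an assumption (typical HDD spin-up draw, CPU TDPs, a lump sum for the board), not a spec from either listing:

  # Very rough worst-case power estimate for a drive-heavy 2U box.
  # All numbers are assumptions for illustration -- check your actual drive and CPU specs.
  num_drives = 12
  spinup_watts_per_drive = 30   # 3.5" HDDs briefly pull a couple of amps on the 12 V rail at spin-up
  cpu_watts = 2 * 95            # two 95 W TDP CPUs, as in the second listing
  board_and_misc = 100          # motherboard, RAM, HBA, fans (lump-sum guess)

  peak_watts = num_drives * spinup_watts_per_drive + cpu_watts + board_and_misc
  print(f"Estimated worst-case draw: ~{peak_watts} W")

That lands around 650 W before any headroom, so either of the PSUs you mentioned is in the right ballpark; staggered spin-up (if the controller supports it) lowers the peak further.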

You can then replace the fans in the system with quieter ones. Better yet, find a place in the basement or a closet to hide the rackmount chassis in and you will never have to worry about the loudness of your system.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You can then replace the fans in the system with quieter ones.

No, do NOT do this. It is common for hobbyists who are used to building gaming systems to try to quiet their servers by putting in "quiet" fans from various "premium" manufacturers, but there are huge differences here.

The big one is that these 4-drive-wide chassis all rely on high static pressure differentials to force air through the teeny gaps around each drive. This takes substantial energy because it is a process that requires FORCE, not just the airflow of a cheap 99-cent computer fan. So the fans in your Supermicro chassis are industrial-grade fans that will last a really long time even under the relatively heavy strain they're placed under. They are engineered for the static pressure requirements.

Your gamer-grade fan makers, on the other hand -- and I'll specifically call out companies like Noctua -- make gamer-grade crap. They know gamers replace their systems frequently, and their fans aren't really expected to live the ten to twenty years that an industrial-grade fan usually does -- you're lucky to get five years, IMHO, and that's without heavy stresses on the fan.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
I run an X9SCL in a gaming tower. The motherboard costs all of $50 these days, and I have room for 11 drives. The big drawback with the small single-socket boards is memory capacity. You generally get 4 slots, which with DDR3 is 32GB. Not a lot to give to VMs. The single-socket X9 boards that support registered RAM are still drawing a premium, even at 5+ years EOL. If you can find an X9SRL-F for an acceptable price, more power to you.
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
You generally get 4 slots, which with DDR3 is 32GB. Not a lot to give to VMs.
You'd be surprised at how much you can do with less RAM.
I run Proxmox with 16GB of RAM on an X9SCL-F. I use LXC containers for most of the services, and I create a separate container for every service.
I run
  1. a transmission container with 512MB RAM
  2. a heimdall container with 512MB RAM
  3. a guacamole container with 512MB RAM
  4. a bitwarden container with 512MB RAM
  5. a Nextcloud VM with 1GB RAM
  6. Archlinux desktop with 4GB RAM

None of my containers go above 23%-28% RAM usage. After enabling ClamAV on my Nextcloud VM, I did see RAM usage shoot up to 89%, so just yesterday I increased the RAM on the Nextcloud VM from 1GB to 2GB. Granted, I don't have very many users on my network -- just me and my family, and everything is only available on the LAN.

I still have RAM available for other containers or VMs if and when I create them. Having said that, I do plan to upgrade the RAM in there to 32GB eventually once I install a JellyFin container and move away from Emby on my FreeNAS install. That container might require 8GB of RAM or so.
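In case it's useful, here is roughly what creating one of those memory-capped containers looks like with Proxmox's pct tool; this sketch just shells out to it from Python, and the container ID, template, hostname, and storage names are placeholders you'd swap for your own (double-check the options against your Proxmox version):

  # Hypothetical example: create an LXC container capped at 512 MB of RAM via Proxmox's pct CLI.
  # The vmid, template, hostname, and storage below are placeholders for illustration only.
  import subprocess

  subprocess.run([
      "pct", "create", "110",                                  # arbitrary container ID
      "local:vztmpl/debian-10-standard_10.3-1_amd64.tar.gz",   # placeholder template
      "--hostname", "transmission",
      "--memory", "512",                                       # RAM cap in MB
      "--cores", "1",
      "--rootfs", "local-lvm:8",                               # 8 GB root disk on local-lvm
  ], check=True)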
 