Hardware Build Check


kspare

Guru
Joined
Feb 19, 2015
Messages
508
I'm putting together a new box for terminal servers; we expect to run around 50 of them on this box.

E5-1650 v3
PC4-17000, 256GB RAM
2x 64GB DOMs for the boot volume
LSI 9300-8i controller
Chelsio 10GbE NIC
1x 400GB PCIe SSD
1x 1.2TB PCIe SSD for L2ARC
I need to select 24 2TB drives; I'm leaning towards WD RE drives, as I've had good luck with them.
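
For reference, the rough pool layout I have in mind for the 24 drives (just a sketch; mirrors vs. RAIDZ is still open, and the pool name and device names are placeholders until I see how the HBA enumerates them):

Code:
# 12 striped 2-way mirrors for IOPS on a terminal server / NFS workload
# "tank" and da0..da23 are assumed names
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15 \
  mirror da16 da17 mirror da18 da19 mirror da20 da21 mirror da22 da23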

Any thoughts?
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Another option, and don't flame me: I'm buying from Thinkmate because it's easy to get a 4-year warranty and budget against that...

I can get dual E5-2637 CPUs for the same price as above.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
With the E5-16xx, be sure you're not ordering LRDIMM. Won't work.

The dual 2637s could be a better deal because you'd be able to hit the 256GB with lower density memory, which might totally offset the cost.

The E5-1650 v3 is, however, the best choice for a single socket E5, in my opinion, and most of our builds in the last year or two have been around the 1650.
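
If you want to sanity-check what a vendor actually shipped, dmidecode (from ports/pkg) will show whether the modules are registered or load-reduced; the exact wording varies a bit by BIOS:

Code:
# look at the "Type Detail" lines for Registered vs. LRDIMM
dmidecode -t memory | grep -E 'Size|Type Detail|Speed'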
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I know from previous discussions that you mentioned clock speed matters more than cores. So if I can get two CPUs at 3.5GHz instead of one, that should help with latency for our terminal servers.

I'd run the 400GB PCIe SSD for the SLOG and the 1.2TB for the L2ARC.

It would cost $400 more to go to LRDIMM with the dual CPUs. Not sure if it would be worth spending the $400.
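
Attaching them would just be something along these lines once the pool exists (nvd0/nvd1 are assumed device names for the PCIe SSDs):

Code:
# 400GB PCIe SSD as SLOG, 1.2TB PCIe SSD as L2ARC; device names assumed
zpool add tank log nvd0
zpool add tank cache nvd1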
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, but there's also a limit as to how many cores are meaningful. So far I've not been able to tease our VM filer (E5-1650 v3 based, 128GB RAM, 768GB L2ARC, 24 x 2TB 2.5" HDDs) into using more than 20% of its CPU, even when trying to. It makes me vaguely regret not having made it a 1620.

However, the dual socket solution brings with it more PCIe lanes, and more memory slots, and the ability to use LRDIMM, all of which could potentially be very helpful for a large scale NAS.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I guess when pricing it out on Thinkmate's site, it works out to the same price for some reason, so I might as well go with dual CPUs running at 3.5GHz.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
Yes, but there's also a limit as to how many cores are meaningful. So far I've not been able to tease our VM filer (E5-1650 v3 based, 128GB RAM, 768GB L2ARC, 24 x 2TB 2.5" HDDs) into using more than 20% of its CPU, even when trying to. It makes me vaguely regret not having made it a 1620.

Then you're not trying hard enough. This is my 1650-based NAS:

Code:
pid: 11689;  load averages: 38.28, 26.26, 13.93
1579 processes:27 running, 1506 sleeping, 46 waiting
CPU:  0.4% user,  0.0% nice, 97.2% system,  0.3% interrupt,  2.0% idle
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Then you're not trying hard enough. This is my 1650-based NAS:

Code:
pid: 11689;  load averages: 38.28, 26.26, 13.93
1579 processes:27 running, 1506 sleeping, 46 waiting
CPU:  0.4% user,  0.0% nice, 97.2% system,  0.3% interrupt,  2.0% idle

1579 processes?!?!?! WTF are you doing?!
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
1579 processes?!?!?! WTF are you doing?!

Sorry, that's "top -SH", so that's actually threads.

27 threads running with a load average of 38. More than enough load for a 6C 12T proc.

This was during a daily backup, which copies entire VMDKs to a deduplicated, compressed dataset. And I was also copying a large file onto another compressed dataset at the same time.
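
For context, the backup target is just a dataset along these lines (pool and dataset names here are examples, not my actual layout):

Code:
# dedup + lz4 compression on the backup dataset; names are examples
zfs create -o dedup=on -o compression=lz4 tank/backups
zfs get dedup,compression,compressratio tank/backups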
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I ordered this:
- Quad-Core Intel Xeon Processor E5-1630 v3 3.70GHz 10MB Cache
- Intel C612 Chipset - Dual Intel Gigabit Ethernet - 10x SATA3 - IPMI 2.0 with LAN
- 8 x 32GB PC4-17000 2133MHz DDR4 ECC Registered DIMM
- Thinkmate STX-4324 4U Chassis - 24x Hot-Swap 3.5" SATA/SAS - 12Gb/s SAS Single Expander - 1280W Redundant Power
- 2 x 128GB Micron M600 2.5" SATA 6.0Gb/s Solid State Drive
- No Operating System (Hardware Warranty Only, No Software Support)

I'll put in a Chelsio 10GbE card and an LSI 9300-8i 12Gb/s HBA, and I'll use two Intel 750s for SLOG and L2ARC. Should be fast!
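
Once the 750s are in, the plan is roughly to confirm they enumerate and then carve a small partition for the SLOG rather than giving it the whole device (sizes and device names are guesses until the hardware is in hand):

Code:
# confirm the NVMe devices show up, then under-provision the SLOG one
nvmecontrol devlist
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -l slog0 nvd0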
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
With 256GB of RAM, does it even make sense to run an L2ARC? We run NFS, so we definitely need the SLOG.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
With 256GB of RAM, does it even make sense to run an L2ARC? We run NFS, so we definitely need the SLOG.

Of course it makes sense -- probably. ZFS is a CoW filesystem, and therefore suffers from fragmentation effects. The primary mechanism for fighting that is ARC and L2ARC. Everything that happens to be in ARC or L2ARC is one less physical seek wasted performing an operation on frequently-accessed data. So if you have 10TB of data on your filer, is it better to have ~250GB of it cached, or ~1.4TB of it cached?

Pedantic geeks will tend to point out that it's TOTALLY POSSIBLE that there's a scenario where you are only using stuff that totally fits into the 250GB of ARC. Yes, fine. Anything's possible. But usually someone willing to spend $2K on 256GB of RAM is building a big system, so my theory is that L2ARC is very likely to make sense.
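
It's also easy to check whether the cache layers are earning their keep once the box is in service; the raw ARC/L2ARC counters are exposed as kstat sysctls (ratio math left as an exercise):

Code:
# ARC and L2ARC hit/miss counters on FreeBSD
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses \
       kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses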
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
Kinda my thought too. I ordered the 1.2TB PCIe drive for my L2ARC. I think it should be a pretty speedy server, lol.

The big thing for me with this one is fast response times for the terminal servers, so the L2ARC makes sense in my eyes.
 