BUILD Supermicro X9SRL-F Build Check


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Registered memory effectively increases the number of things the CPU's memory controller can talk to. Think of it a little like an amplifier or repeater for the memory bus.

The X9SR* and X10SR* boards are real workhorse boards, and are usually the "go to" boards for single socket server tasks.

Trying to compare CPUs based on budget is a loser's game, since it's very difficult for me to identify whether or not you need the extra CPU oomph, etc. It isn't that rough to compare the 1620/2637 and the 1650/2643, because they're going to perform similarly. For a home-user NAS, extra cores seem to have less value, and I don't see a lot of point in trying to optimize for more cache, and things like CIFS benefit from per-core clock speeds. However, if you're doing things like jails, that introduces extra variables that are very difficult to account for. In the end, Intel has done a pretty good job of pricing their CPUs to squeeze profit out of them. The ones with higher performance and more desirable characteristics tend to be much pricier.
 
Joined
Nov 11, 2014
Messages
1,174
Would you buy registered memory if you won't go over 64GB? I guess I am trying to find out if registered memory is slower compared to regular ECC. By the way, I read somewhere that you have 128GB in this 1650 v3 machine you mentioned before; would you share which memory you chose for that build? (I understand I can't use it in my X9SRL-F; I'm asking just for information.)

I understand there are many variables and you can't give me a simple answer one way or the other. I'll do more research and testing before deciding.
I read about CIFS benefiting from high clock speeds, but to be honest I am not sure why. A lot of people say it's because SMB is single-threaded, but I was doing some testing: just copying big files over 10Gb (speed was around 500MB/s) while monitoring CPU usage on my E3-1230 v2, and it seemed the utilization was spread pretty evenly across all cores?
 

jgreco

Resident Grinch
Yes, I'd buy registered, and I'd buy as big as reasonably possible, because I see stupid **** all the time, like this client's pair of X9SRWs I'm staring at right now that each have four sticks of 8GB RDIMMs in them, which is going to turn into a problem to expand at some point in the future. I'm really wishing they had been purchased with 2 x 16GB, and I know that during the timeframe they were purchased, they *could* have been, for a minimal price difference.

Consider the biggest memory configuration you're ever likely to need. If the memory you're buying doesn't somehow fit into that endgame, you may be throwing away money. On the other hand, if you're making an informed choice because of a pricing disparity, such as the ridiculous prices today for 64GB modules, that's a different story.
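That "endgame" reasoning can be put as a trivial sanity check. This is only a toy sketch; the helper name and the 256GB target are illustrative, and the one hard fact used is that the X9SRL-F has 8 DIMM slots:

```python
# Toy check: does a module size leave room to reach a future memory target
# simply by filling the remaining slots, without discarding sticks?
def fits_endgame(slots: int, module_gb: int, target_gb: int) -> bool:
    """True if filling every slot with this module size reaches the target."""
    return slots * module_gb >= target_gb

# The X9SRL-F has 8 DIMM slots. Buying 8GB sticks today caps the board at
# 64GB even with every slot filled, so a 256GB endgame is unreachable:
print(fits_endgame(8, 8, 256))   # 8x8GB  = 64GB,  too small
print(fits_endgame(8, 32, 256))  # 8x32GB = 256GB, fits
```

The same check explains the X9SRW anecdote above: four 8GB RDIMMs lock in a low ceiling that 2 x 16GB would have avoided.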

Right now, people are dissing the Xeon D-1581 and other 16-core Xeon D parts. A reasonable summary of it: "one has to remember that the overall market for 16 core / 32 thread virtualization servers with only 128GB of RAM is somewhere in the 10% range. The majority of 16 core / 32 thread virtualization servers run 256-512GB of RAM." Right-sizing resources is a tricky business, but I've rarely found buying larger memory modules to be hurt-y, all other things being equal.

As for the SMB, the question isn't whether the utilization was spread evenly on all cores. A process can jump from core to core very easily. It's how many processes, how many threads more specifically, were involved. Until CIFS uses multiple threads to serve a single connection, the higher frequency CPU is a better choice if you have a small number of simultaneous clients. As the number of active simultaneous clients grows, more cores become useful.
 
Joined
Nov 11, 2014
Messages
1,174
As for the SMB, the question isn't whether the utilization was spread evenly on all cores. A process can jump from core to core very easily. It's how many processes, how many threads more specifically, were involved. Until CIFS uses multiple threads to serve a single connection, the higher frequency CPU is a better choice if you have a small number of simultaneous clients. As the number of active simultaneous clients grows, more cores become useful.

That part is something that confuses a lot of people I see on the forum: thinking of multiple threads as multiple cores, which are related, by the way. Perhaps explaining it to me will help others understand too:
If we have a single-core CPU without HT, let's say a Pentium 4, and Samba is a single-threaded process, will it matter whether Samba is single- or multi-threaded? My guess is no? So Samba can only be as fast as that one core/CPU is, correct so far?
 

jgreco

Resident Grinch
Well, you have to differentiate between the bullshit that is "hyperthreading" and the programming abstraction that is a "thread." Discard hyperthreading, since it is totally not relevant until you understand the concept of a thread.

A single execution thread is a program running some instructions without the benefit of being able to run more than one thing at a time. It's a sequential set of instructions. A CPU core executes those instructions.

The way Samba deals with each client is to provide it with an execution thread. That runs on a CPU core. If the CPU core is 1GHz, the code runs more slowly than if the CPU is 5GHz. Now perhaps the client is slow and never fully taxes the slow 1GHz core. That's great. But perhaps the client is fast. In that case, if Samba is being bottlenecked by the low speed of the CPU core, you're stuck.

You can have a CPU with ten thousand 1GHz cores and that won't make any speed difference to that one client. Widening the number of cores would increase the number of clients Samba can deal with, but not increase the maximum speed of a single client.

Or you can have a CPU with one 10,000GHz core. This will make a speed difference to the client. It'll be served as fast as possible. Of course, that amazing 10,000GHz CPU core won't be busy 100% of the time, or even 1% of the time, so it can also switch contexts and service other programming threads on a timesharing basis.

Those last two paragraphs illustrate why I emphasize clock speed over core count.

So "hyperthreading" is merely a gimmick that makes N CPU cores look like 2*N cores to the operating system. That means that more things can be executing in parallel without context switching. That's generally a good thing, and it should make somewhat better use of the core resources.

Samba's speed for a single client is going to be dependent on the speed of a single core, because Samba's client service code is singlethreaded per client.
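The last few paragraphs can be condensed into a toy model. The numbers and function below are made up for illustration, not Samba benchmarks; the only assumption carried over from the discussion is Samba's one-thread-per-client service model:

```python
# Toy model of a one-thread-per-client server (Samba-style).
# A single client's speed tracks the clock of the ONE core serving it;
# extra cores only raise how many clients can be served at full speed at once.
def throughput(core_ghz: float, cores: int, clients: int):
    per_client = core_ghz             # clock-bound: more cores don't help here
    concurrent = min(cores, clients)  # each busy client occupies one core
    return per_client, per_client * concurrent

# Ten thousand 1GHz cores don't beat one 5GHz core for a single client:
print(throughput(5.0, 1, 1)[0] > throughput(1.0, 10_000, 1)[0])

# But with 100 simultaneous clients, the many-core box wins in aggregate:
print(throughput(1.0, 10_000, 100)[1] > throughput(5.0, 1, 100)[1])
```

Both comparisons print True, which is exactly the clock-speed-vs-core-count trade-off described above.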
 
Joined
Nov 11, 2014
Messages
1,174
OK, I'll leave hyperthreading aside. I was trying to make a point for a later post, but I'll leave it out. I know what it is, and I remember when it came out, which was before multicore CPUs were available, and it made more sense back then.

All you are saying makes perfect sense. I'm even starting to see some benefits of it being this way if you have multiple clients at the same time, which I don't, by the way. :)
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
and it made more sense back then.
I wouldn't put it like that.

Hyperthreading is just a method to keep CPU execution units from being idle.

The Pentium 4 had an insanely long (for the time) pipeline, so stalls could be rather dramatic. By having a second context the CPU can automagically transparently switch to, you could feed one execution unit (say, an integer ALU) with data to process while waiting for a second one (e.g. FP multiplier) to finish, even if a single thread running on the system doesn't actually have a need for that. The ALUs were double-pumped on NetBurst, for additional stalls in all but perfectly-optimized code.

For the various Core uArchitectures, they reduced the pipeline depth, making hyperthreading less useful, so it was cut.

However, the execution engine kept growing (and the pipeline grew back to NetBurst-like sizes over time as the branch predictors and schedulers evolved, reducing stalls: NetBurst had 20 stages, or an insane 31 stages on the later models; Nehalem was already up to 20-24 stages, depending on the execution path; Sandy Bridge and newer dropped that back to less than twenty or so), so Hyperthreading was added back so these new execution units would not go to waste. Every new microarchitecture has added a couple of execution units, so some people are even mumbling that three threads per core is viable right now.

The original Atoms, meanwhile, had Hyperthreading because it was deemed cheaper (in power, die size, or whatever metric they used) than an out-of-order core, but a pure in-order core would be absolutely puny (punier than the original Atoms already were, barely edging out Celeron Ms clocked at one third the frequency).
 
Joined
Nov 11, 2014
Messages
1,174
Journalist asks: Jgreco, you already admitted to our audience that you have a few machines built around the X10SRL-F and 1600 v3. Tell us, did you use an active CPU heatsink or a passive one? Please be exact? :)
 

jgreco

Resident Grinch
Journalist asks: Jgreco, you already admitted to our audience that you have a few machines built around the X10SRL-F and 1600 v3. Tell us, did you use an active CPU heatsink or a passive one? Please be exact? :)

Actually here it's the X10SRW and the E5-1650v3. The SRW is the "WIO" variant that allows more expansion especially in the 1U form factor:

sys-1027r-n3rf_back.jpg


Two full height full length x8 slots and a half height as well, all in 1U! This puts some significant constraints on the choice of cooler; it *has* to be passive, but due to the design it actually gets redundant fans since the fan bulkhead is basically a big bad row of fans.

sys-1018r-wc0r_top_1_.gif


We're using these as hypervisors. Works out really nice. A nice 2GB LSI 3108 RAID controller for 8 of the bays provides HDD and SSD RAID1's, and two more non-RAID drives for less critical storage. Toss a dual 10GbE card in there and there's STILL a full slot free.

I've also got our VM filer here, which is a 2U SC216BE26 (26 2.5" drives), and a 2U SC213A-R740W hypervisor, which are all real nice, but I'm kinda wondering if I should've avoided WIO for those because it keeps them from being converted to Xeon D boxes, which I think they'd both excel as. That general 2U design also has its own narrow-ILM passive cooler. They're all stock Supermicro. You should be able to see how these benefit from the multiple 40mm fans in the fan bulkhead for the 1Us, and the bigger fans in the 2U.
 
Joined
Nov 11, 2014
Messages
1,174
We are thinking so much alike. You'll see what I mean when I give my reason for asking. :)

First, let me say how interested I am in knowing what you are using in your lab, especially when I value somebody's opinion. This chassis is very nice; you don't say which model, but I can guess which of a few it could be. Dual PSU and expansion are the two very important things I am missing in my ESXi box, which is 1U, based on my favorite 1U chassis, the SC813MTQ-441CB.
I use this chassis for my pfSense router too, with an A1SRM-2758/16GB (only 22W), because it is the most power-efficient and quiet, and the chassis can take so many motherboards, from Atom to dual-CPU boards. I did look at proprietary chassis like the WIO kind, but I am trying to stay away from them because of the limited motherboards they can take, which means they can't be repurposed for other uses. We both know there is no middle ground here: if you want dual PSUs and more than one expansion slot, you have to go the proprietary way. I struggled with this choice a lot and decided to go 2U. I'll build my next ESXi host in an SC826TQ-R500LPB (which I already have and like a lot), another standard-motherboard chassis, which will have the dual platinum PSUs and "all you can eat" low-profile expansion slots.

So for 1U there is no question: we are talking passive cooling. Now for 2U:
The SNK-P0048AP4 and everything will be fine, but if the CPU fan on an active cooler stops, it makes things even worse, because it blocks the air from going through the heatsink.

But if I use a passive cooler, like the SNK-P0048PS in the second pic (which will also fit; it's just recommended for the 2600 rather than the 1600 CPUs), then I'll use the three fans from the chassis to move the air, which gives redundancy against a single fan failure, the same way it works in your 1U chassis. So that is the whole reason I asked whether you used a passive cooler in 2U. I'd rather have a passive one; it's just that the active one is the recommended choice for 2U and the 1620 v2, but that doesn't mean the passive won't work. I have a passive cooler in the FreeNAS in my signature, with a non-Supermicro fan, for the same reason.
 

Attachments

  • active cooler.jpg
  • passive.jpg

jgreco

Resident Grinch
We are thinking so much alike. You'll see what I mean when I give my reason for asking. :)

First, let me say how interested I am in knowing what you are using in your lab, especially when I value somebody's opinion.

Actually datacenter use.

This chassis is very nice; you don't say which model, but I can guess which of a few it could be.

The 1U? 1018R-WC0R. It's actually one of the few times in recent years we've been able to order pre-integrated chassis/motherboard units.

Dual PSU and expansion are the two very important things I am missing in my ESXi box, which is 1U, based on my favorite 1U chassis, the SC813MTQ-441CB.

Nice fileserver grade chassis, but these days I'm finding the 3.5" drive thing to be problematic. Not enough drive density. Part of that may be because we need things to be redundant, so I can't treat a 4 drive unit as four drives, only two.

I use this chassis for my pfSense router too, with an A1SRM-2758/16GB (only 22W), because it is the most power-efficient and quiet, and the chassis can take so many motherboards, from Atom to dual-CPU boards. I did look at proprietary chassis like the WIO kind, but I am trying to stay away from them because of the limited motherboards they can take, which means they can't be repurposed for other uses. We both know there is no middle ground here: if you want dual PSUs and more than one expansion slot, you have to go the proprietary way.

Not quite. The 813MTQ-R400CB is a fine redundant power supply option.

I struggled with this choice a lot and decided to go 2U. I'll build my next ESXi host in an SC826TQ-R500LPB (which I already have and like a lot), another standard-motherboard chassis, which will have the dual platinum PSUs and "all you can eat" low-profile expansion slots.

TQ?

So for 1U there is no question: we are talking passive cooling. Now for 2U:
The SNK-P0048AP4 and everything will be fine, but if the CPU fan on an active cooler stops, it makes things even worse, because it blocks the air from going through the heatsink.

But if I use a passive cooler, like the SNK-P0048PS in the second pic (which will also fit; it's just recommended for the 2600 rather than the 1600 CPUs), then I'll use the three fans from the chassis to move the air, which gives redundancy against a single fan failure, the same way it works in your 1U chassis. So that is the whole reason I asked whether you used a passive cooler in 2U. I'd rather have a passive one; it's just that the active one is the recommended choice for 2U and the 1620 v2, but that doesn't mean the passive won't work. I have a passive cooler in the FreeNAS in my signature, with a non-Supermicro fan, for the same reason.

The SNK-P0048PS seems to be it. And I don't see any reason you wouldn't use that with the 1600 CPUs.

See, when you're a system builder and you want to be smart about the Supermicro stuff, you try to find a similar unit and drive forward from there. So like for our 2U storage and hypervisor units, I knew I wanted to spec the X10SRW board because it was capable of addressing all our uses. So follow along:

http://www.supermicro.com/products/motherboard/xeon/c600/X10SRW-F.cfm

K. It lists the SuperO prebuilts that are available with that mainboard. Of those, two are 1U units, including the 1018R-WC0R. But one is a 2U unit, the 5028R-WR.

http://www.supermicro.com/products/system/2U/5028/SYS-5028R-WR.cfm

So you go there. And you punch up the parts list. And you contemplate it. Because this is Very Valuable Information. Someone at Supermicro has already gone and done all the hard work of identifying all the right parts, cables, heatsinks, etc. to put together that server.

Now the thing is, often there are no "useful" prebuilts. For example, if I want a 24-drive 2U storage server, there is no prebuilt. But the 826BA-R920WB is "close"; it's a 3.5" version. And you can cross over from there to the SC216BA-R920WB, which is the 2.5" version, which is very near, and then you can swap in an expander with the SC216BE26-R920WB. The big thing is that there are no changes in the back half of any of those chassis.

But to complete the server, you've got to go and start walking the parts list on the 5028R-WR, to see where you might get hosed. And picking up a heatsink is definitely a major part of that.

So anyways to answer your SPECIFIC question, Supermicro ships "things like that" with a passive heatsink, in particular the P0048PS. You'll want to make sure that you properly configure the air shroud to maximize airflow in the direction of the CPU. It probably won't kill you if you don't, but will raise temps a few degrees.

The one thing to note here is that the active cooler is probably able to drive more air through the heatsink and might be able to do a somewhat better job, but there's the whole fan fail thing. Even though you'd still have the bulkhead fans, you'd see significant airflow restriction. *My* choice would be passive.

The point here is that there's a lot of useful reassurance out there on the Supermicro website about what'll work and what won't.
 
Joined
Nov 11, 2014
Messages
1,174
Actually datacenter use.
Then I guess I just think like a pro, naturally.:smile:

TQ is always my choice of backplane, as you know. I quote from a nice sticky on the forum: "In general, ZFS works great if it has direct communication with a disk. It does not need to be fancy or expensive communication, but it does need to be reliable." ;) To that I might add the simplicity of having any single given bay go either to the motherboard or to a SAS controller, and last but not least the throughput bottleneck. But anyway, you know all that.


Nice fileserver grade chassis, but these days I'm finding the 3.5" drive thing to be problematic.

Perhaps I should start thinking about 2.5" bays for ESXi in 1U. I still stick with 3.5" because of the idea of being able to multipurpose the chassis. I use my 1U ESXi box with 2.5" Supermicro adapters in all the 3.5" bays, since all four bays are SSDs for ESXi, but I chose the 3.5" four-bay version with the idea that when I expand to a bigger ESXi host, I can reuse this as a small four-bay FreeNAS, which I didn't want to limit to 2.5", because if I go mechanical for more space I'd be very limited in HDD choices and capacity, you know.
Your chassis is the optimal choice for the purpose; mine is a little more practical, at the cost of being less optimal, because it is for home use.


Not quite. The 813MTQ-R400CB is a fine redundant power supply option.

The problem with this chassis is that it is not platinum. If it were platinum I would go for it.
When you have a very small load, the difference in power consumption is not 87% against 94%, which would be only 7%; it could almost double with the same load, purely because of the different power supplies. (Trust me on that; I don't have the habit of saying things I am not sure of, just like you.)

The most important thing for me is that any PSU must have PMBus. I can't have a server without IPMI or a PSU without PMBus. I know this 813MTQ-R400CB does have PMBus, but I wanted to mention its importance to me.

The SNK-P0048PS seems to be it. And I don't see any reason you wouldn't use that with the 1600 CPUs.

That's what I think. It is just not the heatsink Supermicro recommends for this CPU. As long as it can keep the CPU in the proper temperature zone, it will be even better than an active heatsink, as we both agree. When you use something other than the recommendation, it is good to know whether somebody has already tested it before you buy it.

In my FreeNAS (the one in my signature) everything is Supermicro-recommended except the heatsink. I am sure you know that for socket 1155, Supermicro's recommended 2U heatsink looks like the Intel stock cooler for desktop CPUs. I didn't like it, and it is even worse if you try to use it with an air shroud, if it even fits. So I got an aftermarket passive heatsink, pretty much like the SNK-P0048PS but for socket 1155, because Supermicro just doesn't have anything like that for 1155. And if anybody is interested, I can confirm that it cools the CPU very well with the use of an air shroud and chassis fans.

By the way, the Supermicro website is like a home page to me. I visit it so often I feel like I've found all the bugs and missing info. :smile:
I have used their support many times over email and phone, and while they are kind of helpful, the problem is that they just don't know very much. So sometimes when you tell me "You should check with Supermicro...", know that I have probably done that already and it didn't help. I always do my homework and beyond before I ask for help. I ask the questions that are not in the manual, you know. :smile:

Here is one real-world example: my chassis, an 813MTQ-441CB, has a single PSU. When the system (OS) is off, the PSU fan spins; when you turn on the PC and the OS is loaded, the PSU fan stops. I could have guessed, from having two of these chassis, that this is perhaps normal behavior, but boy... I tried everything to find out, read everything there is about the PSU specs, and asked Supermicro twice (each new support request is handled by a different person), but they didn't know either, so I gave up for now.


I have the feeling that you know what I am talking about and what's great is that other people too.:)
 

jgreco

Resident Grinch
TQ is always my choice of backplane as you know.

I don't know why.

I quote from a nice sticky from the forum :"In general, ZFS works great if it has direct communication with a disk. It does not need to be fancy or expensive communication, but it does need to be reliable.";)

Which TQ isn't. Connectors aren't latching and there's 4x as many cables as, oh, let's say, an A chassis. Or the beautiful simplicity of a BE16.

Take a look at this.

filthy-tq-chassis.jpg



Not one of ours, thankfully.

See, the problem with a lot of machines is that they get deployed to a data center somewhere and then they might not be opened again - ever. The TQ cabling creates a lot of surface for dust to collect, and reduces the amount of airflow available inside the chassis, which is undesirable. When you get the TQ and generic SATA cables, you then need to find some place to tie up the excess, and that sucks.

To that I might add the simplicity of having any single given bay go either to the motherboard or to a SAS controller,

Generally a horrible idea, though you as one guy can get away with it. The usual problem with this is that the next guy who needs to work on the chassis sees a bunch of bays that all look the same, and the normal assumption is that they're all equal. This is why I'm not a big fan of bringing out a mix of SAS and SATA ports to handle drives, etc.

and last but not least the throughput bottleneck. But anyway, you know all that.

A potential problem, but doesn't actually seem to be in practice.

Perhaps I should start thinking about 2.5" bays for ESXi in 1U. I still stick with 3.5" because of the idea of being able to multipurpose the chassis. I use my 1U ESXi box with 2.5" Supermicro adapters in all the 3.5" bays, since all four bays are SSDs for ESXi, but I chose the 3.5" four-bay version with the idea that when I expand to a bigger ESXi host, I can reuse this as a small four-bay FreeNAS, which I didn't want to limit to 2.5", because if I go mechanical for more space I'd be very limited in HDD choices and capacity, you know.
Your chassis is the optimal choice for the purpose; mine is a little more practical, at the cost of being less optimal, because it is for home use.

Well, what makes sense for any given situation can vary, of course. That's part of what makes Supermicro so nice. It's like LEGO for servers.

The problem with this chassis is that it is not platinum. If it were platinum I would go for it.
When you have a very small load, the difference in power consumption is not 87% against 94%, which would be only 7%; it could almost double with the same load, purely because of the different power supplies. (Trust me on that; I don't have the habit of saying things I am not sure of, just like you.)

Okay, but if you've got an R400CB power supply and 100 watts of load on it, that's still a 25% load, and you're almost certainly within a reasonable window for efficiency. Even if it doubled (presumably meaning going down to 80% efficiency?), and we're talking the difference between 80% and 94%, that's 14 watts, or 123 kilowatt-hours per year. At a cost of 14c/kWh, that's $17/year in power cost. Not that I really buy that the inefficiency will be quite that dramatic.
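The cost figures above can be checked with the same rough method used there (efficiency-point difference applied to the load; a stricter wall-power calculation, load divided by each efficiency, comes out a few watts higher, which only strengthens the point):

```python
# Back-of-the-envelope power-cost check: 80% vs 94% efficient PSU,
# 100W load, $0.14/kWh, using the rough "efficiency delta times load" method.
load_w = 100
extra_w = load_w * (0.94 - 0.80)          # ~14 W of extra wall draw
kwh_per_year = extra_w * 24 * 365 / 1000  # ~123 kWh per year
cost_per_year = kwh_per_year * 0.14       # ~$17 per year
print(round(kwh_per_year), round(cost_per_year))  # prints: 123 17
```

So even with a pessimistic efficiency gap, the difference is on the order of tens of dollars a year, not hundreds.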

The most important thing for me is that any PSU must have PMBus. I can't have a server without IPMI or a PSU without PMBus. I know this 813MTQ-R400CB does have PMBus, but I wanted to mention its importance to me.

What's the deal with that? PMBus is mostly useful for monitoring for the failure of one unit of a redundant PSU. Some of the new IPMI versions will also trend your power utilization, which is nice, but not really a hard requirement or anything.

That's what I think. It is just not the heatsink Supermicro recommends for this CPU.

Where are you even getting that idea from? If you buy a 2U prebuilt from Supermicro for an E5-16whatever system, that's the heatsink they send you. How much more recommended does it get?

As long as it can keep the CPU in the proper temperature zone, it will be even better than an active heatsink, as we both agree. When you use something other than the recommendation, it is good to know whether somebody has already tested it before you buy it.

In my FreeNAS (the one in my signature) everything is Supermicro-recommended except the heatsink. I am sure you know that for socket 1155, Supermicro's recommended 2U heatsink looks like the Intel stock cooler for desktop CPUs. I didn't like it, and it is even worse if you try to use it with an air shroud, if it even fits. So I got an aftermarket passive heatsink, pretty much like the SNK-P0048PS but for socket 1155, because Supermicro just doesn't have anything like that for 1155. And if anybody is interested, I can confirm that it cools the CPU very well with the use of an air shroud and chassis fans.

Yes, Supermicro's options for 1155 in the greater-than-1U area suck. As in they really don't have anything useful for a passive rackmount. Sigh.

By the way, the Supermicro website is like a home page to me. I visit it so often I feel like I've found all the bugs and missing info. :)
I have used their support many times over email and phone, and while they are kind of helpful, the problem is that they just don't know very much. So sometimes when you tell me "You should check with Supermicro...", know that I have probably done that already and it didn't help. I always do my homework and beyond before I ask for help. I ask the questions that are not in the manual, you know. :)

That's fine. Happens. It's worse elsewhere, alas.

Here is one real-world example: my chassis, an 813MTQ-441CB, has a single PSU. When the system (OS) is off, the PSU fan spins; when you turn on the PC and the OS is loaded, the PSU fan stops. I could have guessed, from having two of these chassis, that this is perhaps normal behavior, but boy... I tried everything to find out, read everything there is about the PSU specs, and asked Supermicro twice (each new support request is handled by a different person), but they didn't know either, so I gave up for now.

Is it just spinning slowly when the system is off? It seems a little unusual, but it could have something to do with making sure standby power wasn't cooking things. A lot of the Supermicro parts are actually designed to fit into their own server lines; it is certainly curious, but by no means the only time I've seen something a little unusual that would make sense in some context.

I have the feeling that you know what I am talking about and what's great is that other people too.:)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
TQ is always my choice of backplane as you know.

I don't know why.

I quote from a nice sticky from the forum :"In general, ZFS works great if it has direct communication with a disk. It does not need to be fancy or expensive communication, but it does need to be reliable.";)

Which TQ isn't. Connectors aren't latching and there's 4x as many cables as, oh, let's say, an A chassis. Or the beautiful simplicity of a BE16.

Take a look at this.

filthy-tq-chassis.jpg



Not one of ours, thankfully.

See, the problem with a lot of machines are that they get deployed to a data center somewhere and then they might not be opened again - ever. The TQ cabling creates a lot of surface for dust to collect, and reduces the amount of airflow available inside the chassis, which is undesirable. When you get the TQ and generic SATA cables, you then need to find some place to tie up the excess, and this sucks.

That and I might add the simplicity to have any single given bay to be either to MB or to sas sontroler ,

Generally a horrible idea, though you as one guy can get away with it. The usual problem with this is that the next guy who needs to work on the chassis sees a bunch of bays that all look the same, and the normal assumption is that they're all equal. This is why I'm not a big fan of bringing out a mix of SAS and SATA ports to handle drives, etc.

and last but not least the throughput bottleneck.But Anyways you know all that.

A potential problem, but doesn't actually seem to be in practice.

Perhaps I should star thinking 2.5 bay for ESXI in 1U. I just still stick with 3.5 because of the idea to be able to multipurpose chassis. I use my 1U ESXI with 2.5 Supermicro adapters on all 3.5 bays , cause all 4 bays are ssds for the esxi, but I chose the 3.5 4 bays with the idea when I expand to bigger ESXI host I can use this for small freenas with 4 bays, which I didn't want to limit to 2.5 , cause if I go mechanical for more space I'll be very limited on hdd choices and capacity , you know.
Your chassis is the "optimal" choice for the purpose, mine is little bit more practical for the cost of being most optimal , because is for home use.

Well, what makes sense for any given situation can vary, of course. That's part of what makes Supermicro so nice. It's like LEGO for servers.

The problem with this chassis is that is not platinum. If this was platinum I would go for it.
When you have a very small load, difference in power consumption is not 87% against 94% which is only 7% difference. It could almost double with the same load only because of the different power supplies.(trust me on that, I don't have the habit to say things that I am not sure of just like you)

Okay, but if you've got an R400CB power supply and 100 watts of load on it, that's still a 25% load, and you're almost certainly within a reasonable window for efficiency. Even if it doubled (presumably meaning going down to 80% efficiency?), and we're talking the difference between 80% and 94%, that's 14 watts, or 123 kilowatt-hours per year. At a cost of 14c/kWh, that's $17/year in power cost. Not that I really buy that the inefficiency will be quite that dramatic.

Most important thing is for me for any PSU it must have PMBus. I can't have server without IPMI and PSU without PMBus. I know this 813MTQ-R400CB it does have PMbus, but I wanted to mention it's importance to me.

What's the deal with that? PMBus is mostly useful for monitoring for the failure of one unit of a redundant PSU. Some of the new IPMI versions will also trend your power utilization, which is nice, but not really a hard requirement or anything.

That's what I think; it's just not the heatsink Supermicro recommends for this CPU.

Where are you even getting that idea from? If you buy a 2U prebuilt from Supermicro for an E5-16whatever system, that's the heatsink they send you. How much more recommended does it get?

As long as it can keep the CPU in the proper temperature zone, it will be even better than an active heatsink, as we both agree. When you deviate from the recommendation, it's good to know whether somebody has already tested it before you buy.

In my FreeNAS (the one in my signature) everything is the Supermicro recommendation except the heatsink. I'm sure you know that Supermicro's recommended 2U heatsink for socket 1155 looks like the stock Intel cooler for a desktop CPU. I didn't like it, and it's even worse if you plan to use it with an air shroud, if it even fits. So I got an aftermarket passive heatsink much like the SNK-P0048PS but for socket 1155, because Supermicro just doesn't make anything like that for 1155. And if anybody is interested, I can confirm it cools the CPU very well with an air shroud and chassis fans.

Yes, Supermicro's options for 1155 in the greater-than-1U area suck. As in they really don't have anything useful for a passive rackmount. Sigh.

By the way, the Supermicro website is like a home page to me. I visit it so often I feel like I've found all the bugs and missing info. :)
I have used their support many times over email and phone, and while they are kind of helpful, the problem is that they just don't know very much. So sometimes when you tell me "You should check with Supermicro..." know that I've probably already done that and it didn't help. I always do my homework and beyond before I ask for help. I ask the questions that are not in the manual, you know. :)

That's fine. Happens. It's worse elsewhere, alas.

Here is one real-world example: my 813MTQ-441CB chassis has a single PSU. When the system (OS) is off, the PSU fan spins; once you power on and the OS is loaded, the PSU fan stops. I could have guessed from owning two of these chassis that this is perhaps normal behavior, but I tried everything to find out: I read everything there is about the PSU specs and asked Supermicro twice (each new support request is handled by a different person), but they didn't know either, so I gave up for now.

Is it just spinning slowly when the system is off? Seems a little unusual, but it could have something to do with making sure standby power wasn't cooking things. A lot of Supermicro parts are actually designed to fit into their own server lines, and while it is certainly curious, it's by no means the only time I've seen something a little unusual that would make sense in some context.

I have the feeling that you know what I'm talking about, and what's great is that other people do too. :)
 
I am going to respond to the major point separately.

Why is PMBus such a big deal for me?

It's not just the great fact that it will send an email (if set up properly) when a PSU fails. You'll know when anything else could be wrong with it; for example, a PSU fan failure on an otherwise working PSU.

Besides the failure notifications, look at all the info you get with PMBus and tell me it's not super cool to have (see pics):
live info on power draw, current draw, fan speed, internal temperature, and much more. It's very accurate; I compared it with a watt meter. You can get graphed logs with hourly, daily, and monthly stats.

Speaking as one geek to another: tell me that's not super cool to have? :)
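For anyone who wants those readings without the web UI: on boards with PMBus-capable PSUs, the BMC generally exposes them as ordinary IPMI sensors, so `ipmitool sensor` against the BMC will list them. Here's a hypothetical sketch of pulling the numbers out of that pipe-separated output; the sensor names and sample lines below are invented to match the typical layout, so adjust for your board:

```python
# Hypothetical sketch: extracting PMBus-fed PSU readings from `ipmitool
# sensor` output. The sample lines are made up to mimic the usual
# "name | value | unit | status | ..." layout; real names vary by board.

SAMPLE = """\
PS1 Status       | 0x1      | discrete  | 0x0100| na | na | na | na | na | na
PS1 Fan 1        | 3800.000 | RPM       | ok    | na | na | na | na | na | na
PS1 Temp         | 34.000   | degrees C | ok    | na | na | na | na | na | na
PS1 Input Power  | 80.000   | Watts     | ok    | na | na | na | na | na | na
"""

def parse_sensors(text: str) -> dict:
    """Map sensor name -> (value, unit) for the numeric readings."""
    readings = {}
    for line in text.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 3:
            continue
        name, value, unit = fields[0], fields[1], fields[2]
        try:
            readings[name] = (float(value), unit)
        except ValueError:
            pass  # skip discrete/non-numeric sensors like "PS1 Status"
    return readings

sensors = parse_sensors(SAMPLE)
print(sensors["PS1 Input Power"])  # (80.0, 'Watts')
```

In practice you'd feed it the output of something like `ipmitool -H <bmc-ip> -U <user> sensor`, and you could cron it to build exactly the kind of hourly/daily history described above.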
 

Attachments

  • SMBus1.PNG (70.3 KB)
  • SMBus2.PNG (91.6 KB)

jgreco

Resident Grinch
The current draw stuff isn't always that accurate. And the shrill scream the unit puts out when there's a problem with a PSU should be sufficient motivation for a home user to go find out that a PSU module died. ;-)
 
The current draw stuff isn't always that accurate. And the shrill scream the unit puts out when there's a problem with a PSU should be sufficient motivation for a home user to go find out that a PSU module died. ;-)

Will you agree with me for once? :smile:
I just repeated what you said in your sticky, and you still object to it. :D
 
Where are you even getting that idea from? If you buy a 2U prebuilt from Supermicro for an E5-16whatever system, that's the heatsink they send you. How much more recommended does it get?

The idea I came up with myself, because I didn't like the active cooler. If you've seen Supermicro shipping that cooler with an E5-1600 CPU in an already-built server, it's easy to conclude that the two will work together, but it's not something they document. Here are the two methods I used:

1. I went to their heatsink matrix at http://www.supermicro.com/ResourceApps/Heatsink_Matrix.aspx and looked at all the CPU coolers for socket 2011, and the only option listed for the E5-1600 was the SNK-P0048AP4, not the SNK-P0048P. I added a picture so you can see for yourself.

2. I contacted Supermicro to ask them, and they sent me this file. Here is a pic of that too.

So these are the only two options I know of and used to get ideas for a heatsink for the 1620 v2, and neither of them indicates the possibility of using a passive heatsink in 2U.

Am I missing something? Was there a third option I neglected to check?
 

Attachments

  • heatsink.PNG (120.2 KB)
  • heatsink2.PNG (66.2 KB)
Okay, but if you've got an R400CB power supply and 100 watts of load on it, that's still a 25% load, and you're almost certainly within a reasonable window for efficiency. Even if it doubled (presumably meaning going down to 80% efficiency?), and we're talking the difference between 80% and 94%, that's 14 watts, or 123 kilowatt-hours per year. At a cost of 14c/kWh, that's $17/year in power cost. Not that I really buy that the inefficiency will be quite that dramatic.

That is true in this example. (See how I agree with you when you are right?) :) BUT
here is another example, based on actual events:
My FreeNAS in my signature had 2x 800W PSUs (not even Silver rated). At idle, with only one SSD, it drew around 80W; when I put 2x 500W Platinum PSUs in the same machine, idle draw was 47W. Those are actual measurements.

Besides the better efficiency at that load, the Platinum 500W PSUs had different (single) fans that were three times quieter and burned fewer watts than the dual fans in the 800W units. That factor also came into play in making such a huge difference in power consumption.

Black Ninja believes power consumption is an extremely important factor when multiple servers are involved.


Is it just spinning slowly when the system is off?

The fan on the 813MTQ-441CB power supply stops completely when the system is powered on. The picture I used is actually from my server, and the 22x RPM shown there is really 0 RPM; I've seen the fan with my own eyes.
It starts spinning again at around 3800 RPM when the system is off.

P.S. I know there is a reason for this, and the people who actually made it (the engineers) know why and when the fans are supposed to spin, but the problem is that customer service doesn't know, so as a result neither do I. So much for "ask Supermicro," as people say.

They told me "it's normal behavior," but unless I know the details (thresholds), I won't know when it's normal for the fan not to spin versus when it has failed to spin. That's just how the game works: the people who know are not in contact with customers, so you end up asking people who don't know anything.
 