Single memory module and Supermicro X9 dual processor motherboard


pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Long story short: is it possible to run a Supermicro X9DRL-IF motherboard with only one DIMM slot populated?

The story:

Is it possible to run a Supermicro X9DRL-IF dual-socket motherboard (bulk or retail) with only one DIMM slot populated (and, of course, only one CPU installed)?

On one hand, the manual says "For memory to work properly, follow the tables" and the tables list configurations with 2, 4, 6 or 8 slots populated. I also used to be able to find posts somewhere on the Internet claiming that a Supermicro dual-CPU motherboard needs at least two modules, but I can't find them anymore.

On the other hand, the same manual says (in the "Troubleshooting" chapter) "Turn on the system with only one DIMM module installed.". I've also found a post by an author with an X9DRL-3F motherboard (a board similar to my candidate X9DRL-IF) describing testing modules one by one, but the author didn't state whether the tests were run on the X9DRL-3F or on some other board.

The reason I'm asking is that I'm initially planning on 32GB of RAM, ideally as a single DDR3 1600MHz LRDIMM module.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It's more up to the CPU and memory architecture, which usually doesn't allow that. (I'm not aware of a case where it would work).
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
It's more up to the CPU and memory architecture, which usually doesn't allow that. (I'm not aware of a case where it would work).
Supermicro uniprocessor X9 motherboards (compatible with Xeon E5-26xx v1 and v2, so the same CPU family) allow a single DIMM, e.g. the X9SRi-F.

EDIT:
I'm not aware of a case where it would work
An ASUS manual states that theirs do. They even allow a 1-DIMM + 2-CPU configuration (chapter 2.4).
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The reason I'm asking is that I'm initially planning on 32GB of RAM, ideally as a single DDR3 1600MHz LRDIMM module.
It is possible to run with one CPU and one memory module, but that is usually only done to test for faulty components. Having more modules increases bandwidth to the memory (the number of simultaneous operations possible), which improves performance, and performance is the whole reason to go with a dual socket board. The ideal minimum configuration for a dual socket board is to have both CPU sockets populated and two memory modules for each CPU.

One of the boards I run at work slows the memory bus if you install more than four memory modules per CPU, so it is actually better from a performance standpoint not to max out the memory. That server can take something like 1.5TB of RAM, and I don't need that for what we are using it for.

If you only populate one CPU socket, some of the PCIe slots will not work because they depend on the second CPU.
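As a rough back-of-the-envelope illustration (a quick Python sketch using the standard DDR3-1600 figures of 1600 MT/s on a 64-bit channel, not anything from the manual), the theoretical peak roughly doubles with each doubling of populated channels:
[CODE]
# Theoretical peak bandwidth of one DDR3 channel:
# transfers per second x bus width in bytes.
def channel_bandwidth_gb_s(transfer_rate_mt_s, bus_width_bits=64):
    return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

per_channel = channel_bandwidth_gb_s(1600)   # DDR3-1600 -> 12.8 GB/s per channel
for channels in (1, 2, 4):
    print(f"{channels} channel(s): ~{per_channel * channels:.1f} GB/s theoretical peak")
[/CODE]
Real workloads won't hit those numbers, but the scaling is the point.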
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
Thanks @Chris Moore!
One of the boards I run at work slows the memory bus if you install more than four memory modules per CPU,
Is it because of what is stated in the following table? (It's from the X9DRL-IF manual, which I think is limited to 2 DIMMs per channel anyway, with 1 or 2 slots per channel.)
[Attached image: rdimm-penalty.png]
I mean the maximum-frequency penalty with 2 or 3 DIMMs per channel...

the number of simultaneous operations possible
Can I think of it this way: if I had a RAM disk/RAM drive, would copying lots of small files using many threads be roughly twice as slow with only one memory module in total as with two memory modules?

Would the performance difference be the same if I compared single-threaded copy times?

I'm guessing, and almost sure, that copying large files would be roughly half the speed (for example 8GB/s instead of 16GB/s), but that would be easily acceptable to me, since I'm focused on having lots of ZFS snapshots and I expect to access lots of small memory blocks in the cache rather than large ones.
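If I get the hardware, a crude way to compare the one-module and two-module configurations would be a large in-memory copy. A rough numpy sketch (just my idea; it measures bulk copy throughput only, not the small-block case, and needs roughly 4GiB of free RAM):
[CODE]
# Rough memory-copy throughput test; copies a buffer much larger than the
# CPU caches so RAM bandwidth, not cache, dominates the measurement.
import time
import numpy as np

size_bytes = 2 * 1024**3                 # 2 GiB source buffer
src = np.zeros(size_bytes, dtype=np.uint8)
dst = np.empty_like(src)

start = time.perf_counter()
np.copyto(dst, src)
elapsed = time.perf_counter() - start

# A copy reads the source and writes the destination, so roughly
# twice the buffer size crosses the memory bus.
print(f"~{2 * size_bytes / elapsed / 1e9:.1f} GB/s effective copy bandwidth")
[/CODE]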

If you only populate one CPU socket, some of the PCIe slots will not work because they depend on the second CPU.
The X9DRL-IF manual includes the following diagram, so I guess that was general advice which may not apply to this particular board (X9DRL-IF). Or am I wrong?
[Attached image: xeon_dp_pcie_diagram.png]
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is it because of what is stated in the following table? (It's from the X9DRL-IF manual, which I think is limited to 2 DIMMs per channel anyway.)
I mean the maximum-frequency penalty with 2 or 3 DIMMs per channel...
Same type of thing. It isn't peculiar to that board. The board I have at work is affected in a similar way at a higher frequency because it is an X10 generation board.
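If you want to see what the board actually negotiated, the DIMM population and configured speed are in the SMBIOS tables. A rough sketch that just filters the relevant lines of dmidecode output (needs root, and assumes dmidecode is installed):
[CODE]
# Show populated DIMM slots and their configured speeds from SMBIOS data.
# Wraps the standard "dmidecode --type memory" command; must run as root.
import subprocess

out = subprocess.run(["dmidecode", "--type", "memory"],
                     capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    line = line.strip()
    if line.startswith(("Locator:", "Size:", "Speed:", "Configured")):
        print(line)
[/CODE]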
Can I think of it this way: if I had a RAM disk/RAM drive, would copying lots of small files using many threads be roughly twice as slow with only one memory module in total as with two memory modules?
The CPU is much faster than RAM, so it often needs to wait for things to be fetched, which is why having more cache memory in the CPU usually makes a system perform better. So, having more memory modules allows data to move from RAM to the CPU more quickly. It is like having a bucket full of water. If you poke one hole in it, the water comes out at a certain speed, but if you poke another hole (same size) in the bucket, the water comes out at twice the speed. It isn't a perfect analogy, but the idea is that the more memory channels being used, the faster that data can be moved.
The X9DRL-IF manual includes the following diagram, so I guess that was general advice which may not apply to this particular board (X9DRL-IF). Or am I wrong?
This is different from the way most dual-processor systems I have dealt with work. Two of the PCIe slots appear to be connected to the chipset (the block marked C606), which is connected to CPU1; CPU1 is directly connected to three other PCIe slots, and no slots are connected to CPU2.

On this board for example:
[Attached image: upload_2018-6-11_13-39-23.png]

You can see how some of the PCIe slots are labeled as CPU1 and others are labeled CPU2. If a CPU is not present, the corresponding PCIe slots will not work.
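On a running Linux system you can also check which CPU (NUMA node) each PCIe device hangs off. A rough sketch reading the standard sysfs attribute (Linux only; it prints -1 when no NUMA affinity is reported, e.g. with a single CPU):
[CODE]
# List PCIe devices and the NUMA node (roughly, the CPU socket) they attach to.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    node = (dev / "numa_node").read_text().strip()
    print(f"{dev.name}: NUMA node {node}")
[/CODE]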
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The X9DRL-IF manual includes the following diagram, so I guess that was general advice which may not apply to this particular board (X9DRL-IF). Or am I wrong?
The advice I gave was general. This specific board doesn't appear to lose anything but CPU cores and RAM capacity with the second CPU not populated.
[Attached image: upload_2018-6-11_13-46-51.png]

That is a little unusual in my experience, but obviously not impossible.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
For comparison, the X10DRi-T block diagram:
[Attached image: upload_2018-6-11_13-51-54.png]
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
performance is the whole reason to go with a dual socket board...
Initially I'd buy a single CPU; the upgrade path would be: 1 fairly cheap CPU -> 1 faster E5-269x v2 -> 2 CPUs.

if you poke another hole (same size) in the bucket, the water comes out at twice the speed
EDIT after learning this:
Does it mean that Intel's channels work like AMD's unganged mode? Ganged mode would be: make bigger holes... My STFW didn't help me learn whether DDR3 Intel motherboards/CPUs always work similarly to AMD's unganged mode, or sometimes (when?) similarly to AMD's ganged mode...

Ganged mode would mean 128-bit accesses each time... Is that too much (overkill/unnecessary) for the ZFS disk cache? (Context: snapshots.) Should I try to make sure the mode is unganged, i.e. two separate 64-bit channels?

Other topic:
Is the channel count separate for each CPU on dual-processor mobos? In other words:
is that a quad-channel motherboard, and is it then 2SPC (2 DIMM slots per channel)?

If so, then the X9DRL-IF (which is quad-channel) would be 1SPC (a single slot per channel)... (or maybe not?)

The ideal, minimum, configuration for a dual socket board is to have both CPU sockets populated and two memory modules for each CPU
If there are 4 channels, why not recommend 4 memory modules for each CPU? Is the 1-channel -> 2-channel gain much better than the 2-channel -> 4-channel gain? Wikipedia says that "Another benchmark performed by TweakTown, using SiSoftware Sandra, measured around 70% increase in performance of a quadruple-channel configuration, when compared to a dual-channel configuration"... EDIT: while this benchmark shows a +35% performance gain when going dual-channel instead of single-channel (from 2014).

On the other hand: since posting previously I have read the X9DRL-IF manual's memory population chapter again and found "For memory to work properly, follow the tables below for memory installation". Properly. And then 2, 4, 6 or 8 memory modules. A bit scary: a single module would make the memory work improperly.

Should I be scared of the word "properly"? Does improper mean "only" a 50% performance loss (because of using one channel instead of two; simplified maths: half the channels -> half the speed), or should I expect a 90% loss because of some other improperness?
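Putting the quoted benchmark numbers next to the naive scaling, just back-of-the-envelope (taking the +35% and +70% figures at face value):
[CODE]
# Relative memory performance, single channel = 1.0, using the quoted gains.
dual = 1.0 * 1.35        # +35% going from single to dual channel (quoted benchmark)
quad = dual * 1.70       # +70% going from dual to quad channel (quoted benchmark)

print(f"dual channel: ~{dual:.2f}x single (theoretical would be 2x)")
print(f"quad channel: ~{quad:.2f}x single (theoretical would be 4x)")
# In those application-level benchmarks a single module loses roughly 26%
# versus dual channel, not the naive 50%.
[/CODE]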
 
Last edited:

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
This specific board doesn't appear to lose anything but CPU cores and RAM capacity with the second CPU not populated.
The downside of the board itself, I guess (I haven't calculated it), is that it has fewer PCIe lanes in total, since only one CPU's lanes can be in use even if 2 CPUs are installed. It's ATX though, the "small form factor of the dual-socket family"? ;)
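A quick count (assuming the usual 40 PCIe 3.0 lanes per E5-26xx CPU; that per-CPU figure is my assumption, not from the manual):
[CODE]
# Rough PCIe lane budget for this layout, where only CPU1's lanes reach the slots.
LANES_PER_CPU = 40                     # assumed: Xeon E5-26xx v1/v2 provide 40 lanes each
lanes_possible = LANES_PER_CPU * 2     # what a full dual-socket layout could expose
lanes_usable = LANES_PER_CPU * 1       # only CPU1's lanes are wired to slots here
print(f"usable: {lanes_usable} of a possible {lanes_possible} lanes")
[/CODE]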

Like an mITX mobo with socket 2011-3 Xeon support ;) (disclaimer: not really, since those mITX boards are more seriously capped on memory capacity, in my opinion)

Sent from my phone
 