Should I buy a new or a used server?

Status
Not open for further replies.

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS: You MUST populate the lowest-numbered socket or the board won't POST. The first socket is required; the second socket is optional.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
So with a dual processor board it is possible some of the PCI-E slots will not work. What do you use the extra slots for? Maybe I don't need those extra slots? Do you have a recommendation for a single or dual processor version that works with the narrow heatsink or the square one? Does the DP use more power with only one CPU compared to an SP? Any other issues?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
it is possible some of the PCI-E slots will not work.
No, for this board, it is a certainty. There are other boards where all the slots are run by the first CPU and the second CPU just provides access to additional memory and CPU cycles for crunching data, but on this board the second CPU also provides the PCIe lanes for some of the slots.
What do you use the extra slots for?
In my system, I have the SAS controller in one slot, a 10Gb network card in another, and a PCIe NVMe SSD in a third, so I only use three at the moment, but it is nice to have the option to install more if I need them.
In this system board, you would need to add a second CPU to add more than three cards. You may never need to do that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Do you have a recommendation for a single or dual processor version that works with the narrow heatsink or the square one?
This board is a single-socket board that is almost identical functionally to the one I initially recommended, except it only has three slots, and it has a square ILM for the heatsink:
https://www.ebay.com/itm/SuperMicro...X9SRI-F-Intel-C602-LGA2011-PCI-e/283122261305
It would do the job, but you would still need to swap the heatsink you have for one with a square mounting bracket.
Does the DP use more power with only one CPU compared to an SP? Any other issues?
It definitely would if you install the second CPU, but it is probably about the same with a single CPU because the circuits that run the second CPU should be idle. That CPU is only 95 watts, so it wouldn't be that bad even with two, and they only pull the full 95 watts when they are under load.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
There are so many boards, it's very confusing. This listing on eBay for an X9DR3-F/X9DRi-F comes with 2 processors. Do the CPUs have to be the same or could I use the one I have with one of the slower ones?

Does this board have narrow heat sinks?

Is it a reasonable plan to use my current CPU and remove the CPUs it comes with to save for later, or should I get another heatsink and leave the 2nd CPU in place? Or do you still think the original MB recommendation will become available at the ~$200 price point?

Sorry for all the questions; I'm getting concerned that there have been no other reasonable listings for the original MB.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
I called Supermicro on the phone and, to my surprise, they were very helpful.
Do the CPUs have to be the same or could I use the one I have with one of the slower ones?
They have to be identical.

Does this board have narrow heat sinks?
They said square or narrow.

Looks like the best option is to keep checking eBay for a new listing of the original MB.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
Finally got the X9SRL-F ordered on eBay. It should arrive in 1 week. While I've been waiting, I've been reading about ESXi, as I'm considering having the server do FreeNAS and pfSense.

I've read a bunch of sites including some in your signature list.
Could I use a 2nd SSD (both SSDs connected to the motherboard) to run ESXi, with the HBA passed through to FreeNAS?
Other threads I've read have an M.2 SSD connected to a PCI-E slot. Is that a must-have?
Is a SLOG required for it to work reasonably?
I guess I'm confused about how to adapt my purchase to using ESXi.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Finally got the X9SRL-F ordered on eBay. It should arrive in 1 week.
That is great news.
While I've been waiting, I've been reading about ESXi, as I'm considering having the server do FreeNAS and pfSense.
One of the reasons I went with the X9SRL-F was to have enough PCIe slots to add additional controllers to the build, with this idea in mind. I think it would be completely possible to do what you are thinking of, and I know that a couple of other people on the forum have done something similar with their builds.
I've read a bunch of sites including some in your signature list.
I am not sure if you got that far down the list, but I have some great info down there on using ESXi and FreeNAS.
Could I use a 2nd SSD (both SSDs connected to the motherboard) to run ESXi, with the HBA passed through to FreeNAS?
Learning how is the hard part, but it is completely possible, and there are people on the forum that have done it and are sure to be willing to help if you run into questions along the way.
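As a rough example of what that looks like once the HBA has been marked for passthrough and attached to the FreeNAS VM, you can check from inside the guest that FreeBSD owns the controller and its disks directly. This sketch assumes an LSI SAS2008-family card on the mps driver; adjust the driver name for your controller:

Code:
# inside the FreeNAS VM: confirm the passed-through HBA was probed at boot
dmesg | grep -i mps
# list the PCI devices the guest sees; the HBA should appear as a real device
pciconf -lv | grep -A3 -i mps
# confirm the disks behind the HBA show up as normal da devices
camcontrol devlist

If the card and all the pool disks show up there, ESXi is out of the data path for the pool, which is the whole point of passing the HBA through.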
Other threads I've read have an M.2 SSD connected to a PCI-E slot. Is that a must-have?
I would say not required, but I am not sure what you were looking at or how they were using an M.2 drive. The thing that might be good to have (strictly for the speed) is a SLOG, but a typical M.2 drive would not be a good SLOG device.
Is a SLOG required for it to work reasonably?
It depends a lot on how you are using FreeNAS; details of the software configuration can make big differences. Typically, if you are doing virtualization, you will have what are called 'synchronous writes', and when you are doing that, a SLOG device really improves performance. @Stux did a really good analysis of the benefit of adding a SLOG device when using FreeNAS to host virtualization, and I have that in my signature links.
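To make that concrete, this is roughly what it looks like at the ZFS level; 'tank', 'tank/vmstore', and the nvd0 device are placeholders for your own pool, dataset, and SLOG disk:

Code:
# force synchronous writes on the dataset that backs the VMs
# (NFS or iSCSI datastores served to ESXi generally request sync writes anyway)
zfs set sync=always tank/vmstore
# add a fast SSD with power-loss protection as the separate intent log (SLOG)
zpool add tank log /dev/nvd0
# watch how much write traffic the log device is absorbing
zpool iostat -v tank 5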
I guess I'm confused about how to adapt my purchase to using ESXi.
You have some really good hardware to work with. It should not take a lot to adapt to a slightly different use. The things you will probably need are a separate boot device to boot ESXi from and a PCIe NVMe SSD.

There is a whole thread devoted to testing various configurations to see what is the best option for SLOG:
https://forums.freenas.org/index.ph...-and-finding-the-best-slog.63521/#post-454773

The one that I think is 'good enough' (value for money) and what I bought, is the Intel SSD DC P3700.
They still sell them new for about $670, but you can pick them up used, if you are lucky, for a lot less.

https://www.ebay.com/itm/Intel-SSD-...-0-x4-NVMe-SSDPEDMD400G451-M1KR3/113257031001
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
The things you will probably need are a separate boot device to boot ESXi from and a PCIe NVMe SSD
Appreciate your help, Chris. I never would have purchased all these parts without your help, due to my concerns about compatibility. I'm excited to get my build started.

So looking at Stux's post, he has
1) ESXi Boot device: Samsung 960 Evo 250GB M.2 PCI NVMe SSD
Could I use the 2.5" 120 GB SSD that I purchased for this instead?
And use it as the datastore to store the FreeNAS VM configuration and FreeNAS virtual disk, boot off it, and use it for L2ARC and swap?
2) SLOG: Intel P3700 400GB HHHL AIC PCI NVMe SSD
This is needed because it has Power Loss Protection (PLP)?

That would mean I would only need one 2.5" SSD, which I have, and the only additional purchase would be the Intel P3700.

As the P3700 is expensive, what else do you do with your ESXi setup? A new pfSense box would be about $300 and would decrease the complexity of my setup. I imagine I would still use jails in my FreeNAS setup with ESXi.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So looking at Stux's post, he has
1) ESXi Boot device: Samsung 960 Evo 250GB M.2 PCI NVMe SSD
I remember now. @Stux was using a mini system for that build and, if I recall correctly, he used the M.2 drive because the board had a slot for it built in. Not a lot of room for expansion in that build. He might have chosen different hardware if he had more room to work with. Perhaps he will comment and we can find out what his take would be.
Could I use the 2.5" 120 GB SSD that I purchased for this instead?
Just to boot ESXi, I think you can use 16GB or even 8GB. It is really small. The two systems I run ESXi on boot from 16GB USB memory sticks plugged into a USB port on the system board. You could do that and use the entire SSD as a datastore. I would pass physical media through to the VM to use for the FreeNAS boot pool; my theory is that you would be able to boot FreeNAS from the physical media even if ESXi were taken out of the mix. I certainly don't claim to be an expert with ESXi, though, so I am hopeful that people more experienced with it will add to the discussion.
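For the 'pass physical media through' part, one way people do it when they are not passing a whole disk controller to the VM is a physical-mode raw device mapping (RDM) from the ESXi shell. This is only a sketch; the disk identifier and datastore path are placeholders, and opinions differ on RDM versus passing a whole controller:

Code:
# on the ESXi host: list the physical disks and note the identifier of the FreeNAS boot SSD
ls -l /vmfs/devices/disks/
# create a physical-mode RDM pointer file for that disk on the datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/freenas/freenas-boot-rdm.vmdk
# then attach freenas-boot-rdm.vmdk to the FreeNAS VM as an existing hard disk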
As the P3700 is expensive, what else do you do with your ESXi setup? A new pfSense box would be about $300 and would decrease the complexity of my setup.
That is a question you will have to consider. There are a lot of benefits to having a fully independent firewall; if the firewall is virtualized and you reboot your NAS, or shut down the ESXi host for maintenance, your entire network is down.
I imagine I would still use jails in my FreeNAS setup with ESXi.
If you have a full hypervisor with ESXi, you might choose to run full virtual machines, but I don't think you would want to nest one virtualization within another. There is a diminishing-returns consideration and a compatibility consideration. I would choose one or the other.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I remember now. @Stux was using a mini system for that build and, if I recall correctly, he used the M.2 drive because the board had a slot for it built-in. Not a lot of room for expansion in that build. He might have chosen different hardware if he had more room to work with. Perhaps he will comment and we can find out what his take would be

That’s right. I only had six SATA ports and no HBA, so the entire SATA controller was passed into FreeNAS, leaving NO SATA ports for ESXi! And I needed a place for the FreeNAS VM boot image. M.2 is overkill, but it was the best solution to my problem. Evo drives were cheap/reliable.

In a full-size build I’d probably be using an HBA for FreeNAS, and that’d leave a lot of motherboard SATA for ESXi. Then I would use a 2.5” ESXi datastore SSD. Best practice for ESXi is to boot off a USB stick. In an enterprise scenario, ESXi boot USBs are disposable/replaceable and are created with scripts. In a once-off install, it probably makes sense to just boot off the primary datastore.

M.2 would probably get used for an Optane SLOG. 900p?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
@NasKar another benefit of jails in FreeNAS, even under ESXi, is that they have ‘local’ access to the pool without going through a network layer.
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
In a full-size build I’d probably be using an HBA for FreeNAS, and that’d leave a lot of motherboard SATA for ESXi. Then I would use a 2.5” ESXi datastore SSD. Best practice for ESXi is to boot off a USB stick. In an enterprise scenario, ESXi boot USBs are disposable/replaceable and are created with scripts. In a once-off install, it probably makes sense to just boot off the primary datastore.
So I don't need an Intel P3700 400GB HHHL AIC PCI NVMe SSD? Just a 2.5" SSD connected to the motherboard SATA, a USB stick, and the HBA connected to my pool drives for FreeNAS? No SLOG needed?
they have ‘local’ access to the pool without going through a network layer
So that makes disk access faster?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So I don't need an Intel P3700 400GB HHHL AIC PCI NVMe SSD? Just a 2.5" SSD connected to the motherboard SATA, a USB stick, and the HBA connected to my pool drives for FreeNAS? No SLOG needed?
I think you misunderstood what @Stux was saying there. The SLOG is about making sync writes fast if you are using FreeNAS to host virtual machines. If you are not doing sync writes, you don't need a SLOG device. It is all about how you use the system.

You tell us how you want to use it, then we can tell you how the hardware should be configured.

If this is your primary / only NAS, I would suggest keeping it as simple as possible to increase the reliability.
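If you want to check whether your workload is even asking for sync writes before spending money, a quick look at the dataset settings is a start; 'tank' here is a placeholder for your pool name:

Code:
# show the sync policy for every dataset; 'standard' means ZFS honors whatever the client requests
zfs get -r sync tank
# plain SMB file sharing issues very few sync writes, while NFS/iSCSI datastores for ESXi issue many,
# so a simple home NAS can usually skip the SLOG entirely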
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
I think you misunderstood what @Stux was saying there. The SLOG is about making sync writes fast if you are using FreeNAS to host virtual machines. If you are not doing sync writes, you don't need a SLOG device. It is all about how you use the system.

You tell us how you want to use it, then we can tell you how the hardware should be configured.

If this is your primary / only NAS, I would suggest keeping it as simple as possible to increase the reliability.
Tried to read up on this more to ask a reasonable question.
I want to know what hardware I would have to add to my current build to run ESXi with a FreeNAS VM and a pfSense VM.

HBA card to the HDDs as passthrough to FreeNAS, already part of my current build.

ESXi Boot Device: Kingston V300 120 GB SSD connected to motherboard SATA
also hosts the FreeNAS VM configuration and enables host swap

Option 1-SLOG: Intel P3700 400GB HHHL AIC PCI NVMe SSD

Option 2-Intel DC S3700 200GB SSD plugged into the other 2.5" SSD slot in the back of my chassis

Option 3- Intel DC S3700 plugged into an HBA

Option 2 would save ~$170 and Option 3 ~$100. Is option 2 or 3 viable?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is option 2 or 3 viable?
The performance (speed) of the SSD directly affects the speed of the pool where sync writes are concerned. It can work, but it will be slower. Faster is better. Didn't I share the link to the testing discussion?
 

NasKar

Guru
Joined
Jan 8, 2016
Messages
739
Update: Not going to do ESXi for now.
Got the server put together without the HDDs installed. Currently running a CPU stress test from a USB stick. I've come up with a few questions.
1) Is it proper to plug one power supply into the UPS and the other into a surge protected outlet on the UPS? (I only have one UPS)
2) Is 24 hrs enough time to run the CPU stress test? Do I just monitor the CPU temp thru the IPMI, currently at 46 degrees or does it report an error?
3) How long is it reasonable to run the memory test? Do you really run it for a week or 2 per Building, Burn-In and Testing your FreeNAS system?
4) The HDD Fans are very noisy running at full speed, and are currently plugged into the backplane. Is there a way to have them slow down based on the HDD temps? I know you disconnected the fans on the back, which I may have to do, as one of the fan connectors doesn't reach the fan header.

Appreciate your help in picking the parts for this build.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
1) Is it proper to plug one power supply into the UPS and the other into a surge protected outlet on the UPS? (I only have one UPS)
The best situation would be a separate UPS for each power supply. That protects you from a single point of failure: if one UPS fails, the other (hopefully) is still good. Same for the power supplies; that is the reason for having two power supplies, if one fails, etc. You can have both power supplies plugged into the same UPS if you only have one, and if the UPS has enough capacity to carry the load, you may as well plug both supplies into the battery outlets. When you can afford the upgrade, I still advocate a separate UPS per power supply, because I recently had a UPS suddenly stop working when its battery pack went bad. The UPS gave an error message on the display and just quit working, taking the system connected to it down. Testing determined that one of the batteries in the battery pack had an internal short. Once the battery pack was replaced, the UPS worked again, but you can't rely on a UPS to be uninterruptible, not completely.
2) Is 24 hrs enough time to run the CPU stress test? Do I just monitor the CPU temp thru the IPMI, currently at 46 degrees or does it report an error?
Running the stress test on the system is all about detecting any faults before the system goes into 'production'. You want to run memory tests to ensure the memory isn't having errors and check the thermals to ensure it isn't overheating. If no problems crop up, it should be fine to move on to testing the hard disks, which will also stress test the system in other ways, in addition to simultaneously testing the drives.
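For reference, this is roughly the kind of thing people use for the CPU/thermal part of a burn-in. The tool and the 24 hour figure are just examples, not a FreeNAS requirement; stress-ng is not part of FreeNAS, so run it from a Linux live USB or whatever OS you are using for burn-in, and the IPMI address and ADMIN user are placeholders for your own BMC settings:

Code:
# load all CPU cores plus some memory workers for 24 hours
stress-ng --cpu 0 --vm 2 --vm-bytes 75% --timeout 24h
# from another machine, poll the board's temperature sensors over IPMI while the test runs
ipmitool -I lanplus -H <ipmi-address> -U ADMIN sensor | grep -i temp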
3) How long is it reasonable to run the memory test? Do you really run it for a week or 2 per Building, Burn-In and Testing your FreeNAS system?
I know that @jgreco tests the systems he builds for a long time, and I understand the reasoning behind it, because he wants to ensure that they are going to be able to run for years without intervention by ensuring the equipment is solid. Personally, I don't run stress tests that long at home because I am anxious to get the system into service. I buy quality components, usually used, and I only run the CPU and memory test long enough to test all the memory through a few different patterns. Generally a day or so is enough to satisfy me, using something like https://en.wikipedia.org/wiki/Memtest86
4) The HDD Fans are very noisy running at full speed, and are currently plugged into the backplane. Is there a way to have them slow down based on the HDD temps?
I have three chassis where the fans are running from the backplane, but the backplane doesn't appear to do anything to control fan speed; the fans just run full blast. In those systems, I have installed these:
https://www.ebay.com/itm/232107503676
They reduce the fan speed to about half of the original speed, which makes them quiet enough to keep me happy while still giving enough airflow to keep the drives cool. With the reduced fan speed, passive cooling of the CPU is not possible, so I had to change to an active (with fan) CPU cooler. It works for me.
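If you go that route, it is worth spot-checking temperatures once the fans are slowed down. A quick sketch, with /dev/da0 as an example device name (repeat for each drive):

Code:
# read the drive's SMART temperature attribute
smartctl -A /dev/da0 | grep -i temperature
# read the CPU and system temperature sensors from the BMC
ipmitool sdr type Temperature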
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The best situation would be a separate UPS for each power supply. That protects you from a single point of failure: if one UPS fails, the other (hopefully) is still good. Same for the power supplies; that is the reason for having two power supplies, if one fails, etc.

That is *one* of the reasons for having two power supplies. Being able to replace a PSU without downtime, being able to modify PDU loading, etc.; there are lots of reasons. A typical data center will have a number of independent power systems, so your equipment might be getting power from two different electrical grids through two different UPS/STS systems, etc. There are also other considerations, such as startup current. We talk about that here sometimes, and in the data center this can be remediated by having A+B power that can briefly spike available power far beyond the committed/billed rate. It's a delightful topic. :smile:

You can have both power supplies plugged into the same UPS if you only have one, and if the UPS has enough capacity to carry the load, you may as well plug both supplies into the battery outlets. When you can afford the upgrade, I still advocate a separate UPS per power supply, because I recently had a UPS suddenly stop working when its battery pack went bad. The UPS gave an error message on the display and just quit working, taking the system connected to it down. Testing determined that one of the batteries in the battery pack had an internal short. Once the battery pack was replaced, the UPS worked again, but you can't rely on a UPS to be uninterruptible, not completely.

If you're desperate for a less expensive solution, you can put one leg on line power and one leg on UPS power. This gets you protection against a failed UPS, or protection against power failure, but not protection against a marginal UPS (where it fails when the power goes out due to a marginal battery).

Running the stress test on the system is all about detecting any faults before the system goes into 'production'. You want to run memory tests to ensure the memory isn't having errors and check the thermals to ensure it isn't overheating. If no problems crop up, it should be fine to move on to testing the hard disks, which will also stress test the system in other ways, in addition to simultaneously testing the drives.

There's also break-in/curing time for things like thermal paste. Many people don't bother with this, because it isn't actually likely to make a huge difference, or they don't believe it to be important, but running your system for a few hundred hours at maximum load helps the process along, while also stress-testing other subsystems.

http://www.arcticsilver.com/pdf/appmeth/int/vl/intel_app_method_vertical_line_v1.1.pdf

I know that @jgreco tests the systems he builds for a long time, and I understand the reasoning behind it, because he wants to ensure that they are going to be able to run for years without intervention by ensuring the equipment is solid.

Well, also, the economics are very different when systems are deployed hundreds or thousands of miles away. It is insanely expensive to go on a service call to remediate an issue that should have been identified during build and burn-in.
 