24 Bay Build Questions

Status
Not open for further replies.

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I'm debating whether or not I can move some things around and put the server in a different room that no one is in. If that doesn't work, I am looking to basically copy your build.

Does anyone think 12 drives in a vdev in RAIDZ2 or RAIDZ3 is a bad idea, or would you recommend 3 vdevs with 8 drives in RAIDZ2? If both are viable options I would probably prefer to do 12 drives in RAIDZ2.

I would definitely suggest 3x Z2 vs 2x Z3. Same number of parity disks, easier to contemplate upgrading a vdev, easier to acquire your vdevs in stages, and 50% more IOPS.

But 12 disk z2. That's up to you. Most wouldn't.
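
For illustration, here is roughly what the two layouts look like at the command line. This is a minimal sketch: the pool name "tank" and the FreeBSD da* device names are placeholders, and on FreeNAS you would normally build the pool through the GUI rather than the shell.

    # 3 vdevs of 8 disks in RAIDZ2: 24 disks, 6 disks of parity, 3x the vdev IOPS
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
        raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
        raidz2 da16 da17 da18 da19 da20 da21 da22 da23

    # vs. 2 vdevs of 12 disks in RAIDZ3: 24 disks, 6 disks of parity, 2x the vdev IOPS
    zpool create tank \
        raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
        raidz3 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23

    # The 8-wide RAIDZ2 layout also lets you start with one vdev and add the
    # next 8 disks later as they are acquired:
    zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15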
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
Just an FYI - those aren't the PSU fans, it's the system fans running full speed. I moved my fan connections from the backplane (which doesn't have temp/speed control) to the motherboard, which does, and it makes a WORLD of difference.
Yup. The internal fans are super loud in an SM case. Especially at high RPM.

At the lowest setting they are pretty good for server-grade gear, but you will be forced to deal with fans ramping up and down as the CPUs heat up if the server room is not really cool.

I have mine set to the balanced fan setting in the BIOS; the fans run at ~4400 RPM and do not cycle up/down, but they are LOUD. I store the server in my utility room and I had to insulate the door... and I can still hear it in my kitchen if I listen for it and my furnace/AC is off.
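
If it helps, you can also read the fan RPMs and change the fan mode over IPMI instead of rebooting into the BIOS. A minimal sketch with ipmitool; the BMC address and credentials are placeholders, and the raw fan-mode bytes are the ones commonly reported for X10-generation Supermicro boards, so verify them against your own board before trusting them.

    # Read the current fan sensor RPMs from the BMC
    ipmitool -H 192.168.1.50 -U ADMIN -P yourpassword sdr type fan

    # Commonly reported Supermicro raw command to set the fan mode
    # (last byte: 0x00 = standard, 0x01 = full, 0x02 = optimal, 0x04 = heavy IO)
    ipmitool -H 192.168.1.50 -U ADMIN -P yourpassword raw 0x30 0x45 0x01 0x02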

:)
 

Hinatanko

Dabbler
Joined
Oct 28, 2016
Messages
11
After everyone's suggestions and some additional research I have decided to base my build off Stux's build. Below is a list of everything I am buying for this build. If you see any potential issues please let me know.

Chassis: Norco RPC-4224
Chassis Rails: Norco 26-U Rails? Not sure
Motherboard: Supermicro X10SRi-F
CPU: Intel Xeon E5-1650v4
Cooler: Noctua NH-U9DX i4
Cooler Fans: 2 x Noctua 90mm NF-B9 PWM
Compound: Arctic Silver 5 AS5-3.5G Thermal Paste
PSU: Corsair RM1000x
RAM: 4x Samsung 32GB ECC DDR4-2133
Boot Drives: 2x SanDisk SSD PLUS 2.5" 120GB
Storage Drives: 8x WD Red 8TB NAS Hard Drive
HBA: IBM ServeRAID M1015
Cables: 1x 3WARE Cable Multi-lane Internal Cable (SFF-8087)
Storage Fans: 3 x 120mm Noctua NF-F12 PWM (high SP)
Exhaust Fans: 2 x 80mm Noctua NF-A8 PWM
UPS: APC SMC1000-2U Smart-UPS
Rack Cabinet: Norco C-24U 24U Rack Cabinet

I have a few last questions before I begin purchasing the items above:

Chassis
Is it better to buy the Norco case used, like the SM chassis, or would you recommend new? Right now I am looking at buying it new.

Rails
Any recommendations on rails? The Norco rails on Newegg were recommended to me, but I can't seem to find them in stock anywhere.

RAM
When reading the guide on what not to do with FreeNAS I was a little concerned when they pointed out this thread. Eventually I will need to buy 2 more sticks of RAM when I get my last set of 8 drives in the future. The thread mentioned testing hardware before putting it into production. How do I go about testing RAM before putting it in this server if I have no other motherboard that supports this RAM? Am I missing something? Would it be safer to buy all of the RAM I need upfront to avoid this issue? I am sure this issue applies to all hardware being added. I would imagine if I added an HBA I could test that on any other server, but I'm not sure about the RAM.

My plan was only to add 2x 32GB of RAM and 2 more HBAs unless something breaks. If, for example, the CPU died and I had to get a new one, and I put it in without testing it and it was bad, would the CPU actually cause corruption? I would imagine a bad CPU would not allow the server to start, but I could be wrong. Is RAM the only piece of hardware that can cause this kind of corruption? I know burning in the hardware is important and there is a guide, but how do people go about this with more powerful hardware? Do I just need a test bench with a motherboard that will support this kind of hardware? Avoiding this kind of problem is important, so any suggestions on how to eliminate it are appreciated.
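
From what I can tell, the burn-in guide boils down to booting MemTest86 from USB for the RAM (so the board itself is the test bench before it goes into production) plus long SMART self-tests and a destructive badblocks pass on each drive. Something like this rough sketch, assuming a FreeNAS/FreeBSD shell and da0 as a placeholder for one of the new, still-empty disks:

    # Long SMART self-test, then check the results (repeat for each drive)
    smartctl -t long /dev/da0
    smartctl -a /dev/da0

    # Destructive write/read badblocks pass; -b 4096 is required for 8TB drives,
    # and this wipes the disk, so only run it before the pool is created
    badblocks -ws -b 4096 /dev/da0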

HBA
For every 8 drives I will need another HBA, correct? Also, does 3x M1015 sound like a decent setup or should I be looking at something different?

Cables
This is the one I am most unsure about. Is the cable I am getting correct? I think I may need more cables from what I can see based on Stux's build thread.

Any other issues you see? This looks like it will cost around $6,000 so I want to make sure I make no mistakes. Any advice or help is appreciated.

Thanks!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

The SRi-F has 10 SATA ports, so you don't necessarily need any HBAs to start.

2 boot drives + 8 NAS drives = 10 ports.

You will need two reverse breakout cables to connect the motherboard to two rows in the case.

After that you need two SFF-8087 to SFF-8087 cables to connect each HBA to an additional two rows.

Or an expander.

The cable you linked is not the right cable.
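
One easy sanity check once the cabling is in is to count the drives the OS actually sees. A minimal sketch from the FreeNAS shell; device names will vary:

    # List every disk the system has detected; with the two reverse breakout
    # cables wired to the first two backplane rows you should see the 8 WD Reds
    # plus the two boot SSDs
    camcontrol devlist

    # SMART identity (model/serial) for one bay, handy for mapping drives to trays
    smartctl -i /dev/da0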
 

Scharbag

Guru
Joined
Feb 1, 2012
Messages
620
An expander is a great option if you want to start your system with a single HBA. I had an Intel RES2SV240 which, when paired with a 9211-8i type SAS HBA, allows connections to all 24 hot-swap trays.
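
If you go that route, you can confirm the HBA sees the expander and everything behind it from the shell. A minimal sketch, assuming the LSI sas2ircu utility that ships with FreeNAS and controller index 0:

    # Show the SAS2 controllers sas2ircu knows about
    sas2ircu list

    # Display controller 0's topology: the RES2SV240 should appear as an
    # expander with every populated tray listed behind it
    sas2ircu 0 display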

Cheers,
 