Stux
I have since completed this build. Check out the build report here
--
I'm in the process of building a new storage server. Although I've done a fair bit of research and I think I have a fairly complete parts list here, I do have some questions, and if I've made any obvious mistakes I'd appreciate some advice :)
(specific questions at the bottom)
I expect this system to have a 5-10 year lifespan. We've been using FreeNAS as a backup target, and it's time to move it up our storage hierarchy. We've been well pleased with it, and we're contemplating deploying servers in jails or VMs on the FreeNAS host hardware/system.
System noise is also a concern, as the rack is in an office, hence the 120mm fan wall and Noctuas.
Current specifications:
Chassis: Norco RPC-4224 + Rails (Received)
I priced a number of options, and it turned out that a 24 bay unit from Norco was the best option taking into account current drive needs, and possible future drive expansion.
I ordered the 120mm fan wall and OS drive bracket too, thinking the chassis didn't come with the OS bracket and only had the 80mm fan wall.
The chassis arrived with a 120mm fan wall, 3 120mm fans, the OS bracket and 2 80mm fans already installed, plus the 'spare' 120mm fan wall and OS bracket I had ordered.
Which means I get to return the redundant fan wall and OS bracket!
Motherboard: Supermicro X10SRi-F (Ordered)
I would like the ability to upgrade the CPU beyond 4 cores if necessary, support multiple high-bandwidth PCIe devices (including at least one x16 slot), and have plenty of memory expansion beyond 32/64GB. The i350 NIC is a bonus, and this motherboard is the one that was actually available in AU. Future expansion options include HBAs, 10GbE, PCIe NVMe SSDs, and possibly even a GPU.
Previous systems have reached their end-of-life either because of RAM capacity or PCIe2 bottlenecks.
CPU: Xeon E5-1620 v3 (or v4) (not ordered)
For our current workloads, I believe single-core performance is of primary concern, but we also think we may virtualise more systems in the future. By the time core count becomes a problem, I figure many-core E5 v3/v4 Xeons should be available on the used market at a significant discount to their current price. In the meantime, a 4-core/8-thread 3.5GHz+ Xeon should do.
I'm thinking of going with a 1650 for the extra 2 cores. It's double the price for 50% more cores.
Cooler: Noctua NH-U9DX i4 (not ordered)
I believe this is the best/quietest cooler which will fit in the enclosure. The Noctua NH-U12DX i4 needs a clearance of 158mm, and this chassis only provides 155mm I believe. The NH-U9DX i4 is Noctua's recommended 4U narrow ILM cooler and has a height of 125mm.
RAM: Crucial 32GB ECC Registered PC4-19200/2400MHz (not ordered)
Either 2 x 16GB or 1 x 32GB? Crucial seems to be the only reasonable non-Kingston option in Australia. I'm not sure if a single RDIMM is a valid configuration. 2x16 is marginally cheaper and would provide dual-channel bandwidth. I figure we'll grow to 4x16, and then if we need more than 64GB we can grow a couple of 32s at a time before replacing the 16s (which would be repurposed). We're at the start of DDR4's life cycle; I expect DDR4 to get cheaper with time, and 16 and/or 32GB ECC RDIMMs to be a useful component for repurposing in the future.
The Noctua cooler leaves 35mm of clearance for the DIMMs. Standard DIMM height is 32mm, so this should be okay.
Boot: Dual SanDisk Cruzer Fit 16GB USB 3.0 (Ordered)
Have been happy with the Cruzer Fits on our current backup system. In USB2 ports they perform at 40MB/s; in USB3 they get about 160MB/s. Essentially they provide a pair of front-mounted, hot-swappable boot disks.
HBA: none
Will use 2 breakout cables (received) to connect 8 of the motherboard's 10 SATA3 ports. If we decide to grow further, I can acquire some HBAs. Is there such a thing as an LSI card which supports 16 bays?
HDs: assorted 1.5, 2 and 3TB drives of various grades (Red, Red Pro, RE4, Green) (Repurposing)
I have a number of smaller RAID5s in use. Will be decommissioning a few, and using their drives initially.
Will either replace with or add larger drives in future.
I'm aware that you can't add a drive to an existing RAIDZ vdev in ZFS, and that reshaping the pool means backing it up and restoring it.
I'm aware that a RAIDZ vdev's capacity is limited by its smallest member drive, so it only expands once the smaller drives have been replaced with larger ones.
I'll only be considering RAIDZ2 as I don't enjoy nervous RAID5 rebuilds.
Either 6 or 8 disk vdevs in RAIDZ2 is where I'm thinking.
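For a back-of-envelope comparison of the two layouts, something like the sketch below. The 3TB drive size and a full 24 bays are just assumptions (based on the largest drives listed above), and it ignores ZFS metadata/padding overhead and the usual fill-level guidelines.

```python
# Rough usable-capacity estimate for RAIDZ2 vdev layouts.
# Every member of a vdev is treated as the size of its smallest drive.

def raidz2_usable_tb(disks_per_vdev: int, vdev_count: int, smallest_drive_tb: float) -> float:
    """RAIDZ2 loses 2 disks per vdev to parity; the rest hold data."""
    data_disks = disks_per_vdev - 2
    return data_disks * vdev_count * smallest_drive_tb

# 24 bays filled with (hypothetical) 3TB drives:
print(raidz2_usable_tb(6, 4, 3.0))   # 4 x 6-disk RAIDZ2 -> 48.0 TB usable
print(raidz2_usable_tb(8, 3, 3.0))   # 3 x 8-disk RAIDZ2 -> 54.0 TB usable
```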
SSDs: none
If I determine L2ARC is needed, then I intend to purchase a PCIe NVMe SSD. Intel or Samsung?
We use iSCSI, so if I determine a SLOG is needed, then I'm looking at some sort of Intel SSD with power-loss protection (PLP). There should be two SATA ports left over, and the OS tray in the chassis can take two 2.5" SSDs.
I like the idea of using dual U.2 SSDs for both SLOG and L2ARC (partitioned, then mirrored and striped respectively). Is there such a thing as an x8 PCIe to dual U.2 adapter card?
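For SLOG sizing, the usual rule of thumb is that it only needs to absorb a few transaction groups' worth of sync writes. A rough sketch, assuming a saturated 10GbE link and the default 5-second txg interval (neither of which is a measurement from this system):

```python
# Back-of-envelope SLOG sizing: the log only needs to hold the sync writes
# that arrive between transaction-group commits.
# Assumed figures, not measured on this build:
ingest_gbit_per_s = 10   # e.g. a future 10GbE link, fully saturated
txg_timeout_s = 5        # default ZFS transaction group interval
safety_factor = 2        # headroom for back-to-back txgs

slog_gb = ingest_gbit_per_s / 8 * txg_timeout_s * safety_factor
print(f"~{slog_gb:.1f} GB of SLOG is plenty")   # ~12.5 GB
```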
PSU: tbd.
My current plan is to use a spare ATX PSU I have on hand to determine the 'base' load without HDs, then add 24W x 24 (i.e. 2A start-up draw at 12V for all 24 drive bays) to that base load, plus a 10% margin. Then I'll decide whether to use a pre-existing PSU or obtain a suitable high-quality one. (Rough worked numbers below.)
How much extra does an HBA draw?
What about a PCIe SSD?
A 10GBase-T card?
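To make the sizing approach concrete, a rough worked example. The base load is a placeholder until I actually measure it, and the add-in card figures are assumed typical draws, not datasheet numbers for any specific card:

```python
# PSU sizing sketch: measured base load + worst-case drive spin-up + margin.
base_load_w = 120          # placeholder: measure with the spare ATX PSU, no HDs
drive_bays = 24
spinup_w_per_drive = 24    # ~2A @ 12V during spin-up, per the plan above

# Rough allowances for possible add-in cards (assumed typical values):
hba_w = 15
nvme_ssd_w = 10
nic_10gbaset_w = 15

subtotal = base_load_w + drive_bays * spinup_w_per_drive + hba_w + nvme_ssd_w + nic_10gbaset_w
recommended_psu_w = subtotal * 1.10    # 10% buffer
print(f"Subtotal {subtotal} W, with 10% margin ~{recommended_psu_w:.0f} W")
# With these assumptions: subtotal 736 W, ~810 W recommended
```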
UPS: 5U Smart-UPS 5000VA (Existing)
The above is awesome, btw. I acquired it for $500 10 years ago. Still going strong... It runs an entire office and the servers at only 20-25% load and provides 4 hours of runtime. Needs a 32A hard-wired feed!
Backup: Replicate to another FreeNAS (Existing)
Backup system does not support ECC unfortunately.
Offsite Backup: tbd.
Currently a form of rsync is in use, with periodic on-site replication when a large changeset needs to be propagated. I would like to use offsite replication to a new home-based Mini-ITX FreeNAS/Plex system in order to avoid the rsync 'scan'. The replication would need to be tolerant of dropped connections. This will be investigated further once the new storage server is commissioned.
----
Questions:
RAM: Should I use 2x 16GB or 1x 32GB?
CPU: Should I stump up for the 6 core Xeon instead of the 4 core?
HBA: Should I get 2 HBAs or 1 HBA for the eventual additional 16 drives?
PSU: Is my sizing approach correct? Should I take into account the potential future HBAs/NICs, or will that easily come out of the 10% buffer? I don't think a redundant PSU is worth it.