High Capacity Build for 10Gb Network


IamSpartacus · Dabbler · Joined: Feb 23, 2017 · Messages: 38

I'm looking to build out a new high-capacity storage server to replace my current UnRAID bulk storage server. The #1 priority is capacity: my current server is 64TB usable, and I want at least that much usable space on the new server as well.

This server will house data as follows:

70% media (10 simultaneous 1080p streams, 1-2 4K streams in future)
10% backups
10% surveillance
10% personal data

I realize I'm not going to get anywhere near 10Gb performance with spinners, but I'm hoping to squeeze out as much performance as I can.

Hardware I have for this server is as follows:

MoBo: SuperMicro Xeon D-1541 or Xeon D-1537 (both 8C/16T)
RAM: 64GB (32GB x 2) DDR4 RDIMMs
SSDs: Intel 730 480GB (x2)
HDDs: ??????

The main thing I'm looking for guidance on is what capacity drives to use and in what configuration. Coming from UnRAID, where the data isn't striped, I didn't have to worry about losing half my usable space to a RAID10-style config or waiting through incredibly long rebuilds with RAID6/7. Of course, such non-striped arrays have their own limitations, which is why I'm here. Also, would it be possible/advisable to use the two Intel 730 SSDs as cache/log devices?
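For scale, here's the rough mirror-vs-RAIDZ2 tradeoff with 16 drives (just a sketch assuming hypothetical 6TB drives; real usable space comes out lower after ZFS overhead):

```python
# Usable data space for 16 x 6TB drives under the two layouts in question.
# Hypothetical drive count/size; ignores ZFS metadata overhead and TB/TiB conversion.
drives, size_tb = 16, 6

mirrors_tb = drives // 2 * size_tb    # striped mirrors (RAID10-style): half the raw space
raidz2_tb = 2 * (8 - 2) * size_tb     # two 8-wide RAIDZ2 vdevs: 2 parity drives per vdev

print(f"striped mirrors:  {mirrors_tb} TB of data space")   # 48 TB
print(f"2x 8-wide RAIDZ2: {raidz2_tb} TB of data space")    # 72 TB
```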
 

SweetAndLow · Sweet'NASty · Joined: Nov 6, 2013 · Messages: 6,421

What are the SSDs for? You probably don't need them. What case are you using, and how are the drives attached? I think you are going to need 16-24 drives, maybe fewer if you choose to go with 8TB disks. If you set up 2 vdevs of RAIDZ2 with 8x6TB drives in each vdev, you will get 64TB of usable space. Then you could add another 8-disk vdev when you need more space. This is very similar to my setup; you can find it in my signature if you want.

Getting 10 gig over the network is possible with this setup. My streaming reads are around 900MB/s, which is getting close to 10Gbps.
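Rough math behind those numbers (back-of-the-envelope only; actual usable space also loses a bit to ZFS metadata and the usual keep-it-under-80%-full guideline):

```python
# Data capacity of 2 x 8-wide RAIDZ2 vdevs built from 6TB drives.
vdevs, width, parity, drive_tb = 2, 8, 2, 6

data_tb = vdevs * (width - parity) * drive_tb   # 72 TB of data drives
data_tib = data_tb * 1e12 / 2**40               # ~65.5 TiB as the OS reports it
print(f"data capacity: {data_tb} TB = {data_tib:.1f} TiB")

# How close 900MB/s streaming reads get to a 10GbE link.
line_rate_mb_s = 10_000 / 8                     # 1250 MB/s theoretical line rate
print(f"900 MB/s = {900 * 8 / 1000:.1f} Gbps of a 10 Gbps link")
```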

Can you post more hardware specs? PSU, case, HBA, 10 gig card, switch?

Sent from my Nexus 5X using Tapatalk
 

IamSpartacus · Dabbler · Joined: Feb 23, 2017 · Messages: 38

SweetAndLow said:
What are the SSDs for? You probably don't need them. What case are you using, and how are the drives attached? I think you are going to need 16-24 drives, maybe fewer if you choose to go with 8TB disks. If you set up 2 vdevs of RAIDZ2 with 8x6TB drives in each vdev, you will get 64TB of usable space. Then you could add another 8-disk vdev when you need more space. This is very similar to my setup; you can find it in my signature if you want.

Getting 10 gig over the network is possible with this setup. My streaming reads are around 900MB/s, which is getting close to 10Gbps.

Can you post more hardware specs? PSU, case, HBA, 10 gig card, switch?

The SSDs are currently used in my UnRAID cache pool, so I was thinking I could use them as cache/log devices to help performance, but if they're not needed I certainly don't have to.

Case: iStarUSA M-4160-ATX
PSU: Seasonic SSR-450RM 450w Gold PSU
10Gb NICs: Dual SFP+ onboard Xeon D
HBA: I have a few options here. My D-1541 board has an onboard LSI 3008, and I have a second PCIe LSI 3008 as well. The D-1537 board has an onboard LSI 2116. The 3008s can support 8 disks each and the 2116 can support 16 (quick port check below).
Switch: Cisco SG350XG-24F
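Quick port check on those HBA options, assuming the M-4160-ATX backplane is 4 miniSAS connectors x 4 drives = 16 bays:

```python
# Do the HBA options cover the chassis's drive bays?
BAYS = 4 * 4   # assumed: 4 miniSAS connectors, 4 drives each

options = {
    "D-1541: onboard LSI 3008 + add-on LSI 3008": 8 + 8,
    "D-1537: onboard LSI 2116": 16,
}

for name, ports in options.items():
    verdict = "covers" if ports >= BAYS else "falls short of"
    print(f"{name}: {ports} ports, {verdict} {BAYS} bays")
```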

EDIT: I just saw your system. I'm guessing from how much RAM you've got in there that you're running it as an all-in-one (AIO) server. What services are you running on it?
 

SweetAndLow · Sweet'NASty · Joined: Nov 6, 2013 · Messages: 6,421

Looks like your chassis has 4 miniSAS connections. For this you either need an HBA with SAS connectors, or you can get a reverse breakout cable and go from 4 SATA ports on your motherboard to a single miniSAS plug on your backplane. It might work to get an LSI 9211 HBA, which has 2 miniSAS plugs, and then also use 2 reverse breakout cables off the motherboard SATA ports. That would get you the 4 miniSAS connections for your backplane.

Some people might chime in about your power supply. I'm not sure what the recommendation would be for 16 drives. Just keep that in mind.
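As a rough power budget for 16 drives (ballpark assumptions, not measured: ~25W per 3.5" drive at spin-up, ~8W once spinning, ~60W for board/CPU/RAM/fans):

```python
# Worst case for the PSU is all 16 drives spinning up at once.
DRIVES = 16
SPINUP_W = 25    # assumed peak per 3.5" drive during spin-up
SPINNING_W = 8   # assumed per-drive draw once spinning
SYSTEM_W = 60    # assumed board + CPU + RAM + fans

peak_w = DRIVES * SPINUP_W + SYSTEM_W
steady_w = DRIVES * SPINNING_W + SYSTEM_W

print(f"peak (simultaneous spin-up): ~{peak_w} W")    # ~460 W
print(f"steady state:                ~{steady_w} W")  # ~188 W
```

On those assumptions a 450W unit is fine at steady state but marginal at spin-up, unless the backplane or HBA staggers drive spin-up.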

I started my system off with 64GB of memory, but then there was a sale and somehow I had 128GB. ;) I was also thinking ahead and planning out what my system would look like when FreeNAS Corral came out; it uses bhyve heavily to run services. Currently I'm on FreeNAS 9.10 and run Plex, Transmission, and Sonarr in jails. I will also be running the Ubiquiti management software and DokuWiki in jails in the near future. This is just a home server I use to play around with, nothing special.
 

IamSpartacus · Dabbler · Joined: Feb 23, 2017 · Messages: 38

SweetAndLow said:
Looks like your chassis has 4 miniSAS connections. For this you either need an HBA with SAS connectors, or you can get a reverse breakout cable and go from 4 SATA ports on your motherboard to a single miniSAS plug on your backplane. It might work to get an LSI 9211 HBA, which has 2 miniSAS plugs, and then also use 2 reverse breakout cables off the motherboard SATA ports. That would get you the 4 miniSAS connections for your backplane.

Some people might chime in about your power supply. I'm not sure what the recommendation would be for 16 drives. Just keep that in mind.

I started my system off with 64GB of memory, but then there was a sale and somehow I had 128GB. ;) I was also thinking ahead and planning out what my system would look like when FreeNAS Corral came out; it uses bhyve heavily to run services. Currently I'm on FreeNAS 9.10 and run Plex, Transmission, and Sonarr in jails. I will also be running the Ubiquiti management software and DokuWiki in jails in the near future. This is just a home server I use to play around with, nothing special.

Yeah, I've already got all the SAS cables I need. As for the PSU, I think it should suffice considering how low-power the Xeon D boards are. My entire rack, consisting of 4 Xeon D servers, a C2758 server, 2 switches, and some exhaust fans, barely eclipses 400W in total.

Right now my home network is a bit convoluted. I have a 3-node ESXi cluster and an all-SSD FreeNAS Corral server that I use as a shared VM datastore for the ESXi cluster. All my services (Plex, Madsonic, Sonarr, CP, NZBGet, UniFi, UniFi Video, etc.) run in Docker containers inside 2 Linux VMs. All my data (media, surveillance, backups, etc.) is stored on an UnRAID VM on one of the ESXi hosts (which gets mirrored to a second UnRAID server on a second host). The reason I'm running my Docker containers in VMs is for maximum uptime: if I need to do maintenance on a host, or one goes down, I can just vMotion the VMs between hosts and none of my services are offline.

I'd like to leverage FreeNAS Corral's ability to run Docker containers and access my data locally, as opposed to over the network, to help reduce congestion. But I'm concerned that if this FreeNAS server goes down or needs maintenance, my services will be down, and that's a no-no for me. If I can figure out an effective way to mitigate that, I'll be golden.
 

IamSpartacus · Dabbler · Joined: Feb 23, 2017 · Messages: 38

Also, @SweetAndLow, regarding your recommendation of the 2 RAIDZ2 vdevs you have in your server... what drives are you using? I'm debating between the WD Reds and the Red Pros.
 

SweetAndLow · Sweet'NASty · Joined: Nov 6, 2013 · Messages: 6,421

I have WD Reds. The only difference is the 3-year vs. 5-year warranty.
 

IamSpartacus · Dabbler · Joined: Feb 23, 2017 · Messages: 38

SweetAndLow said:
I have WD Reds. The only difference is the 3-year vs. 5-year warranty.

They are also 7200rpm, compared to the 5400rpm Reds. But yes, that 5-year warranty is what's tempting me. Since zpools aren't nearly as flexible when it comes to upgrading/expanding the array, I want to future-proof and not have to touch these drives for many years.
 

SweetAndLow · Sweet'NASty · Joined: Nov 6, 2013 · Messages: 6,421

IamSpartacus said:
They are also 7200rpm, compared to the 5400rpm Reds. But yes, that 5-year warranty is what's tempting me. Since zpools aren't nearly as flexible when it comes to upgrading/expanding the array, I want to future-proof and not have to touch these drives for many years.
zpools are pretty flexible: you can replace drives, increase drive sizes, and add vdevs. The only thing you can't do is add or remove drives from a vdev.
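For example, the "increase drive size" path: replace the drives in one vdev one at a time with `zpool replace`, and with autoexpand=on the vdev grows once the last disk has resilvered. A rough sketch, assuming hypothetical 10TB replacements:

```python
# Capacity gain from replacing every drive in one 8-wide RAIDZ2 vdev with larger ones.
# Hypothetical sizes; ignores ZFS metadata overhead.
width, parity = 8, 2
old_tb, new_tb = 6, 10

old_data = (width - parity) * old_tb   # 36 TB of data space before
new_data = (width - parity) * new_tb   # 60 TB of data space after
print(f"vdev grows from {old_data} TB to {new_data} TB (+{new_data - old_data} TB)")
```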
 

IamSpartacus · Dabbler · Joined: Feb 23, 2017 · Messages: 38

SweetAndLow said:
zpools are pretty flexible: you can replace drives, increase drive sizes, and add vdevs. The only thing you can't do is add or remove drives from a vdev.

Yes, that's exactly the flexibility I was referring to: coming from UnRAID, I can slap any single new drive in and add it to my array. There's no way to add a single new drive to my pool so that the drive's data is protected.
 