BUILD About to pull the trigger on a ~100TB build, looking for input

Status
Not open for further replies.

melp

Explorer
Joined
Apr 4, 2014
Messages
55
I've been planning and saving for this build for several years now and all the pieces have just about fallen into place. I'm waiting for HDD prices to come down a bit more, but other than that, I'm about ready to do this thing. I've pored over this configuration for endless hours and I think I've got everything just about dialed in, but I would really appreciate some extra (expert) eyes and advice. I'll start with a basic list of components and then list my justifications for each choice down below.
  • Chassis: SuperMicro SC846 (used from eBay)
  • CPU: Intel Xeon E5-1630v3 (4/8 cores/threads @ 3.7GHz)
  • Motherboard: SuperMicro X10SRL-F (changed from X10SRA)
  • Memory: 64GB Crucial DDR4 in 4 sticks (ECC/Unbuffered)
  • HBA: IBM M1015
  • Replacement PSUs: 2x SuperMicro PWS-920P-SQ (changed from a Corsair HX1000i ATX unit)
  • Replacement Backplane: SuperMicro BPN-SAS2-846EL1
  • Boot device: 2x Intel 40GB 320 Series SSD (changed from an 8GB USB 3.0 drive)
  • UPS: APC 1500VA (used from eBay)
  • vdev1: 8x 8TB WD Red in RAID-Z2 (changed from 12x 8TB in RAID-Z3)
  • vdev2: 8x 8TB WD Red in RAID-Z2 (changed from 12x 4TB in RAID-Z3)
  • vdev3: 8x 4TB WD Red in RAID-Z2
Now justifications for each choice:

Chassis: SuperMicro SC846 (used from eBay) -- Solid case, ~$250 on eBay, not much to say. I'll be swapping the PSUs, backplane, and fans.

CPU: Intel Xeon E5-1630v3 (4/8 cores/threads @ 3.7GHz) -- Highest single-core clock speed in this family of Xeons. I'll mostly be serving up files over SMB/CIFS, so this will come in handy. I'll explain LGA 2011 vs. LGA 1151 below.

Motherboard: SuperMicro X10SRL-F (changed from X10SRA) -- The basic LGA 2011 server board. LGA 1151 would have worked, but the SC846 chassis doesn't take micro ATX boards and the full ATX versions of SuperMicro's LGA 1151 boards are like $500. LGA 2011 will also let me add more RAM (if I ever need it).

Memory: 64GB Crucial DDR4 in 4 sticks (ECC/Unbuffered) -- This will mostly be static storage, so 64GB should be plenty. As mentioned above, being on LGA 2011 will let me throw in another 4 sticks easily. I've read the cautionary tales about Kingston, so I'll probably go with Crucial, but I'm open to other vendors. Of course I'll check memory/mobo compatibility, etc., before I purchase.

HBA: IBM M1015 -- Flashed to IT mode, of course. I'll only need one for all 24 drives, as I'll be connecting the M1015 to the 846EL1 backplane, which has its own SAS expander chip. As I understand it, I'll effectively have a 24Gbit/sec link shared between all 24 drives, giving them 1Gbit/sec each. Considering I'll be connecting over 1Gbit/sec ethernet, this will be plenty.
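If anyone wants to check my math, here's the back-of-the-envelope version (a rough sketch; it assumes a single 4-lane SAS2 cable between the HBA and the expander and ignores protocol overhead):

```python
# Rough bandwidth math for one SFF-8087 cable between the M1015 and
# the 846EL1 expander backplane: 4 lanes of 6 Gbit/s SAS2.
lanes = 4
lane_speed_gbit = 6                    # SAS2 line rate per lane
drives = 24

link_gbit = lanes * lane_speed_gbit    # 24 Gbit/s aggregate
per_drive_gbit = link_gbit / drives    # worst case: all drives busy at once

print(f"Aggregate link: {link_gbit} Gbit/s")
print(f"Per-drive share: {per_drive_gbit:.1f} Gbit/s")
# Gigabit Ethernet tops out at 1 Gbit/s, so the expander link shouldn't
# be the bottleneck for network file serving.
```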

Replacement PSUs: 2x SuperMicro PWS-920P-SQ (changed from Corsair HX1000i) -- The PSUs that come in the SC846 are apparently very loud. I want something quiet, but I was advised to stick with redundant PSUs, so I'll get a pair of PWS-920P-SQs instead.

Replacement Backplane: SuperMicro BPN-SAS2-846EL1 -- All the SC846s I've seen on eBay come with an older backplane (BPN-SAS-846EL1) that doesn't have SAS2 support. Without SAS2, apparently the maximum capacity of the array would be limited and/or it would only recognize drives up to 2 or 3TB; I'm not totally clear on all this. The solution is this SAS2-capable backplane. Another possible option is the BPN-SAS-846A backplane, which appears to be a SAS breakout cable baked into a PCB. It doesn't have its own SAS expander chip, so it gets around the above SAS/SAS2 limitation. I would, however, need three M1015s, so I'm not sure there would be any advantage to this option. The 846A backplanes are also like $450 on eBay. Doesn't seem worth it to me.

Boot device: 2x Intel 40GB 320 Series SSD (changed from an 8GB USB 3.0 drive) -- I was advised to go with SSDs over USB drives for reliability.

UPS: APC 1500VA (used from eBay) -- Cheap on eBay. Does NUT play nice with APC UPSes, or do I have to get the APC drivers loaded?

vdevs 1 & 2: 8x 8TB WD Red in RAID-Z2 (changed from 12-wide RAID-Z3) -- As soon as these hit $300, I'm ready to go. I was talked out of doing a 12-wide vdev, so I'll go 8-wide and drop to Z2.

vdev3: 8x 4TB WD Red in RAID-Z2 -- I already have 8x 4TB WD Red drives, so I'll just use those for the final vdev.

That's about everything. I'm going to replace the fans in the chassis, but that's no biggie. I'll also get a cheap rack on Craigslist and mount this and the UPS in it. Again, thoughts and feedback would be greatly appreciated!
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Boot device: 8GB USB3.0 drive -- Can I use USB 3.0 here, or should I use USB 2.0? It probably doesn't even matter and I guess I should use USB 2.0 for guaranteed compatibility...

Given the quality of the build I'd recommend an SSD for the boot device, far more reliable than even a mirror of USB sticks ;)

Please note that with the 80% rule you'll have about 80 TiB usable, not 100.
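Here's the arithmetic, if you want it (a sketch assuming your revised layout of 2x 8-wide Z2 with 8TB drives plus 1x 8-wide Z2 with 4TB drives; ZFS metadata and RAID-Z padding will shave off a few more TiB):

```python
# Usable-capacity estimate: drive vendors sell in TB (10^12 bytes),
# but pools are reported in TiB (2^40 bytes).
TB, TiB = 10**12, 2**40

# (disks, disk size in bytes, parity disks) per vdev
vdevs = [(8, 8 * TB, 2), (8, 8 * TB, 2), (8, 4 * TB, 2)]

data_bytes = sum((disks - parity) * size for disks, size, parity in vdevs)
print(f"Data capacity: {data_bytes / TiB:.0f} TiB")            # ~109 TiB
print(f"Usable at 80% full: {0.8 * data_bytes / TiB:.0f} TiB") # ~87 TiB before overhead
```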
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I'd wait to see what comes in your enclosure. And I'd keep the redundant PSUs as well.

philhu

Patron
Joined
May 17, 2016
Messages
258
Your vdevs should never exceed 11 disks....

I bought an X8DTN2/SC847/LSI IT-mode card setup with front and back SAS2 backplanes and redundant power supplies, and upgraded it to dual 2.8GHz quad-core Xeons with 48GB of memory. It uses 330 watts of power fully loaded! Retiring my old Dell 2950, which used 1770 watts!

vdev0 is 11x 4TB, vdev2 is 11x 6TB. The system holds 36 drives, so I will add another 11x 6TB (or 8TB if the price comes down) at the end of the year. Both vdevs are RAID-Z3, I have an SSD boot device, and a 16GB ZIL SSD.

Whole thing was $700
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Looks like you've got a pretty good plan. I'd recommend the boot SSD (mirrored if possible) as well (I'm slowly migrating my systems that way).

Are you sure the SC846 will take an ATX PSU, or are you planning to do some mods? The stock redundant 900 watt PSUs in my 847 are pretty quiet (especially compared to the system fans running full tilt).

Also, what are your plans for fan replacements? I tried this and failed (couldn't move enough air and drives quickly heated up) and have a bunch sitting in a cardboard box.

One thing to consider is just getting a JBOD chassis (like the 45 drive SC847 in my signature) and hooking it up to whatever server you want.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Curious as to why you would choose a workstation board for a largish build such as this.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
I'd wait to see what comes in your enclosure. And I'd keep the redundant PSUs as well.
I'll definitely check what comes in the enclosure. I can also get the -SQ redundant PSUs from SuperMicro, about $400-500 for a pair. Maybe that's a better choice than the ATX PSU...
Your vdevs should never exceed 11 disks....
They can exceed 11 disks; there's no hard limit on how many disks you can have, it's just a question of how much risk you want to pack into a single vdev. I did a lot of calculations comparing 3x Z2 vdevs vs. 2x Z3 vdevs, and Z3 came out on top.
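Here's the shape of that comparison, for reference (a rough sketch over 24 equal drives; it ignores RAID-Z padding, resilver times, and IOPS):

```python
# 24 drives carved up two ways: capacity is identical, so the
# difference comes down to failure tolerance per vdev.
layouts = {
    "3x 8-wide RAID-Z2": [(8, 2)] * 3,    # (disks, parity) per vdev
    "2x 12-wide RAID-Z3": [(12, 3)] * 2,
}

for name, vdevs in layouts.items():
    data_disks = sum(d - p for d, p in vdevs)
    tolerance = min(p for _, p in vdevs)   # failures any one vdev survives
    print(f"{name}: {data_disks} data disks, each vdev survives {tolerance} failures")
```

Same 18 data disks either way, which is why the extra parity disk per vdev tipped me towards Z3.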
Looks like you've got a pretty good plan. I'd recommend the boot SSD (mirrored if possible) as well (I'm slowly migrating my systems that way).

Are you sure the SC846 will take an ATX PSU, or are you planning to do some mods? The stock redundant 900 watt PSUs in my 847 are pretty quiet (especially compared to the system fans running full tilt).

Also, what are your plans for fan replacements? I tried this and failed (couldn't move enough air and drives quickly heated up) and have a bunch sitting in a cardboard box.

One thing to consider is just getting a JBOD chassis (like the 45 drive SC847 in my signature) and hooking it up to whatever server you want.
SSDs (maybe even mirrored) will probably happen. An ATX PSU in the SC846 would definitely require some work, but I'm leaning towards keeping redundant PSUs now (getting a pair of the -SQ ones). This thing will sit next to my desk, so noise is going to be a huge issue. I'm planning on replacing the middle fans with a set of 3 120mm Noctuas. I'll need to remove the existing fan wall, secure the new fans with zip ties, and use strips of rubber for vibration dampening. I'll also replace the rear 80mm fans with Noctua 80mms.
Curious as to why you would choose a workstation board for a largish build such as this.
Didn't realize the X10SRA was a workstation board. Does the X10SRL-F look like a better option? http://www.supermicro.com/products/motherboard/Xeon/C600/X10SRL-F.cfm
 

philhu

Patron
Joined
May 17, 2016
Messages
258
vdevs can go higher. But the guy who wrote ZFS says 11 SHOULD be the max, since it's optimized for up to 11. The fact that it lets you go higher doesn't make it right.

That said, 12 is probably fine; 17 is probably too much ;)

About SSD vs. USB: USB sticks are notorious for just failing. I redid my system from USB to mirrored SSD boot drives and it sped up my boot by almost 2x. If you get SLC-type SSDs, the wear and tear on the drives should be negligible.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
They can exceed 11 disks; there's no hard limit on how many disks you can have, it's just a question of how much risk you want to pack into a single vdev. I did a lot of calculations comparing 3x Z2 vdevs vs. 2x Z3 vdevs, and Z3 came out on top.
Might want to do some more research on this. Some of the more knowledgeable members here such as @cyberjock and @jgreco have experience in this area and do not recommend vdevs as wide as you are proposing.
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
Your vdevs are of different sizes, so the one with the 8TB drives will be getting 2/3rds of the work.
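Roughly how that splits (a sketch assuming ZFS's free-space-proportional allocation and the three-vdev layout as listed, starting from an empty pool):

```python
# ZFS spreads new writes across vdevs roughly in proportion to free
# space, so unequal vdevs see unequal load. Data capacity per vdev in TB:
vdevs = {"vdev1 (8x 8TB Z2)": 48, "vdev2 (8x 8TB Z2)": 48, "vdev3 (8x 4TB Z2)": 24}

total = sum(vdevs.values())
for name, tb in vdevs.items():
    print(f"{name}: ~{tb / total:.0%} of new writes")
# The two 8TB vdevs absorb ~80% of writes between them, and the skew
# changes as the vdevs fill at different rates.
```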

Also, this will only be 100TB of usable storage if it's either extremely dormant data or you don't mind it being super slow due to fragmentation.

I would really try to avoid switching out a redundant server PSU for a workstation one unless you don't mind the problems that could come with it.

What could you possibly need this much data storage for where you don't, say, have a data room where the noise isn't a big issue anyway?
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
Might want to do some more research on this. Some of the more knowledgeable members here such as @cyberjock and @jgreco have experience in this area and do not recommend vdevs as wide as you are proposing.
I've done a lot of research on this, spoken to knowledgeable folks and understand the limitations of wide vdevs. This will be a personal media server, so I'm not worried about the performance issues and I'm willing to accept the fact that it will take a long time to scrub.
Your vdevs are of different sizes, so the one with the 8TB drives will be getting 2/3rds of the work.

Also, this will only be 100TB of usable storage if it's either extremely dormant data or you don't mind it being super slow due to fragmentation.

I would really try to avoid switching out a redundant server PSU for a workstation one unless you don't mind the problems that could come with it.

What could you possibly need this much data storage for where you don't, say, have a data room where the noise isn't a big issue anyway?
It's a personal media storage server, so yes, dormant data (and a lot of it, obviously). Noise is an issue because (for now at least) it's going to be in my home office. I've been talked out of the ATX PSU; I'll get quieter redundant ones.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Might want to do some more research on this. Some of the more knowledgeable members here such as @cyberjock and @jgreco have experience in this area and do not recommend vdevs as wide as you are proposing.

Cyberjock's a lot more absolutist. I've got more of a flexible view, but I have to say that as the width increases, the performance tanks. Quickly. Plus possible difficulties rebuilding as someone else mentioned, etc., etc., etc.

The fact that you *can* do something doesn't make it a good idea.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
We're only talking about 12 here... that's only 1 over cyberjock's suggested limit. I'm ok with it, but I do appreciate the input. If it starts to suck, I'll figure out how to migrate the data off and rebuild with smaller vdevs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I've done a lot of research on this, spoken to knowledgeable folks and understand the limitations of wide vdevs. This will be a personal media server, so I'm not worried about the performance issues and I'm willing to accept the fact that it will take a long time to scrub.

That's the least of your problems. I did a 16 wide zpool and I couldn't even stream a single movie (and that was the only workload on the server). So yeah, even in your use case, I wouldn't be doing what you are doing. But to each their own. It'll just suck if you put 100TB of data on it, realize it can't do the one thing you want it to do, and have to do *something* with 100TB of data.
 

melp

Explorer
Joined
Apr 4, 2014
Messages
55
That's the least of your problems. I did a 16 wide zpool and I couldn't even stream a single movie (and that was the only workload on the server). So yeah, even in your use case, I wouldn't be doing what you are doing. But to each their own. It'll just suck if you put 100TB of data on it, realize it can't do the one thing you want it to do, and have to do *something* with 100TB of data.
Well, streaming movies will be a primary use case, so if you think the vdev config wouldn't allow that, then I'll have to rethink this. How would you configure 12 8TB drives and 12 4TB drives? 4 Z2 vdevs of 6 drives each?
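For reference, here's the raw capacity math on those two options (a sketch; it ignores the 80% rule and RAID-Z padding):

```python
# 12x 8TB + 12x 4TB drives, two possible layouts.
options = {
    "4x 6-wide RAID-Z2": [(6, 2, 8)] * 2 + [(6, 2, 4)] * 2,  # (disks, parity, TB/disk)
    "2x 12-wide RAID-Z3": [(12, 3, 8), (12, 3, 4)],
}

for name, vdevs in options.items():
    data_tb = sum((disks - parity) * size for disks, parity, size in vdevs)
    print(f"{name}: {data_tb} TB data capacity")
# 4x 6-wide Z2 -> 96 TB, 2x 12-wide Z3 -> 108 TB: the narrower vdevs
# trade ~12 TB of raw capacity for faster resilvers and better IOPS.
```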
 
