BUILD Hardware selection - 1st FreeNAS build

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
Great, thanks! Just wanted to make sure there was no need for any bottom support from the chassis, unlike the LGA 771 sockets my Xeon CPUs used.

Once you get it fired up, I'd be curious to know how much power it consumes. My X7DWN+ with the X5492s pulls about 450 watts just sitting idle, and that's with a single PWS-920P-1R 94%+ efficiency power supply!
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Let's just say, if you take that X7-based system out of service, you should look for a heater first. Haswell-EP is already quite efficient.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The difference between X7 and current gen tech is *remarkable.*
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
@pclausen No hard drives installed at all yet; however, I've been running it the last couple of days with the board, CPU, cooler, 4 DDR4 modules, the 10GbE NIC, 5 chassis fans, 2 CPU cooler fans and 4 PSU fans. I've set the fan speed to HeavyIO, which sets the 5 chassis fans at a steady 4100 rpm, the 2 CPU fans at 900 rpm and the 4 PSU fans at an average of 4000 rpm.

At boot I noticed a full draw of 198W.
It quickly drops down to 90W without an OS loaded.
With Memtest86 running it pulled an average of 148W.
With FreeNAS installed on a single Samsung 16GB Cruzer flash drive, it draws an average of 106W at idle.

I have 4 x 4TB drives here now but not installed. I'm expecting 12 more later today, the M1015 tomorrow and the 2 64GB DOMs by Wednesday. Hopefully by the end of the week it'll be together with the OS installed and a 12-drive raidz2 running.

Edit: At idle the CPU is sitting at a chilly 30C and the system is at 18C. Should be interesting to see how that increases with 12 drives added.
 
Last edited:

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
Wow, that's amazingly low power consumption compared to my "heater". :)

When I run Prime95 for stress testing, my power consumption jumps a solid 120W, up to 570W. Similar consumption when transcoding to a handful of clients.

What power supplies do you have? Are they the high efficiency ones?

My CPUs idle at between 53-57C and the Micron RAM sits between 62-66C. :eek:

So you're doing 12 drives in a raidz2? I thought best practice was not to do more than 11 drives total in a raidz2?

I'd love to do 2 x 12 raidz2 in each of my 846 chassis, but was leaning towards 3 x 8 raidz2 per chassis given the best-practice advice. If a 12-disk raidz2 is considered safe though, I'm all over it.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
The max recommended is 12 per vdev. Personally I would even go to 15 or 16 max with RAID-Z3 :)

So you can do 2 x 12 without a problem but, of course, 3 x 8 will be a bit more reliable.
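For a rough sense of the trade-off, here's a back-of-the-envelope sketch in Python (raw capacity only, assuming 4TB drives and ignoring ZFS metadata, padding and the usual ~80% fill guideline):

Code:
# Rough raw-capacity comparison of the two 24-bay RAIDZ2 layouts with 4TB
# drives. Numbers are raw TB only; real usable space is lower once ZFS
# metadata, padding and the ~80% fill guideline are accounted for.

DRIVE_TB = 4  # assumed drive size in TB

def raidz2_usable_tb(vdevs, disks_per_vdev, drive_tb=DRIVE_TB):
    """Each RAIDZ2 vdev gives up 2 disks' worth of space to parity."""
    return vdevs * (disks_per_vdev - 2) * drive_tb

layouts = {
    "2 x 12-disk RAIDZ2": (2, 12),
    "3 x 8-disk RAIDZ2":  (3, 8),
}

for name, (vdevs, width) in layouts.items():
    usable = raidz2_usable_tb(vdevs, width)
    parity = vdevs * 2 * DRIVE_TB
    print(f"{name}: ~{usable} TB raw usable, {parity} TB spent on parity")

So 2 x 12 nets roughly 8 TB more raw space, while 3 x 8 spends an extra 8 TB on parity in exchange for an extra vdev's worth of redundancy.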
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
I'm initially doing a single 12-drive raidz2 with 4TB RED drives and 2 spares on hand. I was thinking about the other 12 bays, but after dropping $800 on the Netgear 10GbE switch I'm going to wait on those other drives. This will also let me evaluate how best to populate the rest later. I might do another 12-drive raidz2 or perhaps 2 mirrored 6-drive raidz2s. That's next year.

IIRC, my RAM sticks are sitting at 21C :)

I have the regular 1200 watt, 80 PLUS rated PSUs.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Well, now the question is: does a bread slice fit between two RAM sticks? Because if it does, then we can call his server "the toaster" :D
 
Last edited:

pclausen

Patron
Joined
Apr 19, 2015
Messages
267
Lol, it would be a little tight. Here's a pic of my relic X7 mobo in my 846 chassis. Look how crammed things are compared to DataKeeper's clean setup. The heat from the CPUs is blown across the RAM, so no wonder it's nice and toasty. :D Oh, and look at that nice ATA133 cable going to the CD-ROM...

argonmobo.JPG


Newegg had an open box X10SRL-F so I went ahead and pulled the trigger.

Maybe I'll save enough on my power bill to make back what I'm spending on the mobo, CPU and RAM within a pretty short time period. :D
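For what it's worth, here's a rough payback sketch in Python, assuming my ~450W idle versus an idle draw like DataKeeper's ~106W, 24/7 operation, and placeholder numbers for the electricity rate and upgrade cost (plug in your own):

Code:
# Very rough payback estimate for swapping a ~450W-idle X7 box for a
# ~106W-idle X10 build. The electricity rate and upgrade cost below are
# placeholder assumptions, not real quotes.

OLD_IDLE_W = 450          # measured idle draw of the X7DWN+ system
NEW_IDLE_W = 106          # DataKeeper's measured idle draw with FreeNAS booted
RATE_PER_KWH = 0.12       # assumed electricity rate in $/kWh
UPGRADE_COST = 800        # hypothetical total for mobo + CPU + RAM

hours_per_year = 24 * 365
saved_kwh = (OLD_IDLE_W - NEW_IDLE_W) * hours_per_year / 1000
saved_per_year = saved_kwh * RATE_PER_KWH

print(f"~{saved_kwh:.0f} kWh saved per year, roughly ${saved_per_year:.0f}/year")
print(f"Payback on a ${UPGRADE_COST} upgrade: ~{UPGRADE_COST / saved_per_year:.1f} years")

With those assumptions it works out to a couple of years, and that's before counting drive power either way.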
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Well... That's still cutesy.

Imagine a bladecenter... 16x8 DIMM slots, all filled with 2GB FB-DIMMs heating at 8W each... that's 1+kW just for 256GB of RAM in 10U :) The same amount of RAM and I/O can now be squeezed into 1/2-1U at 1/10th of the consumption.

By the way, you can just remove the DVD drive. IPMI has Virtual Media integrated, so you can mount ISOs as virtual CD drives, OS agnostic.
 
Last edited:

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
The extra red/yellow/silver wires going to the side drive and CD-ROM certainly add to the mess. I had the bracket and 2 SSDs in my cart due to the availability/pricing of the DOMs, but the DOMs won out partly because of the extra cabling the SSDs would have required. The DOMs are just the perfect way to go. That, and the fact that the DOMs still let me mount and use SSDs for a ZIL and/or ARC down the road if needed.

Question:
If I set up a virtualized ESXi server in a second system and use this FreeNAS system to house the data from a striped mirror of 2 x 6 drive raidz2, what's the best share option? Would iSCSI work best in this situation? Also, would the addition of a ZIL and/or ARC be required for this?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
a striped mirror of 2 x 6 drive raidz2

Pick one of the two. This phrase doesn't parse.

I can pick for you, though: go for the striped mirror if you want to store any significant amount of virtual disk data on it. iSCSI is fine, though NFS is sometimes easier. A SLOG is needed if you require guaranteed write consistency. Think: critical ecommerce VM, database VM, etc. If it's for a home lab, this isn't as important. Once you hit 64GB of RAM you can add a 240 or 256GB SSD for L2ARC without being likely to run into unexpected pain points.
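To put very rough numbers on the mirrors-for-VMs advice, here's a quick sketch in Python using the usual rule of thumb that a vdev delivers roughly one member disk's worth of random IOPS, and assuming ~150 IOPS per 7200rpm drive (both assumptions, not measurements):

Code:
# Ballpark random-IOPS comparison for a 12-drive pool laid out as striped
# mirrors vs. RAIDZ2 vdevs. Rule of thumb: each vdev contributes roughly
# the random IOPS of a single member drive; the per-disk figure is assumed.

IOPS_PER_DISK = 150  # assumed random IOPS for a 7200rpm SATA drive

pools = {
    "6 x 2-disk mirrors (striped mirrors)": 6,  # 6 vdevs
    "2 x 6-disk RAIDZ2": 2,                     # 2 vdevs
    "1 x 12-disk RAIDZ2": 1,                    # 1 vdev
}

for name, vdevs in pools.items():
    print(f"{name}: ~{vdevs * IOPS_PER_DISK} random IOPS (ballpark)")

Caching, recordsize and sync-write behavior (hence the SLOG) all move these numbers around, but it shows why striped mirrors are the usual answer for VM storage.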
 

Hobbel

Contributor
Joined
Feb 17, 2015
Messages
111
Question:
If I set up a virtualized ESXi server in a second system and use this FreeNAS system to house the data from a striped mirror of 2 x 6 drive raidz2, what's the best share option? Would iSCSI work best in this situation? Also, would the addition of a ZIL and/or ARC be required for this?

I tried ESXi with FreeNAS NFS but it didn't work for me. There are several posts around explaining why this is a bad combination. I run iSCSI with ESXi and FreeNAS, stored on striped mirrored vdevs. This runs fine for me and it's what was recommended by cyberjock.

The next system I build will be similar to yours and I will go with striped mirrored vdevs, shared via iSCSI (MPIO, round-robin) to the ESXi host.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The max recommended is 12 per vdev. Personally I would even go to 15 or 16 max with RAID-Z3 :)

So you can do 2 x 12 without a problem but, of course, 3 x 8 will be a bit more reliable.

Uhh, the max recommended is 9 for RAIDZ1, 10 for RAIDZ2, and 11 for RAIDZ3. No clue where you (and others) keep getting the 12 from. I did a RAIDZ3 of 16 disks. Better to fscking die than *ever* do that to myself again.

No, if you are doing more than 11 disks in a vdev, you totally deserve the performance nightmare you are creating for yourself when you start putting data on that pool.

That's all I'm gonna say about that.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I'm almost sure I've seen the 12 in one of the stickies, or in one of the big threads that isn't a sticky.

Now I'm curious: more spindles = more BW, no?
 

TXAG26

Patron
Joined
Sep 20, 2013
Messages
310
@DataKeeper - Really appreciate the updates on your build. I've been planning an E5-2600 & X10SRH-CLN4F (4x GbE i350 + LSI 3008) build for a while now, as the 32GB hard RAM limit on the X10SL7-F just doesn't cut it anymore for the additional VMs I need to run. I am hoping that by moving up to the E5 with an 8-core Xeon and 8x RAM slots I will be able to keep this machine around a lot longer than the prior one.

Had a couple of questions:

1) Have you tried running the free ESXi 5.5 or ESXi 6.0 hypervisors on your X10SRL-F yet?

2) Was there ever a consensus on how much of a hit these new E5-based v3 systems take when only two of the four DDR4 channels are populated? I am really leaning towards two of those 32GB LRDIMMs on an X10SRH-CLN4F w/E5-2630 v3. 64GB of RAM would definitely be enough until DDR4 prices start to come down, and I would hope that even without interleaving it would still be at least as fast as the DDR3 on my X10SL7-F. At this point, unless I'm missing something, with 2x 32GB LRDIMMs being only 10% more than 4x 16GB RDIMMs, the LRDIMMs seem like the logical step long-term, right?
 

DataKeeper

Patron
Joined
Feb 19, 2015
Messages
223
I received the hard drive orders today, so I have 14 new 4TB drives and 2 6TB drives sitting in a box. The drives were all ordered from Amazon across a couple of orders. The DOMs shipped out 2-Day Priority this afternoon, so they should be here Wed/Thu. Received the M1015 card today, finally!

Might pull out the label machine tonight with a movie and print off a bunch of serial labels.

@TXAG26 No, and not planning to. I picked up a smaller 1U 4-bay chassis which I'll be using for an ESXi system, but I haven't even really begun to think about hardware for that yet. I'm unsure if I'll use the NAS server to provide disk space or simply store backups. Regarding the RAM: I gave LRDIMMs some consideration, however after receiving information earlier in this thread and directly from Supermicro I decided on the 4x16GB RDIMMs instead, with the ability to add another 4 RDIMMs later if needed. You really want to install either 4 or 8 RAM modules on this board. Installing 2 would be OK for an initial install, but adding 2 more very soon afterward is strongly recommended. Supermicro support said the performance hit of only running 2 LRDIMMs would be significant and that 4 RDIMMs would be preferable. Also, when looking at 4x32GB modules you're suddenly looking at ~$1550.00 for LRDIMMs! A bit more than 10%! :eek:
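For rough context on why populating all four channels matters, here's a quick sketch of theoretical peak memory bandwidth with 2 vs 4 channels filled. It assumes a DDR4-2133 transfer rate purely for illustration; substitute whatever speed your CPU and DIMMs actually run at, and remember real-world impact depends on the workload.

Code:
# Theoretical peak memory bandwidth with 2 vs. 4 DDR4 channels populated.
# The transfer rate is an assumed example value; each channel is 64 bits
# (8 bytes) wide. Treat this as an upper bound, not a benchmark.

MT_PER_S = 2133          # assumed DDR4 transfer rate (million transfers/s)
BYTES_PER_TRANSFER = 8   # 64-bit channel

per_channel_gbs = MT_PER_S * BYTES_PER_TRANSFER / 1000  # GB/s per channel

for channels in (2, 4):
    print(f"{channels} channels populated: ~{channels * per_channel_gbs:.0f} GB/s theoretical peak")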
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I'm almost sure I've seen the 12 in one of the stickies, or in one of the big threads that isn't a sticky.

Now I'm curious: more spindles = more BW, no?
If I'm reading this article correctly, then:
  • For streaming read performance, more disks is generally better.
  • For random IOPS, fewer disks is generally better.
So, it depends on your application.

"To summarize: Use RAID-Z. Not too wide. Enable compression."
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710