BUILD 24 Drive Build Check & Some Questions


pmccabe
Dabbler | Joined: Feb 18, 2013 | Messages: 18

Hi

My current FreeNAS build is nearing capacity, and it's time to upgrade to something that will last me well into the future. The main uses for this server are running Plex and Transmission, plus acting as a backup target for another three laptops. Oh, and I never seem to delete anything.

I have the following components picked out and want to make sure I'm not missing anything.

Case: Supermicro SuperChassis SC846BE16-R920B
http://www.pc-canada.com/item/CSE%2D846BE16%2DR920B.html

MOBO: SUPERMICRO MBD-X10SRL-F
http://www.newegg.ca/Product/Produc...re=Supermicro_X10SRL-F-_-13-182-927-_-Product

CPU: Intel Xeon E5-2620 v3 Haswell
http://www.newegg.ca/Product/Produc...17480&cm_re=Xeon_E5_V3-_-19-117-480-_-Product

RAM: Samsung M393A2G40DB0-CPB 16GB DDR4-2133 ECC Registered x4 = 64GB
http://www.atic.ca/index.php?page=newsearch&searchterm=M393A2G40DB0-CPB&x=37&y=3

RAID Card: IBM ServeRAID M1015

Hard Drives:
8x WD Red 6TB
8x WD Red 3TB - **Existing**
8x bays reserved for future expansion

=============================================================
Some questions:

1. I currently have a 6-drive RAIDZ2 pool of 3TB drives; however, to get to 24 drives, I would ideally like to have a pool with three 8-drive RAIDZ2 vdevs. To do this, I was considering creating a new pool with one vdev of 8 drives using the new 6TB hard drives, copying over the data from my existing pool, and then blowing the old pool away. I would then add two more 3TB drives and create a second vdev to add to the new pool.

Does this make sense? Or would I be better off just adding two new 9-drive RAIDZ2 vdevs to the existing pool?

2. With this case, I believe I connect all these drives to a single M1015; is that correct?

3. What about power connections for 24 drives? Do I need any SATA power splitters or something of the like?

Thanks
 

danb35
Hall of Famer | Joined: Aug 16, 2011 | Messages: 15,504

1. I currently have a 6-drive RAIDZ2 pool of 3TB drives; however, to get to 24 drives, I would ideally like to have a pool with three 8-drive RAIDZ2 vdevs. To do this, I was considering creating a new pool with one vdev of 8 drives using the new 6TB hard drives, copying over the data from my existing pool, and then blowing the old pool away. I would then add two more 3TB drives and create a second vdev to add to the new pool.
This will work fine. If you use ZFS replication properly, you can end up with the new pool having the same name as the old pool, with all your jails, shares, etc. preserved. Here's an earlier post of mine with an overview of how it would work, and a link to more detail on the replication: https://forums.freenas.org/index.php?threads/mirror-to-raidz2.25908/#post-163322
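To give a rough idea of what the replication step looks like, here is a minimal sketch from the shell, assuming the old pool is named tank and the new 8-drive pool is created temporarily as newtank (both names are placeholders; the linked post covers the full procedure):

Code:
# snapshot the entire old pool recursively
zfs snapshot -r tank@migrate

# send everything (child datasets, properties) into the new pool
zfs send -R tank@migrate | zfs receive -F newtank

# once you've verified the copy, destroy the old pool and rename the new one
# zpool destroy tank
# zpool export newtank
# zpool import newtank tank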
2. With this case, I believe I connect all these drives to a single M1015; is that correct?
Correct. You could instead use an X10SL7-F motherboard, no M1015, and a reverse breakout cable from four of the SAS ports on the motherboard to the backplane.
3. What about power connections for 24 drives? Do I need any SATA power splitters or something of the like?
No, the backplane handles that.
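Once it's all cabled up, a quick sanity check from the FreeNAS shell (generic FreeBSD commands, nothing specific to this chassis) will confirm that every bay is visible through the expander backplane:

Code:
# list every disk the HBA and expander present to the OS
camcontrol devlist

# shorter summary of detected disk device names
sysctl kern.disks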

You may want to look and see if you can find that chassis on eBay; I managed to save about two-thirds of the cost of mine that way. This one looks almost identical, though it uses the EL1 backplane rather than the BE16. I don't remember the difference off the top of my head, but @jgreco's "SAS-sy" thread covers it, I think.
 

pmccabe
Dabbler | Joined: Feb 18, 2013 | Messages: 18

Thanks for the response. I will certainly try out ZFS replication to move my data over to my new pool.

As for your other suggestions... funny thing is that those are the only two components I have already pulled the trigger on. Oh well.
 

Ericloewe
Server Wrangler, Moderator | Joined: Feb 15, 2014 | Messages: 20,194

Why would you want a DP Xeon with a UP motherboard? The E5-1650 v3 might be a better choice.

Instead of buying a separate HBA, consider the X10SRH-CF, which already has an LSI SAS 3008 controller onboard. Though, keep in mind that the SAS 3 stuff (LSI SAS 3008) isn't yet as mature as the SAS 2 stuff (LSI SAS 2008/2308 - like the M1015).
 

pmccabe
Dabbler | Joined: Feb 18, 2013 | Messages: 18

Hi Ericloewe

I'm assuming DP means dual processor... I didn't even realize that was the case for this CPU. After some quick searching, I'm unable to find an E5-1650 v3 in stock anywhere that ships to Canada. I can find the Xeon E5-1620 v3, but I was hoping for six cores.

Will this CPU work in this board?
 

Ericloewe
Server Wrangler, Moderator | Joined: Feb 15, 2014 | Messages: 20,194

Any LGA 2011-3 processor will work, no problem. It's just that the DP Xeons tend to be a lot more expensive than the UP ones.

If you're OK with the rather low clocks (2.4 GHz), the E5-2620 v3 is a decent choice (cheaper than the 1650, according to ARK).
 

pmccabe
Dabbler | Joined: Feb 18, 2013 | Messages: 18

Yeah, the lower clocks actually work for me, as this is primarily a media storage server that will serve up multiple transcodes via Plex. I would imagine 2.4 GHz is enough for this type of workload.
 

marbus90
Guru | Joined: Aug 2, 2014 | Messages: 818

For plain storage you'd want the E5-1620 v3 due to its higher single-threaded speed; however, Plex likes more cores. How many simultaneous transcoding sessions are you planning on?

Re pool layout: my recommendation would be 4x 6-disk RAIDZ2 or 2x 11-disk RAIDZ3, especially since you want to store media files on there, which don't compress well. Without compression it's better to stick to the recommended vdev layouts of 2^n data drives plus parity. Anyway, if you're buying the two additional disks new, go for 6TB and replace the 3TB drives later on with 6TB ones; that vdev will then autoexpand.
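If it helps, a minimal sketch of that autoexpand path from the shell, assuming a pool named tank and placeholder device names ada3 (old 3TB disk) and ada8 (new 6TB disk); FreeNAS can also do the same thing from the GUI:

Code:
# let the vdev grow automatically once all of its members are bigger
zpool set autoexpand=on tank

# swap one 3TB disk for a 6TB disk and let the resilver complete
zpool replace tank ada3 ada8
zpool status tank    # wait for the resilver to finish before the next swap

# repeat for the remaining 3TB disks; the extra capacity appears after the last one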
 

pmccabe
Dabbler | Joined: Feb 18, 2013 | Messages: 18

How many simultaneous transcoding sessions are you planning on?
The typical number of transcodes would be 1-2; however, I share my server with a few friends, so it could be as high as 4-5 some days. The average would be 3-4, I suppose.

Re pool layout: my recommendation would be 4x 6-disk RAIDZ2 or 2x 11-disk RAIDZ3, especially since you want to store media files on there, which don't compress well. Without compression it's better to stick to the recommended vdev layouts of 2^n data drives plus parity.

Yeah... I went back and forth on this one quite a bit. I was originally considering a 4x 6-disk RAIDZ2, but then I'm giving up a total of 8 of the 24 drives to parity. Going with 2x 11-disk RAIDZ3 is basically the same thing, since I'd have 2 empty drive bays on top of 6 parity drives. So, to maximize storage while keeping enough redundancy, I was leaning towards the 3x 8-disk RAIDZ2. My understanding was that by going against the recommended layout I would only be sacrificing some performance, which I could probably live with. Am I also sacrificing additional space by doing this? If so, roughly how much impact would it have?


Anyway, if you're buying the two additional disks new, go for 6TB and replace the 3TB drives later on with 6TB ones; that vdev will then autoexpand.
That's a good point, I will certainly do this.

cheers,
 

Ericloewe
Server Wrangler, Moderator | Joined: Feb 15, 2014 | Messages: 20,194

There were some suggestions that space might be lost in less-than-ideal vdevs, but I've never seen the supposed mechanism explained and nobody has really complained yet.
 

marbus90
Guru | Joined: Aug 2, 2014 | Messages: 818

The difference between 3x 8-disk and 4x 6-disk RAIDZ2 would be 98TiB vs. 87TiB of storage with 6TB disks. If you really think that bit matters, I'd advise looking into the 36-bay chassis of the 847 series (the 847BE16-R1K28LPB being the model). That would bump you to ~130TiB with 6-disk RAIDZ2 vdevs.
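(For anyone checking the math, those are just raw data-disk figures before ZFS overhead: a 6TB disk is 6x10^12 bytes, or roughly 5.46TiB, so a quick back-of-the-envelope check from any shell looks like this.)

Code:
# data disks per layout x 6TB per disk, converted to TiB (ignores metadata/slop)
echo "3x 8-disk z2: $(echo '3*(8-2)*6*10^12/2^40' | bc) TiB"   # ~98
echo "4x 6-disk z2: $(echo '4*(6-2)*6*10^12/2^40' | bc) TiB"   # ~87
echo "6x 6-disk z2: $(echo '6*(6-2)*6*10^12/2^40' | bc) TiB"   # ~130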

Also, did you consider the heatsink? Socket 2011-3 CPUs don't come with one as standard. The SNK-P0050AP4 would be for the 846 series chassis, the SNK-P0048AP4 for the 847 series.
 