First post, first FreeNAS build.

Status
Not open for further replies.

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
Hello all, I'm going to be taking the plunge into NAS building and decided on a FreeNAS build. I'm an experienced PC builder, and I recently built a pfSense box using the Supermicro C2758 board; it's worked wonderfully when I don't break it by fiddling. Goals: build a functional NAS, starting small and adding to it as time goes on. Secondary goals: tinker, break, fix, repeat. File server? Minecraft server? Learn how to make thin clients? Stream music and video?
Board: Supermicro X10SLL-F, I've never had a problem with Supermicro boards.
Processor: i3-4170, mainly for the lower power consumption and cheaper price. I can move up to a Xeon if I find I need it, and I have a place for the i3 if it gets cut from NAS duty.
Memory: 16GB Kingston KVR16E11/8 to start, another 16GB when I expand the storage.
Boot: 64GB SATA DOM
Drives: 3x 1TB WD Red SATA III drives in RAIDZ1. I know it's not much, but I plan on adding another 3 later and another 3 after that.
Controller: ServeRAID M1015 flashed to IT mode for flexibility. I'll populate this card first and then move on to fill the motherboard SATA ports.
NIC: Intel X540-T2. Here is something I need to research, and honestly I will drop it if the rewards do not outweigh the expense and time I'm willing to put in. I want my PC, my pfSense box, and my NAS linked with these 10GbE NICs. What I need to look into is whether I could do something like WAN -> pfSense -> NAS -> PC -> switch -> rest of network. The pfSense box, NAS, and PC would be linked directly with Cat6 cables, but to be honest networking is still a bit of voodoo for me. Basically, I want the three most important components to have 10GbE connections without spending thousands on three NICs and a 10GbE switch.
The PSU will be a simple 400W ATX supply, and I already have a UPS protecting my PC and pfSense box. Wow, sorry for the huge first post. I'll take any comments or suggestions. Thanks for reading!
 
Joined
Jan 7, 2015
Messages
1,155
That board and memory combo is toxic. Up to 16GB seems to work OK, but anything more and the memory fails and causes loads of problems. I'd go with something different; there are plenty of forum posts relating to it. There is a guy on eBay who sells memory specific to Supermicro boards. It is Micron memory and is top quality as far as I can tell. It also comes in about $40 cheaper than anything else I could find. I paid $170 for 32GB for the very same board.
 
Joined
Apr 9, 2015
Messages
1,258
OK, the 3 HDDs will probably just barely saturate the gigabit network, so you can hold off on 10G Ethernet for now. Also, if you are going to add more drives and expand the RAIDZ1 pool, your chance of failure compounds with each additional vdev. https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/ Basically, one vdev with a one-drive fault tolerance, then two vdevs each with a one-drive fault tolerance, then three vdevs each with a one-drive fault tolerance. The more vdevs, the higher the chance the whole pool will die.
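A quick sketch of that math (assuming, purely for illustration, that each vdev fails independently with a fixed probability over a given period; the 5% figure is hypothetical, not a real drive statistic):

```python
# Toy model of "more vdevs = more pool risk". ASSUMPTION: each vdev fails
# independently with probability P_VDEV over some period, and losing any
# single vdev loses the entire pool.
P_VDEV = 0.05  # made-up illustrative number

def pool_failure_probability(n_vdevs: int, p_vdev: float = P_VDEV) -> float:
    """Pool dies if at least one vdev dies: 1 - P(all vdevs survive)."""
    return 1 - (1 - p_vdev) ** n_vdevs

for n in (1, 2, 3):
    print(n, "vdev(s):", round(pool_failure_probability(n), 4))
```

The risk never quite doubles per vdev, but it climbs monotonically, which is the point being made above.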

Kingston RAM is a black sheep; they started reusing part numbers for different RAM. It may work or it may not, and later on you may buy the same part number to expand and it may or may not match. https://forums.freenas.org/index.php?threads/i-have-stopped-buying-kingston-as-of-today.26977/ Samsung and Hynix, as well as some others, are much more highly recommended over Kingston.

The CPU should be OK for now, but if you use Plex and transcode, as well as run some other jails, it might become anemic. https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-

The PSU is generally recommended to be a Seasonic; buy cheap, you lose cheap, and then you may lose your data too. 400 watts should be OK for the initial setup, though: 55 W for the CPU, 60 W for the motherboard, 10 W for each of the drives, 30 W for case fans, and I will guess at 15 W for the add-on NIC, 5 W per stick of RAM, plus 15 W for the HBA. My guess is around 100 W at idle and closer to 235 W under full transcode with the initial config. Add 10 W per drive added down the road, plus a swap to a Xeon, and the PSU will be stretched.
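A straight sum of those per-component guesses (these are the post's rough estimates, not measured numbers) shows how much 400 W headroom is left:

```python
# Tally the rough per-component wattage guesses from the post.
watts = {
    "cpu": 55,
    "motherboard": 60,
    "drives": 3 * 10,    # 10 W per spinning drive, 3 drives to start
    "case_fans": 30,
    "10g_nic": 15,
    "ram": 2 * 5,        # 5 W per stick, 2x 8 GB sticks
    "hba": 15,
}
total = sum(watts.values())
print("estimated peak draw:", total, "W")        # comfortably under 400 W
print("with 6 more drives:", total + 6 * 10, "W")
```

Even fully populated with nine drives, the estimated draw stays under 400 W; it's the Xeon swap plus drive additions together that start eating the margin.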
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
Have you decided on the chassis? If you plan to add 6 more disks, you are probably looking for something with 10+ bays. As you are:

an experienced PC builder

check eBay for old Supermicro servers to reuse the case, PSU, backplane, etc.

3x 1TB WD Red SATA III drives in RAIDZ1

Consider RAIDZ2 ... you might need more disks to get the available space you're looking for, but your data will be safer.
 
Joined
Apr 9, 2015
Messages
1,258
Just be careful with chassis from eBay: if they only support the earlier versions of SAS, they will only work with smaller drives, and even then they have issues. I would suggest, though, that it may be cheaper to just buy a whole used system with board, CPU, and RAM in a 3U or 4U case and then add the drives. It may not be the same board, but probably close, and you may even find one with an onboard SAS controller that can be flashed to IT mode.
 

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
That board and memory combo is toxic. Up to 16GB seems to work OK, but anything more and the memory fails and causes loads of problems. I'd go with something different; there are plenty of forum posts relating to it. There is a guy on eBay who sells memory specific to Supermicro boards. It is Micron memory and is top quality as far as I can tell. It also comes in about $40 cheaper than anything else I could find. I paid $170 for 32GB for the very same board.
Thank you, I had thought I scoured the forum but must have missed those posts. I'll look into different ram.
 
Last edited:

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
OK, the 3 HDDs will probably just barely saturate the gigabit network, so you can hold off on 10G Ethernet for now. Also, if you are going to add more drives and expand the RAIDZ1 pool, your chance of failure compounds with each additional vdev. https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/ Basically, one vdev with a one-drive fault tolerance, then two vdevs each with a one-drive fault tolerance, then three vdevs each with a one-drive fault tolerance. The more vdevs, the higher the chance the whole pool will die.

Kingston RAM is a black sheep; they started reusing part numbers for different RAM. It may work or it may not, and later on you may buy the same part number to expand and it may or may not match. https://forums.freenas.org/index.php?threads/i-have-stopped-buying-kingston-as-of-today.26977/ Samsung and Hynix, as well as some others, are much more highly recommended over Kingston.

The CPU should be OK for now, but if you use Plex and transcode, as well as run some other jails, it might become anemic. https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-

The PSU is generally recommended to be a Seasonic; buy cheap, you lose cheap, and then you may lose your data too. 400 watts should be OK for the initial setup, though: 55 W for the CPU, 60 W for the motherboard, 10 W for each of the drives, 30 W for case fans, and I will guess at 15 W for the add-on NIC, 5 W per stick of RAM, plus 15 W for the HBA. My guess is around 100 W at idle and closer to 235 W under full transcode with the initial config. Add 10 W per drive added down the road, plus a swap to a Xeon, and the PSU will be stretched.

OK, I thought I'd worked out the ZFS system. So start with a 5-disk RAIDZ1 and add another 5 disks later? Or a 5-disk RAIDZ2 with 2 drives of tolerance and 3TB of space...
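For back-of-the-envelope sizing, RAIDZ usable space is roughly (disks minus parity) times per-disk size, before ZFS metadata and overhead; a quick sketch:

```python
# Rule of thumb: usable space ~ (number of disks - parity disks) * disk size,
# ignoring ZFS metadata/overhead.
def raidz_usable_tb(n_disks: int, parity: int, disk_tb: float) -> float:
    return (n_disks - parity) * disk_tb

print(raidz_usable_tb(5, 1, 1))  # 5-disk RAIDZ1, 1 TB drives -> 4
print(raidz_usable_tb(5, 2, 1))  # 5-disk RAIDZ2 -> 3, with two-drive tolerance
```

So the trade in the question above is 1 TB of raw space for a second drive of fault tolerance.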

The CPU will be upgraded when I upgrade the motherboard, and I have a spot for the i3. Got it on the Seasonic. Great help, thanks a ton.
 

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
Have you decided on the chassis? If you plan to add 6 more disks, you are probably looking for something with 10+ bays. As you are:



check eBay for old Supermicro servers to reuse the case, PSU, backplane, etc.



Consider RAIDZ2 ... you might need more disks to get the available space you're looking for, but your data will be safer.

I was looking at the Fractal Design Define Mini. It has 6 drive bays, and you can put adapters in the 5.25" bays. I would have to upgrade later, but by then I might be ready for a rebuild. I was looking at used 10GbE switches on eBay; it wouldn't be too much bother to look at chassis as well :). I hear you on the RAIDZ2.
 

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
Just be careful with chassis from eBay: if they only support the earlier versions of SAS, they will only work with smaller drives, and even then they have issues. I would suggest, though, that it may be cheaper to just buy a whole used system with board, CPU, and RAM in a 3U or 4U case and then add the drives. It may not be the same board, but probably close, and you may even find one with an onboard SAS controller that can be flashed to IT mode.
You had me at cheaper. :) I haven't really hit eBay since I taught myself how to make vacuum tube amps.
 
Joined
Apr 9, 2015
Messages
1,258
You had me at cheaper. :) I haven't really hit eBay since I taught myself how to make vacuum tube amps.

I truthfully built two full FreeNAS boxes. One is at my place, the other is at my father's. Both are eBay builds. I bought a 1U rack server with a dual-CPU board, some RAM, two CPUs, coolers, a PSU, and four bays; honestly, I wanted it for the board's onboard LSI HBA, and it replaced a board without one. Both systems have 48GB of RAM and can max out at 192GB. I think I paid about $180 plus shipping for the 1U, and I had paid $80 for the board I swapped out.

As far as using RAIDZ1, all I can say is no, no, no. Unless you have great backups (note: great, not good) and want to waste time rebuilding your system after drive failures.

http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

and the reason why I am doing a 7-drive RAIDZ3:

http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

When I expand my pool in a couple of years I will probably be looking at 8TB drives, with a second vdev and total available storage around 48TB, split between two vdevs, one at 16TB and one at 32TB, minus overhead.

Something like this http://www.ebay.com/itm/24x-Hard-Dr...ge-NAS-JBOD-/142262935880?hash=item211f882148 is pretty close to what I have, with more drive bays and a slightly slower CPU. The downside is that the power use will be higher over time. I idle around 180 W; a newer CPU would be much more power efficient but would cost a lot more to get going.

If you want, browse through http://stores.ebay.com/MrRackables/...02228010&_sid=955087150&_trksid=p4634.c0.m322 and see what you can find and the specs. Even if you want to spread the purchases out over time, it will still give you a good idea of where a price point should be. It will also show the kind of investment it takes to get a full system up and running.

This http://www.ebay.com/itm/Fast-Storag...2670-8-Core-/152426823653?hash=item237d58bfe5
would throw you into the game with drives and all; in a RAIDZ2 you would have 36TB of storage minus overhead.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
I saw this,
You definitely won't need those for FreeNAS ;)



I have all my computers on my rack, workstation included, so I'm more on the rack-chassis side. For a big number of disks, I like 4U better, as it allows for 120mm fans.

Believe me, if I could find a spot for some 6sn7 tubes I would.
 

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
I truthfully built two full FreeNAS boxes. One is at my place, the other is at my father's. Both are eBay builds. I bought a 1U rack server with a dual-CPU board, some RAM, two CPUs, coolers, a PSU, and four bays; honestly, I wanted it for the board's onboard LSI HBA, and it replaced a board without one. Both systems have 48GB of RAM and can max out at 192GB. I think I paid about $180 plus shipping for the 1U, and I had paid $80 for the board I swapped out.

As far as using RAIDZ1, all I can say is no, no, no. Unless you have great backups (note: great, not good) and want to waste time rebuilding your system after drive failures.

http://www.zdnet.com/article/why-raid-5-stops-working-in-2009/

and the reason why I am doing a 7-drive RAIDZ3:

http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

When I expand my pool in a couple of years I will probably be looking at 8TB drives, with a second vdev and total available storage around 48TB, split between two vdevs, one at 16TB and one at 32TB, minus overhead.

Something like this http://www.ebay.com/itm/24x-Hard-Dr...ge-NAS-JBOD-/142262935880?hash=item211f882148 is pretty close to what I have, with more drive bays and a slightly slower CPU. The downside is that the power use will be higher over time. I idle around 180 W; a newer CPU would be much more power efficient but would cost a lot more to get going.

If you want, browse through http://stores.ebay.com/MrRackables/...02228010&_sid=955087150&_trksid=p4634.c0.m322 and see what you can find and the specs. Even if you want to spread the purchases out over time, it will still give you a good idea of where a price point should be. It will also show the kind of investment it takes to get a full system up and running.

This http://www.ebay.com/itm/Fast-Storag...2670-8-Core-/152426823653?hash=item237d58bfe5
would throw you into the game with drives and all; in a RAIDZ2 you would have 36TB of storage minus overhead.
I was going through eBay when you sent this, and I had seen MrRackables. I was searching with the terms "supermicro 2u 9x" and getting some interesting items. If I could get an 8-bay starter system with 4 or 5 drives for under $1,000 all in, maybe a 50/50 split between drives and server... Well, this gives me more things to research. Thanks for that; it's half the fun.
 

Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
Welcome to the forums. I would still push you to populate the maximum number of drives you are going to use. 10 in RAIDZ3 or 9 in RAIDZ2 is your personal choice, but I would have 1 pool with 1 vdev for anything under 12 disks and keep as many disks as parity as I can. I have an 8-disk RAIDZ2. OK for a media server, but I would go for an 8-disk RAIDZ3 for anything critical. :)

My suggestion: buy 8, 9, or 10 really cheap disks (maybe 250/320/500 GB) and set up a pool. Keep upgrading one disk a month, and within a year you would have a good-capacity server. :) In the beginning you won't have much data to put on the FreeNAS anyway. You could skip the 10GbE NIC; it's your HDDs that will be the bottleneck. For any home server I would suggest 1GbE is sufficient.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
I saw this,


Believe me, if I could find a spot for some 6sn7 tubes I would.

That would be you changing your storage tubes. I'll stick with tube amps myself :)

[attached image: eniac3.gif]
 

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
Welcome to the forums. I would still push you to populate the maximum number of drives you are going to use. 10 in RAIDZ3 or 9 in RAIDZ2 is your personal choice, but I would have 1 pool with 1 vdev for anything under 12 disks and keep as many disks as parity as I can. I have an 8-disk RAIDZ2. OK for a media server, but I would go for an 8-disk RAIDZ3 for anything critical. :)

My suggestion: buy 8, 9, or 10 really cheap disks (maybe 250/320/500 GB) and set up a pool. Keep upgrading one disk a month, and within a year you would have a good-capacity server. :) In the beginning you won't have much data to put on the FreeNAS anyway. You could skip the 10GbE NIC; it's your HDDs that will be the bottleneck. For any home server I would suggest 1GbE is sufficient.
Thanks for the suggestions; the upgrade-a-month plan sounds like a great idea. I don't really need too much data storage right now, so that would definitely work.
 

Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
Thanks for the suggestions; the upgrade-a-month plan sounds like a great idea. I don't really need too much data storage right now, so that would definitely work.
Just remember how FreeNAS storage capacity upgrades work. In a 10-disk vdev, your space will only increase once your 10th drive has been replaced with a higher-capacity one. So whether you buy one a month or all together, the space will only grow when you purchase and fit the last disk.
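That rule follows from a vdev's per-disk capacity being capped by its smallest member; a toy calculation (the RAIDZ2 level and disk sizes below are illustrative assumptions, not from the thread):

```python
# A vdev's per-disk capacity is capped by its smallest member, so usable
# space only jumps once the last small disk is replaced.
def vdev_usable_tb(disk_sizes_tb, parity=2):
    return (len(disk_sizes_tb) - parity) * min(disk_sizes_tb)

before  = [0.5] * 10              # ten 500 GB disks
partway = [2.0] * 9 + [0.5]       # nine upgraded, one small disk left
done    = [2.0] * 10              # last disk finally upgraded

print(vdev_usable_tb(before), vdev_usable_tb(partway), vdev_usable_tb(done))
# -> 4.0 4.0 16.0  (no growth until the final swap)
```

Nine of ten upgrades buy nothing; all the new space appears at once with the tenth disk.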
 

athos56

Dabbler
Joined
Feb 6, 2017
Messages
21
Something like this, maybe. The backplane is only SAS1, but with 12 trays I would never fill up the space even if it was all 1TB drives. The specs seem to indicate that I can use SATA drives as well, so it would be cheaper. I could get a ton of cheap small drives and do a RAIDZ3; 12 500GB drives would give me around 4.5TB of storage and 3-drive fault tolerance. Total all in under $1,000. Any pitfalls I'm missing here? http://www.ebay.com/itm/Supermicro-2U-Server-X9DRI-LN4F-2x-Xeon-E5-2650L-1-8ghz-16-Cores-32gb-12x-Trays-/291986099196?hash=item43fbba4bfc:g:46UAAOSwOVpXaW68

http://www.ebay.com/itm/Supermicro-2U-5017R-WRF-Server-X9SRW-F-E5-2690-8-Core-64GB-2x-PS-4x-PCI-E-3-0-/132081890940?hash=item1ec0b1b67c:g:yfAAAOSw-KFXdr7j
 
Last edited:

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
My suggestion: buy 8, 9, or 10 really cheap disks (maybe 250/320/500 GB)

I really like this suggestion. It allows for a good start in the right direction, upgrading the drives once the budget recovers from the other parts :) Just be sure to use RAIDZ2 or RAIDZ3, as we never know how good the old drives will be.
 