Aussie FreeNAS Build

fmarxis

Cadet
Joined
Jul 11, 2019
Messages
7
Been reading this forum for a while and finally put together a build plan. The reason I stress this is an "Aussie" build is just to complain about how difficult it is to source cheap components in Australia. Generally everything is more expensive, with less variety and fewer second-hand options.

My parts list is this:

Essentially I am looking to build the foundation for a 24-drive FreeNAS system. The chassis comes with a 1280W redundant power supply and the BPN-SAS3-846EL1 24-port backplane. I am looking to run Proxmox as the host OS, then virtualise FreeNAS on top of it. The initial pool will be 6 x 16TB drives in RAIDZ2, giving roughly 64TB of usable capacity, with 128GB of RDIMM memory.
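As a sanity check on that 64TB figure, here's the back-of-envelope maths I'm working from (a rough sketch only; it ignores ZFS metadata/padding overhead and the TB-vs-TiB difference):

```python
# Rough usable-capacity estimate for one RAIDZ2 vdev (ignores ZFS overhead).
def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    parity = 2  # RAIDZ2 reserves two drives' worth of space for parity
    return (drives - parity) * drive_tb

print(raidz2_usable_tb(6, 16))  # 64.0 TB usable from one 6-wide vdev of 16TB drives
print(6 * 16)                   # 96 TB raw in the same vdev
```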

The intended use is home use, mainly storing and streaming media, but I work from home as a software engineer so my archived projects will go in there too. I will also be running a Plex server and some VMs/containers on the same system.

Please comment if anything looks out of place (bottlenecks, incompatibilities etc.), or if my understanding is incorrect anywhere. I would very much appreciate it.

1. I would like to start the system off with just one CPU in the dual-socket motherboard, since at the very start I won't be using all the PCIe lanes. I don't suppose this has any side effects for FreeNAS?

2. My understanding is that I will only need one LSI 9300-8i SAS controller to connect to the backplane, and one mini-SAS cable will be enough, but with two cables I can increase the bandwidth if it becomes a bottleneck. (Just some slight confusion over the two mini-SAS ports on both my backplane and the LSI 9300-8i.)

3. In the future I anticipate adding a JBOD unit (such as the 946LE1C-R1K66JBOD) that connects to external SAS controllers (an LSI 9300-8e, for example) in this server. That's where the extra CPU/PCIe lanes may come in handy.

4. I will use RAIDZ2 and create a vdev consisting of the 6 x 16TB drives. In the future I plan to add more 6-drive vdevs as capacity runs out, and add more memory as well to complement the added storage.

5. I don't see any benefit from an L2ARC or SLOG device for my use case. I still imagine using a PC for things like video editing and only storing finished projects on the NAS.

6. I plan to put this setup in my garage so the noise can't be heard in other rooms. However, in Australia it gets pretty hot in summer, somewhere between 30-35 degrees. I hope that putting this server in a proper cabinet will help, but will I have a more serious problem due to temperature?
 
Joined
Jan 4, 2014
Messages
1,644
Stick with Western Digital NAS drives. They run cooler than their Seagate counterparts and are quieter. I'm in Perth, where our summers are 35-45 degrees.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Stick with Western Digital NAS drives
If you mean WD Red (and not Red Pro): they spin at 5400 RPM and consume less power than 7200 RPM drives... all adding up to less heat.

1. I would like to start the system off with just one CPU in the dual-socket motherboard, since at the very start I won't be using all the PCIe lanes. I don't suppose this has any side effects for FreeNAS?
Read your MoBo manual carefully, as you may need to place your RAM in the appropriate slots to have it recognized by that single CPU.

2. My understanding is that I will only need one LSI 9300-8i SAS controller to connect to the backplane, and one mini-SAS cable will be enough, but with two cables I can increase the bandwidth if it becomes a bottleneck. (Just some slight confusion over the two mini-SAS ports on both my backplane and the LSI 9300-8i.)
Correct. Not likely to bottleneck, but the option is there.
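Rough numbers, if you want to see why (the per-drive throughput is an assumed figure, not a spec):

```python
# Back-of-envelope: one 4-lane mini-SAS HD cable vs 24 spinning disks.
lanes = 4                        # a mini-SAS HD connector carries 4 SAS3 lanes
lane_mb_s = 1200                 # roughly the usable payload of one 12Gb/s lane, MB/s
cable_mb_s = lanes * lane_mb_s   # ~4800 MB/s per cable

drives = 24
drive_mb_s = 250                 # assumed sequential rate per HDD (varies by model)
worst_case_mb_s = drives * drive_mb_s   # ~6000 MB/s, only if all 24 stream flat out

print(cable_mb_s, worst_case_mb_s)
```

Only an all-drives-sequential worst case (scrub/resilver territory) pushes past a single cable, and that's exactly the situation the second cable covers.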

3. In the future I anticipate adding a JBOD unit (such as the 946LE1C-R1K66JBOD) that connects to external SAS controllers (an LSI 9300-8e, for example) in this server. That's where the extra CPU/PCIe lanes may come in handy.
Sounds like a reasonable plan (call it an External Chassis rather than JBOD to avoid confusing people who will no doubt want to warn you about the dangers of using a chassis which has an onboard JBOD controller and will present your storage as a single disk).

4. I will use RAIDZ2 and create a vdev consisting of the 6 x 16TB drives. In the future I plan to add more 6-drive vdevs as capacity runs out, and add more memory as well to complement the added storage.
A fine plan; see how you go with memory before adding more, 128GB is already great.

5. I don't see any benefit from an L2ARC or SLOG device for my use case. I still imagine using a PC for things like video editing and only storing finished projects on the NAS.
Correct. If you want to go in the direction of hosting VMs, you would also need to look at a much higher number of mirrored VDEVs to cope with the workload, beyond what a SLOG could do for you anyway. L2ARC would be of dubious benefit with the good amount of RAM you have.
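To put a rough number on that, the usual rule of thumb is that random IOPS scale with the number of vdevs rather than the number of drives (a sketch, assuming ~100 random IOPS per spinning disk):

```python
# Why VM hosting pushes you toward many mirrors rather than a few RAIDZ2 vdevs.
disk_iops = 100                  # assumed random IOPS for a single 7200 RPM HDD

raidz2_vdevs = 4                 # 24 drives as 4 x 6-wide RAIDZ2
mirror_vdevs = 12                # the same 24 drives as 12 x 2-way mirrors

print(raidz2_vdevs * disk_iops)  # ~400 random write IOPS from the RAIDZ2 layout
print(mirror_vdevs * disk_iops)  # ~1200 from the mirror layout (reads can do better)
```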

6. I plan to put this setup in my garage so the noise can't be heard in other rooms. However, in Australia it gets pretty hot in summer, somewhere between 30-35 degrees. I hope that putting this server in a proper cabinet will help, but will I have a more serious problem due to temperature?
Make sure you use the slower-spinning drives where possible and employ a PID fan script to manage the heat (many of us use those scripts to keep the box quiet, but they can be even more useful for managing drive heat when you set a lower drive target temperature).
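If you're curious what those scripts do under the hood, the idea boils down to something like this heavily simplified Python sketch (the two helper functions are placeholders; the real scripts read temperatures via smartctl and drive the fans with board-specific ipmitool raw commands, and the gains here are purely illustrative):

```python
import time

TARGET_C = 38.0              # drive temperature target
KP, KI, KD = 4.0, 0.1, 40.0  # PID gains -- illustrative only, tune per chassis

def read_hottest_drive_temp() -> float:
    """Placeholder: the real scripts parse smartctl output for every drive."""
    raise NotImplementedError

def set_fan_duty(percent: float) -> None:
    """Placeholder: the real scripts send board-specific ipmitool raw commands."""
    raise NotImplementedError

def control_loop(interval_s: int = 60) -> None:
    integral, prev_error = 0.0, 0.0
    while True:
        error = read_hottest_drive_temp() - TARGET_C   # positive when drives run hot
        integral = max(-500.0, min(500.0, integral + error * interval_s))
        derivative = (error - prev_error) / interval_s
        duty = 30.0 + KP * error + KI * integral + KD * derivative  # base speed + PID correction
        set_fan_duty(max(20.0, min(100.0, duty)))      # clamp to a sane duty-cycle range
        prev_error = error
        time.sleep(interval_s)
```

Setting a lower target temperature trades noise for cooler drives, which is the knob you'd lean on in a hot garage.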
This one is an excellent implementation of it:
 
Joined
Jul 2, 2019
Messages
648
Hi @fmarxis - Did you check Amazon for a rack? I picked up a 27U/30" deep openframe rack for CDN$200.

fmarxis

Cadet
Joined
Jul 11, 2019
Messages
7
Hi @fmarxis - Did you check Amazon for a rack? I picked up a 27U/30" deep openframe rack for CDN$200.

Yeah, I did. Amazon AU just sucks in terms of variety; it doesn't really have a lot of products compared to the US or other regions. There are a few cheap second-hand racks on Gumtree (an Australian local community marketplace), so I've been watching out for those.

The main cost is actually the Supermicro chassis at $3000, which is quite unbearable to be honest. If I find one second-hand/refurbished on eBay, it's generally 800AUD with 2000AUD delivery... There is one cheap alternative, the TGC-4824, similar to a Norco, but the build quality is really scary. I have been following a fellow Australian who has been using this case, and recently his build got busted due to the sub-par backplane, with drives and data lost. I don't think I have better options really.
 
Joined
Jul 2, 2019
Messages
648
@fmarxis - I hear you. I see CDN$700 servers on eBay (I'm looking for an HP DL360 G8 to replace my G7 so I can upgrade VMware ESXi) with CDN$300-400 shipping. And we are next door to the US :eek:
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
At those prices I would look into a new Dell R740xd2, especially if you have access to a Dell rep.
If you want a tower, you could get a T640 with 18 3.5" bays.

Otherwise I would go with 2 or 3 used 12-drive systems and go hyperconverged.
 

fmarxis

Cadet
Joined
Jul 11, 2019
Messages
7
If you mean WD Red (and not Red Pro): they spin at 5400 RPM and consume less power than 7200 RPM drives... all adding up to less heat.

Thanks for your detailed reply. It's very helpful.

Currently the WD Red drives are more expensive than the Seagate Exos, which have higher density and better ratings apart from the higher RPM. I am still doing some googling on how much of an impact I should expect from using 7200 RPM drives.

Sounds like a reasonable plan (call it an External Chassis rather than JBOD to avoid confusing people who will no doubt want to warn you about the dangers of using a chassis which has an onboard JBOD controller and will present your storage as a single disk).

I do have some confusion about this. I thought a JBOD chassis would use the same SAS controllers, just with external mini-SAS ports, and to use it I would just connect its external mini-SAS ports to an external mini-SAS port on my main server. Other than that it only contains a bunch of drives, plus some HBA/SAS controller hardware.

Take the Supermicro SC847E1C-R1K28JBOD for example: it has two BPN-SAS3-847EL1 backplanes, one CSE-PTJBOD-CB3 control board for power, and one MCP-280-84701-0N control board for the external SAS connections, which lists these qualified SAS HBAs/controllers:
AOC-SAS3-9300-8E - 12Gb/s 2-external-port SAS3 HBA
AOC-SAS3-9380-8E - 12Gb/s 2-external-port SAS3 RAID controller

The AOC-SAS3-9300-8E seems very similar to the LSI 9300-8e if I am not mistaken, and it can be connected directly to the LSI 9300-8e controllers in my main server.

However, it sounds like there is a specific JBOD controller that I really want to avoid in any chassis because it will present all drives as a single drive?
 
Joined
Jul 2, 2019
Messages
648
Hmmm... I saw a build (I cannot recall if it was a YouTube video or a website - sorry) where the builder had simply taken the chassis and added a couple of these SAS passthrough cards. Then they ran the "normal" cable from the backplane, which would normally go to the HBA, to the SAS passthrough card instead. On the host server they had the typical HBA but with external connectors.

There are some threads on the forum about doing this. You also need some type of fan controller unless you like your fans running at 100% all the time.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
You can do better; $3000 for a case is insane!

Just buy a second-hand tower case or three with 9-12 5.25" bays

Buy some 5-in-3 hot-swap bays, hammer up some tabs in the cases and be happy

I know this works because I've done it with three cases

Have Fun
 

blueether

Patron
Joined
Aug 6, 2018
Messages
259
just move over to cooler NZ, our prices are much cheaper than your side of the ditch...
we have much better choices of hardware...

(and we don't have spiders as big as the palm of your hand)

In all honesty, I just think that we are so far from the rest of the world that you are stuck with really expensive or old-gen hardware
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
Yeah, I did. Amazon AU just sucks in terms of variety; it doesn't really have a lot of products compared to the US or other regions. There are a few cheap second-hand racks on Gumtree (an Australian local community marketplace), so I've been watching out for those.

The main cost is actually the Supermicro chassis at $3000, which is quite unbearable to be honest. If I find one second-hand/refurbished on eBay, it's generally 800AUD with 2000AUD delivery... There is one cheap alternative, the TGC-4824, similar to a Norco, but the build quality is really scary. I have been following a fellow Australian who has been using this case, and recently his build got busted due to the sub-par backplane, with drives and data lost. I don't think I have better options really.

Ahhh, that would be me, I'm guessing. I never did update my post; fortunately there was no data loss (gotta love ZFS with RAIDZ2). I replaced the backplane for $60 thanks to Scorptec, however the 6TB WD Red and the cable it ruined cost a bit more to replace. I had to strip the case to give it a proper clean. It's amazing how bad those burn marks were.
There is a new version of the case with the S designation. It would be interesting to see if they have managed to improve the quality, but it's a shorter case, so no E-ATX motherboard support.

There is one thing I would do differently if I did it over again: I would get a server rack with a built-in aircon. Usually they are called a micro datacentre. There is a brand here in Australia, Zellabox, which may be an option to keep the temps down in the server rack.
I did request and receive their product catalogue, and it's interesting, but I never did find out how much they cost. I bet they aren't cheap, but I can't imagine they're much more expensive than jerry-rigging a cooling solution like I have done, with a portable aircon with ducting and a temp probe/controller. It comes down to how much you value your data; data is only as good as the hardware, and the hardware is only as good as you can maintain it. Which is hard in a garage, I have found.
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
just move over to cooler NZ, our prices are much cheaper than your side of the ditch...
we have much better choices of hardware...

(and we don't have spiders as big as the palm of your hand)

In all honesty, I just think that we are so far from the rest of the world that you are stuck with really expensive or old-gen hardware

Do you have a website for an NZ retailer us hardware-poor Aussies can peruse? ;)
 

fmarxis

Cadet
Joined
Jul 11, 2019
Messages
7
Ahhh, that would be me, I'm guessing. I never did update my post; fortunately there was no data loss (gotta love ZFS with RAIDZ2). I replaced the backplane for $60 thanks to Scorptec, however the 6TB WD Red and the cable it ruined cost a bit more to replace. I had to strip the case to give it a proper clean. It's amazing how bad those burn marks were.
There is a new version of the case with the S designation. It would be interesting to see if they have managed to improve the quality, but it's a shorter case, so no E-ATX motherboard support.

There is one thing I would do differently if I did it over again: I would get a server rack with a built-in aircon. Usually they are called a micro datacentre. There is a brand here in Australia, Zellabox, which may be an option to keep the temps down in the server rack.
I did request and receive their product catalogue, and it's interesting, but I never did find out how much they cost. I bet they aren't cheap, but I can't imagine they're much more expensive than jerry-rigging a cooling solution like I have done, with a portable aircon with ducting and a temp probe/controller. It comes down to how much you value your data; data is only as good as the hardware, and the hardware is only as good as you can maintain it. Which is hard in a garage, I have found.

Yep, I was talking about you. The way I see it, if one 16TB HDD is gonna cost me $700, busting a couple of drives would wipe out whatever I saved by cheaping out on the chassis. Hence why I am almost obsessed with only using quality hardware now.

The micro datacentre idea sounds interesting and exactly what I need. I'll call up these guys and probably a couple of others to get some quotes. I hope they are not targeting such a niche market that a cabinet is gonna cost me a five-figure sum; I could buy a car instead...
 
Joined
Jul 2, 2019
Messages
648
I could buy a car instead
The cost of cabinets and accessories, well, freaks me out. CDN$35 for a 12" deep shelf that is just stamped, powder-coated steel????
 

amp88

Explorer
Joined
May 23, 2019
Messages
56
Is there a specific reason why you're looking at only using two memory DIMMs on a platform with 4 memory channels per CPU? Your current configuration (with 2 DIMMs and either one or two CPUs installed) is only going to run in single/dual channel, which could be a bottleneck. Have you considered getting four 32GB DIMMs instead (for quad channel in a single CPU config or dual channel per CPU in dual CPU config)? The increased memory bandwidth may give you a noticeable performance improvement, and there might be some cost savings too.
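For a rough sense of what's at stake (assuming DDR4-2666 RDIMMs, which is an assumption on my part about your parts list):

```python
# Theoretical peak memory bandwidth scales with the number of populated channels.
mt_per_s = 2666            # assumed DDR4-2666 transfer rate
bytes_per_transfer = 8     # each channel is 64 bits wide
per_channel_gb_s = mt_per_s * bytes_per_transfer / 1000   # ~21.3 GB/s per channel

for channels in (1, 2, 4):
    print(channels, "channel(s):", round(channels * per_channel_gb_s, 1), "GB/s peak")
```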
 

craigdt

Explorer
Joined
Mar 10, 2014
Messages
74
Yep, I was talking about you. The way I see it, if one 16TB HDD is gonna cost me $700, busting a couple of drives would wipe out whatever I saved by cheaping out on the chassis. Hence why I am almost obsessed with only using quality hardware now.
Yes, for 16TB drives I would highly recommend a high-end case, brand new and with a good warranty.

The micro datacentre idea sounds interesting and exactly what I need. I'll call up these guys and probably a couple of others to get some quotes. I hope they are not targeting such a niche market that a cabinet is gonna cost me a five-figure sum; I could buy a car instead...
I'll be curious to know how you go with the micro datacentre quotes. Make sure you get one that works like a split system so the heat is extracted outside.
 

fmarxis

Cadet
Joined
Jul 11, 2019
Messages
7
Is there a specific reason why you're looking at only using two memory DIMMs on a platform with 4 memory channels per CPU? Your current configuration (with 2 DIMMs and either one or two CPUs installed) is only going to run in single/dual channel, which could be a bottleneck. Have you considered getting four 32GB DIMMs instead (for quad channel in a single CPU config or dual channel per CPU in dual CPU config)? The increased memory bandwidth may give you a noticeable performance improvement, and there might be some cost savings too.

Mainly for future expansion. 24 x 16TB drives in this system (4 vdevs of 6 drives in RAIDZ2) is around 256TB of usable capacity. If I go ahead with my plan of adding a 45-drive external chassis, that's roughly another 500TB, totalling around 756TB of storage.

The rule of thumb for the memory-to-storage ratio is 1GB of memory per 1TB of storage. That means I would eventually need around 756GB of memory in this system. There are a total of 16 memory slots on this motherboard, and 756/16 = 47.25GB per stick, so if I use 32GB sticks I would eventually need to replace them, which can be expensive as well. I am trying to avoid redundant hardware as much as possible, which is why I started with 64GB sticks, and I plan to add more sticks as I grow my storage.
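For anyone checking my numbers, the working-out is below (and the 1GB-per-TB figure is only the forum's rule of thumb, not a hard requirement):

```python
# DIMM sizing against the 1GB-RAM-per-1TB-storage rule of thumb.
usable_tb = 256 + 500           # internal chassis plus the planned external chassis (rough)
ram_target_gb = usable_tb * 1   # 756GB eventual target

dimm_slots = 16
print(ram_target_gb / dimm_slots)   # 47.25GB per slot -> 32GB sticks would need replacing later

print(2 * 64)    # 128GB starting point with two 64GB sticks
print(12 * 64)   # 768GB if I eventually populate 12 of the 16 slots with 64GB sticks
```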
 