My first (little) FreeNAS

Muffin.Monkey

Dabbler
Joined
Feb 2, 2019
Messages
13
I have been planning to build a home server for a long time, and now the time has come ...
The use case is backup and media storage. I want to run Plex on the server and be able to transcode.

I chose the following configuration:
As I am a noob, I added my thoughts on why I chose each part, so if I got something wrong please correct me.
Please keep in mind that I am located in Germany, so my view of availability and prices may differ from yours.
  • Mainboard: Supermicro X10SRi-F
    • 10x SATA3 ports
    • Intel i350 LAN chipset
  • Processor: Intel Xeon E5-2620v4
    • cheapest Xeon supporting DDR4-2133
    • capable of transcoding according to plex
  • RAM: Transcend TS2GLH72V1B 4x 16GB
    • ECC RAM with 2133 MHz clock
    • in order to support 64 TB of raw storage
  • Processor fan: Alpenföhn Brocken ECO advanced
    • I realized there is a problem with "normal" processor fans on this mainboard, see details in a separate post
  • Data hard drives: 8x Seagate IronWolf 2TB in RAID-Z3
    • Here I have the biggest question marks.
    • The case offers eight easy-to-access slots for HDDs. I planned to have six disks in use with one hot spare, but then one slot would be "wasted".
    • Using this slot for a data disk would result in seven disks in the pool. I assume that having an odd number of disks is not the greatest idea, so I would use this slot for a second hot spare.
    • I was planning to use the Seagate IronWolf Pro 6 TB, but having read this thread I am not sure if such big disks are a good idea, so I am thinking about the IronWolf Pro 4 TB instead.
    • What do you think about mixing different manufacturers? The alternative is the WD Red Pro 4 / 6 TB.
    • In his primer slideshow cyberjock mentioned that RAID-Z2 should not be used from 2019 on. Now we have 2019. What's the current point of view?
    • I will start with the 2 TB models and switch to 8 TB at some point, since I cannot afford 8 TB drives right now.
  • System SSD: 2x Transcend MTS400S 32GB (later changed to 2x Transcend TS32GSSD370S 32GB after the discussion below)
    • M.2 so I don't have to use a SATA Port
    • Apparently the model in white has higher read and write speeds ...
  • Power supply: be quiet! Straight Power 11 550W
    • I added up some assumed power consumption (a rough sketch of this kind of estimate follows after this list):
      • on start-up ~410W (inrush current from the HDDs spinning up)
      • afterwards ~250W
    • 80 Plus Gold
  • Chassis: Fractal Design Define R5 black
    • can house eight 3.5" HDDs plus two SSDs
    • easy access to the 3.5" HDD slots
  • Chassis fans: be quiet! Silent Wings 3 140mm PWM
    • add one to the front fan that is already included
    • the case also offers slots at the top, bottom and side, but I assume they are not necessary
  • UPS: Eaton Ellipse ECO EL1200USBDIN
    • Some more question marks
    • I did not find a list of compatible UPSs
    • can power the server at full load for at least 10 min (5 min delay + 5 min shutdown)
    • is compatible with NUT (Network UPS Tools)
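For reference, a rough sketch of how a start-up / steady-state estimate like the one above can be put together. All wattage figures here are assumptions for illustration, not datasheet values; check the actual spin-up and TDP numbers for your parts:

```python
# Rough back-of-the-envelope power estimate (all per-component wattages
# below are assumptions, not datasheet values).
NUM_HDDS = 8
HDD_SPINUP_W = 25      # assumed peak draw per 3.5" HDD while spinning up
HDD_ACTIVE_W = 8       # assumed steady-state draw per HDD
CPU_TDP_W = 85         # Xeon E5-2620 v4 TDP
BOARD_RAM_FANS_W = 60  # assumed allowance for board, RAM, fans, SSDs

startup = NUM_HDDS * HDD_SPINUP_W + CPU_TDP_W + BOARD_RAM_FANS_W
steady = NUM_HDDS * HDD_ACTIVE_W + CPU_TDP_W + BOARD_RAM_FANS_W

print(f"estimated start-up draw: ~{startup} W")  # ~345 W with these numbers
print(f"estimated steady draw:   ~{steady} W")   # ~209 W with these numbers
```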
Bonus question:
What do you think about using encrypted storage at some web hosting provider as a remote backup?

Thank you for your help / opinions / explanations.

Please excuse my bad English ...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The case offers eight easy-to-access slots for HDDs. I planned to have six disks in use with one hot spare, but then one slot would be "wasted".
Using this slot for a data disk would result in seven disks in the pool. I assume that having an odd number of disks is not the greatest idea, so I would use this slot for a second hot spare.
Use eight drives in RAIDz2 and no hot spare. If you want spares, get cold spares. Hot spares just sit and wear themselves out doing nothing while they wait for another disk to fail. When one fails, you can substitute a cold spare that is fresh and ready to work. If you are terribly afraid of disk failure, use RAIDz3 which provides three disks of active fault protection.
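For illustration, a minimal sketch of what those two layouts look like at the zpool level; on FreeNAS you would normally build the pool through the GUI, and the device names here are assumptions:

```python
import subprocess

# Hypothetical device names ada0..ada7; FreeNAS users would normally
# create the pool through the GUI rather than by hand like this.
disks = [f"/dev/ada{i}" for i in range(8)]

# 8-wide RAIDZ2: any two disks can fail without losing data.
subprocess.run(["zpool", "create", "tank", "raidz2", *disks], check=True)

# Alternatively, 8-wide RAIDZ3 tolerates three simultaneous failures:
# subprocess.run(["zpool", "create", "tank", "raidz3", *disks], check=True)
```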
Iron Wolf Pro
Normal Iron Wolf are fine, the "Pro" drives just run at a higher RPM and generate more heat for very little benefit.
System SSD: 2x Transcend MTS400S 32GB
M.2 so I don't have to use a SATA Port
You have ten SATA ports on that system board, what does it matter if you use two for the boot pool?
 

hervon

Patron
Joined
Apr 23, 2012
Messages
353
FYI, on some motherboards, installing M.2 drives deactivates 1 or 2 SATA ports.
 

Muffin.Monkey

Dabbler
Joined
Feb 2, 2019
Messages
13
If you want spares, get cold spares. Hot spares just sit and wear themselves out doing nothing while they wait for another disk to fail. When one fails, you can substitute a cold spare that is fresh and ready to work.
I wanted to use hot spares because I remember having read that it is common for another disk to fail while the pool is rebuilding with the replacement disk.

Normal Iron Wolf are fine, the "Pro" drives just run at a higher RPM and generate more heat for very little benefit.
That is only true for the 1 to 4 TB models; from 6 TB on, the normal IronWolfs also spin at 7,200 RPM.

You have ten SATA ports on that system board, what does it matter if you use two for the boot pool?
FYI, on some motherboards when m.2 drives are installed it deactivates 1 or 2 sata ports.
I know there is actually no real need to save SATA ports, but the M.2 drives are not more expensive than SATA SSDs. With the info from hervon, though, I will get two SATA SSDs instead.

How do you accomplish getting disks from different loads, so that an error in one production load does not kill your entire pool? What about mixing different vendors?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
How do you accomplish getting disks from different loads, so that an error in one production load does not kill your entire pool?
I don't really get what you are asking here.
What about mixing different vendors?
FreeNAS doesn't care about the vendor of the disk, but there is no advantage to mixing: each vdev is limited by the performance of the slowest disk in the vdev, so you want to have all disks working as well as possible. Some disks can still be working but run slow due to some hardware problem. I have replaced disks because they were slow even though they were still working. You just need to be observant if performance is the goal.
I wanted to use hot spares because I remember having read that it is common for another disk to fail while the pool is rebuilding with the replacement disk.
I think you have a misunderstanding in this regard. Hot spares are most needed in a system where it is not possible for someone to tend to the system in a timely manner. ZFS does not activate a hot-spare until the failing drive is fully offline. The best practice is to remove a drive when it begins to fail before it has a chance to create data errors. This usually means taking it out when it is still viable and a hot-spare would not have been activated.
The idea that replacing a drive will cause another drive to fail is based on the hardware RAID model, where all the other disks in an array group, which may be older disks, are worked very hard during a rebuild. The mitigation for this is to never use RAIDz1 (roughly equivalent to RAID-5) and only use RAIDz2 (roughly equivalent to RAID-6), so that even if another disk fails during a rebuild, you still have not lost any data.
Having a disk spinning in your pool as a hot spare just allows that disk to wear out while it is serving no purpose; it keeps accumulating power-on hours just sitting there running, even if no data is being placed on it. It may not wear out as fast as a normal pool disk, but it still ages, and it also burns power.
If you are local to the system and able to monitor it, you can replace a faulty disk at the first sign of a problem and be better off than if you had waited for a hot-spare to activate.
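For reference, a minimal sketch of how the two approaches look at the command line, assuming a hypothetical pool named "tank" and the device names shown:

```python
import subprocess

# Hypothetical pool/device names ("tank", ada3, ada8) for illustration only.

# Hot-spare route: attach a spare that ZFS only activates once a disk
# is fully faulted.
subprocess.run(["zpool", "add", "tank", "spare", "/dev/ada8"], check=True)

# Manual route (what Chris describes): replace a suspect disk yourself at
# the first sign of trouble, before a hot spare would ever have kicked in.
subprocess.run(["zpool", "replace", "tank", "/dev/ada3", "/dev/ada8"], check=True)
```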
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I remember having read that it is common for another disk to fail while the pool is rebuilding with the replacement disk.
This isn't true and it never has been, the "sky is falling" claims of the "RAID5 is dead" camp notwithstanding.
 

Muffin.Monkey

Dabbler
Joined
Feb 2, 2019
Messages
13
I don't really get what you are asking here.
The reason for your confusion is a bad translation. What I wanted to say is:
"How do you go about getting disks from different production batches, so that not most / all of the disks fail at roughly the same time?"
That is the actual reason for my question about mixing vendors.

About hot spares:
Thank you for your explanation, it enlightened me a lot, Chris!
I assume there is FreeNAS functionality / a best practice for monitoring disks. Do you have a hint for me on where to find it?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
How do you go about getting disks from different production batches, so that not most / all of the disks fail at roughly the same time?
I wouldn't worry about trying to get drives from different production runs.
Where I work, we often get drives in by the 20 or 25 pack, like this:

[attached photo: 20180813_190918.jpg]
I have about eight of those empty cases stacked up right now from purchases over the last year, and I have four 20-pack boxes of drives (12TB drives) on order right now to upgrade one of our servers that is currently running 4TB drives. We don't see a very high failure rate out of disks normally, less than a 3% failure rate in the first year on disks purchased within the last five years, and the failures are not often batch dependent. For example, we bought 60 of the WD Red Pro 6TB drives two years ago and had three failures in the first six months and two more fail in the second six months, but zero failures in the following year. In November of 2018, we bought 60 of the Seagate Exos 10TB drives and had two fail in the first six months, but they are not old enough yet to have more statistics. Even when we order disks in quantity like that, we don't often see concentrations of failures that would make us suspicious of a fabrication defect. It appears that the trend is lower failure rates with more modern disks. I remember a time when failure rates were higher, but they appear to be trending downward. I am eager to see how the 12TB drives we have ordered do once we get them in service.
I assume there is FreeNAS functionality / a best practice for monitoring disks. Do you have a hint for me on where to find it?
You can schedule SMART tests through the FreeNAS GUI; here is the manual page for that:
https://www.ixsystems.com/documentation/freenas/11.2/tasks.html#s-m-a-r-t-tests
I also suggest running some monitoring scripts that have been developed in the forum that can email you status updates. Here is the link to the page in the forum that describes those scripts:

Github repository for FreeNAS scripts, including disk burnin
https://forums.freenas.org/index.ph...for-freenas-scripts-including-disk-burnin.28/
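As a rough idea of what such monitoring does at its core (the forum scripts do far more), here is a minimal sketch that checks SMART overall health on each disk and flags anything that does not pass; the device names are assumptions:

```python
import subprocess

# Hypothetical device names for an 8-disk FreeBSD/FreeNAS box.
disks = [f"/dev/ada{i}" for i in range(8)]

for disk in disks:
    out = subprocess.run(["smartctl", "-H", disk],
                         capture_output=True, text=True).stdout
    if "PASSED" not in out:
        # In practice you would email this instead of just printing it.
        print(f"WARNING: {disk} did not report PASSED overall health:\n{out}")
```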

There are also many other useful links listed here that you might want to explore:
https://www.ixsystems.com/community/resources/links-to-useful-threads.108/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. With the SMART tests, I usually run a short test daily at 06:00 and even on the 10TB drives, that is done in under ten minutes. I also schedule a long test weekly and for us I start that at 06:30 on Saturday morning because it is a full surface scan of the disk and usually takes over twelve hours to finish.
I have a scrub of the pool scheduled for once every 24 days to start after 17:30 once people have gone home for the day. With the pool configuration I have, the scrub is usually done in about nine hours, so it is (even if it falls during the work week) finished before people come to work in the morning.
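For anyone doing this by hand rather than through the FreeNAS scheduler, a minimal sketch of starting and checking a scrub, assuming a pool named "tank":

```python
import subprocess

# Start a scrub of the whole pool (assumed pool name "tank").
subprocess.run(["zpool", "scrub", "tank"], check=True)

# Later, check progress; the "scan:" line shows how far along it is.
status = subprocess.run(["zpool", "status", "tank"],
                        capture_output=True, text=True).stdout
print(status)
```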
 

Mr. Slumber

Contributor
Joined
Mar 10, 2019
Messages
182
I usually run a short test daily at 06:00

I set up my machines this way, but just so I get it right: if this daily test completes successfully, I won't get to know this, right? Only in case errors were found will I be informed via email (I set up the alert settings with email), right? Sorry for being a little bit OT. Best wishes and good luck @Muffin.Monkey with your new home server :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
if this daily test completes successfully I won't get to know this, right?
Correct. As with most things *nix-y, it only reports on errors.
 

Muffin.Monkey

Dabbler
Joined
Feb 2, 2019
Messages
13
I am afraid I have to push this thread again, but today I realized I may run into a problem. It's about the CPU cooler and the RAM modules.

As you can see in the pictures in this article on ServeTheHome, there is very little space between the CPU socket and the RAM slots. The author writes something about "narrow ILM". I have never heard of this before. The linked article was not very helpful for me; maybe someone could explain it in other words.

I consulted the documentation from my preferred CPU cooler vendors, be quiet! and Alpenföhn, and found nothing about "narrow ILM".
I found two resources mentioning "narrow ILM":
  • One was a (German) shop selling the Noctua NH-D9DX
  • The other was Thermalright, selling an adapter which, as I understood it, basically rotates the cooler so that it fits the way it would on most home PC mainboards
Currently I do not know what to do. First, Noctua does not state the cooling capacity of the NH-D9DX, nor does it list Xeons on its compatibility list. Second, I am almost sure that "normal" CPU coolers from the vendors mentioned above do not fit my mainboard. Third, I am not sure using the adapter will solve my problem, and even if it does, rotating the CPU cooler will drastically alter the airflow within the chassis.

Maybe @Stux can help, according to his signature he uses a Supermicro X10SRi-F.
 

Mr. Slumber

Contributor
Joined
Mar 10, 2019
Messages
182
Please take a look here and give Noctua a call / email. Their products are amazing, with a 5-year warranty, and I'm sure they also have the right product for your X10SRi-F.
 

Mr. Slumber

Contributor
Joined
Mar 10, 2019
Messages
182
I own this fan (very happy with it!); please take a look at the specifications. Just ask their customer service to be 110% sure.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The Noctua is fine. It's a special version of the NH-D9L for narrow ILM systems. I use the regular model with my Xeon E5-1650 v3 workstation and it's more than fine: 50-ish degrees Celsius with two fans not even close to 100% PWM running Prime95. I wanna say 50-ish% PWM, but that's working from memory.

Absolutely recommend.
 

Muffin.Monkey

Dabbler
Joined
Feb 2, 2019
Messages
13
I did some research and thought about the configuration.

This is the result. Are there any last comments?
I would like to order the parts during the weekend.

  • Mainboard: Supermicro X10SRi-F
    • 10x SATA3 ports
    • Intel i350 LAN chipset
  • Processor: Intel Xeon E5-2620v4
    • cheapest Xeon supporting DDR4-2133
    • capable of transcoding according to plex
  • RAM: 64GB - Transcend TS2GLH72V1B 4x 16GB
    • ECC RAM with 2133 MHz clock
    • in order to support 64 TB of raw storage
  • Processor fan: Noctua NH-U9DX i4
    • According to Noctua this would be the ideal choice for my mainboard / CPU combination
  • Data hard drives: 8x WD Red 8 TB in RAID-Z3
    • shucked from WD My Book enclosures
    • an 8 TB My Book costs just twice as much as a 2 TB IronWolf
  • System SSD: 2x Transcend TS32GSSD370S 32GB
  • Power supply: be quiet! Straight Power 11 550W
    • I added up some assumed power consumption:
      • on start-up ~410W (inrush current from the HDDs spinning up)
      • afterwards ~250W
    • 80 Plus Gold
  • Chassis: Fractal Design Define R5 black
    • can house eight 3.5" HDDs plus two SSDs
    • easy access to the 3.5" HDD slots
  • Chassis fans: be quiet! Silent Wings 3 140mm PWM
    • add one to the front fan that is already included
    • the case also offers slots at the top, bottom and side, but I assume they are not necessary
  • UPS: Eaton Ellipse ECO EL1200USBDIN
    • can power the server at full load for at least 10 min (5 min delay + 5 min shutdown)
    • is compatible with NUT (Network UPS Tools)
 

IQless

Contributor
Joined
Feb 13, 2017
Messages
142
in order to support 64 TB of raw storage

I might be wrong (and someone please correct me if I am), but I do think that the old rule of thumb of "1 GB of RAM per 1 TB of storage" refers to actual used storage, not raw storage. If I am correct in this, you would not need 64 GB of RAM, but then again, you can never have enough RAM :P
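If the guideline is read against usable rather than raw capacity, the numbers for the final 8x 8TB RAID-Z3 layout would look roughly like this (a sketch only; the rule of thumb is a guideline, not a hard minimum, and ZFS parity/padding overhead is ignored):

```python
DISKS = 8
DISK_TB = 8
PARITY_DISKS = 3                              # RAID-Z3 uses three disks for parity

raw_tb = DISKS * DISK_TB                      # 64 TB raw
usable_tb = (DISKS - PARITY_DISKS) * DISK_TB  # ~40 TB before ZFS overhead

print(f"raw: {raw_tb} TB -> rule-of-thumb RAM ~{raw_tb} GB")
print(f"usable: ~{usable_tb} TB -> rule-of-thumb RAM ~{usable_tb} GB")
```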
 