Will it FreeNAS? Feedback on this build idea

toastbred

Cadet
Joined
Jul 3, 2020
Messages
4
Hey guys,

I'm entirely new to FreeNAS (or any NAS at this point). The following will be my first setup, so I'd like to hear some feedback on this:

Mobo:
CPU:
Intel Core i3-9100 (i3-9100 at Intel)
RAM:
2x Kingston Server Premier DIMM 8GB ECC (KSM26ES8/8ME) (Memory Spec at Kingston)
HDD:
2x WD Red 4TB
2x IronWolf 4TB

The power supply is 450W, chosen with the idea that I might add more drives later on.

A few questions:
  • I'm considering adding up to 4 more drives later; should I also add another 2x8GB of RAM to the system?
I know this will depend on how I intend to use it. Let's assume for now a setup where not much traffic is going on: mainly backing up data, occasional downloads, and maybe some jails (once I figure out what exactly they do).
  • Is redundancy managed only in the vdev, or also in the pool? I've read the following blog post and I'm not sure I understood it correctly.
  • (Assuming redundancy is managed in the vdevs only) I plan on always putting 2 drives (1 WD + 1 IW) into a vdev using mirroring (I don't mind giving up half the space), which would result in two vdevs of 4TB each, forming a pool of 8TB. Does this make sense?
  • Assuming I've set up my system using the four drives mentioned above, and later on I want to add another pair of drives in the same manner: is there a safe procedure for this, or is it discouraged?
  • Lastly, is there something I've missed? Something you would recommend to a newcomer?
Thanks in advance!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
This sounds like a good 'starter' FreeNAS system; it has 16GB of ECC RAM, which should be enough to support the pool size you're contemplating.

Redundancy is at the vdev level: a vdev made up of 2 mirrored disks can lose one disk; a RAIDZ2 vdev made up of 4 (or more) disks can lose two disks; a RAIDZ3 vdev can lose three. But if any of these vdevs loses one more disk than that, it fails. And since a pool is made up of one or more vdevs, the pool fails if any of its constituent vdevs fails.

Your mirrored vdev approach makes sense and will work fine: you can start out with a simple pool made up of two disks in a single mirror vdev, then add additional mirrored pairs to increase the pool's capacity. Adding additional mirror vdevs to a pool -- 'extending' the pool -- is safe and not discouraged at all; a sketch of the commands involved is below.
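For illustration, from the command line this looks roughly like the following (pool and device names are just examples; on FreeNAS you would normally do all of this through the web UI):

Code:
# create a pool named "tank" from one mirrored pair (names are examples)
zpool create tank mirror /dev/ada0 /dev/ada1

# later, extend the pool with a second mirrored pair
zpool add tank mirror /dev/ada2 /dev/ada3

# check the resulting layout
zpool status tank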

Good luck, and I hope you enjoy your new system!
 

dak180

Patron
Joined
Nov 22, 2017
Messages
310
I'm entirely new to FreeNAS (or any NAS at this point). The following will be my first setup, so I'd like to hear some feedback on this:
I have a few general comments you may want to consider, but it looks like a workable system to me.

While this will work, you may want to consider the ASRock Rack E3C246D4U (or, if you can wait a bit, the ASRock Rack E3C246D4U2-2L2T, which will be out in a month or so with more features). The Asus, while a great workstation board, does not have a BMC, which is really handy for managing a mostly headless system. Many others on the forum will suggest Supermicro, which is also a good choice, but I like having more control over the fans and being able to easily use a custom temp probe (I care about both a quiet and a well-cooled build).

RAM:
2x Kingston Server Premier DIMM 8GB ECC (KSM26ES8/8ME) (Memory Spec at Kingston)
Even with the board you picked you could use 16GB DIMMs, which I would suggest even if you only buy one for now; you are better off having room to expand later than dual channel now.

HDD:
2x WD Red 4TB
2x IronWolf 4TB
Make sure the WD Reds are CMR and not SMR; an easy way to tell for the 4TB models is to check the cache size: more than 64MB and you have SMR. This is important and may mean the difference between losing data or not; in the future, look for the Red Plus drives (proper labeling is due in a month or two) rather than the plain Reds.
Aside from the labeling issue, I personally tend to prefer the Reds over the IronWolfs for three reasons: lower temps, less power draw, and I find the S.M.A.R.T. data much easier to read.
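If you want to double-check a drive from the command line, something like this works (the device name is just an example; for the 4TB Reds, a WD40EFRX model is CMR while WD40EFAX is SMR):

Code:
# print the drive's identity information and pick out the model line
# (device name is an example)
smartctl -i /dev/ada0 | grep -E 'Model|Capacity'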

The power supply is 450W, chosen with the idea that I might add more drives later on.
There is some great info on power supply sizing in the resources section, which you may want to check out for more details. Some good rules of thumb, though: figure out your max power draw and double it, then (because in a system like this it is all about the HDDs) move up if needed to get more SATA power connectors (so think about how many HDDs and SSDs will end up in the system).
Many on the forums like Seasonic units, while I prefer the Super Flower Leadex platform, which you can find in some of EVGA's SuperNOVA line (EVGA G2, G3, P2 and T2); both are good, though.
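To put the max-draw rule of thumb into numbers, here is a rough worked example; the wattages are ballpark assumptions, not measured figures, so check your own parts' datasheets:

Code:
CPU (i3-9100, 65W TDP)            ~  65W
motherboard + RAM + fans          ~  40W
4 HDDs spinning up at ~25W each   ~ 100W
-----------------------------------------
estimated peak                    ~ 205W
doubled, per the rule of thumb    ~ 410W

So a 450W unit fits the four-drive plan, though the headroom shrinks as more drives are added.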

A few more things to consider: for the boot drive, USB sticks are discouraged; a small SSD works well, and I use a mirrored pair. Case and fan choices will make a difference for cooling, noise, and future expansion. Despite what the installer says, try UEFI first and choose BIOS only if that does not work (you cannot change this later short of a reinstall).

Answers to this and your other questions will be easier if you define a scope for your server. Jails are basically just a safe, isolated segment in which to run any program you want, so the real question is: if you are going to have a server running 24h/d, 365d/y, what would you rather have live on the server than on any of your other computers? (Also, keep in mind that you may not want to run anything that directly exposes the server holding your backups to the wider internet.)
 

toastbred

Cadet
Joined
Jul 3, 2020
Messages
4
While this will work, you may want to consider the ASRock Rack E3C246D4U (or, if you can wait a bit, the ASRock Rack E3C246D4U2-2L2T, which will be out in a month or so with more features). The Asus, while a great workstation board, does not have a BMC, which is really handy for managing a mostly headless system. Many others on the forum will suggest Supermicro, which is also a good choice, but I like having more control over the fans and being able to easily use a custom temp probe (I care about both a quiet and a well-cooled build).

What exactly is a BMC and what does it do?

Even with the board you picked you could use 16GB DIMMs, which I would suggest even if you only buy one for now; you are better off having room to expand later than dual channel now.

That's true; however, I hardly think I will ever use more than 32GB of RAM, as this will be my small backup server and its workload is not likely to grow massively.

Make sure the WD Reds are CMR and not SMR; an easy way to tell for the 4TB models is to check the cache size: more than 64MB and you have SMR. This is important and may mean the difference between losing data or not; in the future, look for the Red Plus drives (proper labeling is due in a month or two) rather than the plain Reds.
Aside from the labeling issue, I personally tend to prefer the Reds over the IronWolfs for three reasons: lower temps, less power draw, and I find the S.M.A.R.T. data much easier to read.

The ones I got only have 64MB of cache (it's the EFRX series), so I think these are the correct ones?
Regarding plain Reds vs Red Plus, what exactly is the difference here? I thought both were supposed to be proper NAS drives.
Regarding the IronWolfs: I tend to avoid buying only drives of the same brand for the same pool. If by chance the drives come from the same production batch and suffer from a systemic defect, they will all fail at the same time. Therefore I make sure to have redundancy across different brands as well, to keep the probability of all drives failing at once as low as possible.


...if you are going to have a server running 24h/d, 365d/y, what would you rather have live on the server than on any of your other computers? ...
I'm not sure if I understand what you mean by this.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
What exactly is a BMC and what does it do?

A BMC, or baseboard management controller (aka iLO, iDRAC, or CIMC), is a separate controller on the motherboard that allows remote console operations via its own dedicated RJ-45 port. From the BMC web page, you can power the server on or off, log in to the server console, and check the health of various sensors, like CPU temperature.
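Most BMCs also speak IPMI, so the same operations can be scripted; here is a minimal sketch using ipmitool, where the BMC address and credentials are placeholders:

Code:
# query the power state and sensor readings over the network
# (BMC address, user, and password are placeholders)
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret sensor list

# power the machine on or off remotely
ipmitool -I lanplus -H 192.168.1.50 -U admin -P secret chassis power on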
 

dak180

Patron
Joined
Nov 22, 2017
Messages
310
That's true; however, I hardly think I will ever use more than 32GB of RAM, as this will be my small backup server and its workload is not likely to grow massively.
This ties into your last question: once you have a stable server running all the time, what might you end up running there rather than on another computer? All of those things use RAM, and given the way ZFS uses RAM, there really is no such thing as too much.

The ones I got only have 64MB of cache (it's the EFRX series), so I think these are the correct ones?
Yes, those are good.

Regarding plain Reds vs Red Plus, what exactly is the difference here?
SMR vs. CMR; see WD Red Plus Launched with CMR for more info.

Regarding the IronWolfs: I tend to avoid buying only drives of the same brand for the same pool.
That is certainly one way to do it; I just wish someone other than WD made NAS drives that run at 5400-5900 rpm.
 

toastbred

Cadet
Joined
Jul 3, 2020
Messages
4
A BMC, or baseboard management controller (aka iLO, iDRAC, or CIMC), is a separate controller on the motherboard that allows remote console operations via its own dedicated RJ-45 port. From the BMC web page, you can power the server on or off, log in to the server console, and check the health of various sensors, like CPU temperature.

But I can also do all that stuff via SSH, right?

@dak180 Thanks for your reply!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
But I can also do all that stuff via SSH, right?

Yes, but if the system is down, the BMC gives you a remote console out-of-band from the LAN interface. If the system is going to be close at hand, you don't need to worry about it; if it is going to be some distance away, then a BMC is absolutely necessary.
 

toastbred

Cadet
Joined
Jul 3, 2020
Messages
4
I had some massive delays due to vendor issues and decided to buy the parts somewhere else, but I wanted to post an update on the current status. The system has been up and running on a minimal Linux installation for 42 days (as of today) without any issues. Currently it is still in the testing phase, with the drives performing several tasks around the clock to keep them busy.

I've been using the following scripts for stress testing the drives. Maybe they can be useful for someone else as well:

Note: these tests are destructive! dd is used without a count, so it will keep writing until the disk is full and dd aborts with an error. Make sure any important data on the disks is backed up elsewhere before using this. The same applies to badblocks!

Synopsis: create a file and write to it until the drive is full and dd aborts; delete the file and repeat. This is done in parallel for all of your drives and should keep the CPU and drives busy.
Code:
#!/bin/bash
while true
do
    # fullblock avoids partial reads (mainly relevant if you switch to /dev/urandom);
    # on my system, reading from /dev/urandom makes dd CPU-bound, so don't pay too
    # much attention to the reported speeds when using urandom.
    # conv=fdatasync makes dd actually flush the data to disk before exiting.
    # make sure blocksize (bs=) times the number of parallel dd calls does not
    # exceed your available RAM.
    dd iflag=fullblock if=/dev/zero of=/path/to/drive1/delme.txt conv=fdatasync bs=512M &
    dd iflag=fullblock if=/dev/zero of=/path/to/drive2/delme.txt conv=fdatasync bs=512M &
    # ... and so on for all drives
    # date just prints timestamps; the wait is important: it blocks until all the
    # background dd jobs have finished before the loop starts over.
    date
    wait
    date
    echo "Starting again ..."
    # delete the generated files, then start again
    rm -f /path/to/drive1/delme.txt
    rm -f /path/to/drive2/delme.txt
    # ... and so on for all drives
done
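If you want to keep an eye on throughput while this runs, iostat from the sysstat package (available on most Linux distributions) is handy:

Code:
# print extended per-device statistics every 5 seconds
iostat -x 5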


Synopsis: analyse the drives with badblocks (write different patterns, read them back, and compare). This should ideally not report any errors.
Code:
#!/bin/bash
while true
do
    # -v adds verbosity, -s shows a live progress indicator, and -w runs the
    # (destructive) write test: write patterns, read them back, and compare
    badblocks -wsv -b 4096 -o ./log1.txt /dev/sd<driveletter> &
    badblocks -wsv -b 4096 -o ./log2.txt /dev/sd<driveletter> &
    # ... and so on for all drives
    date
    wait
    date
done


After running these tests non-stop for over a month, I executed some long S.M.A.R.T. self-tests, and I have some questions about the results.
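For anyone wanting to reproduce this with smartctl, starting a long self-test and pulling the report looks roughly like this (the device name is an example):

Code:
# start a long (extended) self-test; it runs inside the drive's firmware
smartctl -t long /dev/sda

# once it has finished, print the full SMART report
smartctl -a /dev/sda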
After some general information, the main part of the report starts like this:

Code:
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


This looked good enough. However, afterwards there was a table that was rather confusing. Note: the following table is specific to WD drives and may look different on yours. I hope someone can help me make sense of it:

Code:
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   170   170   021    Pre-fail  Always       -       6483
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       6
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       1027
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       6
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       1
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       74
194 Temperature_Celsius     0x0022   123   111   000    Old_age   Always       -       27
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged


Mainly the "Value" column is causing confusion here. Values listet in this column appear quite high, occasionally worse than the corresponding values listed in "WORST" and always worse than "THRESH", which makes me wonder if there is any issue with the drive, although the overall check has passed without errors. This is especially confusing as there was always a single read error for badblocks (for this drive only). I'm not really sure how to interpret this and I hoped S.M.A.R.T would give more insight here, as of now it is more confusing.

Can anyone help me interpret these values?
 