New build, FreeNAS newbie

Status
Not open for further replies.

RichieB (Dabbler, joined Jan 12, 2017)

RAIDZ2

Supermicro X11SSL-CF main board
Intel Xeon E3-1230 v5 Boxed CPU
10x WD Red WD60EFRX, 6TB HDD
2x Corsair Force LS V2 60GB SSD
4x Crucial CT16G4WFD8213 16GB ECC UDIMM
Fractal Design Node 804 case
Corsair RM650x PSU
Noctua NH-L12 CPU cooler

I decided to upgrade my current QNAP NAS to a FreeNAS box. I have 25+ years of Unix experience, so I'm not afraid of the config side of things. On the hardware side, however, it has been 12 years since I built my own system.
I've done a lot of reading about ZFS and FreeNAS, and it helped me a lot in making a hardware list. Initially I wanted to get 6x 8TB in RAIDZ1 (5+1). I now realize that with drives this large I should be doing RAIDZ2, but the usable space of a (4+2)x 8TB layout is a bit too tight for me. So I need more disks, which also means more SATA ports, so I upgraded my first choice from the X11SSM-F to the X11SSL-CF.

A few questions I still have: what type of cable do I need to connect the SAS ports on the X11SSL-CF to the SATA drives? How should I spread the 12 drives (10x HDD + 2x SSD) over the 6 SATA and 2 SAS connectors? Is an 8+2 setup "optimal" with respect to alignment? Or should I adjust the ZFS 128k record size to avoid wasted space? Or should I be doing (6+2)x 8TB instead?
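To put numbers on the options, here's a quick back-of-envelope comparison (plain Python; it ignores ZFS metadata/slop overhead and TB-vs-TiB differences, so real usable space will be somewhat lower):

```python
# Rough usable-capacity comparison for the RAIDZ2 layouts under discussion.
# Only the data disks contribute usable space; parity disks do not.

def raidz_usable_tb(data_disks: int, disk_tb: float) -> float:
    """Usable space of one RAIDZ vdev, before filesystem overhead."""
    return data_disks * disk_tb

layouts = [
    ("RAIDZ2 8+2 x 6TB", 8, 2, 6.0),
    ("RAIDZ2 6+2 x 8TB", 6, 2, 8.0),
    ("RAIDZ2 4+2 x 8TB", 4, 2, 8.0),
]
for name, data, parity, size in layouts:
    total = data + parity
    print(f"{name}: {total} drives, {raidz_usable_tb(data, size):.0f} TB usable")
```

Notably, 8+2 x 6TB and 6+2 x 8TB land on the same raw usable space; the difference is drive count, cost, and port usage.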

Any remarks or comments are appreciated!
 

2nd-in-charge (Explorer, joined Jan 10, 2017)

Hi Richie,

I'm a newbie myself, but have read a lot on this forum lately, and can point you in the right direction.

Save your money; the stock cooler that comes with the CPU is enough.
Probably drop this for now too; just get two SanDisk Cruzer Ultra Fit 32GB as boot drives. Get back to SSDs once you decide that you need a SLOG or L2ARC.
So I need more disks, which also means more SATA ports, so I upgraded my first choice X11SSM-F to a X11SSL-CF.
Depending on the price difference between the two boards, there could be a cheaper way to add more ports - X11SSM-F with LSI 9211-8i expansion card or a clone.
what type of cable do I need to connect the SAS ports on the X11SSL-CF to the SATA drives?
see this thread:
https://forums.freenas.org/index.ph...for-supermicro-x11ssl-cf-o-sas-to-sata.43501/
For a 9211-8i or its clones you need two SFF-8087 to 4x SATA forward breakout cables, each connecting four drives.
Like this, for example:
https://www.amazon.com/StarTech-com-50cm-SFF-8087-SATA-SAS8087S4R50/dp/B008KF73CA
How should I spread the 12 drives (10x HDD + 2x SSD) over the 6 SATA and 2 SAS connectors?
I would connect the SSDs directly to motherboard SATA, then populate the other SATA ports, then use the expansion card. OTOH, cabling would be neater if you fully populate two of those mini-SAS to 4x SATA cables. Either SAS controller you'd be looking at (3008 on board or 2008 add-on) would handle one HDD per channel with a lot of headroom.
Is a 8+2 setup "optimal" with respect to alignment?
If you are using compression (recommended, AFAIK), alignment doesn't really matter. Let ZFS worry about it, and just get the best value per TB out of the drives.
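For the curious, here's a rough sketch of the RAIDZ allocation math behind the alignment question. It's a simplification (ZFS pads RAIDZ allocations up to a multiple of parity+1 sectors on ashift=12 pools), and compression shrinks records anyway, so treat the ratios as a worst case for incompressible data:

```python
import math

# Rough model of RAIDZ space allocation with 4K sectors (ashift=12).
# A record is split across (width - parity) data disks per stripe,
# parity sectors are added per stripe, and the total allocation is
# padded up to a multiple of (parity + 1) sectors.

def raidz_alloc_ratio(record_kib: int, width: int, parity: int,
                      sector_kib: int = 4) -> float:
    """Allocated-to-logical size ratio for one record on a RAIDZ vdev."""
    data_sectors = math.ceil(record_kib / sector_kib)
    stripes = math.ceil(data_sectors / (width - parity))
    alloc = data_sectors + stripes * parity          # data + parity sectors
    pad_unit = parity + 1
    alloc = math.ceil(alloc / pad_unit) * pad_unit   # padding rule
    return alloc * sector_kib / record_kib

# 128KiB records on RAIDZ2: 10-wide (8+2) vs 8-wide (6+2)
print(raidz_alloc_ratio(128, 10, 2))  # 1.3125 (parity alone would be 1.25)
print(raidz_alloc_ratio(128, 8, 2))   # 1.40625 (parity alone ~1.333)
```

So 8+2 does waste a bit less space on incompressible 128K records, but with compression on, record sizes vary and the difference mostly washes out.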
 

RichieB (Dabbler)

Save your money, stock cooler that comes with the CPU is enough.
The reason I added a better CPU cooler is to minimise the noise. The NAS will be in the study that doubles as a spare bedroom. But I suppose I can always swap the stock cooler later if it is too noisy.

Probably drop this for now too; just get two SanDisk Cruzer Ultra Fit 32GB as boot drives. Get back to SSDs once you decide that you need a SLOG or L2ARC.
As this is for a 1 Gbps network I won't be needing a SLOG/L2ARC soon. I read that using an SSD as a boot device is recommended now that the boot pool is ZFS as well.

Depending on the price difference between the two boards, there could be a cheaper way to add more ports - X11SSM-F with LSI 9211-8i expansion card or a clone.
Getting a separate HBA was my first instinct as well. The cheapest LSI 9211-8i is €230. I found a clone (Dell PERC H200) for €150. The difference between the X11SSM-F and X11SSL-CF is €70, probably because the former is C236- and the latter C232-based.
The only HBA that I did consider was the Asus PIKE 2308 for €110, which basically means I would pay €40 to have a C236 instead of a C232.

If you are using compression (recommended, AFAIK), alignment doesn't really matter. Let ZFS worry about it, and just get the best value per TB out of the drives.
Ok, great. I just read this thread that confirms this as well.

Thanks a lot for all the pointers!
 

entity279 (Cadet, joined Jan 16, 2017)

Hi, just walking by...

I also built my first FreeNAS box recently (a year ago) and went for the Noctua cooler. It will definitely make a difference noise-wise compared to the stock cooler (I care very deeply about noise, and also have the box installed in a room used for other purposes).

However, the 10 drives you will install will make much more noise than your cooler ever will. They don't even need to be working; just idly spinning is enough (I own 3 HGST NAS 6TB drives for now). My drives only stop spinning when the clients are turned off (I'm just creating automatic backups and offering media storage for a single Windows desktop).
 

RichieB (Dabbler)

However, the 10 drives you will install will make much more noise than your cooler ever will. They don't even need to be working; just idly spinning is enough (I own 3 HGST NAS 6TB drives for now). My drives only stop spinning when the clients are turned off (I'm just creating automatic backups and offering media storage for a single Windows desktop).
You're right. I slept in the spare bedroom last night (don't ask ;-) and I could hear my 4-bay QNAP NAS disks making quite a bit of noise from 4m away. So are you saying that with 10 spinning drives I should just keep the stock CPU cooler, because its effect will be lost anyway?
 

entity279 (Cadet)

Well, if there's a good chance the drives will stop spinning overnight, then you will get value from the Noctua cooler. Otherwise you should keep the default cooler.
For me, the drives stop spinning only if there's no network activity (i.e. I've shut down everything except the FreeNAS server).

So you'll have to judge for yourself.
 

RichieB (Dabbler)

The clients will be off (in sleep mode) at night. But then Sonarr might kick in. ;-) I think I'll just have to wait and see/hear it once I start using the system.
 

2nd-in-charge (Explorer)

I read that using an SSD as a boot device is recommended now that the boot pool is ZFS as well.
OTOH they consume two SATA ports. If you dropped them, you could possibly get away with the X11SSM-F and eight 8TB drives, giving you (8-2)*8=48TB of storage (I hope you have your backup strategy sorted...). Or get a two-slot NVMe M.2 PCIe card and a couple of Intel 600Ps (assuming the board can boot from those).
BTW, the Corsair RM650x only comes with 8 SATA power connectors. The AX760 has 12 (and Platinum efficiency), but costs more. Or you can use splitters/adapters. Or get something like the FSP Hydro G 750W, if you don't mind how those 12 connectors are arranged on the cables.
http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story2&reid=456

Also, since noise is an issue, how good is Node 804 with noise suppression, especially when the drive cage is fully loaded? Compared to FD Define series, for example?
 

Ericloewe (Server Wrangler, Moderator, joined Feb 15, 2014)

A couple of points:
  • Starting off with the RAM maxed out is dubious. However, 64GB is likely to be overkill, so the platform should be fine.
  • Stock cooler is essentially inaudible in a typical environment at idle.
  • Intel SATA and LSI SAS performance is essentially equivalent. SAS controllers do benefit from dedicated PCIe connectivity to the CPU, whereas Intel SATA has to share with everything else on the PCH. Not that it causes a realistic bottleneck...
  • SSDs for boot are better, but they're not "100 bucks for a SAS controller" better. A single SSD is fine for boot; a mirror is a must for the .system dataset.
 

CraigD (Patron, joined Mar 8, 2016)

On the hardware side however it has been 12 years since I built my own system.

Nothing has greatly changed: still a motherboard, CPU, and RAM...

Starting out with the RAM maxed out is good thinking; it may become unobtainable. E.g., ECC DDR3 RAM is getting harder to find and costs 50% more than it did 6 months ago.

Have Fun
 

Ericloewe (Server Wrangler, Moderator)

Starting out with the RAM maxed out is good thinking; it may become unobtainable,
I wholeheartedly disagree.
E.g., ECC DDR3 RAM is getting harder to find and costs 50% more than it did 6 months ago
The only platform still using DDR3 is Avoton. All Xeons moved to DDR4 over a year ago, with Xeon E5 doing so over two years ago. It'll be at least five years before DDR4 starts to fade.
 

RichieB (Dabbler)

OTOH they consume two SATA ports. If you dropped them, you could possibly get away with the X11SSM-F and eight 8TB drives, giving you (8-2)*8=48TB of storage
That was my initial plan and would work for me.

(I hope you have your backup strategy sorted..).
Yep, all really important data is synced to another off-site NAS (hooray for FTTH).

Or get a two-slot NVMe M.2 PCIe card and a couple of Intel 600Ps (assuming the board can boot from those).
The cheapest NVMe M.2 PCIe card I found is the StarTech.com PEXM2SAT32N1 for €30, and the Intel 600P 128GB is €65. That puts it in the same price range as getting the X11SSL-CF. But it's definitely an option to consider.
Would the X11SSH-F be a good idea? It has 8x SATA and 1x M.2 ports.

BTW, the Corsair RM650x only comes with 8 SATA power connectors. The AX760 has 12 (and Platinum efficiency), but costs more. Or you can use splitters/adapters. Or get something like the FSP Hydro G 750W, if you don't mind how those 12 connectors are arranged on the cables.
Thanks for pointing that out. If I go with 10 HDDs + 2 SSDs I was thinking of getting a 750W unit anyway.
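As a sanity check on the PSU sizing, a back-of-envelope load estimate. The per-drive numbers are assumed typical datasheet-ish values for 3.5" NAS drives, not measurements:

```python
# Back-of-envelope PSU load estimate for 10 spinning drives.
# Assumptions: ~1.8A on the 12V rail per drive at spin-up, ~5W per
# drive at idle, ~60W for board + CPU + fans. Staggered spin-up,
# where the controller supports it, lowers the peak considerably.

HDD_SPINUP_12V_A = 1.8   # assumed spin-up current per drive (12V rail)
HDD_IDLE_W = 5.0         # assumed idle draw per drive
SYSTEM_BASE_W = 60.0     # assumed board + CPU + fans

drives = 10
spinup_peak_w = drives * HDD_SPINUP_12V_A * 12 + SYSTEM_BASE_W
idle_w = drives * HDD_IDLE_W + SYSTEM_BASE_W
print(f"spin-up peak ~{spinup_peak_w:.0f}W, idle ~{idle_w:.0f}W")
```

So even with all drives spinning up at once, the load is well within a 650-750W unit; the connector count is the real constraint.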

Also, since noise is an issue, how good is Node 804 with noise suppression, especially when the drive cage is fully loaded? Compared to FD Define series, for example?
Good point. The Node 804 is not silent at all, but I liked the form factor. Noise is more important to me, so I'll have to find another case. With 14 SATA ports I want a larger case anyway, so I can add 3 more drives I have lying around to play with ZFS. Any suggestions for a case that can hold 13 HDDs + 1 SSD?

Starting off with the RAM maxed out is dubious. However, 64GB is likely to be overkill, so the platform should be fine.
I tried to follow the 1GB of RAM per 1TB of RAIDZ rule. With a 40-48TB RAIDZ2 that put me over the 32GB alternative.

SSDs for boot are better, but they're not "100 bucks for a SAS controller" better. A single SSD is fine for boot; a mirror is a must for the .system dataset.
Ok, so you would boot from mirrored flash drives instead of SSDs to save €100? I'm not sure if that would come back to haunt me later.
Is putting the .system dataset on the RAIDZ2 pool bad practice? Or would putting it on mirrored SSD boot disks be better?
 
(Member, joined Dec 2, 2015)

Good point. The Node 804 is not silent at all, but I liked the form factor. Noise is more important to me so I'll have to find another case.
I've got a Node 804, and it is somewhat noisy if the fans are running at full speed. But it can be very quiet if you take the time to implement active fan speed control. Mine is quiet enough that it sat right beside my desk for many months and I never heard it.
 

Ericloewe (Server Wrangler, Moderator)

I tried to follow the 1GB of RAM per 1TB of RAIDZ rule. With a 40-48TB RAIDZ2 that put me over the 32GB alternative.
It's a rule of thumb. You'll almost certainly be fine with 32GB. If not, add the extra 32GB later.
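To spell the rule of thumb out as a quick calculation (the 8GB baseline is the FreeNAS minimum; beyond that, more RAM mostly just means a bigger ARC cache):

```python
# The "1GB of RAM per 1TB of storage" guideline, treated as what it is:
# a starting point, not a hard requirement.

def rule_of_thumb_ram_gb(raw_pool_tb: float, baseline_gb: float = 8.0) -> float:
    """Suggested RAM: the 8GB FreeNAS baseline, then ~1GB per raw TB."""
    return max(baseline_gb, raw_pool_tb)

print(rule_of_thumb_ram_gb(10 * 6))  # 10x 6TB raw -> 60, so 64GB fits the rule
print(rule_of_thumb_ram_gb(4))       # a tiny pool still wants the 8GB baseline
```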
Ok, so you would boot from mirrored flash drives instead of SSD to save €100?
My answer so far has been yes, but recent updates have been painful for USB flash drive users. Homework is required to choose appropriate USB drives.
Is putting the .system dataset on the RAIDZ2 pool bad practice?
No, it's perfectly reasonable.
Or would putting it on SSD mirrored bootdisks be better?
It's something I'd do on a money-is-not-a-problem server, just to have the main pool not be burdened with .system dataset writes taking up a few IOPS.
 

2nd-in-charge (Explorer)

Would the X11SSH-F be a good idea? It has 8x SATA and 1x M.2 ports.
It looks good to me.
Also, a single-slot NVMe card should be cheaper than 30 euros, e.g.
https://www.amazon.co.uk/dp/B01MTJOV3B

Any suggestions for a case that can hold 13 HDD + 1 SSD ?
Fractal Design Define R4/R5/R2-XL can hold 11 drives if you buy an additional 3-drive cage:
http://support.fractal-design.com/s...-be-able-to-fit-an-extra-hdd-cage-in-my-case-
After that you can buy a 3-bay or 5-bay cage (occupying 2 or 3 5.25" slots respectively).
http://www.newegg.com/global/au/Product/Product.aspx?Item=N82E16817994152
http://www.newegg.com/global/au/Product/Product.aspx?Item=N82E16817198058
That'll give you 14-16 3.5" drives :)

Also have a look at Nanoxia Deep Silence series.
 
Ericloewe (Server Wrangler, Moderator)

I find the X11SSH-F to be a rather useless product. The X11SSH-LN4F is mildly useful in some niche markets since it actually uses all available PCIe lanes, but the X11SSH-F is less versatile in every way than an X11SSM-F. For the price difference, you can easily get a PCIe card M.2 adapter and you get all four PCIe lanes, instead of the half-assed two lanes from the X11SSH-F.
 

RichieB (Dabbler)

@Ericloewe Thanks for pointing that out. Are you talking about the lanes to the PCIe slots? Or the internal motherboard wiring? What does this mean for performance? Edit: or you are referring to the C232 vs C236 chipset?

I'm looking at these prices:

X11SSH-F €260 (8x SATA, 1x M.2, 2x GbE, C236)
X11SSH-LN4F €280 (8x SATA, 1x M.2, 4x GbE, C236)
X11SSM-F €285 (8x SATA, 2x GbE, C236)
X11SSL-CF €350 (14x SATA, 2x GbE, C232)

Adding M.2 to the X11SSM-F would cost me about €26 (Delock PCI-E M.2 NGFF). That makes the total cost €311 for that solution.
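Summing up the options for comparison (just the prices quoted in this thread, in EUR; availability and shipping not included):

```python
# Totals for the motherboard/M.2/HBA combinations discussed above,
# using the prices quoted in this thread (EUR).
options = {
    "X11SSH-F (onboard M.2, C236)": 260,
    "X11SSH-LN4F (onboard M.2, 4x GbE)": 280,
    "X11SSM-F + Delock M.2 adapter": 285 + 26,
    "X11SSL-CF (14x SATA, C232)": 350,
    "X11SSM-F + Dell PERC H200 HBA": 285 + 150,
}
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"EUR {cost}: {name}")
```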

Is the Transcend MTS600 32GB M.2 (€30) sufficient as a boot disk?
 

Ericloewe (Server Wrangler, Moderator)

@Ericloewe Thanks for pointing that out. Are you talking about the lanes to the PCIe slots? Or the internal motherboard wiring? What does this mean for performance? Edit: or you are referring to the C232 vs C236 chipset?
The X11SSH-F has one fewer PCIe slot than the X11SSM-F. The four lanes are divided into two for the M.2 slot and two reserved for the two additional NICs on the X11SSH-LN4F.
 