SuperMicro 24 x 3TB build for video


shrödinger

Cadet
Joined
Nov 23, 2018
Messages
8
Hello!

I'm planning my first FreeNAS build, for professional use, as a replacement for my current internal hardware RAID (8 x 3 TB in RAID10).
Working in the video post-production industry, my needs lean towards quite large files (30-40 GB), many high-speed sequential reads, and a few high-speed sequential writes.
The NAS will only be used to store the video files used in the edits I'm working on, and the video files I'm rendering when I'm done. I may also store on it the cache files used by the software. No virtualization, no Plex, no plugins. If I find plugins or scripts that may be useful, I'll just run them on a server accessing the NAS.

The NAS will serve the following clients (around 8 to 12 hours a day, using Samba) :
Main workstation (custom build running Windows 10 - 90% of the time)
Render workstation (Mac Pro 3,1 - 10% of the time)
Laptop (MacBook Pro 15" 2015 - 10% of the time)
Projects server (E200-8D running CentOS - 10% of the time)

Here's my part list :
Case : SuperMicro SC846E16
MoBo : SuperMicro X10SRL-F
CPU : Intel Xeon E5-1650 v4
RAM : 4 x 16 GB Crucial DDR4 2400 ECC Registered
HBA : LSI 9201-16i (instead of two 8-port HBAs, to save PCIe lanes)
NIC : Chelsio T520-CR (probably aggregated)
Switch : QNAP QSW-804-4C (clients will have 10GBASE-T NICs)
Boot : Kingston A400 SSD (already owned, in an additional fixed internal 2.5" bay)
Main drives (for source files) :
8 x Seagate SV35 3TB 7200RPM (already owned)
16 x WD Red 3TB 5400RPM (8 already owned, 8 to buy)
Render drives : 4 x Samsung 970EVO 500GB M.2 in a PCIe adapter
Cache drives : 4 x Samsung 970EVO 500GB M.2 in a PCIe adapter

On the setup side, here's my plan :
Main Zpool : 4 x RAIDZ2 vdevs, 6 x 3TB drives in each = ~36TB usable, given the 80% max capacity (rough arithmetic in the sketch below)
Render Zpool : 4 x 500GB striped Vdevs = ~2TB
Cache Zpool : 4 x 500GB striped Vdevs = ~2TB
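Here's the rough arithmetic behind the ~36TB figure, as a quick Python sketch (it assumes a "3 TB" drive is ~2.73 TiB and ignores ZFS metadata/padding overhead, so the real numbers will land a bit lower) :

```python
# Rough usable capacity for the main pool: 4 x RAIDZ2 vdevs of 6 x 3 TB
# drives, filled to at most 80%. Ignores ZFS metadata/padding overhead.

TIB_PER_3TB_DRIVE = 3e12 / 2**40          # a "3 TB" drive is ~2.73 TiB

def usable_tib(vdevs, drives_per_vdev, parity=2, max_fill=0.8):
    data_drives = vdevs * (drives_per_vdev - parity)
    return data_drives * TIB_PER_3TB_DRIVE * max_fill

print(round(usable_tib(4, 6, max_fill=1.0), 1))   # ~43.7 TiB before the 80% rule
print(round(usable_tib(4, 6), 1))                 # ~34.9 TiB with the 80% rule
```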

Render and cache Zpools won't need redundancy, as data won't stay on them for long (once a video file is rendered, I'll move it to the main volume for archiving, and the cache will be erased each day). The goal with the multiple Zpools is to never read and write simultaneously on the same volume.

I have some questions regarding this list :
- I read that mixing 7200RPM and 5400RPM drives won't be an issue (all will run at the speed of the slowest one). Is that true?

- Other forum info about the SC846E16 backplane (SC846EL1) says that only two 4-port SAS cables are needed to connect it to the HBA. However, the backplane datasheet (p. 9) shows three SAS connectors likely going to the HBA. Will an LSI 9207-8i be enough?

- Would you have specific advice regarding M.2 drives in a PCIe adapter on this kind of build? I couldn't find much.

- According to this list, do you think that I'll be able to saturate 10GbE with sequential reads/writes of large files on a single client? (rough numbers in the sketch after this list)

- Do you think adding a SLOG or L2ARC would be useful? Writes are not critical as I'll always be able to restart them (I'll mostly write from USB drives, clients' internal drives, or render from one Zpool to another), and I'd like to configure the main Zpool to be fast enough that it won't be the bottleneck.
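To put rough numbers on the 10GbE question above, here's the back-of-envelope I'm working from (all assumed figures: ~150 MB/s sustained per drive, 4 data drives per RAIDZ2 vdev, and roughly 1.1 GB/s of real-world payload on 10GbE after protocol overhead) :

```python
# Back-of-envelope: can 4 x RAIDZ2(6) outrun one 10GbE link on large
# sequential reads? Per-drive and network figures below are assumptions.

DRIVE_MBPS = 150              # assumed sustained sequential speed per drive
VDEVS = 4
DATA_DRIVES_PER_VDEV = 4      # 6-wide RAIDZ2 = 4 data + 2 parity
TEN_GBE_PAYLOAD_MBPS = 1100   # ~10 Gb/s minus TCP/SMB overhead

pool_streaming_mbps = VDEVS * DATA_DRIVES_PER_VDEV * DRIVE_MBPS
print(pool_streaming_mbps)                         # ~2400 MB/s off the platters
print(pool_streaming_mbps > TEN_GBE_PAYLOAD_MBPS)  # True: disks shouldn't be the limit
```

On paper the platters shouldn't be the limit; in practice I expect the single SMB stream and fragmentation to matter more, which is why I'm asking.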

Many thanks for reading, feel free to throw any thought you might have!
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Hello!

I have some questions regarding this list :
- I read that mixing 7200RPM and 5400RPM drives won't be an issue (all will run at the speed of the slowest one). Is that true?
True and not true.
ZFS accesses each drive individually, so each drive will be maxed out on its own (limited by either throughput or latency, depending on how busy the drive is).
The overall vdev performance, on the other hand, will be limited by its slowest drive.
In a RAIDZ1, RAIDZ2 or RAIDZ3 vdev, the drives being accessed have to provide the raw data as well as the parity blocks. I believe ZFS validates the data and parity blocks on the fly, and any mismatch is repaired from the remaining redundancy.
In a nutshell, mixing drives will not cause compatibility issues.
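A toy model of what I mean by the vdev being limited by its slowest member (the throughput numbers are guesses, e.g. ~180 MB/s for a 7200 RPM drive and ~150 MB/s for a 5400 RPM one on sequential reads):

```python
# Toy model: on a full-stripe sequential read, a RAIDZ vdev moves data at
# roughly (number of data drives) x (speed of its slowest member), since
# ZFS has to wait for every drive in the stripe. Speeds below are guesses.

def vdev_stream_mbps(member_speeds_mbps, parity=2):
    data_drives = len(member_speeds_mbps) - parity
    return data_drives * min(member_speeds_mbps)

print(vdev_stream_mbps([180] * 6))                       # all 7200 RPM: ~720 MB/s
print(vdev_stream_mbps([180, 180, 150, 150, 150, 150]))  # mixed vdev:   ~600 MB/s
```

So mixing them mostly costs you the gap between the two drive speeds, nothing worse than that.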

- Other forum info about the SC846E16 backplane (SC846EL1) says that only two 4-port SAS cables are needed to connect it to the HBA. However, the backplane datasheet (p. 9) shows three SAS connectors likely going to the HBA. Will an LSI 9207-8i be enough?
The datasheet seems to indicate that the minimum connection requirement is one SAS port (single-port configuration); you only need more SAS ports if you want to connect redundant arrays through a failover SAS expander.
I think you should be fine with the LSI 9207-8i.
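For the bandwidth side of that question, here's a rough check (assuming SAS2 at ~600 MB/s usable per lane, 4 lanes per SFF-8087 cable, and ~150 MB/s per spinning drive):

```python
# Rough check of the expander uplink: SAS2 is ~600 MB/s usable per lane after
# 8b/10b encoding, and each SFF-8087 cable carries 4 lanes. The per-drive
# figure is an assumption, not a measurement.

MBPS_PER_SAS2_LANE = 600
LANES_PER_CABLE = 4
DRIVE_MBPS = 150
DRIVES = 24

single_link = 1 * LANES_PER_CABLE * MBPS_PER_SAS2_LANE   # ~2400 MB/s
dual_link = 2 * LANES_PER_CABLE * MBPS_PER_SAS2_LANE     # ~4800 MB/s (both HBA ports)
all_drives_streaming = DRIVES * DRIVE_MBPS               # ~3600 MB/s worst case

print(single_link, dual_link, all_drives_streaming)
```

A single cable already carries more than one 10GbE client can pull, and if the backplane accepts a second uplink cable, both ports of the 9207-8i would cover even the everything-streaming-at-once case.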

- According to this list, do you think that I'll be able to saturate 10GbE with sequential reads/writes of large files on a single client?
I can't talk about this subject; however, would it make more sense to change to 5 RAIDZ2 vdevs with 5 disks each? You will need 1 more HDD.

- Do you think adding a SLOG or L2ARC would be useful? Writes are not critical as I'll always be able to restart them (I'll mostly write from USB drives, clients' internal drives, or render from one Zpool to another), and I'd like to configure the main Zpool to be fast enough that it won't be the bottleneck.
You may want to increase the RAM size, or at least leave enough slots available to add more later on.
If you go with registered memory, why stick with 16GB sticks instead of going to 32GB?
 

shrödinger

Cadet
Joined
Nov 23, 2018
Messages
8
Thanks for your answers, Apollo! I'm glad that mixing RPM speeds won't cause compatibility issues, and that the LSI 9207-8i will be enough.

would it make more sense to change to 5 RAIDZ2 vdevs with 5 disks each? You will need 1 more HDD.
I hadn't thought about doing that. I can see the data safety going up with this route, although at a small usable-space cost (48TB vs 45TB total). Do you know if this configuration could improve performance? That would be the selling point for me.

You may want to increase the RAM size, or at least leave enough slots available to add more later on.
If you go with registered memory, why stick with 16GB sticks instead of going to 32GB?
That's a fair point indeed. Given the 8 RAM slots on the motherboard, I figured I could still upgrade to 128GB total with 4 additional 16GB sticks if I need more in the future. Needing 256GB seems pretty unlikely, and 16GB sticks are also a bit cheaper than 32GB right now.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I'd do some more reading on the SV35s before mixing them with your WD Reds in your data pool. They are optimized for use in a video surveillance system, not for NAS systems.

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Thanks for your answers, Apollo! I'm glad that mixing RPM speeds won't cause compatibility issues, and that the LSI 9207-8i will be enough.


I hadn't thought about doing that. I can see the data safety going up with this route, although at a small usable-space cost (48TB vs 45TB total). Do you know if this configuration could improve performance? That would be the selling point for me.
Increasing the number of vdevs doesn't really increase data safety. It will increase your IO.
Having too many vdevs in a single pool can sometimes be problematic, especially if you are planning on having encrypted pools. This is more of a risk when updating to a newer FreeNAS release.
 

shrödinger

Cadet
Joined
Nov 23, 2018
Messages
8
I'd do some more reading on the SV35s before mixing them with your WD Reds in your data pool. They are optimized for use in a video surveillance system, not for NAS systems.
Thanks for the caution. As they have been doing fine in a hardware RAID10 for 3 years now, can I assume I'll be OK with them under FreeNAS?

Increasing the number of vdevs doesn't really increase data safety. It will increase your IO.
Having too many vdevs in a single pool can sometimes be problematic, especially if you are planning on having encrypted pools. This is more of a risk when updating to a newer FreeNAS release.
Oh right, I hadn't looked at it that way for the safety part. I don't plan to use encryption on the pool.
About the IO gain, since I'm only looking for large sequential read and write performance, and per this guide, it seems like I should go the fewer-and-wider vdevs route?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
About the IO gain, since I'm only looking for large sequential read and write performance, and per this guide, it seems like I should go the fewer-and-wider vdevs route?
I used to have my setup as a single vdev, and listing files in Windows with File Explorer wasn't really that snappy. Transitioning to 2 vdevs made a huge difference.
I am not really sure that having lots of vdevs really affects throughput on large files. It just requires more drives to maintain a similar capacity.
Having larger vdevs will increase scrub time, and it wouldn't be uncommon for a scrub to take a few days to complete.
I think 2 vdevs should be the minimum and 3 would be a good starting point.
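For what it's worth, here is how the usual ways of splitting 24 bays into RAIDZ2 vdevs compare on paper (toy numbers: 3 TB drives at ~150 MB/s each, ignoring the 80% rule and metadata overhead):

```python
# Comparing common 24-bay RAIDZ2 splits. Streaming throughput scales roughly
# with the number of data drives, random IOPS roughly with the number of
# vdevs. Per-drive figures (3 TB, ~150 MB/s) are assumptions.

DRIVE_TB, DRIVE_MBPS, PARITY = 3, 150, 2

for vdevs, width in [(2, 12), (3, 8), (4, 6)]:
    data_drives = vdevs * (width - PARITY)
    print(f"{vdevs} x RAIDZ2({width}): {data_drives * DRIVE_TB} TB of raw data space, "
          f"~{data_drives * DRIVE_MBPS} MB/s streaming, {vdevs} vdevs worth of IOPS")
```

Scrub time and resilver exposure grow with vdev width, which is the other side of that trade-off.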
 

shrödinger

Cadet
Joined
Nov 23, 2018
Messages
8
Thanks for sharing your experience on this subject. I'll take the time to test various setups once the build is complete, before putting it into production.
 

shrödinger

Cadet
Joined
Nov 23, 2018
Messages
8
After looking into M.2 PCIe adapters, they seem to be quite an experimental choice. I'm afraid of finding myself stuck with a full 24-bay chassis and no easy way to add small, fast volumes.

I'm wondering if going the 2066 route would be a safer bet. A SuperMicro X11SRA-RF would give me two M.2 ports and two U.2 ports, which would be enough for the cache and render volumes. I'm thinking about putting a Xeon W-2135 in it, which is faster than the E5-1650 v4 (3.70 GHz vs 3.60 GHz base, 4.50 GHz vs 4.00 GHz turbo) but has less cache (8.25 MB vs 15 MB).

I'm reading mixed things here about the 2066 platform, which is advertised more as a workstation platform than a server one. On the other hand, this board provides IPMI and ECC Registered RAM, which is what I'm supposed to look for in a server build.

Do you think it would be a good choice, or am I missing something?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Workstation and server terminology is a marketing game.
Server-grade boards target IT people; the manufacturer will not try to squeeze out every last bit of performance but rather stays within a more reliable (manageable) envelope (i.e. no fancy BIOS/UEFI settings, no audio...).
"Workstation" is a big word, most often applied to an overrated machine used for word processing or surfing the web. Joking aside, workstations usually target drafting people running SolidWorks and the like, who need serious graphics processing that the motherboard manufacturer does not provide.
For me, my home PC is my workstation; if anything goes wrong I have to fix it myself, but at least I can size each part individually to my liking.

If you look at the Z230 workstation from HP, the board itself isn't very spectacular.

A 6-core / 12-thread workstation isn't that significant by today's standards.
CPU frequency aside, driving several fast drives (NVMe and HDD) will require more CPU power, either as raw clock speed (higher frequency) or a higher number of cores.
I am not quite sure how ZFS will scale with NVMe.

I don't know if the SuperMicro X11SRA-RF is a good choice. IPMI and 512GB of LRDIMM ECC are interesting. Other than that, I would say it's not much of a step up.

The reason I believe this board is categorized as a workstation board is the lack of a decent number of SATA or SAS ports.

I think EPYC might be a better choice.
 

shrödinger

Cadet
Joined
Nov 23, 2018
Messages
8
Thanks again for this detailed explanation.

I don't know if the SuperMicro X11SRA-RF is a good choice. IPMI and 512GB of LRDIMM ECC are interesting. Other than that, I would say it's not much of a step up.

The reason I believe this board is categorized as a workstation board is the lack of a decent number of SATA or SAS ports.

The main draw I see in this board is the M.2 and U.2 ports, which look like a more robust choice than going the PCIe adapter route. I don't really need many SATA or SAS ports on the board, since all 24 HDDs will be connected through the HBA and the chassis itself doesn't allow fitting many more drives.

A 6-core / 12-thread workstation isn't that significant by today's standards.
CPU frequency aside, driving several fast drives (NVMe and HDD) will require more CPU power, either as raw clock speed (higher frequency) or a higher number of cores.
I am not quite sure how ZFS will scale with NVMe.

Do you think I should go with a more powerful CPU to drive the NVMe drives? I don't plan to max them out, since 10GbE will likely be the bottleneck for these volumes.

Since I want to share the volumes through SMB, which seems to be single-threaded for a single client connection, I thought it would be better to bias the cores/clock balance towards higher frequency and a lower core count.

The whole CPU/board question is the last issue I'm trying to figure out before ordering all the other parts, as a mistake on it would be quite painful to correct.
 
