Thinking about building my own NAS

LJ247

Cadet
Joined
Oct 10, 2023
Messages
6
Hi All,

I have been thinking about expanding my home network by adding another storage device.
I started looking at QNAP rack NAS devices, but going from 4 to 8 HDD bays the price gets to a level that I'd like to avoid if possible.
So why not build my own solution? It would be cheaper and I may learn something new :)

I'm not looking for a solution on a plate, but if I could get some steering on my questions, I'd be more than happy!

Currently I have an old desktop with 4x3TB HDD (RAID10), QNAP with 4x8TB (RAID5) and I'm looking for something that could store 50TB with possible expansion in the future.
I don't need any bells and whistles, just a simple storage where I can have dedicated datasets for specific tasks and that I can control user permissions.
It will be primarily used as a backup for a few machines, file sharing and maybe some movie/music library (more on that later).
It doesn't have to be a production solution, working 25hrs a day, running critical, life-supporting tasks.
I don't expect massive data transfers or a hundred users at the same time - it will be just a home NAS. I also have an off-site backup, so I don't need super redundancy.

I read a lot of posts on this community forum, read the hardware guide, learnt about ZFS, issues with SATA expanders, etc.
I have a rough idea what I'm going to achieve, but I still have some items that I'd like to clarify with experts.

Case:
- 2U/3U case, with 12/16 bays. Leaning towards 3U as it will give me more space inside for any future work, bigger/quieter fans, etc. Something like IPC 3U-3416, it has 4 backplanes with SFF-8087.

HDDs:
- I'm thinking of 4x20TB SATA (RAIDZ1), later add another 4 (I understand issues with pool expansions, have no problems with a separate pool), maybe even move the 4x3TB from desktop after a PC upgrade.
- I was considering getting 6 or 8 of smaller drives to utilise RAIDZ2 or 3 but I ditched the idea.

Motherboard - here is something I need advice on, should I get:
- a good desktop motherboard with 8x SATA - I've pulled back from this idea: it's not expandable, and most backplanes use SFF-8087, so I'd be looking at other cases or adapters? nah
- a good desktop motherboard with an LSI 9207-8i HBA. The 8 channels will be OK for now; I could get another 8i later if needed, or even a 16i and rearrange the disks, as TrueNAS doesn't care about disk location/controllers (I'm not talking about hardware RAID, just passthrough) according to the documentation.
- a cheap motherboard (but still branded, let's say ASUS or ASRock, not some super cheap knock-offs) and an LSI 9207-8i HBA?
- I know, having an Intel desktop motherboard and i5/i7 won't get me ECC, but do I really need it for home purposes?
- an Intel Xeon board? Supermicro?

CPU:
- PCIe lanes - having 2 LSI cards and other stuff, I can't go with some poor, slow, cheap CPU, so I thought i5/i7 would be a good choice. These are relatively inexpensive, but they have their own limitations, agreed.
- Xeon? Ryzen? What other options do I have? I can spend a little more than what a good desktop board + i7 would cost, but I don't want to go 2x or 3x the cost.

RAM:
- ECC or non-ECC (depending on the above) are roughly in similar price brackets, so 64 or 128GB is what I'd aim for.

Services running on top of TrueNAS SCALE (I'm more familiar with Debian), such as Jellyfin or Syncthing (I played with them a little in Oracle VirtualBox, pretending it was TrueNAS). Should I:
- use the Syncthing plugin? I've noticed limited configuration and everything was sitting in the same Jail. It is a quick and easy solution, but has its limitations.
- create a VM and install it there? Probably overkill to run just one item. It will use more resources than the plugin/jail but will offer more to tinker with.
- a dedicated Raspberry Pi running it and using the NAS as, well... a NAS :) It worked pretty well for a few weeks, I had great control over a dedicated network share just for this purpose, and it didn't touch anything on TrueNAS so it is still uncompromised. I understand the aspects of additional configuration, network IP reservation, etc.

Finally, something I didn't explore yet. TrueNAS encryption. How should I split and encrypt datasets for:
1) shared folders with:
- 'public' stuff - downloaded files, etc.
- 'private' stuff - photos, etc.
- 'sensitive' stuff - scanned documents, emails, etc. (access for only 2-3 users, but I know how to set up TrueNAS permissions)

2) backup folders for 3 devices:
- new dataset for each device (eg. /pool/backup_pc1/, /pool/backup_pc2/, /pool/backup_pc3/)
- one dataset for backup with each device as sub-dataset (eg. /pool/backup/pc1/, /pool/backup/pc2/, /pool/backup/pc3/)

Should that be pool encryption? or dataset? or inherit? All separate keys?
The 2nd option - I could use iSCSI, have it mounted as a volume in Windows and encrypt it using, for example, BitLocker.
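To make option 2 from above concrete, here's a rough Python sketch that just prints the `zfs create` commands for one encrypted parent dataset whose per-device children inherit the key (the pool name "tank" and passphrase keyformat are placeholders, not anything from this thread - adjust to your own layout):

```python
# Sketch of option 2: one encrypted parent dataset, per-device children
# that inherit its encryption. "tank" is a placeholder pool name.

def backup_dataset_commands(pool="tank", devices=("pc1", "pc2", "pc3")):
    """Return the zfs commands that would create an encrypted backup tree."""
    cmds = [
        # The parent carries the encryption settings: one key to manage.
        f"zfs create -o encryption=aes-256-gcm -o keyformat=passphrase "
        f"{pool}/backup"
    ]
    # Children inherit encryption (and the key) from the parent automatically.
    cmds += [f"zfs create {pool}/backup/{dev}" for dev in devices]
    return cmds

for cmd in backup_dataset_commands():
    print(cmd)
```

The appeal of this layout over option 1 is that there's a single key to load/unlock, while snapshots and replication can still be managed per device.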


Thanks in advance for any tips given!

Regards,
LJ
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I would personally NEVER have a RAIDZ1 with 20 TB drives.

As to the board, I would go for a used Supermicro X10 or X11, depending on what you can find.
 

LJ247

Cadet
Joined
Oct 10, 2023
Messages
6
Thanks for the X10/X11, ChrisRJ, but could you explain why no RAIDZ1 on 20TB? Is it performance while scrubbing? CPU hit on restoring?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Data safety. Only 1 drive as "reserve" is considered too little for this capacity.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I also have an off-site backup, so I don't need super redundancy.
You call the shots… but, in view of the expected level of pain to restore 50+ TB from offsite backup, I'd want some quite good resilience on-site. Meaning: no raidz1, no 2-way mirror.

Case:
- 2U/3U case, with 12/16 bays. Leaning towards 3U as it will give me more space inside for any future work, bigger/quieter fans, etc. Something like IPC 3U-3416, it has 4 backplanes with SFF-8087.
12 bays could be 2 * 6-wide Z2. 16 bays, 2 * 8-wide Z2, 2 * 7-wide Z2 + 2 spares/free slots for replacements, or even 3 * 5-wide + 1 spare.
Four separate backplanes means these are of the simplest kind, without SAS expanders.
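Rough data capacity of those layouts, assuming 20 TB drives (a Python back-of-envelope; it ignores ZFS metadata and padding overhead, so real figures run somewhat lower):

```python
# Rough usable-capacity comparison of the raidz2 layouts above,
# assuming 20 TB drives. Ignores ZFS metadata/padding overhead.

def usable_tb(vdevs, width, parity, drive_tb=20):
    """Data capacity in TB: each raidz vdev loses `parity` drives to parity."""
    return vdevs * (width - parity) * drive_tb

layouts = {
    "2 x 6-wide Z2": usable_tb(vdevs=2, width=6, parity=2),  # 160 TB
    "2 x 8-wide Z2": usable_tb(vdevs=2, width=8, parity=2),  # 240 TB
    "2 x 7-wide Z2": usable_tb(vdevs=2, width=7, parity=2),  # 200 TB
    "3 x 5-wide Z2": usable_tb(vdevs=3, width=5, parity=2),  # 180 TB
}
for name, tb in layouts.items():
    print(f"{name}: ~{tb} TB of data capacity")
```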

- I was considering getting 6 or 8 of smaller drives to utilise RAIDZ2 or 3 but I ditched the idea.
Why "smaller"? Except for the cost, there's no drawback to overshooting with 6*20 TB (Z2), 80 TB raw / 64 TB usable as a start.

Motherboard - here is something I need advice on,
On-board SATA ports are usable with reverse breakout cables to SFF-8087. So, for up to 16 bays, your options are:
- 8 SATA, and a -8i HBA when needed;
- two -8i HBAs;
- single -16i HBA.
No major issue here, even if you also want 4-8 lanes for a 10 GbE NIC.
- I know, having intel desktop motherboard and i5/i7 won't get me the ECC, but do I really need it for home purposes?
i5/i7 do not have more PCIe lanes than i3. So why go for a desktop motherboard and i5/i7 over a server C2x6 motherboard and Core i3/Xeon E with ECC? If you are buying the hardware, you may as well "do it by the textbook".
At the very least, consider Ryzen + ECC.
For your limited needs in extra services, embedded boards with Xeon D-1500 (or even Atom C3000 for 12 bays) are also possible options, and I suspect you may not have counted those under "Xeon".

Services running on top of TrueNAS SCALE (I'm more familiar with Debian),
That's an argument of little value for an appliance OS. SCALE is not a "Debian distro".
- use the Syncthing plugin? I've noticed limited configuration and everything was sitting in the same Jail. It is a quick and easy solution, but has its limitations.
"Plugin" or "jail" are CORE terminology and plugins are deprecated; go for jails.
SCALE has "charts", which are containers running under Kubernetes—not Docker. So the question becomes: Does a chart from the iX or TrueCharts repository do what you want?
- create a VM and install it there? Probably an overkill to run just one item. It will use more resources than the plugin/Jail but will offer more to tinker with.
- dedicated RaspberryPi running it and using the NAS as, well... a NAS :) It worked pretty well for few weeks, had great control about a dedicated network share just for this purpose, didn't touch anything on TrueNAS so it is still uncompromised. I understand aspects of additional configuration, network IP reservation, etc.
If you're able and willing to tinker, it looks like these last two options would actually be very valid for you, including under CORE (possibly with "only" 64 GB RAM where 128 GB would be more like it under SCALE).
 

LJ247

Cadet
Joined
Oct 10, 2023
Messages
6
Thanks Etorix, I will take your advice into consideration!

12 bays could be 2 * 6-wide Z2. 16 bays, 2 * 8-wide Z2, 2 * 7-wide Z2 + 2 spares/free slots for replacements, or even 3 * 5-wide + 1 spare.
Four separate backplanes means these are of the simplest kind, without SAS expanders.
I meant 12/16 as a way of future expansion. Due to cost I don't expect to fill all bays at this time. So the 4x 20TB would be the minimum (going for 6x 20TB probably isn't an option now; going for 6 smaller ones is, but at the moment 20TB is cheaper in price per TB than 12 or 14TB, for example - that was the reason). Also, if I went with the new 4/6/8 and the old 4x8TB, then I could fill the bays quickly.
And I'm doing this as a hobbyist, I don't want to replace (or add) a new NAS in 3 years time :)

On-board SATA ports are usable with reverse breakout cables to SFF-8087. So, for up to 16 bays, your options are:
No major issue here, even you also want 4-8 lanes for a 10 GbE NIC.
True, yeah.
Forgot to ask about cache - for lightweight usage and plenty of RAM I don't think it is crucial, but going with a desktop motherboard I could have one M.2 SSD as a boot drive and the other for L2ARC.

For your limited needs in extra services, embedded boards with Xeon D-1500 (or even Atom C3000 for 12 bays) are also possible options.
That's a fair point.

Thanks again!
Regards
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Will comment in a while regarding hardware, but unless you have valid reasons to do so it's not worth using ZFS encryption.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I meant 12/16 in a way of future expansion. Due to cost I don't expect to fill all bays at this time.
I understood it that way. But you don't have to fill all bays at start. You may begin with a single vdev, and later add a further vdev to the pool when you need the space / have the money for more drives.
But with raidz# you cannot change the width of vdevs after the fact. So you must decide on the path for future expansion, and design the first (and at the time only) vdev accordingly. 4-wide locks you into a 3*4w or 4*4w future: Quite safe, needing batches of "only" four drives to expand, but not as space-efficient as 6-8-wide (my sweet spot).

Forgot to ask about cache - for lightweight usage and plenty of RAM I don't think it is crucial,
Build the NAS first. Use it for some time, and then look at arc_summary. This will tell you whether you would benefit from L2ARC at all—and if so, you may add it at any time.
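If you want a single number without reading the whole arc_summary report: on Linux the raw ARC counters live in /proc/spl/kstat/zfs/arcstats, and a short script can turn them into a hit ratio. The parsing below is a sketch against that kstat format (arc_summary remains the authoritative tool); a consistently high ratio suggests L2ARC would add little.

```python
# Quick-and-dirty ARC hit-ratio check, as a companion to arc_summary.
# Parses "name type data" lines in the arcstats kstat format.

def arc_hit_ratio(arcstats_text):
    """Return the ARC hit ratio in percent from arcstats-style text."""
    stats = {}
    for line in arcstats_text.splitlines():
        parts = line.split()
        # Data lines look like: "hits    4    9500"
        if len(parts) == 3 and parts[2].isdigit():
            stats[parts[0]] = int(parts[2])
    hits, misses = stats.get("hits", 0), stats.get("misses", 0)
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# On a real system you would read the kstat file instead of this sample:
sample = "hits 4 9500\nmisses 4 500"
print(f"ARC hit ratio: {arc_hit_ratio(sample):.1f}%")  # 95.0% for the sample
```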
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I'm looking for something that could store 50TB with possible expansion in the future.
I'm not clear if you need 50TB now with space for further expansion, or further expansion up to 50TB; be aware that 4x 20TB drives in RAIDZ1 give you around 47.7 TB of usable space (at 85% pool usage).
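For reference, the back-of-envelope arithmetic behind that kind of figure (a Python sketch; it ignores ZFS allocation/metadata overhead, which is why it only lands in the same ballpark as the ~47.7 TB quoted):

```python
# Back-of-envelope for 4x 20 TB in RAIDZ1. ZFS allocation/metadata
# overhead is ignored, so real usable space comes out a bit different.

drives, parity, drive_tb = 4, 1, 20
raw_tb = drives * drive_tb               # 80 TB raw
data_tb = (drives - parity) * drive_tb   # 60 TB after parity
usable_at_85 = data_tb * 0.85            # keep 15% free: 51 TB
data_tib = data_tb * 1e12 / 2**40        # ~54.6 TiB as the OS reports it

print(f"raw: {raw_tb} TB, after parity: {data_tb} TB")
print(f"at 85% usage: {usable_at_85:.1f} TB (~{data_tib * 0.85:.1f} TiB)")
```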

I don't need any bells and whistles, just a simple storage where I can have dedicated datasets for specific tasks and that I can control user permissions.
It will be primarily used as a backup for few machines, file sharing and maybe some movie/music library (more on that later).
It doesn't have to be a production solution, working 25hrs a day, running critical, life-supporting tasks.
I don't expect massive data transfers or hundred users at the same time - it will be just a home NAS. I also have an off-site backup, so I don't need super redundancy.

CORE is for you then, and 16GB of RAM will be enough assuming a 1Gbps LAN speed.
If you go RAIDZ1 with 20TB or similarly sized drives (your call), I suggest at least using ones with a 1e-15 URE value (so NOT the WD RED line); I would suggest a different layout, but you know your budget and your data.

I would not give up ECC RAM at all, especially given the cost of your drives: i3s, Pentiums, and Atoms are your best shots in terms of CPUs. Do not be afraid of the used market for any of these components (you only want drives and PSU brand new).

As I wrote, do not go for encryption unless you have valid reasons to do so (which are generally rare for home users): the price you pay in terms of performance and configuration hassle/potential issues is too great compared to the benefits, especially with low-power CPUs.

EDIT: if you require hw transcoding speak now or forever hold your peace.
 

Patrick_3000

Contributor
Joined
Apr 28, 2021
Messages
167
Motherboard - here is something I need advice on, should I get:
- good desktop motherboard with 8x SATA - pulled out from this idea, not expandable, most backplane support SFF8087 so looking for other cases or using adapters? nah
- good desktop motherboard with LSI 9207-8i HBA. The 8 channels will be ok now, I could get another 8i later if needed, or even get 16 and rearrange the disks as TrueNAS doesn't care about disk location/controllers (I'm not talking about hardware RAID, just a passthrough) according to the documentation.
- cheap motherboard (but still branded, let's say ASUS or ASRock, not some super cheap knock-offs) and LSI 9207-8i HBA?
- I know, having intel desktop motherboard and i5/i7 won't get me the ECC, but do I really need it for home purposes?
- intel xeon board? supermicro?
Perhaps the easiest and cheapest way to get a motherboard and CPU that support ECC is to go with an ASRock Rack motherboard that supports Ryzen CPUs, either the x570d4u-2L2T, x570d4u, x470d4u-2L2T, x470d4u, or x570d4i-2T.

Those boards range in price from around $200 to $500, depending on the model. You can pair it with a used Ryzen 3, 5, or 7 "Pro" series CPU that you can pick up used for around $100, and you've got a setup that not only will accommodate ECC but also has IPMI for remote management, which is a nice bonus.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Well, a used Supermicro board is probably cheaper. I do not say that Ryzen is a bad choice. But in the end it depends on what is available on the local market.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
I would just add given the use case from the OP, highly unlikely l2arc would provide any benefit at all.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I would just add given the use case from the OP, highly unlikely l2arc would provide any benefit at all.
As usual maxing out RAM would be better.
 

sfatula

Guru
Joined
Jul 5, 2022
Messages
608
But his use case is so light that 128GB is way overkill. Backing up, maybe a media center, and maybe Syncthing - very light usage. Unless there are other uses not mentioned, I wouldn't even consider it; it could always be added later. Not even sure I'd spend for 64GB myself.
 

StrangeOne

Dabbler
Joined
Sep 7, 2023
Messages
12
Lots to digest.

Let's be honest here. What is your budget?

Let's run the consumer route here:

CPU - you don't need much for what your applications call for. But if we are talking about expandability, PCIe lanes matter. AM5 has 28 PCIe lanes, which could be more than enough if you want to run an HBA and maybe add in a NIC down the road. 13th-gen Intel offers what, 16-20 lanes? Not much room there.

RAM - ECC or no ECC. AM5 I believe officially supports ECC? I know AM4 didn't officially support it, but it does work as long as the motherboard does. As for how much: usually it's recommended to have 1GB of RAM per 1TB of storage.

Motherboard - the sky is the limit. Do you want IPMI? Bifurcation? Or do you just need some SATA ports and that's it?

Case - for your applications, do you plan to put anything beefy in it? I don't see a 3U as beneficial, and it still limits things. Either do 2U or 4U; the in-between will be a regret. At least for me it was. Tons of cheap 2U cases out there. I'm biased towards Supermicro chassis. They are the industry standard and work with pretty much all consumer hardware, so no need to worry about compatibility. And for the price, pretty dang cheap.

Now let's get into the enterprise route:

I love PCIe lanes, so of course I shall mention the EPYC gen 1/2 lineup. You can pick up an H11SSL-i, EPYC 7551P, 128GB of DDR4 RAM and a CSE-826 2U case all for under $600-700. I mention the 7551P just because it's abundant and cheap, but a bit overkill. Any other model will work, plus it brings power draw down.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
But his use case is so light, 128gb way overkill.
Depending on which CPU he chooses, 64GB might be the limit; for storage needs anything more than 16GB is likely overkill, including L2ARC.

@StrangeOne all your CPU recommendations are massively overkill.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Lets be honest here. What is your budget?
That, and clarifying the requirements for network speed (10 GbE?), number of bays and possible additional drives (SSD pool?).

On the consumer(ish) side of things, with UDIMM:
Anything with DDR5 is a no-go due to cost—and lack of proper scheduling support on the Intel side.
The choice is between AM4 Ryzen and Coffee Lake i3/Xeon E-2000 on a C246 motherboard (more likely AsRockRack than Supermicro). C256+Xeon E-2300 is likely to go over budget for no benefit.

On the RDIMM side of things:
EPYC or Xeon Scalable would be massive overkill, but may be considered if one finds the right bargain.
A good old Xeon E5, or the odd X11SRL-F with a Xeon W-2000.
Or embedded Xeon D-1500. Mini-ITX boards would fit 12 bays with a -8i HBA (6+8); 16 bays would require a -16i HBA. Flex-ATX boards allow two -8i HBAs (10 GbE onboard if required); -7TPnF models have the HBA onboard.

Depending on which CPU he chooses, 64GB might be the limit; for storage needs anything more than 16GB is likely overkill, L2ARC including.
50 TB of storage, plus future expansion… I'd recommend at least 32 GB as a start.
 

LJ247

Cadet
Joined
Oct 10, 2023
Messages
6
Thanks a mill for all the replies! Really appreciate them!

I'd like to get within $/£/€3000 (including the drives).

20TB disks (new) are approximately 350; they have the best price/TB so far, so this is about half of the budget.

So for 1500, I could have roughly:
- 4x 20TB (80TB raw, but 50TB in RAIDZ1 or 40TB in mirror)
- 6x 12TB (72TB raw, almost 50TB in RAIDZ2)
- 8x 8TB (64TB raw, 44TB in RAIDZ2 or 35TB in RAIDZ3)

As mentioned previously (and as Etorix asked about SSDs), potential additions could be:
- 2 SSDs for a small (but fast) pool
- 4x 8TB from my current QNAP NAS (I'd probably keep it as a separate device, maybe move it somewhere else, so I have a separate device for backups)
- 4x 4TB from my current desktop PC
That's why I mentioned the 12 (or 16) bay as a prep for future expansions.

As StrangeOne said: "H11SSL-i, EPYC 7551P, 128GB DDR4 RAM and a CSE-826 2U case all for under $600-700".

That would leave me with the rest (800) for:
- a power supply
- 10GbE NIC (I have UDM Pro SE as well as USW-Pro-48-POE that has 4 SFP ports, so 10GbE would be a nice feature)
- HBA (LSI 9207-8i - maybe 16i or another 8i in the future if needed).
- any other miscellaneous items, cables, mounts, fans, etc.

The issue with 'old' hardware I have is:
- if you buy used server equipment (from a dismantler, recycling centre or some random seller from China), there is a risk that something won't work (even if the board itself is better quality and has a better lifespan than a desktop motherboard), and any returns are painful. I don't have any contacts in this area who could lend me a board for a month so I could buy it afterwards if I'm happy with it. With new desktop boards there is a warranty that makes my life easier. New server stuff is expensive.
- power consumption - having an old EPYC CPU, an older board, or 8 HDDs brings the power draw up. I don't need a CPU drawing 100W at idle for 90% of the day, but I don't need anything super low-power either. I know, I could build it on a Raspberry Pi, running off a phone charger with 4 disks connected over USB :) But I'd like to do it 'properly', just not at enterprise level, for home use (and learn something new).

Thanks again for all your input!
 

StrangeOne

Dabbler
Joined
Sep 7, 2023
Messages
12
That, and clarifying the requirements for network speed (10 GbE?), number of bays and possible additional drives (SSD pool?).

On the consumer(ish) side of things, with UDIMM:
Anything with DDR5 is a no-go due to cost—and lack of proper scheduling support on the Intel side.
The choice is between AM4 Ryzen and Coffee Lake i3/Xeon E-2000 on a C246 motherboard (more likely AsRockRack than Supermicro). C256+Xeon E-2300 is likely to go over budget for no benefit.

On the RDIMM side of things:
EPYC or Xeon Scalable would be massive overkill, but may be considered if one finds the right bargain.
A good old Xeon E5, or the odd X11SRL-F with a Xeon W-2000.
Or embedded Xeon D-1500. Mini-ITX boards would fit 12 bays with a -8i HBA (6+8); 16 bays would require a -16i HBA. Flex-ATX boards allow two -8i HBAs (10 GbE onboard if required); -7TPnF models have the HBA onboard.


50 TB of storage, plus future expansion… I'd recommend at least 32 GB as a start.
It's not overkill if you want to run all-NVMe/SSD storage like me lol. I need those PCIe lanes!!!

Agreed on the above. Solid choices.
 