For a very long time I have wanted to build a NAS. Originally I wanted a tiny machine in a Fractal Node 304 (mITX), but I kept hitting walls: on the AMD side, for example, mITX means a Ryzen CPU, which needs a dGPU, which would take the only PCIe slot, which I could not spare because I would not have enough SATA ports otherwise - and there are similar problems on the Intel side. To be completely honest, to this very day I am not decided on the OS, but I dislike FreeNAS the least (I want to use ZFS, and FreeNAS is the most ready for it out of the box; OMV requires the Proxmox kernel and an additional plugin... meh).
I want to build a NAS with 24TB of usable storage (well, more like 23.16TB usable), striped across 3 vdevs, each running 3 HDDs in RAIDZ1 (I really hope I got this right), ZFS all the way, with 32GB of ECC RAM. I will not mention prices on purpose; what I list below is more or less what I would like to go with, and there is still room for maneuvering, but right now it is already almost at $1500 (actually around 1450-1500€, but let's ignore the currency - prices are all over the place when comparing US and EU pricing, mostly in favor of the US).
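For reference, the layout described above would be created roughly like this - a sketch only; `tank` and the `da0`-`da8` device names are placeholders, and in practice you would use GPT labels or by-serial names instead:

```shell
# Placeholder pool/device names -- label your disks properly in practice.
# Three RAIDZ1 vdevs of three 4TB disks each, striped together:
zpool create tank \
  raidz1 da0 da1 da2 \
  raidz1 da3 da4 da5 \
  raidz1 da6 da7 da8

# Each RAIDZ1 vdev contributes two disks of data plus one of parity,
# so raw usable space is about 3 x 2 x 4TB = 24TB, minus ZFS
# metadata and reservation overhead.
zpool status tank
```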
I am playing around with two build ideas so far:
0. Components common for both below:
CASE: Fractal Define R6
PSU: Corsair RM650x (+ one extra power SATA cable from my RM750x in my desktop)
CPU Cooler: SilentiumPC Fera 3 (you have probably never heard of it - it is our little EU secret, better than any CM Hyper 212-series cooler for just $25; I love that little beast)
HDD: 7x WD RED 4TB (WD40EFRX) - I thought about this a lot, but the server will be in my office/bedroom, so I cannot risk 7200RPM drives (though at the same time I hate my decision: here, 6TB WD HC310 drives cost only about 70% more than the 4TB WD RED, which IMO is justifiable for 50% extra capacity and much better speed and reliability specs).
1. Building a "simple" NAS-only NAS, just a storage, no Plex, no VM. With this one I would minimize costs where possible (in this case on CPU), I was considering this build:
MB: Gigabyte C246-WU4
CPU: Intel Pentium G5400 (I believe a 2C/4T CPU is plenty for a "simple" NAS)
RAM: 32GB, 2x Crucial CT16G4WFD8266 (QVL'd)
A short explanation: 10 SATA ports and a workstation board. I was originally looking at some Supermicro boards here, such as the MBD-X11SCL-F, but if I can avoid yet another add-in card or extra component, I'll take it - and since the G5400 has an iGPU, IPMI is not very significant for me. What's more, it is significantly cheaper than that Supermicro board.
2. Building a NAS with one VM, which I would use for ffmpeg transcoding. The VM will probably be some minimal CLI Debian or Ubuntu; I will literally just run ffmpeg on it, mostly ripping series from Blu-rays, which I would then probably serve through Plex (no transcoding, just centralized delivery - I have not yet dived into Plex or Jellyfin (I prefer open-source solutions); there will be plenty of time for that later):
MB: ASRock X370 Taichi (although it is a consumer board, one reason I really like it is that it has 10 SATA3 ports; another is that it is an absolute monster in terms of VRM quality - stellar - which is great if I ever decide to go with a more powerful CPU)
CPU: AMD Ryzen 5 1600 AF (it is basically an R5 2600; I would upgrade in the future once the 4XXX series hits the market, whether that's a 4900X or some cheap offer on a 3900X)
GPU: literally whatever low-end, completely passively cooled NVIDIA or AMD card; this will merely provide video output
RAM: 32GB, 2x Samsung M393A2K40CB2-CTD (not QVL'd, unfortunately)
This one gets me going the most - I would love to have a dedicated transcoding machine outside my PC. I guess I am not asking that much from a virtualization and media streaming server, but what is your experience? Should I expect problems when running a VM? Any bad experiences with bhyve? Is that man page still valid - does bhyve really not support more than 16 vCPUs? (That could potentially be a real bummer.)
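I have not run bhyve myself for this, but on a plain FreeBSD host the VM setup with the vm-bhyve port would look roughly like the sketch below (the guest name, NIC name, and ISO filename are all made up; on FreeNAS itself you would do this through the UI instead):

```shell
# Assumes FreeBSD with vm-bhyve installed; names below are placeholders.
sysrc vm_enable="YES"
vm init

# Bridge the guest onto the physical NIC (replace igb0 with yours)
vm switch create public
vm switch add public igb0

# Create a Debian guest with a 20G disk, then adjust cpu/memory
# (e.g. cpu=2, memory=4G) via `vm configure ffmpegvm`
vm create -t debian -s 20G ffmpegvm
vm install ffmpegvm debian-12-netinst.iso
vm start ffmpegvm
```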
A couple of notes on one or both build ideas:
- I feel there will be suggestions for the second build that I should go Xeon, or just turn around and go with some older platform running a Xeon. No. I will be completely honest: I don't want to. Any new Xeon would set me back many hundreds of dollars - for what? Nothing; they don't deserve people's money. (I do wish there were some Ryzen-compatible AM4 server motherboards, though; right now those exist only for Epyc.)
- The reason I want to buy 7 HDDs is that I already have 2 WD RED 4TB drives. I want to first build 2 vdevs, get the pool up and running, copy all 7.5TB of data to the NAS, then zero the old drives, test them per the burn-in guidelines on the Hardware Guide site, and - if they are okay - add them plus the 7th new drive as the third vdev to bring the pool to full capacity.
- NICs are a bit of a problem. I plan on a 2.5/5GbE interconnect between my desktop and the NAS. My desktop has no PCIe slots left, so there I will need to go with a USB-to-Ethernet adapter: either the Club3D CAC-1420 (2.5GbE) or the QNAP QNA-UC5G1T (5GbE). For the server itself I could go with a PCIe solution, but anything I would consider acceptable (under $100, preferably new - not something from China supposedly pulled from a decommissioned server in who-knows-what real condition; I don't expect used cards at that price for at least the next 3-5 years) is 10GbE but fiber-only, and 2.5/5GbE PCIe NICs seem not to exist.
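The staged approach in the second note maps onto ZFS directly: vdevs can be appended to a striped pool at any time with `zpool add`, so no rebuild of the existing vdevs is involved. A sketch with placeholder device names:

```shell
# Phase 1: pool from six of the seven new disks (placeholder names)
zpool create tank \
  raidz1 da0 da1 da2 \
  raidz1 da3 da4 da5

# ...copy the 7.5TB over, then wipe and burn-in the two old REDs...

# Phase 2: stripe in the third vdev (old pair + the seventh new disk).
# Existing data stays where it is; ZFS simply favors the emptier vdev
# for new writes until the pool levels out.
zpool add tank raidz1 da6 da7 da8
```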
And as for other headaches...
RAM
If I manage to build the second option and run one VM for transcoding, my testing suggests I will need to assign 4GB of RAM to that machine, which leaves just 28GB for the NAS. I know about the recommendation of 1GB of RAM per 1TB of storage, but is that per TB of usable storage, or of the raw total? If the latter, would it be wise to bite the bullet and buy 3x 16GB sticks (or bite even harder, go insane, and jump straight to 64GB)?
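For what it's worth, that rule of thumb is usually quoted against raw pool capacity, though it is only a rough guideline; the two readings differ quite a bit here (simple arithmetic, assuming the eventual 9-disk pool):

```shell
# 9 disks x 4TB raw, vs usable (3 vdevs x 2 data disks x 4TB)
raw_tb=$((9 * 4))
usable_tb=$((3 * 2 * 4))
echo "1GB/TB on raw capacity:    ${raw_tb}GB RAM"
echo "1GB/TB on usable capacity: ${usable_tb}GB RAM"
```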
I was originally thinking about an NVMe SSD or Optane for caching (ZIL/SLOG and/or L2ARC), but considering the above, I am no longer sure that money would not be better spent on more RAM.
ECC RAM - compatibility? Any, or only QVL'd?
Here is one thing that makes me think the ECC RAM market is pretty much organized crime. I can find a 16GB Kingston stick around $20/stick cheaper than any of the modules I can (if I am lucky) find on a motherboard's QVL (I recall the Supermicro C242 boards I looked at previously had something like one Hynix model on the QVL). The Kingston memory I am referring to is model KTL-TS426D8/16G, but the Kingston site lists it only for some Lenovo servers - do those sticks have some sort of DRM chip on them to prevent anyone from breaking this organized-crime end of the market's tradition?
I am asking this also because the second build idea is great and all, but the X370 Taichi has just two random 8GB sticks on its QVL, so I wonder how to increase my chances of finding compatible ECC RAM. Another reason is that if I eventually go to 64GB (as above), I would probably consider 2x 32GB sticks (about $20-30 cheaper than 4x 16GB) since two sticks are "easier" compatibility-wise in a dual-channel memory layout.
Drivers (mostly tied to the NIC decision)
As part of the build, I will need something faster than 1GbE. A 10GbE NIC would work, but finding one with RJ-45 is not simple (and IMO a $300+ Intel 10GbE NIC is not a good deal), so I am looking at a couple of USB-to-Ethernet 2.5GbE and 5GbE adapters: the Club3D CAC-1420 (2.5GbE) and the QNAP QNA-UC5G1T (5GbE). Neither officially ships any sort of Linux/FreeBSD driver. How can I find out whether they will work? Ask the manufacturers which chipsets the products use? Must the chipset be listed on FreeBSD's hardware compatibility list?
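One practical way to answer this yourself: the chipsets commonly reported for these adapters are the Realtek RTL8156 (CAC-1420) and the Aquantia AQC111U (QNA-UC5G1T) - worth confirming with the manufacturers - and FreeBSD's driver support is documented in man pages such as ure(4), its Realtek USB Ethernet driver. A quick check on the release you plan to run (a sketch; the first two commands need the adapter plugged in):

```shell
# See what the kernel enumerated over USB and whether a driver attached
usbconfig list
dmesg | grep -i -e ure -e ugen

# ure(4) is FreeBSD's Realtek USB Ethernet driver; check whether the
# man page shipped with your release mentions the RTL8156
man ure | grep -i 8156
```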
Interconnection with other devices
As I outlined above, I would like to ask whether the way I want to interconnect the NAS with the rest of my devices can be done, or whether I am just going nuts. I want to connect the NAS to:
1. My desktop PC via direct ETH<>ETH connection (2.5 or 5GbE connection)
2. Rest of my LAN for occasional access from a laptop or phone or whatever.
Can I do it? For point 1, do I just need to set static IPs in the same subnet? For point 2, if both devices from point 1 are also connected to the rest of the LAN, is there any chance of a conflict?
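Both points work together if the direct link gets its own small subnet, separate from the LAN's: each machine then has two interfaces on two subnets and nothing conflicts. A sketch, with all interface names and addresses as assumptions:

```shell
# NAS side (FreeBSD-style rc.conf entry for the USB NIC):
#   ifconfig_ue0="inet 10.0.99.1 netmask 255.255.255.0"

# Linux desktop side, one-off (persist via your distro's config):
ip addr add 10.0.99.2/24 dev eth1

# The onboard NICs keep their usual LAN addresses (e.g. 192.168.1.x
# via DHCP). As long as 10.0.99.0/24 is unused on the LAN, traffic
# to the NAS's 10.0.99.1 takes the direct link and everything else
# routes over the LAN interface -- no conflict.
```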
Any help will be very much appreciated. Thank you all for stopping by in advance.