Low power, high speed (NVMe), quiet home NAS??

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
So yeah, I have a good set of requirements, but I hope it's doable, especially since I don't need much storage (~6 TB). I could use some advice, though...

This will be primarily for sharing and backing up (syncing) family photos from various devices (Linux/Windows/Android), occasionally streaming content up to and including 4K (thinking Plex here), ZoneMinder for CCTV, and some misc IoT devices that may need storage.

Originally I was thinking the TrueNAS Mini X+, however I am leaning more towards SSD and NVMe since all my current PCs are all-SSD or NVMe.
  • I don't need much more than maybe 4 TB right now, so I think starting with 6-8 TB would be good and I can expand as needed.
  • I am thinking 6-8 1 TB NVMe drives, but really not sure which systems support this and HOW that works/connects. Preferably the newer PCIe 4.0 here!
  • I am familiar with building typical PCs, but those normally have 1-2 M.2 NVMe slots at most.
  • How are others supporting a large number of NVMe devices? I know there are server-class systems, but it seems like those have many small fans which are probably very loud!! The device would sit very close to my office and PC, so that won't work.
  • I don't need any more than around 64-128 GB of supported RAM.
I see Supermicro systems such as the A+ Server 1114S-WN10RT, but fully loaded it's close to $10k!

Any other DIY suggestions?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
  • Depends on how much $$$ you want to spend. Quality 1 TB drives are going to cost about $100 on eBay, so that's going to be $800 (eight 1 TB drives) if you're considering a Z2 6 TB setup. If you want to go PCIe 4.0, I'd expect that cost to be even higher.
  • NVMe riser cards are an option for NVMe sticks, but that rules out the Mini or any other Mini-ITX-sized case. AFAIK, the Mini XL will also not handle a non-SATA drive in its tower without major surgery. I'd also research the use of riser cards carefully - there have been reports of such bifurcating or switching cards having issues with FreeBSD but not Windows or Linux.
  • I also happen to think that your eye towards PCIe 4.0, etc. is going to result in a very expensive rig. I'd buy something used that can handle the transcoding, VMs, etc. you're considering. But I'm also weird in the sense that I believe in having the local device do the transcoding, not the server. It's much easier to build a good server to serve files than a server whose CPU will be oversized 100x most of the time just to do some transcoding on occasion.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
With your use case, and likely just a 1G connection, you are not going to use the speed NVMe gives you. NVMe can make a lot of sense when you are going 10G or you are doing block storage, but for just family photos and streaming: a pair of 8 or 10 TB SATA HDDs would be inexpensive, particularly if "shucked", and able to saturate that 1G link easily. Even 12 TB is reasonable at about $175 a drive.

Keep in mind that for file storage, you don't want to go above 75-80% full on ZFS, maximum. 6 TiB used means you'd want at least 10TB (~9 TiB) drives.
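
If you want to sanity-check that sizing yourself, here is a rough Python sketch (it ignores ZFS metadata overhead and rounding inside the pool, and assumes a simple two-way mirror, so treat the output as a planning number only):

Code:
# Smallest advertised drive size (TB) for a 2-way mirror holding a given
# working set, while keeping the pool at or below a target fill level.
# Ignores ZFS metadata/allocation overhead entirely.
TIB = 2**40
TB = 10**12

def min_drive_size_tb(used_tib, max_fill=0.80):
    needed_bytes = used_tib * TIB / max_fill   # pool capacity actually required
    return needed_bytes / TB                   # mirror capacity == one drive

print(round(min_drive_size_tb(6), 2))   # ~8.25 -> 8 TB is marginal, 10 TB is comfortable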
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
Excellent advice guys, thanks!

So yep, I have a 1G hard-wired connection throughout my home (Cat 6a) which almost all devices use as their primary connection. I am considering 10G, but that would only be for my primary PC back to the TrueNAS box, so probably only helping with initial backups and such. It is an option though.

Now back to HDDs, yeah they are so much cheaper and more compatible, however I was also thinking about latency. For example, if I browse a photo share from the TV and each photo thumbnail is loading, is this going to take forever with an HDD as it would on a PC, or does ZFS really make this so much better? I have never had a NAS before, so I'm not sure what to expect, so shooting for the top tech I guess lol...

Re: NVMe options, yeah I did read a few articles about the NVMe and FreeBSD issues, I guess that's still a concern? One card I was looking at was an ASUS Hyper M.2 x16 Gen 4 (PCIe 4.0/3.0), which supports 4x M.2 NVMe devices. It looks like I could fit 4 NVMe modules in each, and maybe 2 of these total, but not sure this is even a good idea. It does mention bifurcation, but honestly I don't even know much about what that does. I think I need a Supermicro motherboard (or similar) since I really prefer to have IPMI when needed.

Interestingly, I see both of you have the HGST drives in your signature builds, and from a quick Google search these appear to be excellent for a NAS, so I am sure that's why you both have them!!

I would get plenty of storage that way, but was concerned with the latency and possible power consumption; then again, I see Constantin seems to be using 126W, so not too bad I guess. :smile:
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Quite honestly, I think that SSDs are serious overkill for your requirements. For what you specified, an old 4-core Xeon E5-2600 series and 16 GB will likely be sufficient. If Plex ends up doing dynamic transcoding, that will change. But for the amount of storage, you could also just keep a second copy at 4K.

All in all your setup seems a bit unbalanced to me. What is your budget and would you like to go "over the top" for the fun of it?
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
Quite honestly, I think that SSDs are serious overkill for your requirements. For what you specified, an old 4-core Xeon E5-2600 series and 16 GB will likely be sufficient. If Plex ends up doing dynamic transcoding, that will change. But for the amount of storage, you could also just keep a second copy at 4K.

All in all your setup seems a bit unbalanced to me. What is your budget and would you like to go "over the top" for the fun of it?
Hey, good points. So I am sure I can do this cheaply and it "may work", but I also don't want to do this and have to upgrade shortly after, or replace older parts as they eventually fail. I would rather build it once, with newer parts that will last and perform well into the future. I guess you can call this future-proofing. I will likely start adding more to this over time once I realize all the possibilities with a TrueNAS setup. I am thinking under $3k for the budget, but that will depend on many things, so not sure on budget yet. I can start with minimal storage and add more later.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
With the exception of spinning hard disks and perhaps power supplies, used hardware is less likely to fail :smile: . As a "reference point", my new FreeNAS box has just been moved to production after the burn-in phase. I run 8 Seagate Exos 16 TB hard disks for storage on an old Supermicro X9SRi-F board with a 4-core Xeon and 64 GB of DDR3 ECC RDIMMs (and 32 GB would likely be enough). Before that I ran an AMD Phenom II (4 cores, 3 GHz, no HT) with 16 GB and 6 WD Red 4 TB drives for about 7 years.

The point is that for sequential and mostly read access, even 6-8 year old hardware is usually more than enough. If you want to do more than that, it becomes more complicated, in that all components must play together; otherwise one bottleneck will ruin the whole chain. In particular, a 10 Gbit network is not that easy to saturate.

Personally I would recommend that you start relatively small and get "your hands dirty". I would think that without hard disks, which I would indeed buy new, you could start with less than 500 USD and have more than enough power for what you laid out.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
For example, if I browse a photo share from the TV and each photo thumbnail is loading, is this going to take forever with an HDD as it would on a PC

Not necessarily. This will likely fit into ARC, particularly with 64GiB of RAM. If that gets annoying ... ARC has to fill first ... you can always add persistent L2ARC, ~250-500 GiB SSD (SATA or NVMe, your choice) or so, and that will most certainly take care of it.

It's likely that on a PC, a lot of that "thumbnails are slow" experience is from figuring out where the thumbnails are in the first place (metadata), and then loading them. ZFS keeps metadata in ARC, and will also keep data in ARC, after the first access, that is.
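
If you ever want to see whether the ARC is actually doing that work for you, TrueNAS CORE (FreeBSD-based) exposes the counters via sysctl. A quick sketch, assuming the kstat.zfs.misc.arcstats sysctls are available as they normally are on FreeBSD (on a Linux-based system you would read /proc/spl/kstat/zfs/arcstats instead):

Code:
# Quick ARC hit-rate check on TrueNAS CORE / FreeBSD via sysctl.
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

hits   = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
size   = sysctl("kstat.zfs.misc.arcstats.size")

print(f"ARC size:     {size / 2**30:.1f} GiB")
print(f"ARC hit rate: {100 * hits / (hits + misses):.1f}%")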
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I see both of you have the HGST drives in your signature builds

Shucked :). I'm a cheap bastard and don't see why I should pay extra, when WD Elementals 8TB and 12TB give me HGST Helium drives. These are reliable and quiet. Elementals 10TB is now most likely Air, btw.
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
Not necessarily. This will likely fit into ARC, particularly with 64GiB of RAM. If that gets annoying ... ARC has to fill first ... you can always add persistent L2ARC, ~250-500 GiB SSD (SATA or NVMe, your choice) or so, and that will most certainly take care of it.

It's likely that on a PC, a lot of that "thumbnails are slow" experience is from figuring out where the thumbnails are in the first place (metadata), and then loading them. ZFS keeps metadata in ARC, and will also keep data in ARC, after the first access, that is.
Great to hear how this actually works, thanks! Sounds like things will be OK here. I like this idea and will try to get as much RAM as possible to create a much better experience with this setup; hopefully with enough RAM I won't even need things such as L2ARC!
Shucked :). I'm a cheap bastard and don't see why I should pay extra, when WD Elementals 8TB and 12TB give me HGST Helium drives. These are reliable and quiet. Elementals 10TB is now most likely Air, btw.
I knew nothing about air-filled and helium drives until now! I guess helium drives are the new cool drives to get.
Maybe I should consider that AMD setup mentioned above with a few helium HDDs and lots of RAM! Rather simple and still a solid setup!
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
SO... I have been throwing around many ideas on how I may want to set this up, lots of which are based on comments here and on other posts.

I would like to set up a 3-tier system as follows (redundancy is hidden in there).

Tier-1 Pool:

I am still leaning towards NVMe drives for my primary pool. The NVMe riser card (ASUS) should work fine in the ASRock motherboard with 4x 1 TB drives set up in a stripe (RAID 0, 4 TB)! This will be super fast, and yes, I know, with zero redundancy! I am planning to have 10G between my primary PC, switch, and NAS, then 1G to the TV using an NVIDIA Shield Android TV box, and 1G wired to everything else.

Tier-2 Pool:

Next I will create a 12 TB single-drive vdev for a Tier-2 pool, which will host ALL data on the Tier-1 pool, and any other data I have. Yes, no redundancy in this pool again, but my plan is to have this pool synced with the Tier-1 pool either in real time or maybe nightly. So... if the NVMe pool crashes, all data is still on this Tier-2 pool and accessible until I get the Tier-1 pool fixed. Also, I will be able to "A/B" compare both pools and see how much of a performance difference there really is. If there is not much, I do have other plans for the NVMe card and drives, since I can use them in my next PC build. So these will not be wasted if not needed here on the NAS. I am considering a 2-vdev mirror here also, but figured with the 3 and possibly 4 copies of the data (4th being an extra USB) I should be good.

Tier-3: USB Backup

So I already purchased a WD 12 TB Elements drive and ran a long SMART test for 24 hours, with no issues or bad sectors, etc. May run badblocks too, but for now it's good.

I will use this of course to back up weekly or whenever there are major changes to the pool. I think ideally I will want this to be disconnected most of the time, not for power savings but to guard against possible corruption and the like. So if the pool is clean I can plug it in and back up as needed.

I know extra redundancy is always good but there is always a trade-off.

Thoughts on this, and specifically how to set up the Tier-1 and Tier-2 pools and keep them in sync?
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
One card I was looking at was an ASUS Hyper M.2 x16 Gen 4 (PCIe 4.0/3.0), which supports 4x M.2 NVMe devices. It looks like I could fit 4 NVMe modules in each, and maybe 2 of these total, but not sure this is even a good idea. It does mention bifurcation, but honestly I don't even know much about what that does.
"Bifurcation" means that the ASUS card is just wiring lanes physically from the PCIe slot to the M.2 connectors but that the motherboard is responsible for logically splitting the PCIe slot into several devices, through a BIOS setting. So from a x16 slot without bifurcation, you get only one working M.2! With x8x8 bifurcation, you can use two M.2 (you will have to test where the good slots are). With x8x4x4 or x4x4x8, three M.2. To get all four M.2 working the motherboard has to support x4x4x4x4 bifurcation. Check the specifications!

But, as said, NVMe is overkill, especially with a gigabit network, and the number of drives will be limited, unless you go for a Xeon Scalable (Bronze would do!) or EPYC board with full bifurcation in all slots. A more reasonable approach to an all flash array is to just use SATA SSDs; with QLC, 2 TB and 4 TB SSDs become relatively affordable. (Keyword is "relatively".)

Your planned setup is dangerous. Without redundancy, any failure, any single read error will fault the entire pool and lose all your data. Let's say Tier-1 fails (btw, "RAID0" is not ZFS terminology) and you have to go to your Tier 2 to restore, reading 6 TB from a non-redundant vdev. With a URE rate of 1e-14, the probability of not encountering an error while restoring 6 TB is 62%. Put otherwise, you have a 38% probability of failure. And then, your Tier 3 is not better than your Tier 2…
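
The arithmetic behind that 62%, if you want to play with the numbers (a deliberately simple model that treats every bit read as an independent trial at the quoted URE rate; real failure modes are messier, but it shows the order of magnitude):

Code:
# Probability of reading N terabytes without hitting a single unrecoverable
# read error, treating each bit as an independent trial at the given URE rate.
def p_clean_read(terabytes, ure_per_bit=1e-14):
    bits = terabytes * 1e12 * 8
    return (1 - ure_per_bit) ** bits

print(p_clean_read(6))          # ~0.62 -> ~38% chance of at least one error
print(p_clean_read(6, 1e-15))   # ~0.95 for a 1e-15-rated HDD
print(p_clean_read(6, 1e-17))   # ~0.9995 for a typical SSD rating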

Put redundancy at all ZFS levels. If you want ultimate performance (before being bottlenecked by 1 GbE), mirror your drives. For storage efficiency, and serving a limited number of clients (which should be fine for your home setting), use RAIDZn. SSDs typically have URE rates of 1e-17, so RAIDZ1 is still usable. For HDD, safety requires RAIDZ2—or three-way mirrors.
Tier 1: QLC SSD in RAIDZ1, 3*4TB (8TB, ca. 6 TB usable), 4*4TB or 6*2TB (12TB, ca. 8TB usable); 4*4 TB as stripe of mirrors for (pointless) performance with 6 TB usable.
Tier 2: HDD in RAIDZ2. 4 is the minimum reasonable number, so choose capacity according to price—but avoid SMR drives at all cost (if in doubt, avoid WD).
Then you can have a single 12+ TB as ultimate USB backup.
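
Rough usable-capacity math for those Tier 1 layouts, as a Python sketch (planning figures only: it ignores the TB-vs-TiB conversion and ZFS metadata/padding overhead, so real numbers land somewhat lower):

Code:
# Planning estimate: raw capacity minus parity/mirror copies, capped at 80% full.
def usable_tb(drives, size_tb, layout, fill=0.80):
    if layout == "raidz1":
        data_tb = (drives - 1) * size_tb
    elif layout == "raidz2":
        data_tb = (drives - 2) * size_tb
    elif layout == "mirrors":            # stripe of 2-way mirrors
        data_tb = (drives // 2) * size_tb
    else:
        raise ValueError(layout)
    return data_tb * fill

print(usable_tb(3, 4, "raidz1"))    # 3x4TB RAIDZ1  -> ~6.4 TB
print(usable_tb(4, 4, "raidz1"))    # 4x4TB RAIDZ1  -> ~9.6 TB
print(usable_tb(6, 2, "raidz1"))    # 6x2TB RAIDZ1  -> ~8.0 TB
print(usable_tb(4, 4, "mirrors"))   # 4x4TB mirrors -> ~6.4 TB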
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
First, thanks Etorix for the excellent advice!
"Bifurcation" means that the ASUS card is just wiring lanes physically from the PCIe slot to the M.2 connectors but that the motherboard is responsible for logically splitting the PCIe slot into several devices, through a BIOS setting. So from a x16 slot without bifurcation, you get only one working M.2! With x8x8 bifurcation, you can use two M.2 (you will have to test where the good slots are). With x8x4x4 or x4x4x8, three M.2. To get all four M.2 working the motherboard has to support x4x4x4x4 bifurcation. Check the specifications!
YES, I checked with ASRock and this board does both bifurcation and the 4x4x4x4 split, so that would work.

Your planned setup is dangerous. Without redundancy, any failure, any single read error will fault the entire pool and lose all your data. Let's say Tier-1 fails (btw, "RAID0" is not ZFS terminology) and you have to go to your Tier 2 to restore, reading 6 TB from a non-redundant vdev. With a URE rate of 1e-14, the probability of not encountering an error while restoring 6 TB is 62%. Put otherwise, you have a 38% probability of failure. And then, your Tier 3 is not better than your Tier 2…
Wow, I definitely did not know how high the failure rate is for a rebuild of the data! 38%. WOW. Yeah, that changes things, LOL!

Put redundancy at all ZFS levels. If you want ultimate performance (before being bottlenecked by 1 GbE), mirror your drives. For storage efficiency, and serving a limited number of clients (which should be fine for your home setting), use RAIDZn. SSDs typically have URE rates of 1e-17, so RAIDZ1 is still usable. For HDD, safety requires RAIDZ2—or three-way mirrors.
Tier 1: QLC SSD in RAIDZ1, 3*4TB (8TB, ca. 6 TB usable), 4*4TB or 6*2TB (12TB, ca. 8TB usable); 4*4 TB as stripe of mirrors for (pointless) performance with 6 TB usable.
Tier 2: HDD in RAIDZ2. 4 is the minimum reasonable number, so choose capacity according to price—but avoid SMR drives at all cost (if in doubt, avoid WD).
Then you can have a single 12+ TB as ultimate USB backup.
So here is where things get interesting....

So.... based on all the aforementioned reasons/posts, I have decided to drop the NVMe ideas... Agreed, done! :)

Next, I am also leery about HDDs in general for a few reasons, so not too keen on investing a lot in these when SSDs would, to ME, be a better "investment" long term. So SSDs it is!!

As far as tiers, I think I only want/need one pool for now, so we can call it Tier-1 (SSD).

So the remaining questions are now which SSDs, how many, and how they will be set up....

I like the RAIDZ1 idea for SSDs, so since I only need 4-6 TB of data for now, maybe a 5-6 x 1 TB SSD vdev?
Another option for (pointless) performance reasons could be a stripe of mirrors, or maybe two 4x 1 TB RAIDZ1 vdevs. This should provide ONE disk of redundancy per vdev, right? With the pretty solid reliability of SSDs, I think this should be acceptable and perform great.

I was looking into these on Amazon for about $110 each: Samsung SSD 860 EVO 1TB

Thoughts on this? Am I good to get started?
 

G8One2

Patron
Joined
Jan 2, 2017
Messages
248
I currently have something similar running. I've got 3x Samsung 512 GB NVMe drives in a stripe. No redundancy. I'm only using it for Nextcloud and data dumps for photos and other misc media. Nothing I would care about if I lose it. I have a 10Gb network and a couple of Thunderbolt interfaces for data transfers when I need them. I have the pool set for replication to a separate 32 TB storage pool of hard drives, for when the inevitable happens. When it eventually does, I'm just going to rebuild the NVMe pool.
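
The replication itself is just a scheduled snapshot plus an incremental send/receive to the HDD pool. I set mine up through the TrueNAS Replication Tasks GUI, but conceptually it boils down to something like this rough sketch (pool/dataset and snapshot names are made up; in practice let the GUI task manage snapshots and increments for you):

Code:
# Conceptual sketch of one periodic replication run: snapshot the fast pool,
# then send the increment since the last replicated snapshot to the HDD pool.
# Dataset and snapshot names below are hypothetical.
import datetime
import subprocess

SRC = "fastpool/data"        # NVMe/SSD source dataset (hypothetical)
DST = "tank/backup/data"     # HDD target dataset (hypothetical)
PREV = "auto-2021-01-01"     # last snapshot already present on the target

new = datetime.datetime.now().strftime("auto-%Y-%m-%d-%H%M")
subprocess.run(["zfs", "snapshot", f"{SRC}@{new}"], check=True)

# zfs send -i <old> <new> | zfs receive -F <target>
send = subprocess.Popen(["zfs", "send", "-i", f"{SRC}@{PREV}", f"{SRC}@{new}"],
                        stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()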
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
I currently have something similar running. I've got 3x Samsung 512 GB NVMe drives in a stripe. No redundancy. I'm only using it for Nextcloud and data dumps for photos and other misc media. Nothing I would care about if I lose it. I have a 10Gb network and a couple of Thunderbolt interfaces for data transfers when I need them. I have the pool set for replication to a separate 32 TB storage pool of hard drives, for when the inevitable happens. When it eventually does, I'm just going to rebuild the NVMe pool.
How do you rate the performance, and can you compare it to a non-NVMe NAS?
 

Velcade

Contributor
Joined
Mar 28, 2019
Messages
108
QNAP makes what you're looking for: the HS-453DX-8G.
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
QNAP makes what you're looking for: the HS-453DX-8G.
So I could not resist and had to look that up, lol. Surprisingly, it looks to be a decent purpose-built device specifically for home media content, 10G, Plex, etc. I think if one does not want to go all out with a DIY NAS, this could be a nice "middle ground".

For me, I will stick to the full TrueNAS I am planning, but good to know this exists.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Thoughts on this? Am I good to get started?

You're probably close to good to go. Bear in mind that "ZFS is not a backup", so you still need some Tier 2 in case something goes wrong with your SSD pool. While "spinning rust" is not the future, it's not going to disappear either for cheap, high-capacity storage.
As for the number and geometry of drives, capacity is the issue. A ZFS pool should not be more than 75-80% full, so neither 6*1 TB (Z1) nor 2*(4*1 TB) can hold 6 TB of actual data. Vdevs cannot be expanded, replacing drives with larger ones is a costly affair, and adding a further RAIDZ1 vdev may require more slots than are available. If possible, try to set up a larger pool than what you think you need right now, to be more future-proof.
 

banshee28

Dabbler
Joined
Oct 19, 2020
Messages
28
You're probably close to good to go. Bear in mind that "ZFS is not a backup", so you still need some Tier 2 in case something goes wrong with your SSD pool. While "spinning rust" is not the future, it's not going to disappear either for cheap, high-capacity storage.
As for the number and geometry of drives, capacity is the issue. A ZFS pool should not be more than 75-80% full, so neither 6*1 TB (Z1) nor 2*(4*1 TB) can hold 6 TB of actual data. Vdevs cannot be expanded, replacing drives with larger ones is a costly affair, and adding a further RAIDZ1 vdev may require more slots than are available. If possible, try to set up a larger pool than what you think you need right now, to be more future-proof.
Ah yes, I am still going to use my 12 TB external for full backups; in fact it's on my desk now, ready to use! I did just read up on the ~80% full rule too (more money, lol), but so far I will only have 3-4 TB of actual data. Room to grow is a great point too, plus the expansion issue. I will figure out the sizing to ensure the overhead and future-proofing. Thanks....
 