Performance-driven Ryzen 3600 NAS

Joined
Dec 2, 2019
Messages
30
Hey guys,

So I've spent the last couple of weeks trying to figure out the best possible build for my use case/budget. I compared different OS solutions and hardware, ended up with FreeNAS, and I think I have the parts mostly figured out now; planning to order in the next few weeks.
Posting here to see what you guys think about it, and whether you can answer some of the questions I still have (like whether adding L2ARC would benefit this build).

So about what it will be used for:
I am a freelance visual effects artist. I do a lot of simulation work, which generates a ton of data (some overnight sims can produce hundreds of gigabytes).
High sequential read speeds are pretty important here, more important than fast random IO.
I will also be doing work on output renders, which will also be quite heavy (100-200 MB per frame), so reads are important here too. I thought this might be where L2ARC comes in handy, but I'm not sure how that works exactly (new to FreeNAS, only tried it in a VM).
Also, I frequently have to send caches to clients / co-workers. I am currently using CrushFTP for that, but I am thinking of switching to Nextcloud once I move all my data to my FreeNAS build.

I will probably be running 1 or 2 VMs on it as a licence server, repository, and some other stuff, but nothing that requires very high compute.
I will probably also move my personal Plex server to this thing, because why not.

I will be connecting one workstation over 10GbE, and an additional 1 or 2 other systems will be pulling or writing data for renders / simulations, but not 24/7 (I assume I can also limit their speeds in FreeNAS so they don't pull full reads?).

Build I had in mind:
CPU - AMD Ryzen 3600
Motherboard - ASRock Rack X470D4U2-2T
RAM - 4x 16GB Kingston Server Premier (as it appears on the motherboard QVL)
Case - SilverStone DS 381
HDD - 8x Seagate IronWolf (either 6TB or 8TB, haven't decided yet)
PSU - Corsair TX550M
SAS controller - LSI SAS 9207
SSD for VMs - 2x Crucial 256GB (SATA)

And maybe this for L2ARC: 2x 1TB M.2 Samsung 970 EVO

How it will be configured:
So I was thinking of configuring this as striped mirror vdevs to get the best possible performance, as RAIDZ2 seems too slow for my use.
The 2x SSDs I have in there would be to run the VMs from, because that seems faster for IO. Or do you reckon I could just run them from the normal pool?

Additionally, the board supports 2 M.2 drives, so I was thinking of maybe adding 2x 1TB drives for L2ARC, but I'm not sure how much sense that makes in my use case. RAM is already maxed out in this build. Hope someone can shed some light on this. If it won't make enough of a difference I'll probably leave it out.
It can always be added later, right?

The current case will be maxed out with drives, but I was thinking I could just get an external SAS drive box if I ever want to grow the pool, right? (I have one PCIe slot open, so I could add another SAS card there in the future.)

Let me hear your thoughts.
Thanks guys!

 
Joined
Oct 18, 2018
Messages
969
I will also be doing work on output renders, which will also be quite heavy (100-200 MB per frame), so reads are important here too. I thought this might be where L2ARC comes in handy, but I'm not sure how that works exactly (new to FreeNAS, only tried it in a VM).
Read speeds as measured from your FreeNAS server itself can be improved by adding more ARC/L2ARC, designing your pool appropriately, and using SSDs to back the pool.

Regardless of what you choose to build the pools out of, SSDs or HDDs, the most effective way to improve read speeds is to add more ARC. That means adding more memory. How much depends on your use cases. If you're really concerned about read speeds, you may want to consider a build that supports 128GBytes or more of memory.

L2ARC functions much the same as the ARC, except that it doesn't live in RAM; it lives on an actual disk, typically an SSD. For this reason L2ARC will always be slower than ARC. Furthermore, L2ARC takes space in main memory for its index, thereby decreasing the ARC size. What most folks recommend is to max out your RAM before you consider L2ARC.
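
For what it's worth, an L2ARC can also be added to (or removed from) an existing pool at any time, so it isn't a decision you have to lock in up front. A rough sketch of the commands involved, assuming a pool named tank and an NVMe device showing up as nvd0 (both placeholder names; in FreeNAS you would normally do this through the GUI by adding a cache device to the pool):

Code:
# add an L2ARC (cache) device to an existing pool
zpool add tank cache nvd0

# remove it again later if it isn't earning its keep
zpool remove tank nvd0

# watch per-device activity, including the cache device
zpool iostat -v tank 5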

I can't say whether an HDD-backed pool with a huge ARC will serve your needs. You may want to consider whether your needs and budget justify SSD-backed pools.

I will probably be running 1 or 2 VMs on it as a licence server, repository, and some other stuff, but nothing that requires very high compute.
I will probably also move my personal Plex server to this thing, because why not.
You're going to want to limit the memory footprint of the VMs, etc., so you can maintain a large ARC.

I will be connecting one workstation over 10GbE, and an additional 1 or 2 other systems will be pulling or writing data for renders / simulations, but not 24/7 (I assume I can also limit their speeds in FreeNAS so they don't pull full reads?).
This will be the other main bottleneck. Even if your machine could push data super fast, if you were trying to do it over 1Gbps connections you'd have a rough time. 10Gbps is what you'll want here, and this may require some network tweaking.

RAM - 4x 16GB Kingston Server Premier (as it appears on the motherboard QVL)
I recommend you not buy 16GByte modules. It looks like the board supports 128GBytes so maybe pick up 32GByte modules so that you can eventually max out your memory and make sure your CPU can support it. This will have a bigger impact on system performance.

I also highly recommend you consider ECC memory or understand the tradeoff to NOT going with ECC memory.
I'm not usually a fan of SilverStone; I don't know why. This one looks possibly okay. The worry I would have is whether there is enough airflow over your drives. You may want to look at the Fractal Design Define R5 or R6. You won't get the drive caddies, but you may get cooler drive temps with these cases.

Another consideration is getting a server chassis, if you have a room to put it in where the noise won't matter. They typically have hot-swap bays and can be found used on eBay in the US for less than you might think.
HDD - 8x Seagate IronWolf (either 6TB or 8TB, haven't decided yet)
Keep in mind that 7200 RPM drives run hot and will require more cooling.

And maybe this for L2ARC: 2x 1TB M.2 Samsung 970 EVO
Be careful here. As I mentioned above, the L2ARC consumes some memory for indexing. How much L2ARC is appropriate is a bit of an art, it seems; search the forums for L2ARC sizing for some info on how much to get. Consider this, though: if those M.2 cards cost 300 dollars, you could possibly find a different board/CPU combo that supports 128GBytes+ of memory without it costing too much more, and put a bit more money into RAM; that will have a bigger performance impact, I think.

So I was thinking of configuring this as striped mirror vdevs to get the best possible performance, as RAIDZ2 seems too slow for my use.
Where high IOPS are a concern, striped mirror vdevs are typically preferred.

The 2x SSDs I have in there would be to run the VMs from, because that seems faster for IO. Or do you reckon I could just run them from the normal pool?
Running VMs off of SSDs is pretty common.

Additionally, the board supports 2 M.2 drives, so I was thinking of maybe adding 2x 1TB drives for L2ARC, but I'm not sure how much sense that makes in my use case. RAM is already maxed out in this build. Hope someone can shed some light on this. If it won't make enough of a difference I'll probably leave it out.
It can always be added later, right?
In my view M.2 is SUPER fast and most useful for either SLOG devices, or L2ARC if you don't need it for SLOG. I typically suggest people NOT use M.2 for the boot device, because the speed is a bit of a waste there.

What do you plan to boot from? Small SSDs are ideal.
The current case will be maxed out with drives, but I was thinking I could just get an external SAS drive box if I ever want to grow the pool, right? (I have one PCIe slot open, so I could add another SAS card there in the future.)
You can certainly do that, yes. Or you can get a case with extra space. :)


Anyway, just my thoughts. Are you very set on the Ryzen build, or would you consider a used Xeon build? You may get more performance for your dollar that route.
 
Joined
Dec 2, 2019
Messages
30
Thanks so much for the reply!

This will be the other main bottleneck. Even if your machine could push data super fast, if you were trying to do it over 1Gbps connections you'd have a rough time. 10Gbps is what you'll want here, and this may require some network tweaking.

I have 10GbE. I will be getting a switch though (currently using a direct attach copper cable), so no issue here.

I recommend you not buy 16GByte modules. It looks like the board supports 128GBytes so maybe pick up 32GByte modules so that you can eventually max out your memory and make sure your CPU can support it. This will have a bigger impact on system performance.

I also highly recommend you consider ECC memory or understand the tradeoff to NOT going with ECC memory.

There weren't any 32GB DIMMs on the motherboard QVL that I could find to buy here.
I could opt for one that's not on the QVL, but I'd rather not. 64GB seems to be enough for the amount of storage I'm putting in the system though, right? It's definitely above the recommended 1GB per TB.

I'm not usually a fan of SilverStone; I don't know why. This one looks possibly okay. The worry I would have is whether there is enough airflow over your drives. You may want to look at the Fractal Design Define R5 or R6. You won't get the drive caddies, but you may get cooler drive temps with these cases.

Another consideration is getting a server chassis, if you have a room to put it in where the noise won't matter. They typically have hot-swap bays and can be found used on eBay in the US for less than you might think.

The reason I chose this case is that it has a SAS backplane, so I can plug all the drives into a single SAS card.
The office I'm in doesn't have a separate server room. All the servers sit on top of something in the center of the room, and I'm not sure people will like me if I put a loud server chassis there, so that's a no-go.
My current NAS is in the Define R5, definitely a nice case and also an option. However, with the 8 drives I'm planning to put in there, I'd need to get an expansion enclosure anyway if I ever want to upgrade, as the Define R6 cannot fit 16 drives.

Be careful here. As I mentioned above, the L2ARC consumes some memory for indexing. How much L2ARC is appropriate is a bit of an art, it seems; search the forums for L2ARC sizing for some info on how much to get. Consider this, though: if those M.2 cards cost 300 dollars, you could possibly find a different board/CPU combo that supports 128GBytes+ of memory without it costing too much more, and put a bit more money into RAM; that will have a bigger performance impact, I think.

If it's not adding much I'll probably leave those out. I can always add a read cache later, right?

Anyway, just my thoughts. Are you very set on the Ryzen build, or would you consider a used Xeon build? You may get more performance for your dollar that route.

Not set on Ryzen, but it seems to be the cheapest option for what I'm trying to build.
This board has IPMI, which is quite nice for a board in this price range.
If you have a good Xeon / motherboard combo you can recommend in a similar price range, it would be worth considering as well. But I couldn't find anything with similar price/performance and similar functionality.
 
Joined
Dec 2, 2019
Messages
30
In my view M.2 is SUPER fast and most useful for either SLOG devices, or L2ARC if you don't need it for SLOG. I typically suggest people NOT use M.2 for the boot device, because the speed is a bit of a waste there.

What do you plan to boot from? Small SSDs are ideal.

I was planning to boot from USB drives; that's recommended, right?
What is a SLOG, if you don't mind me asking?
 
Joined
Oct 18, 2018
Messages
969
I was planning to boot from USB drives; that's recommended, right?
It used to be. Now that SSDs are so cheap, and an HBA can be had cheaply to give you more SATA ports, lots of folks are booting off of one or two SSDs. I personally choose two SSDs in a mirrored configuration as a boot pool.

What is a SLOG, if you don't mind me asking?
A SLOG is a Separate ZFS Intent Log device. For sync writes, your system will not report that data has been received until it has been committed to non-volatile storage. In a basic setup that non-volatile storage is the ZIL, which by default lives on your pool. This first write contains your data but isn't yet committed to your pool for long-term storage; it requires a bit more processing before that happens. So, for sync writes your system first writes the data to the ZIL and then reports back to the client that it has the data. If nothing bad happens, the data is then written to your pool (from memory) and the ZIL entries are discarded. If your system crashes before the data makes it to the pool, then on reboot your system checks the ZIL, finds data there that isn't yet in the pool, and commits it, thereby preventing data loss. As you can see, in normal operation in a basic setup your pool is written to twice: once for the ZIL and once when the final data is written to the pool; this can consume precious IOPS and bandwidth. So, the solution is to move the ZIL out of the pool onto a dedicated device; that device is the SLOG.
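
If it helps, here is a rough sketch of what adding a SLOG looks like and how sync behaviour can be checked, assuming a pool named tank and two NVMe devices appearing as nvd0/nvd1 (all placeholder names, nothing specific to your build; in FreeNAS this is normally done through the GUI by adding a log device):

Code:
# see whether a pool/dataset is doing sync writes at all
zfs get sync tank

# add a mirrored SLOG after the fact
zpool add tank log mirror nvd0 nvd1

# confirm the log vdev is attached
zpool status tank

Mirroring the SLOG is optional, but it protects the in-flight sync writes if one device dies at exactly the wrong moment.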

I could opt for one that's not on the QVL, but I'd rather not. 64GB seems to be enough for the amount of storage I'm putting in the system though, right? It's definitely above the recommended 1GB per TB.
That is a rule of thumb. If you are going to be accessing many large files in rapid succession and need high read speeds you're going to want a larger ARC to fit them all in memory. Anything not in memory at read time will end up with the read hitting your pool and hurting your read rates.

There weren't any 32GB DIMMs on the motherboard QVL that I could find to buy here.
Some memory manufacturers will guarantee their product with a specific board, even if the board manufacturer does not.

My current NAS is in the Define R5, definitely a nice case and also an option. However, with the 8 drives I'm planning to put in there, I'd need to get an expansion enclosure anyway if I ever want to upgrade, as the Define R6 cannot fit 16 drives.
Can the SilverStone case fit that many? I did an R6 build that fit 12 drives, no problem.

The reason I chose this case is that it has a SAS backplane, so I can plug all the drives into a single SAS card.
The office I'm in doesn't have a separate server room. All the servers sit on top of something in the center of the room, and I'm not sure people will like me if I put a loud server chassis there, so that's a no-go.
Yeah, that makes sense. The case is often quite a personal choice. Just make sure you can keep the drives cool and happy in whatever case you choose.

If it's not adding much I'll probably leave those out. I can always add a read cache later, right?
Absolutely. I would recommend going for more RAM to start, and if read speeds are slow, add more RAM. Once you can't add any more RAM, then add L2ARC devices. :)

Not set on Ryzen, but it seems to be the cheapest option for what I'm trying to build.
This board has IPMI, which is quite nice for a board in this price range.
If you have a good Xeon / motherboard combo you can recommend in a similar price range, it would be worth considering as well. But I couldn't find anything with similar price/performance and similar functionality.
I would recommend used X10-series Supermicro boards (or X11 if you have the funds). You can get dual-socket boards if you want to allow for more memory in the system. You can also get the Xeon CPUs used. A used CPU and motherboard will save some money. My personal choice is to go with boards that have more PCIe slots rather than M.2 slots, etc., for the simple reason that PCIe slots are more versatile. If your 10Gbps connection is copper, you could probably easily find one with a built-in 10Gbps NIC.
 
Joined
Dec 2, 2019
Messages
30
It used to be. Now that SSDs are so cheap, and an HBA can be had cheaply to give you more SATA ports, lots of folks are booting off of one or two SSDs. I personally choose two SSDs in a mirrored configuration as a boot pool.
Ah ok, got it.
Can I install VMs on the boot drive as well? Might be an idea to put the OS on the SSDs along with the VMs then.

Can the SilverStone case fit that many? I did an R6 build that fit 12 drives, no problem.
No, but I can only expand with the same number of drives, correct? As I'm building my initial pool with 8, I could only upgrade the pool with 8 more at a later point, so I'd need to get a second enclosure then anyway. So the 4 extra bays wouldn't be that useful in this case.

That is a rule of thumb. If you are going to be accessing many large files in rapid succession and need high read speeds you're going to want a larger ARC to fit them all in memory. Anything not in memory at read time will end up with the read hitting your pool and hurting your read rates.

I'm building the pool with 4 striped mirror vdevs, so I should hit 800+ MB/s sequential (~210 MB/s per disk), which is already fast enough for my uses. Anything that comes out of cache will be a nice benefit, but as the 10GbE will bottleneck it at around 1,000 MB/s anyway, that's not that big of a deal to be honest. I don't expect much of my data to come from cache, as it's too big to fit in there even if I had 128GB of RAM.
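
For what it's worth, the rough arithmetic behind those numbers (assuming ~210 MB/s sustained per drive, which is an outer-track best case that drops toward the inner tracks):

Code:
pool writes:  4 mirror vdevs x ~210 MB/s  ≈  840 MB/s sequential
pool reads:   potentially higher, since both disks in a mirror can serve reads
10GbE limit:  10 Gbit/s ÷ 8 = 1,250 MB/s raw, roughly 1,000-1,100 MB/s after protocol overhead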

I would recommend used X10-series Supermicro boards (or X11 if you have the funds). You can get dual-socket boards if you want to allow for more memory in the system. You can also get the Xeon CPUs used. A used CPU and motherboard will save some money. My personal choice is to go with boards that have more PCIe slots rather than M.2 slots, etc., for the simple reason that PCIe slots are more versatile. If your 10Gbps connection is copper, you could probably easily find one with a built-in 10Gbps NIC.
I'm moving away from copper, as with my new setup I'll have 4 systems connected instead of 2.
I'm going to look into some Xeon options again, but it was looking more expensive last time. It's difficult finding good second-hand boards for my use case.
I don't want to spend much more on the build than it's already at. I'm also building a new Threadripper workstation around the same time, so costs are already quite high for the whole thing.
I ideally want to purchase most components from the same vendor because it gives me a 28% extra tax write-off, so it might end up costing more if I have to source RAM/CPU/motherboard from different vendors.
I'll have a look at some X10 boards and see if that works. Any reason why you're not keen on the Ryzen 3600?
 
Joined
Oct 18, 2018
Messages
969
No, but I can only expand with the same number of drives, correct? As I'm building my initial pool with 8, I could only upgrade the pool with 8 more at a later point, so I'd need to get a second enclosure then anyway. So the 4 extra bays wouldn't be that useful in this case.
Not strictly speaking, no. You can add a new vdev of any type and size to a pool to increase its size. You will likely want to stick with the same vdev type, at the very least.

Any reason why you're not keen on the Ryzen 3600?...
Just that for my uses I wanted a server, so I went with parts intended for use in a server and marketed and supported by the manufacturer for use in a server. That pushed me to server boards with server CPUs and server memory, is all. From a very brief look at that CPU, it looks like AMD will not officially support ECC memory on it even though it does work. If that is the case (and it may not be, since my search was brief), it would definitely push me toward a build with official ECC support from the manufacturer. It is certainly a bias on my part, to be sure.

I'm building the pool with 4 striped mirror vdevs, so I should hit 800+ MB/s sequential (~210 MB/s per disk), which is already fast enough for my uses. Anything that comes out of cache will be a nice benefit, but as the 10GbE will bottleneck it at around 1,000 MB/s anyway, that's not that big of a deal to be honest. I don't expect much of my data to come from cache, as it's too big to fit in there even if I had 128GB of RAM.
I look at it this way: purchasing 2 larger modules rather than 4 smaller modules frequently nets you the same amount of memory at only a very modest price increase. If you do find that you need more memory later and you went with 2 modules, you can add more without tossing any sticks; if you go with smaller modules, your upgrade will be costlier. So to me, larger modules are a way to keep future expansion options open. If you build your pool out of spinning rust and find it can't keep up, your best bet on the read side will be more RAM, so my line of thought is just to make sure that expansion option stays easily available. :)
 
Joined
Dec 2, 2019
Messages
30
Not strictly speaking, no. You can add a new vdev of any type and size to a pool to increase its size. You will likely want to stick with the same vdev type, at the very least.

Wait, what? How does that work? If I added a vdev with only 4 disks, that vdev would only be striped across 2, so it would be a lot slower... or am I not understanding it correctly? That's what I've been reading everywhere, at least.
 
Joined
Aug 8, 2019
Messages
21
I think you might be conflating pools and vdevs. The weirdness (and coolness) with ZFS is that it's both a filesystem and a volume manager. When you create a pool (which is where your data gets stored) you back it with vdevs, which are virtual disks made out of underlying physical drives. These vdevs have varying levels of redundancy (mirrored, RAIDz1, etc.), but there's no redundancy or real striping on the pool level. Everything's functionally just JBODed. This means that you can have multiple vdevs per pool without any trouble, as well. The performance implications of that are complicated, but the general rule seems to be that read and write performance to your pool is bound to the performance of your slowest vdev. Adding more vdevs doesn't affect performance as long as those vdevs are as fast or faster than your existing ones.

This is complicated, though. This blog is a great resource for wrapping your mind around the ins and outs of ZFS.

(A side note here is that you're going to want to put your VM SSDs in their own pool if you're going to use them. If you stick them in a pool with spinning-disk vdevs, they're going to be functionally spinning disks themselves.)
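
To make the pool/vdev distinction concrete, here is a rough CLI sketch of the layout being discussed, with hypothetical device names (da0-da7 for the HDDs, ada0/ada1 for the SATA SSDs); in practice you'd build this through the FreeNAS GUI, which does the same thing underneath:

Code:
# one pool made of four 2-way mirror vdevs; ZFS spreads data across all of them
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# a separate all-SSD pool for the VMs, so they aren't held back by spinning disks
zpool create vmpool mirror ada0 ada1

# growing later just means adding another vdev, e.g. one more mirror pair
zpool add tank mirror da8 da9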
 
Joined
Dec 2, 2019
Messages
30
I think you might be conflating pools and vdevs. The weirdness (and coolness) with ZFS is that it's both a filesystem and a volume manager. When you create a pool (which is where your data gets stored) you back it with vdevs, which are virtual disks made out of underlying physical drives. These vdevs have varying levels of redundancy (mirrored, RAIDz1, etc.), but there's no redundancy or real striping on the pool level. Everything's functionally just JBODed. This means that you can have multiple vdevs per pool without any trouble, as well. The performance implications of that are complicated, but the general rule seems to be that read and write performance to your pool is bound to the performance of your slowest vdev. Adding more vdevs doesn't affect performance as long as those vdevs are as fast or faster than your existing ones.

This is complicated, though. This blog is a great resource for wrapping your mind around the ins and outs of ZFS.

(A side note here is that you're going to want to put your VM SSDs in their own pool if you're going to use them. If you stick them in a pool with spinning-disk vdevs, they're going to be functionally spinning disks themselves.)

Ok, so I did understand it correctly.
The entire reason I'm building it 4 mirrors wide is performance, which is why I mentioned I can't upgrade it by fewer than 8 drives in the future (or I'll lose performance). If I added fewer, the speed of the entire pool would suffer, so it wouldn't make any sense to add a smaller vdev in that case.

I guess I could add a different pool with fewer disks for archive later, though, but I was planning to keep another NAS (my old system) for backup/archival anyway.
 
Joined
Dec 2, 2019
Messages
30
In my view M.2 is SUPER fast and most useful for either SLOG devices, or L2ARC if you don't need it for SLOG. I typically suggest people NOT use M.2 for the boot device, because the speed is a bit of a waste there.
Reading a bit into SLOG now and I think it might benefit me. Looking at this drive for it now:

Think that would be good for SLOG?..
 
Joined
Oct 18, 2018
Messages
969
Reading a bit into SLOG now and I think it might benefit me. Looking at this drive for it now:
SLOG devices can be added after-the-fact and will only help systems which make use of lots of sync writes. You can complete your build and check your ZIL activity levels and performance before purchasing.

Think that would be good for SLOG?..
There is a thread about benchmarking SLOG devices; you could check there. Any device with high write speed, power-loss protection, and high write endurance makes a decent SLOG device. They don't need to be huge, either. Some folks opt for larger devices so they can over-provision them and take advantage of the higher write speeds of some devices.
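
A rough sketch of how you could check your sync-write activity before spending anything, assuming a pool named tank (FreeNAS ships a zilstat script, though the exact name and location can vary by version):

Code:
# watch ZIL / sync-write activity in 5-second intervals
zilstat 5

# watch per-vdev throughput while running a typical simulation or render write
zpool iostat -v tank 5

If zilstat shows little or no activity, your workload is mostly async writes and a SLOG won't buy you much.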
 
Joined
Dec 2, 2019
Messages
30
SLOG devices can be added after-the-fact and will only help systems which make use of lots of sync writes. You can complete your build and check your ZIL activity levels and performance before purchasing.


There is a thread about benchmarking SLOG devices; you could check there. Any device with high write speed, power-loss protection, and high write endurance makes a decent SLOG device. They don't need to be huge, either. Some folks opt for larger devices so they can over-provision them and take advantage of the higher write speeds of some devices.

Ah ok, I might just build without a SLOG first and add one later then. Thanks! Do you think I can expect somewhat decent writes without a SLOG as well? (As long as they're 200+ MB/s I'd be happy, to be honest.)

The Optane drives seem to be optimized for caching, but I couldn't find anything about power-loss protection on the 800P (the 900P seems to have it). I'll check out the thread!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Wait, what? How does that work? If I added a vdev with only 4 disks, that vdev would only be striped across 2, so it would be a lot slower... or am I not understanding it correctly? That's what I've been reading everywhere, at least.
The statement was, "you can"... Not that you should. It is best to keep all vdevs the same for redundancy and performance.
 
Joined
Oct 18, 2018
Messages
969
Correct, I didn't mean to imply you shouldn't try to keep the same type and number of disks. Reading back now, I see how my comment was too tersely put and possibly confusing; sorry about that.
 