Hi all,
Firstly, this post isn't strictly about a FreeNAS build, but I hope that will be forgiven - FreeNAS is a major part of it, and the knowledge in these forums has already been an invaluable part of my build research. I'm hoping I can tap into that knowledge just a little further :)
So - the time has come for me to replace my aging (coming up on 6 years old) AiO home server (which runs various VMs via ESXi, including a virtualised copy of NexentaStor to provide "loopback" NFS storage to ESXi, plus NAS services). I've spent the better part of a month (on-and-off) doing my research and putting together a build list for the new box. I'm hoping the experts here don't mind casting their eye over it, and letting me know if I've made any dumb decisions or wrong turns. There's also a couple of specific questions along the way.
First, some context - what will this home server be doing? Again, it will be an AiO machine running ESXi on the bare metal. On top of that I plan to run a virtualised copy of FreeNAS, to provide "loopback" storage to ESXi for the other VMs via NFS, plus NAS services to clients on the network via CIFS. The NAS will serve up all the normal data you'd expect from a home NAS - documents, photos, music, movies, TV, etc.
Apart from FreeNAS, the machine will also run a number of other VMs - a couple of light-load Windows VMs for things like Active Directory (and associated services), WSUS, print server and centralised AV (Trend WFB), a router/VPN appliance, plus a few "experimental" VMs from time-to-time when I want to... experiment.
I'll also be using this box to torrent, handle cloud backups (CrashPlan) and run Plex (wanting the ability to transcode two streams at once - including transcoding of very-high-def video (e.g. 4K h264/h265) in the future). On my current AiO these services (Plex, CrashPlan, torrents) are run inside a Windows VM, but with the new box I'd assume it makes more sense to run these inside the FreeNAS VM instead.
Finally, future-proofing - I want this server to last for at least 5 years, and while it is starting life with six HDDs, it needs to be able to handle an additional six in the future (probably 1-2 years away).
With that all out of the way, here's the build list:
CPU / Motherboard / RAM:
Either:
SuperMicro X11SSL-CF
Intel Xeon E3-1240 v5
32GB (2 x 16GB) DDR4-2400 DIMMs from the compatible memory list for this MB
Or:
SuperMicro X10SRH-CF
Intel Xeon E5-2620 v4
32GB (4 x 8GB) DDR4-2400 DIMMs from the compatible memory list for this MB
Either way, FreeNAS would get 16GB of RAM, and the other 16GB would be for the other VMs. I'm deliberately not filling all the DIMM slots with either option, to allow easy future expansion.
The choice between these two options is one I'm struggling with. Do I need the extra horsepower of the E5 now? Probably not. Will I appreciate it in the future? Maybe, especially with some unknowns around the demands of transcoding very-high-def video.
The price difference to the E5 is about $300 AUD (~$225 USD) - about 9% extra on top of the cost of the E3 option. I don't think there's any other appreciable differences here other than performance and price - I don't need the extra features (additional PCIe slots, etc) of the X10 board, maybe there's an increase in idle power with the E5 but I don't imagine it'll be significant, etc.
I'd love to hear some opinions from others on this topic. Would also appreciate any suggestions for a decent HSF (for either option) - this is the one component I haven't had a chance to research yet.
Storage:
VM Storage
1 x Samsung EVO 850 500GB
1 x Crucial MX200 500GB
Through partitioning, I'll use ~16GB of each of these drives to provide FreeNAS with a mirrored boot pool. The remainder of each drive will form a second mirrored pool to provide "loopback" storage to ESXi for my VMs' virtual disks.
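In FreeBSD terms, the partition layout I'm imagining looks roughly like this (device names and GPT labels are hypothetical placeholders - a sketch of the approach, not tested commands):

```shell
# Hypothetical device names (da0/da1) and labels - adjust to suit.
# Each SSD: ~16GB partition for the boot pool, remainder for the VM pool.
gpart create -s gpt da0
gpart add -t freebsd-zfs -s 16G -l boot0 da0
gpart add -t freebsd-zfs -l vm0 da0
# (repeat on da1 with labels boot1 / vm1)

# The VM pool is then a mirror of the two large partitions:
zpool create vmpool mirror gpt/vm0 gpt/vm1
```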
For both pools I'll definitely enable compression, and probably dedupe as well. Obviously dedupe comes with a memory overhead, but there's plenty of RAM available and we're not talking about deduping that much data - and I'd like to maximise what I can fit on the VM virtual disk pool.
I'm proposing two different SSDs to try and eliminate any risk of firmware bugs / manufacturing faults / etc causing the simultaneous failure of both SSDs. The particular models were chosen based on representing good value, and having similar performance characteristics. Not sure if I'm being too paranoid here though, and should just get two of the same SSD...
I don't plan on overprovisioning these drives given my write workload isn't that heavy - it's a home server, after all. Interested to hear if anyone disagrees, though. I also haven't been able to find much guidance about an appropriate ZFS recordsize for ESXi-based VM virtual disk storage over NFS. What little I've found suggests decreasing from default down to 16KB is possibly a good idea, but it's far from conclusive. Again, interested in any thoughts on this.
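If I do go down the smaller-recordsize path, it'd just be a per-dataset property on whatever dataset gets exported to ESXi (dataset name hypothetical; 16K is the tentative value mentioned above):

```shell
# Set a 16K recordsize on the dataset exported to ESXi over NFS.
# Recordsize only affects newly written blocks, so set it up front.
zfs create -o recordsize=16K vmpool/esxi-nfs
zfs get recordsize vmpool/esxi-nfs
```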
Mass Data Storage
3 x WD Red 4TB
3 x Seagate NAS 4TB
These six drives will be combined into one RAIDZ2 vdev to provide a pool for mass data storage / NAS use. I spent a while agonising over striped mirrors vs RAIDZ2, but ultimately went for the latter - RAIDZ2 is more space-efficient, I don't need the additional IOPS of mirrors for this use-case, and I'm OK with expanding in larger drive groups (rather than two at a time). Also, importantly, given this is a home server where I won't be holding spares - and may take several days to a week to source replacements for failed drives - RAIDZ2 seemed safer, despite the longer resilver times.
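For what it's worth, here's the back-of-envelope space comparison that pushed me to RAIDZ2 (raw capacity only, ignoring ZFS overhead and the TB/TiB difference):

```python
# 6 x 4TB drives: RAIDZ2 vs three 2-way mirrors (raw figures only).
drives, size_tb = 6, 4
raidz2_usable = (drives - 2) * size_tb   # two drives' worth of parity
mirror_usable = (drives // 2) * size_tb  # half the drives hold copies
print(raidz2_usable, mirror_usable)      # 16 12
```

So roughly 16TB usable vs 12TB - a third more space for the same six drives.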
Again, I'm mixing drive vendors here to reduce the chance of simultaneous failures. I would've preferred three different vendors, but I don't think there's a third vendor who makes 5,xxx rpm NAS drives? Assuming that's correct, I'll also try to get drives from different batches.
For this pool, no de-dupe (wouldn't make any sense). I don't think compression makes much sense either given most of it will be filled with incompressible data (music, movies, TV), but it sounds like the general best practice is just to leave it on anyway?
SLOG / L2ARC
1 x Intel DC S3700 100GB (used from eBay)
I'll partition this drive to provide SLOG for each of my pools. I've considered carving off a small amount (20GB?) as L2ARC for the mass data storage pool as well, in case it does end up with some hot data from time-to-time, but I'm not sure if that really makes sense or not. Interested to hear any thoughts on the matter.
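Mechanically, the plan is just small partitions on the S3700 added as log devices to each pool (pool and label names hypothetical - a sketch, not tested commands):

```shell
# Attach small S3700 partitions as log devices to the relevant pools.
zpool add vmpool log gpt/slog-vm
zpool add tank log gpt/slog-tank
# Optionally, leftover space as L2ARC for the mass data pool:
zpool add tank cache gpt/l2arc-tank
```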
HBAs / Misc
1 x on-board 8-port LSI SAS controller
1 x LSI SAS2008-based PCIe card
Each of my motherboard options has an 8-port LSI SAS controller on-board. I'll also add a SAS2008-based PCIe card (whatever type I can find cheapest on eBay (that isn't fake!)) and a quality SFF-8087-to-4x-SATA breakout cable. Both controllers will be cross-flashed to IT mode, of course.
It's a little annoying that I'm a single drive over what the on-board controller could handle on its own - but not the end of the world as I'll definitely be adding more disks to this server in the future.
I'll also be buying a small USB thumb drive to boot ESXi off.
Case & PSU
Fractal Design Define R2 XL case
Seems well regarded, reasonable price, can handle 12 x HDDs + a couple of SSDs, and the large physical size isn't an issue for me. I'll also add an additional 140mm fan (and reposition some of the bundled ones) so I have 2 x front 140mm, 1 x back 140mm, and 1 x top 140mm.
Seasonic G-550 PSU
On the PSU front, I've done the math according to jgreco's guide (https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/), and I estimate my max potential power usage will be around 420W initially. Once I add another six HDDs, that figure goes up to 590W. Obviously the latter figure is a little above the rated 550W of this PSU, but I've been conservative when estimating max potential power usage for some of my components.
Estimated idle watts is around 100W / 120W (6 HDDs / 12 HDDs), which is pretty close to the line re the "20% of PSU max watts" (120W) guideline to ensure max efficiency, but I think it should be OK? And I'm not sure what else I can do to better "meet" this guideline, as I think a 450W / 500W PSU would likely be undersized, especially once I add another six HDDs.
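In case it helps anyone sanity-check me, the shape of the calculation is below. The per-component wattages here are illustrative placeholders, not my exact worksheet figures, but they land close to the 420W / 590W numbers above:

```python
# Rough max-draw estimate: fixed platform load plus per-HDD spin-up peak.
# Numbers are placeholders - substitute your own component figures.
base_w = 250         # CPU, board, RAM, SSDs, HBA, fans (estimated peak)
hdd_startup_w = 28   # per-HDD spin-up draw across the 12V/5V rails
for hdds in (6, 12):
    print(hdds, "HDDs:", base_w + hdds * hdd_startup_w, "W max")
```

The per-HDD spin-up figure dominates, which is why adding six more drives pushes the total up so sharply.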
The only other issue with this PSU is insufficient SATA power connectors, but that seems an easy fix with a couple of quality Molex-to-4xSATA adapters.
And that, folks, is just about that. Not sure how many will have made it this far - if you have, thanks for reading. And thanks in advance for any suggestions / feedback / answers.