FreeNAS + ESXi Lab Build Log


Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Hello all,

My requirements for this build are as follows:

1. Use of FreeNAS (why I am here) or other storage solution (if FreeNAS is not advised) to ensure data storage is available, secure, and maintains integrity.
  • I currently have about 2 TB worth of data but plan to digitize the remaining bit of media, and may redo some of it at higher quality if I have the space. I would also like to open this storage/backup option to a few friends and family for backup of pictures, media, etc. I run a Plex server off of my daily PC but would like to move it into FreeNAS.
2. Runs ESXi for home web server and lab use.
  • I have read numerous threads here as well as on other sites. While I know there are risks involved, I would like to run FreeNAS virtualized. If this means a little more work, learning, and use of an HBA I am ok with that. Alternatively, if this is not advised, I may be able to procure a separate host for ESXi, such as an R710. The tradeoff, among many other things, will be the "price" of the HBA card vs the price of running an entirely separate host.
  • On this ESXi host I would like to run a web server, which is currently being hosted elsewhere (paid), as well as a lab environment to learn the other facets of the IT field (Nagios XI, JIRA, Linux distros, Windows Server/AD environment, vCenter).
What I currently have:

I am a bit hesitant to even post this because I am not sure if this is worth the money or stress to do at this time. I have been purchasing, researching, and building out parts of the lab and am learning a hell of a lot. I plan to use this lab not only for my own use and the use of my friends and family, but also for teaching coworkers and friends how these various technologies work. I would like to be ready to take over hosting my websites from the current provider in May, but will have to see if that is a worthwhile venture.

My main question to you is this - with the parts and requirements that I have, would it be best to try and sell what I have and start fresh with a dedicated FreeNAS host with all the "proper" hardware, from a chassis with hot-swap bays through ECC memory? Or, is there a way that I can make an all-in-one? The issues I see are as follows:
  • Non-ECC RAM. Due to my limited understanding of ZFS, I will have to trust the posts from veterans that ECC RAM is essential to keeping data integrity. Storage integrity is not something I want to compromise on, especially once the system is proven and I offer up space to family/friends as a secondary/tertiary backup.
  • Currently no HBA. This can be purchased, but then we get back to the cost of an HBA vs just purchasing a separate ESXi host (R710) and building off a fresh board, CPU, and RAM. I read about RDMs / passing disks through ESXi directly to a VM, but then read posts here advising to the contrary. I'm not even sure if this is relevant today, as I have not given it a test yet (waiting on Kapton tape to arrive today to cover pin 3 on the 8 TB WD white-label drive, which otherwise won't spin up on a PSU that supplies 3.3 V on that pin).
Good things
  • The current mobo, processor, RAM, and the old case with its water-cooling components may be able to fetch a fair price to offset a new build
  • While the Rosewill case is cheap and a PITA, it could work
  • The main use is storage and redundancy. I may keep a large drive on my main PC to do the ripping of media, and simply let the server/FreeNAS do the playback. For this reason a single RAIDZ2 or RAIDZ3 vdev may be sufficient, if not overkill.
  • I am using this as a learning experience and am willing to take a hit or two to get things right
  • I'd like to think that when all is said and done, I can give back to the community with my experience, and possibly offer guided access to the system for yet newer people.
  • I have a 42U rack with 11 Cisco devices in a closet downstairs, so noise is less of an issue. The Cisco 2800 router is pretty obnoxious, but not horrible.
  • There is a good possibility I can get a lot of this equipment locally (eBay or Craigslist), as I live in the home of the Internet / Data Center Alley.

I am curious to hear any of your thoughts. Thank you in advance for all of the work you have done and any more you have to give.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Welcome! Great first post; it's good to see that you're thinking about what you're getting into, and it's clear that you've done quite a bit of research. Hopefully my answers below help. If you need any clarification, just ask!

ensure data storage is available, secure, and maintains integrity.
You're definitely in the right spot. ZFS is the best in the game right now, and in my opinion FreeNAS is the best way to deploy ZFS. You could also DIY it with FreeBSD + ZFS. Personally, I don't put a ton of trust in ZFS on Linux, largely because licensing issues will prevent ZFS from ever being an integrated part of the Linux kernel. In other words, ZFS on Linux will probably always be a community/academic project.

If you decide not to go with ZFS, your next best bet would probably be Microsoft's ReFS. It's a little feature-limited relative to ZFS, but the critical CoW and checksumming features are present. Microsoft disabled the ability to create ReFS volumes within Windows 10 in a recent update (seriously, WTF?), but I believe there is a registry edit or command-line workaround.

I am a bit hesitant to even post this because I am not sure if this is worth the money or stress to do at this time.
I wouldn't say that doing what you are doing is necessarily stressful, but that largely depends on your views going in. With the advice and recommendations here, my first FreeNAS build was a piece of cake: purchase, assemble, install, configure, done. My hardware setup was pretty simple, so there was almost no tweaking needed outside of connecting to my domain, and creating the shares. Obviously, virtualizing FreeNAS will add some additional challenges, but I don't think that it's a huge addition, if you do the research in advance.

My main question to you is this - with the parts and requirements that I have, would it be best to try and sell what I have and start fresh with a dedicated FreeNAS host with all the "proper" hardware, from a chassis with hot-swap bays through ECC memory? Or, is there a way that I can make an all-in-one? The issues I see are as follows:
I would definitely recommend selling your current hardware and buying better hardware. ECC memory isn't necessary in the sense that things will break if you don't use it. However, not using it leaves a gaping hole for problems to sneak in. There's a reason that every server out there uses ECC memory. You see a lot of ECC love here because it goes hand-in-hand with the purpose of ZFS. Using ZFS without ECC is kind of like driving a Volvo without your seat belt: you're not guaranteed to get into an accident if you don't wear your seat belt, but you bought the Volvo for its safety, so why aren't you wearing your seat belt?

I would recommend looking at something second hand. I'm not sure what your noise tolerances are, but you can easily get into a kick-a** server for under $500 second hand. A pretty popular option on eBay are Dell R710 servers. For the more noise sensitive, older workstations are a good choice.

For example, my hypervisor is a Dell T7500 workstation. I bought the workstation for about $300 shipped with an X5660, spent another $120 for 48GB of RAM, and then said "why not?" and bought the second CPU riser and another X5660 for another $200. That means I have a 12 core/24 thread hypervisor with 48GB of RAM for $620. And it's really quiet. For your use, dual CPUs is probably way overkill, so you could have that setup at $420 plus the cost of an HBA.

For this reason a single RAIDZ2 or RAIDZ3 vdev may be sufficient, if not overkill.
I would recommend at least RAIDZ2.

However, there are a couple of things to consider. If you are using FreeNAS to manage your ESXi datastore, then you're going to need to think about performance. It's not necessary to use FreeNAS for this purpose, but it's pretty common. If that's the case, I would only recommend striped mirrors. You may also need to consider a SLOG.

Really, there are a ton of different setup options, and it really depends on your goal. If virtualizing FreeNAS is more about minimizing your hardware, and less about providing ZFS storage for ESXi, then I would do something like: dual drives (maybe SSDs) mirrored for the ESXi datastore, managed by ESXi; and 6x or 8x drives in RAIDZ2, managed by FreeNAS (HBA passthrough). If you want FreeNAS to manage the ESXi datastore, then you've got a couple of different options: (1) mirrored SSDs for the ESXi datastore (NFS share), plus 6x or 8x drives in RAIDZ2; or (2) 6x or 8x drives in striped mirrors (possibly plus a SLOG).
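
To make those layouts concrete, here is a minimal sketch of the zpool commands the two approaches boil down to. The pool name "tank" and the da0-da6 device names are placeholders I made up, and FreeNAS would normally build these pools through its GUI rather than the shell:

Code:
  # Bulk storage: one 6-disk RAIDZ2 vdev; any two disks can fail.
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5

  # VM datastore: a stripe of three 2-way mirrors for random IOPS,
  # plus a fast SSD added as a SLOG for sync-heavy NFS writes.
  zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
  zpool add tank log da6

Striped mirrors give up capacity (50% usable) in exchange for much better random I/O, which is why they come up for VM storage while RAIDZ2 comes up for bulk media.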
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
For now I believe minimizing the hardware would be best. In the future, if I really get into using the VMs or want to separate the hypervisor and FreeNAS, I may get a second host. With that said, the plan would look something like this.
  1. Ideally sell the Mobo, CPU, and RAM together to get enough money to...
  2. Buy a new mobo, CPU, ECC RAM, and an HBA. I'd like to get a dual-socket mobo for future expansion, but would need to be careful about getting one that does not require two CPUs to start. Normally I prefer Newegg, but I may have to go through eBay for a better price.
  3. Find a bunch of smaller disks for a RAIDZ2 in one zDev, one zPool, and have the 8TB in a secondary zpool for backup only.
I will have to come back with specifics on the new parts, but I will go with a lot of what I have read here. It may look like Chris's post here: https://forums.freenas.org/index.ph...motherboard-for-freenas-11.61802/#post-439901. So much to read, so much to learn.
 

loch_nas

Explorer
Joined
Jun 13, 2015
Messages
79
@Nick2253:
Very well written!

I would definitely recommend selling your current hardware and buying better hardware. ECC memory isn't necessary in the sense that things will break if you don't use it. However, not using it leaves a gaping hole for problems to sneak in. There's a reason that every server out there uses ECC memory. You see a lot of ECC love here because it goes hand-in-hand with the purpose of ZFS. Using ZFS without ECC is kind of like driving a Volvo without your seat belt: you're not guaranteed to get into an accident if you don't wear your seat belt, but you bought the Volvo for its safety, so why aren't you wearing your seat belt?
A perfect metaphor for ECC. Thanks for that :)
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Find a bunch of smaller disks for a RAIDZ2 in one zDev, one zPool, and have the 8TB in a secondary zpool for backup only.
Minor terminology pedantry:

A device in ZFS is a vdev, and a pool is just a pool. zpool is the command used to work with ZFS pools.
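
To illustrate (pool and device names hypothetical), the hierarchy shows up directly in the command syntax:

Code:
  # One pool ("tank") built from a single 6-disk RAIDZ2 vdev.
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5

  # A pool grows by adding another vdev, not by growing a vdev.
  zpool add tank raidz2 da6 da7 da8 da9 da10 da11

  # "zpool" is the admin command; the pool itself is just "tank".
  zpool status tank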
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
So much to read, so much to learn.
In my signature, there is a button called "Show : Useful Links"... You might want to look at those.

One that is particularly applicable to your stated goal is this one:
Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

@Stux built an All-In-One system and virtualized several functions within it. He also has some other linked threads where he tests the performance of a SLOG device.
If you are willing to consider used server gear, this one looks to be a pretty good value, especially because of the amount of RAM:
https://www.ebay.com/itm/Supermicro...-2-6ghz-8-Core-128gb-24-Bay-JBOD/232656106862
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
I have read through so many threads here, including many (but possibly not all) of your links, that it is all a bit jumbled. I thank you all for the input, though. I may have to do some selling first before I am ready to drop a grand and "do it right".

Nick, I had read in a nice pdf (of a ppt) that vDevs are parts of larger zPools. Thanks for the correction.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I had read in a nice pdf (of a ppt) that vDevs are parts of larger zPools.
It is an easy misunderstanding, because the command to create a ZFS pool is zpool create. So here is another link to make your head spin... ;)
https://www.freebsd.org/doc/handbook/zfs-zpool.html

Thankfully, FreeNAS puts most of those maintenance tasks into a GUI, so it is point-and-click. I administer some servers at work that are using ZFS on Linux, and all the admin is done at the command line with a combination of scripts and cron jobs. I really wish I could use FreeNAS for it, but the boss won't let me.
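
As a rough illustration of that command-line routine (the pool name, schedule, and address below are made up, not taken from Chris's servers), the cron side often looks something like this, where FreeNAS would instead schedule the same scrub and reporting tasks from the GUI:

Code:
  # /etc/crontab - scrub the pool monthly, mail a health summary daily
  0 2 1 * * root /sbin/zpool scrub tank
  0 8 * * * root /sbin/zpool status -x | mail -s "ZFS health" admin@example.com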
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Using zpool instead of pool is not a big deal, but the extra z is a waste of time.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
As an update, I have revised what computer parts I am looking to sell and have listed them on Craigslist. I will be listing parts of this on eBay soon as well.
Craigslist: https://washingtondc.craigslist.org/nva/sop/d/caselabs-sth10-intel-4790k/6507380467.html

Ideal Plan
  • Supermicro 4U Chassis (For long term)
  • Supermicro mobo (still a little confused on which model to pick) (may or may not need an HBA)
  • Xeon processor, high core count (if I need multi-socket, it may be best to simply have a separate host like an R710 for ESXi, then run FreeNAS bare metal on this build)
  • Plenty of ECC RAM, though I don't think I will ever need past 32 GB (12 GB for FreeNAS, the rest for other VMs)
  • New, redundant PSUs
  • UPS
  • 3-5 more 8TB WD NAS drives from the current deal (for a total of 4-6); will do RAIDZ2. I have read that using 4 drives for RAIDZ2 is "inefficient" (see the rough math just below this list), but I don't think I need the efficiency for much, yet. I am also trying to keep the cost down for the initial plan. I should be able to move the data and redo the vdevs as needed if and when I want to add more drives. I have two 4TB WD Green drives that I could use for this, or I could put those drives into the arrays initially. Still learning about the various RAID implementations/efficiencies/best practices.
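
The rough numbers behind that "inefficient" comment, ignoring ZFS metadata overhead and the TB-vs-TiB difference:

Code:
  RAIDZ2 usable space = (N - 2) x drive size
  4 x 8 TB: (4 - 2) x 8 = 16 TB usable  (50% of raw)
  6 x 8 TB: (6 - 2) x 8 = 32 TB usable  (67% of raw)
  8 x 8 TB: (8 - 2) x 8 = 48 TB usable  (75% of raw)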
Non-ideal
  • Keep Rosewill chassis, PSU (800W Gold)
  • Still buy new mobo (+HBA?), Xeon processor, ECC RAM, UPS, more drives
I've been dabbling and did a build of an R710 on savemyserver.com with 12 cores @ 2-3.5GHz, 32 GB RAM, an H200 card (I think that works with FreeNAS), rapid rails, and 6 drive trays for $800-$1,150 depending on the CPU clock speed. I am a little worried about getting a chassis with more bay-expansion room, but this may be well worth it for an AIO (6x 8TB NAS), with expansion to a second chassis when needed. Could this rig meet my requirements? I have the impression, likely unfounded, that you can only run Dell-approved disks in Dell machines. Unfortunately it looks like only the Dell R720 and up are marked as approved for ESXi 6.5 and up, but the R710 would probably still work.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Xeon processor, high core count
Even in Xeons, it is better to look for a higher clock speed. Core count doesn't help as much with most tasks at home. If you get (for example) a dual-socket board and populate both sockets with 4-core processors, that gives you 8 real cores (16 with hyper-threading), and that is enough for almost anything you will likely do. The thing to absolutely avoid, in my opinion, is getting 8- or 10-core CPUs that are running at 1.7GHz. We had someone where I work order a bunch of systems like that. It was horrible. It may not be video games, but clock speed still matters. Don't go overboard and get the fastest thing going, because that will really set you back, but don't chase core count over clock speed. The 2.6 to 2.8 GHz range is usually affordable and performs well. You can sometimes find a deal on chips in the 3.1 to 3.4 GHz range.
Just ask if you have any questions.
if I need multi-socket
The reason for multi-socket boards is that they give you access to more PCIe lanes for connecting things like drive controller cards. There was someone on here recently who built a storage server and an ESXi server, and they used dual-socket boards in both, if I recall. I will see if I can find their thread. They took some good photos of the build.
Could this rig meet my requirements? I have the impression, likely unfounded, that you can only run Dell-approved disks in Dell machines.
Once you flash the H200 card to IT mode, you can run any kind of disk on it. Dell charges a premium for their disks, but they are not required. I have bought Dell servers at work and put non-Dell drives in them, and the servers don't care. Dell cares, because they missed out on that up-charge. Would you believe that they want $1,600 for a 10TB drive?
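
For reference, that crossflash is usually done from a DOS or UEFI shell with LSI's sas2flash tool. What follows is only a hedged outline of the procedure as it appears in the commonly linked guides (firmware file names vary by guide, and a mistake can brick the card), so work from a full walkthrough before touching anything:

Code:
  sas2flash -listall                     # record the card's SAS address first
  megarec -writesbr 0 sbrempty.bin       # wipe the Dell SBR (DOS-only tool)
  megarec -cleanflash 0                  # erase the flash, then reboot
  sas2flash -o -f 2118it.bin             # write LSI 9211-8i IT-mode firmware
  sas2flash -o -sasadd 500605bXXXXXXXXX  # restore the saved SAS address

Skipping the boot ROM that some guides add (the -b mptsas2.rom step) is fine for FreeNAS, since the HBA only needs to be visible to the OS, not bootable.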
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Well, that sounds like a plan then. If I can get some of this old stuff sold off, I can get at LEAST an R710 with 6 bays for the RAIDZ2 (I found a thread from this forum about a similar setup: https://forums.freenas.org/index.php?threads/dell-poweredge-r710-used-as-a-nas.44364/). I'd have some extra parts out of it, but the 6x8TB RAIDZ2 should be more than enough storage, plus the ability to do ESXi labs without much trouble. Adding another 5 drives would round out the cost to $2,000, but what a setup that would be! Need to sell before the trigger finger gets itchy...

I'd still rather go for a Supermicro chassis and be able to customize the board, PSU, and other components a little better. That, and upgrading would be a lot easier without proprietary parts. I will have to keep an eye out. This place offers a way to build out and price Supermicro stuff, but I will still look to eBay for the deals: https://www.theserverstore.com/Supermicro-4U-Server-W-X10SLM-F_p_660.html.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Hubba hubba. That is really nice. I remember reading that thread, but it got lost. What is the advantage or disadvantage of this vs SuperMicro vs Norco vs other brands? The case I have right now is fine for the 6 drives I want; though it's not hot-swappable and a PITA, it would work.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What is the advantage or disadvantage of this vs SuperMicro vs Norco vs other brands?
The advantage to me, for my home use, is that I will be able to upgrade the system board later to one with more compute capacity, like a dual-socket model, and virtualize my own FreeNAS install. Once I have ESXi running on bare metal and FreeNAS inside a virtual machine, I can create other virtual machines that have (through a virtual switch) fast access to the storage that FreeNAS is sharing. Having the additional PCIe lanes will allow me to install additional dual-port 10Gb and quad-port 1Gb network adapters to let those virtual machines and virtual switches talk to the real systems in my home. All the extra drive bays mean that I can have some drives in RAIDZ2 for bulk storage and other drives in a stripe of mirrors for speed to host the virtual machines. I have a plan, and I am working toward a similar goal. I bought this chassis with the future in mind, so everything can fit in the one chassis. I still have a backup server for the important things, but this will be my future VM host.

PS. The big deal was the price. I got that 48-bay chassis for only $350 plus shipping. It has two 24-port SAS expander backplanes, so I can run all the drives on one SAS controller.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
I hate to ask this, but what motherboard, or at least what size of motherboard, would you recommend for an all-in-one? The RAM and CPU I am more familiar with, but the motherboard and case nomenclature I am not. I see that the Supermicro SC846* is popular, but I am unaware of, or have forgotten, the differences between the variants. I am using https://www.theserverstore.com/SuperMicro-4U-Servers_c_49.html as a guide for pricing (not meant to take away affiliate sales) and am trying to compare that to what I have learned here.

Chassis: https://www.supermicro.com/products/chassis/4U/?chs=846
Mobo: https://www.supermicro.com/products/motherboard/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I hate to ask this, but what motherboard, or at least what size of motherboard, would you recommend for an all-in-one? The RAM and CPU I am more familiar with, but the motherboard and case nomenclature I am not. I see that the Supermicro SC846* is popular, but I am unaware of, or have forgotten, the differences between the variants. I am using https://www.theserverstore.com/SuperMicro-4U-Servers_c_49.html as a guide for pricing (not meant to take away affiliate sales) and am trying to compare that to what I have learned here.

Chassis: https://www.supermicro.com/products/chassis/4U/?chs=846
Mobo: https://www.supermicro.com/products/motherboard/
I would not buy from them. The configurator they have is cool and all, but their prices are not very good.
I like this system board: X9DRI-F. You can make an all-in-one unit with a lot less, like the one that @Stux built. Did you look at that link?
I like the X9DRI-F because it would give me room to grow, and that is the kind of system board I will probably get. If I had the cash to do it, I would buy an X10 version, but those are not really available on the second-hand market, because they are too new. The one you pointed out is a single-processor unit, the X10SLM+-F, and the most likely reason it is up on the used market so early is that they found it didn't have a feature they needed. It is too new to be retired under normal circumstances. The big problem with that board, in my mind, is that it doesn't have enough card slots for what I see as possible needs for my system. I am going to want at least four cards (one SAS controller, one NVMe SSD, and two network cards), and that board only has three slots, so it wouldn't be an option.
The system on the Server Store site that could be configured like the one I linked to on eBay is this one:
https://www.theserverstore.com/SuperMicro-846E16-R1200B-W-X9DRI-F-24-x-LFF-4U-SERVER_p_598.html
If you configure it with the same processor, RAM, and SAS controller that are included in the eBay auction I linked to, the system from the Server Store would be $1,401 and the one on eBay is only $1,039.99... So you can pay more, but I don't like to do that, which is the reason I don't send people to look at equipment from the Server Store and don't buy from them myself unless I can't find something anywhere else.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Yeah, I don't plan to buy from them either; it was just a tool to learn about the various boards and cases. I think what I need to do is a little more research on the boards to find one that meets my needs. It is one of the harder things to research for a desktop, so it makes sense that it would be equally hard, if not more so, for servers, given the various roles and specs you may want. I may stick with the Rosewill for now and upgrade when I sell off enough of the old gear. Thanks for all the help. I will be browsing the forums to look at the different motherboards (more than are in the hardware guide) and the Supermicro site. I will keep you updated.

Edit: found a great thread in the hardware forum describing the Supermicro motherboard naming conventions: https://forums.freenas.org/index.php?threads/supermicro-motherboard-part-number-guide.17511/

Also found https://forums.freenas.org/index.php?resources/supermicro-x10-and-x11-motherboard-faq.5/. It seems that now that I understand things a bit more, previous resources are clicking...

After looking a bit more I think I am just gonna stick with the guides and examples here. There is just too much out there and I'm going to trust the info here to get started. Besides, I don't need anything latest gen as it would be overkill anywho.

How much CPU would I be looking at for occasionally busy storage, Plex, a website, and lab VMs? Hmm...

Option 1 (high end, wow, just wow)
Option 2 (X10, $700-840 piecemeal)
This is only considering microATX, non-packaged deals. I may look at ATX boards of the older generation for additional CPU variation, as I will be running ESXi with a few more VMs. Getting a lot closer to a build though; feeling good.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
After reading some more about virtualization CPU requirements, I believe the mATX board and suggested Xeon should be adequate. I will start pricing out a build and update this post.

Alright, you guys' site is just awesome. Whenever I go to take a second look, or have a question and look around first, I find even more resources. I am going to have to make a post of sorts in the newcomer section with my findings and a meta-guide.

Hardware Recommendation Guide - https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/
Confusion About HBA Cards? - https://forums.freenas.org/index.php?threads/confused-about-that-lsi-card-join-the-crowd.11901/
SuperMicro Motherboard Naming Convention - https://forums.freenas.org/index.php?threads/supermicro-motherboard-part-number-guide.17511/
SuperMicro X10 and X11 FAQ - https://forums.freenas.org/index.php?resources/supermicro-x10-and-x11-motherboard-faq.5/
SuperMicro X11 Boards - https://forums.freenas.org/index.php?resources/so-you’ve-decided-to-buy-a-supermicro-x11-board.13/
SuperMicro x10 Boards - https://forums.freenas.org/index.php?resources/so-you’ve-decided-to-buy-a-supermicro-x10-board.14/
RAM for X10 Boards - https://forums.freenas.org/index.ph...ns-for-supermicro-x10-lga1150-motherboards.6/
Testing Your System - https://forums.freenas.org/index.php?threads/building-burn-in-and-testing-your-freenas-system.17750/

Another useful one for a future guide dealing with FreeNAS and ESXi - https://forums.freenas.org/index.ph...er-esxi-6-with-x10sl7-f-and-sas-drives.36412/

The X10 build is what I would go with above. Now just to wait on selling the original items or getting an itchy trigger finger. Thank you all again for the guidance.
 

Maelos

Explorer
Joined
Feb 21, 2018
Messages
99
Final Build Check
Any reason the higher-numbered CPU model is cheaper than the 1231? Why is there such a wide price range across the various models? I am finding the 1231 to be noticeably more expensive than the higher-numbered models.

Total is 250+160+300+150+20 = $880 without drives or a newer PSU. Hmm. Any suggestions? Wait and pick up the pieces slowly?
 