Home Whitebox ESXi with FreeNAS handling local storage

Status
Not open for further replies.

invisiblade

Cadet
Joined
Aug 28, 2012
Messages
6
I'm kind of new to FreeNAS, ESXi, and really anything beyond home Windows support. Please forgive my level of knowledge, but I'm reading a lot and trying to learn. This seems like a fairly large challenge, at least to me at this point. I'm looking for suggestions and different points of view.

Here's the scenario. I have a decent box (back to hardware in a minute) that I would like to set up as my home All-In-One server. I've been reading a lot on how best to configure it, and I'd like to utilize the hardware I have without having to buy anything, or only the minimum amount. I know it won't be long before I need to replace the drives, but I want to use the ones I have for now.

My current plan is to install ESXi on it and run a handful of VMs. I've already installed ESXi and confirmed that it works with the hardware I have, but just as a test. The problem I have is that it doesn't support the onboard RAID controller, and even if it did, that's not the greatest option.

What I'd like to do to handle the local drives is run FreeNAS as a VM on the machine, pass all the drives to it (if possible), set up ZFS, and then serve iSCSI back to the ESXi host to create the other VMs on.

Here are the main VMs I want to run:

High Importance VMs
pfSense VM
This will be my main firewall for my house. I want to do proxy caching, as much as reasonably possible, so I need some space for that.

Linux Server VM
This will be running maybe 100 PHP scripts all day (not necessarily all at once) and storing a PostgreSQL database of a few hundred thousand records.
This needs to be fast-ish, on 24/7, and 'safe' with redundancy. I will back it up to a separate location on top of the local redundancy.
I'm not sure how big the database will be yet; it's all text-based information, so I imagine even if I hit a million records it can't be that big (I've sketched a rough size estimate below, after the VM list).

Would also like on this server:
Windows VM
This is to stream movies to my Xboxes. I might do a Linux box if I can get it to work the way I want. I can run this elsewhere if needed.
I have about 6 TB total in movie ISOs (nothing pirated), although I'll likely shrink that down to about 3 TB in the next few months as I convert and remove ISOs.

File/Backup Server
I have about a TB of other "Random" data, and would like to do backups of a few machines.

Later Expansions:
An ownCloud server
Test VMs with various OSes
Maybe other Windows or Linux servers, but with minimal workloads.
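
To sanity-check that "can't be that big" hunch about the database, here's a rough back-of-envelope calculation in Python; every per-record number in it is a guess on my part, not a measurement:

    # Rough estimate of PostgreSQL database size for text-only records.
    # The bytes-per-record figures are assumptions: PostgreSQL row
    # overhead is roughly a few dozen bytes, plus the text payload,
    # plus some allowance for indexes.
    records = 1_000_000
    avg_text_bytes = 500      # assumed average text payload per record
    row_overhead = 28         # approximate per-row overhead (assumption)
    index_factor = 1.5        # assume indexes add ~50% on top

    raw_bytes = records * (avg_text_bytes + row_overhead)
    total_gb = raw_bytes * index_factor / 1024**3
    print(f"Estimated size: {total_gb:.2f} GB")   # ~0.74 GB

So even at a million records, it looks like well under a few GB unless the rows are much fatter than I'm guessing.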

The hardware I have is this:
ASRock Fatal1ty Professional motherboard (http://www.newegg.com/Product/Product.aspx?Item=N82E16813157299)
i7-3770 (http://www.newegg.com/Product/Product.aspx?Item=N82E16819116502)
32 GB RAM (4x8 GB G.Skill DDR3-1600)
6x WD Green 2 TB drives
2x WD Green 3 TB drives
1x Seagate 3 TB drive (not sure which one)
1x Samsung 830 256 GB

I have a good case and an awesome power supply, so I'm not worried about those.

I'm not really sure of a couple of things:

A. Performance. Running FreeNAS as a VM, passing the drives to it, setting up zvol(s), and serving iSCSI back to the host to run the other VMs on. Obviously with slow drives like mine it won't be great, but can it at least handle what I have? There's also the performance of the different RAID options: the movies just need decent reads, not much write, while the Linux server and such need a fair amount of both, and I'd like to keep that data safe.

B. How exactly to set up the drives. I can get a USB drive (or two) to install ESXi on, and install FreeNAS either on a USB drive or directly on the SSD; then I have a single point of failure on that drive. I'm thinking of setting up six drives in a RAIDZ2 for movies and file/backups. Take three drives and set up a mirror? Or get a cheap SATA PCI card for the SSD and set up a RAID 10 with one more drive? I'd kind of like the Linux PHP server on an SSD for speed, replicated to a different volume. Maybe get a second SSD for that server and set up a mirror with the two drives?
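
To compare the capacity of those options, here's a quick sketch (raw usable space only, ignoring ZFS overhead and the TB-vs-TiB difference):

    # Usable capacity for the pool layouts under consideration.
    # RAIDZ2 of n drives loses two drives to parity; a mirror keeps
    # one drive's worth regardless of width; RAID 10 keeps half.
    def raidz2(n_drives, size_tb):
        return (n_drives - 2) * size_tb

    def mirror(n_drives, size_tb):
        return size_tb               # every drive holds the same data

    def raid10(n_drives, size_tb):
        return n_drives // 2 * size_tb

    print("6x 2TB RAIDZ2 :", raidz2(6, 2), "TB usable")   # 8 TB
    print("3x 3TB mirror :", mirror(3, 3), "TB usable")   # 3 TB
    print("4x 2TB RAID10 :", raid10(4, 2), "TB usable")   # 4 TB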


I'd like different points of view. Does anyone have experience doing something similar with FreeNAS? Or am I completely crazy, and should I separate FreeNAS onto its own box or just get an iSCSI NAS (right now I can get a ReadyNAS Ultra 6 with two 3 TB Seagate 7200 RPM drives for $700)? Or am I missing something obvious that would optimize the setup?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Even if you really knew what you were doing, ESXi and FreeNAS make for curious bedfellows. In particular, ESXi wants very desperately to gain full access to all its datastores before it starts launching VM's, so a datastore that is dependent on one of those VM's being up and available is basically going to turn into a freakish nightmare/disaster at some point (like the first time you need to reboot ESXi).

Some of us do in fact use ESXi and run FreeNAS as a VM for other non-ESXi storage needs, but the recipe for doing so is not obvious, has significant potential for tears, and the forums are littered with the shattered bits of people who didn't know what they were doing and ended up losing it all. FreeNAS works best if it has direct access to the disks, meaning that unless you have something like PCI Passthrough to give the SATA/SAS controller to the FreeNAS VM, you are likely to end up in a bad place. PCI Passthrough works fine on a lot of server-grade (read: "Xeon") hardware, which you don't have. It may appear to work fine on a lot of prosumer grade (read: "i3/i5/i7") boards, but subtle failures seem to be common. A company like Supermicro sees a huge percentage of its hardware shoved into virtualization environments and a lot of them rely on the more obscure CPU features to work correctly. Your desktop board manufacturers ... often don't, so bugs don't get found and fixed as easily, or at all.

If you really want an All-In-One solution, here's what I'd do.

1) Figure out a way to boot ESXi that's reliable. We've been using IBM M1015's in IR mode with a pair of SSD's in RAID1 and it's absolutely awesome. This gives you a controller and a datastore on which to host your VM's.

2) Figure out if PCI Passthrough on your board is reliable. Try passing through the SATA controller on the board to a VM and then beat it to death for a few weeks (a crude integrity-check sketch follows this list). Then, decide if you trust it or not.

3) Then use FreeNAS as a VM to provide NFS/CIFS access to those PCI Passthrough'd disks (you can have your other VM's mount those resources directly, and ESXi can provide ordering support for startup).
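
As a concrete starting point for the "beat it to death" part of step 2, something like the following Python script, run in a loop for days against a filesystem on the passed-through disks, will shake out the obvious corruption cases. The path and sizes are just examples, and you want the test file bigger than the VM's RAM, or the read-back gets served from the page cache instead of the disk:

    # Crude write/read-back integrity check for a passed-through disk.
    # Any mismatch means the passthrough (or the disk) can't be trusted.
    import hashlib
    import os

    TEST_FILE = "/mnt/passthru_test/burnin.dat"  # example path on the disk
    BLOCK = 1024 * 1024                          # 1 MiB per block
    BLOCKS = 16 * 1024                           # 16 GiB per pass

    def one_pass():
        h_write = hashlib.sha256()
        with open(TEST_FILE, "wb") as f:
            for _ in range(BLOCKS):
                block = os.urandom(BLOCK)
                h_write.update(block)
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        h_read = hashlib.sha256()
        with open(TEST_FILE, "rb") as f:
            while True:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                h_read.update(chunk)
        if h_read.digest() != h_write.digest():
            raise RuntimeError("data corruption detected")

    for i in range(1000):
        one_pass()
        print("pass", i, "OK")

It's not exhaustive by any means, but weeks of clean passes under load is a lot more convincing than "it booted".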

In between those steps, you should be reading everything that's been posted on these forums about this topic.

Or you can do it some other way, live more dangerously, and then if and when your bits spill, ... sigh.
 

phoenix

Explorer
Joined
Dec 16, 2011
Messages
52
I'd like to add my two cents' worth to this thread. Just for confirmation that your motherboard is OK for ESXi with passthrough, take a look at this blog post. I had been looking for a suitable whitebox ESXi/FreeNAS box for a while and came across the vZilla blog. I purchased a Fractal case (although a slightly smaller one), the same motherboard and memory (32 GB), and an IBM M1015 controller (flashed to IT mode), plus some WD Red hard drives. After beating the server to death for a few weeks and making sure it did what I needed, I copied my data to the newly installed FreeNAS VM and haven't looked back since. It's a great piece of kit; ESXi runs smoothly, as does FreeNAS (plus six other VMs) streaming digital content to my HTPC. It's been running now for about five months without problems.
 

invisiblade

Cadet
Joined
Aug 28, 2012
Messages
6
1) Figure out a way to boot ESXi that's reliable. We've been using IBM M1015's in IR mode with a pair of SSD's in RAID1 and it's absolutely awesome. This gives you a controller and a datastore on which to host your VM's.

RAID 1 SSDs? I understand the performance gain on reads; my thought, though, is that both 'should' die at about the same time, since the writes are identical on both drives. So RAID 1 mostly protects against mishaps beyond what RAID 0 does, and you can replace one drive after a while so they don't die at the same time. Just a thought; I would still stick with 1 over 0 here if possible.

Do you think RAID 1, or 0, of two SSDs would provide enough IOPS for the VMs I have planned?
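
For my own planning I tried a crude budget like this; every per-VM number is a pure guess on my part, not a measurement:

    # Very rough IOPS budget. All per-VM figures are guesses; a single
    # decent SATA SSD is commonly rated for tens of thousands of random
    # IOPS, so even generous guesses leave a lot of headroom.
    vm_iops_guess = {
        "pfSense (proxy cache)":   100,
        "Linux/PostgreSQL":        500,
        "Windows media streamer":   50,
        "File/backup server":      100,
    }
    total = sum(vm_iops_guess.values())
    print("Estimated peak demand:", total, "IOPS")   # 750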

3) Then use FreeNAS as a VM to provide NFS/CIFS access to those PCI Passthrough'd disks (you can have your other VM's mount those resources directly, and ESXi can provide ordering support for startup).

From what I've gathered, iSCSI is significantly faster. I guess whether it makes a difference depends on what data actually needs to be transferred.
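
When I get that far, I'll probably benchmark both from a guest with something simple like this; the path is a placeholder for wherever the share or datastore ends up mounted:

    # Crude sequential-write throughput test, usable against either an
    # NFS mount or a disk backed by an iSCSI datastore. The path is an
    # example, not a real mount point.
    import os
    import time

    PATH = "/mnt/testshare/throughput.dat"
    BLOCK = 1024 * 1024          # 1 MiB per write
    BLOCKS = 1024                # write 1 GiB total

    buf = os.urandom(BLOCK)
    start = time.time()
    with open(PATH, "wb") as f:
        for _ in range(BLOCKS):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    print(f"{BLOCKS * BLOCK / elapsed / 1024**2:.1f} MB/s")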


One thing I've caught onto is that RAM is pretty much the key factor with FreeNAS. Even though I'm maxed out at 32 GB, I don't know if that will be enough; at the very least there's not much room for expansion. With, say, 12 TB of space, FreeNAS would want 18 GB at minimum. I'll have to test that along with the SATA throughput.
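
For reference, that 18 GB figure comes from the rule of thumb I've seen around these forums: a fixed base plus roughly 1 GB of RAM per TB of raw storage. As a quick calculation (the 6 GB base is just back-solved from my own 18 GB number; other threads quote 8 GB):

    # FreeNAS RAM rule of thumb: a base amount plus ~1 GB per TB of
    # raw storage. The 6 GB base is an assumption back-solved from the
    # 18 GB figure above; some threads quote 8 GB instead.
    base_gb = 6
    storage_tb = 12
    recommended_gb = base_gb + 1 * storage_tb
    print(recommended_gb, "GB recommended minimum")   # 18 GB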

Thanks for the input!
 

invisiblade

Cadet
Joined
Aug 28, 2012
Messages
6
I'd like to add my two cents' worth to this thread. Just for confirmation that your motherboard is OK for ESXi with passthrough, take a look at this blog post. I had been looking for a suitable whitebox ESXi/FreeNAS box for a while and came across the vZilla blog. I purchased a Fractal case (although a slightly smaller one), the same motherboard and memory (32 GB), and an IBM M1015 controller (flashed to IT mode), plus some WD Red hard drives. After beating the server to death for a few weeks and making sure it did what I needed, I copied my data to the newly installed FreeNAS VM and haven't looked back since. It's a great piece of kit; ESXi runs smoothly, as does FreeNAS (plus six other VMs) streaming digital content to my HTPC. It's been running now for about five months without problems.

Wow, great info from the blog. Thank you.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
RAID 1 SSDs? I understand the performance gain on reads; my thought, though, is that both 'should' die at about the same time, since the writes are identical on both drives.

Despite what people like to think, SSD reliability is not really that different from hard drive reliability. If you have "High Importance" VM's and this is your "All-In-One" server, and it's your firewall and NAT gateway and the kitchen sink too, you might wish to consider whether or not it is worth making preparations for the scenario where a single drive failure takes out Everything You Have. Whether both 'should' die about the same time with an equal number of writes is basically irrelevant. Maybe that's five years off. What happens when you have a drive go DOA in 3 months?

Do you think RAID 1, or 0, of two SSDs would provide enough IOPS for the VMs I have planned?

I would imagine that to be dependent on the workload. This is a FreeNAS forum and discussions of VMware workloads on storage directly attached to a VMware host are probably off-topic. There are better experts for that elsewhere, but basically SSD is pretty fast, and if you are tasking a VM with IOPS that SSD cannot keep pace with, you are probably doing a number of things wrong.

One thing I've caught onto is that RAM is pretty much the key factor with FreeNAS. Even though I'm maxed out at 32 GB, I don't know if that will be enough; at the very least there's not much room for expansion. With, say, 12 TB of space, FreeNAS would want 18 GB at minimum. I'll have to test that along with the SATA throughput.

RAM is important, yes. A properly resourced FreeNAS server should shine under heavy workloads, but the definition of "properly resourced" may be unpalatable. More spindles are often good. More memory is generally good. Given sufficient ARC and L2ARC, a ZFS system will eventually be servicing all popular content from cache, significantly improving on apparent pool I/O speed. However, you can give ZFS a slow pool and that can hurt you as much as - or more than - too little RAM.
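
The effect of cache on apparent speed is easy to see as a weighted average; the hit rates and latencies below are purely illustrative numbers, not measurements:

    # Effective read latency as a weighted average of cache hits and
    # pool reads. All figures here are illustrative only.
    ram_latency_us  = 10        # ARC hit, roughly memory speed
    pool_latency_us = 10_000    # random read on a slow spindle pool

    for hit_rate in (0.50, 0.90, 0.99):
        eff = hit_rate * ram_latency_us + (1 - hit_rate) * pool_latency_us
        print(f"hit rate {hit_rate:.0%}: effective latency {eff:,.0f} us")

Note how going from a 50% to a 99% hit rate cuts effective latency by nearly two orders of magnitude, and also how a sufficiently slow pool dominates the result at any realistic hit rate.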
 

invisiblade

Cadet
Joined
Aug 28, 2012
Messages
6
I would imagine that to be dependent on the workload. This is a FreeNAS forum and discussions of VMware workloads on storage directly attached to a VMware host are probably off-topic. There are better experts for that elsewhere, but basically SSD is pretty fast, and if you are tasking a VM with IOPS that SSD cannot keep pace with, you are probably doing a number of things wrong.

I do actually have this posted in other forums as well; since this is a multi-system setup, I understand each component will have its own knowledgeable experts. And yeah, fair point: if the IOPS demands are that high, then there's probably something wrong.
 