New Supermicro FreeNAS / SOHO Server Build - What OS drives needed if running virtualized?


FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13
Hi there,
I'm just about to push the order button on a new FreeNAS build.
This is my current parts list:
-----
MOBO: http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-12C-TLN4F.cfm
CASE: https://www.supermicro.com.tw/products/chassis/tower/721/SC721TQ-250B
RAM: Kingston 32GB DDR4 2133 Reg ECC (x2)
STORAGE DRIVES: HGST 3.5in 26.1MM 6000GB 128MB 7200RPM SAS 512E ULTRA ISE (x4)
HBA: https://www.broadcom.com/products/storage/host-bus-adapters/sas-9211-8i
-----
Please feel free to provide feedback on any of the above, btw! (The reason for the 12C Xeon is that we are planning to run a lot of virtualized applications in the not-too-distant future.)


What I need advice on is the best setup for running FreeNAS virtualized under a hypervisor such as Proxmox.
The above case/mobo has room for another two 2.5" drives, so I was thinking of potentially specifying a pair of mirrored SSDs, but I am a little unclear on what experienced users would recommend.
 
Joined
Jan 7, 2015
Messages
1,155
I'm leery of these SoC Xeon D boards. They are small, and I get it, but I feel like you would be better off with a proper Xeon chip and an X11 board. That said, I have never used or fiddled with an SoC board; I just know that at this price point they are comparable to each other. I also feel like these hot-swap cases are nice but unnecessary. After you build, you aren't going to be pulling the drives out every other day; think more along a multiple-year time frame. It gives more of a server feel, and if that's what you are after, so be it, they are nice. A Fractal Node case is comparable, they are everywhere, and once the thumb screws and side panel are off the drives are just as easy to remove and replace.

Now, with that out of the way, the parts you have chosen should be just fine for what you are wanting to do. Preference, that's all it is. On to the meat of the question. SSD drives, mirrored, yes. They should be large enough to hold your VMs/jails/containers. A couple of 500GB 6Gb/s drives mirrored will do very nicely. I run 250GB drives and I'm wishing I had gone with 500s. Now I'm getting to the point where I'm thinking about adding 2 more 250s and going to Z2 for my jail pool. They free up the storage drives to just store, aren't totally necessary, and are snappier for VMs and such. Ultimately it's a matter of preference and available SATA ports.
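
For illustration, this is roughly what that mirrored SSD pool looks like from the console. A minimal sketch only: FreeNAS would normally build this through the GUI (using gptid labels rather than raw device names), and the device names ada4/ada5 and the pool name "ssd" below are placeholders.

  # create a 2-way mirror from the two SSDs (hypothetical device names)
  zpool create ssd mirror /dev/ada4 /dev/ada5

  # a dataset for VM disks, with lz4 compression enabled
  zfs create -o compression=lz4 ssd/vms

  # confirm the mirror is healthy
  zpool status ssd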

That's my 6 cents. Good luck!! Welcome!
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13
Thanks very much for your feedback, John! I will take what you have said into consideration.

Can I ask whether it's possible to store any of the VM files on the storage drives (I'm thinking snapshots in particular, but also VDIs etc.)?
In other words: do my SSDs need to be large enough to store any and all VM-related data, or can this be shared with the FreeNAS drives (and is this advisable or not)?

Also, do these SSDs need power protection, or can I use pretty much anything?
 
Joined
Jan 7, 2015
Messages
1,155
See, now that probably depends on the OS. FreeNAS 9.10 needs only 8GB and is happy booting from USB. If you install it to a 250GB SSD it's still only going to use 8GB; the rest of that space will be wasted.

But in a virtualized world you can install Xen or ESXi to a 250GB drive and then use the remaining space to house virtual disks.

Enter FreeNAS 10. It's still going to be best booting from USB or a small SSD, but with its vision of supporting Docker containers/VMs/jails, all of that can run from a main storage pool on platters, from a storage pool of SATA 3 SSDs, or both. SSDs obviously have benefits there.

So the basic answer, in a nutshell: ALL VM data will be on one or both pools. Datasets can be mounted to any VM/container in any combination. None of it will live with the FreeNAS OS except what is needed to build and run the VMs. The future is looking bright.
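
As a rough sketch of what "datasets mounted in any combination" could look like from the shell (pool and dataset names here are hypothetical; on 9.x you would normally do the jail mount through the GUI's "Add Storage", which performs a nullfs mount under the hood):

  # fast SSD pool for things that change a lot
  zfs create ssd/vms         # virtual disk images
  zfs create ssd/jails       # jail roots

  # big platter pool for bulk data
  zfs create tank/media
  zfs create tank/backups

  # expose a platter dataset inside a jail via a nullfs mount
  # (this is what the GUI does for you; paths are placeholders)
  mount_nullfs /mnt/tank/media /mnt/ssd/jails/myjail/media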

You talk of running FreeNAS as a guest, but consider FreeNAS 10 as a host. People are running everything using its built-in VM/container capabilities. The example given was Windows with Exchange, but it's my understanding people are running every VM/OS thinkable with FreeNAS 10 as the host. It's here that some SSDs might really shine.

You can probably use just about any mainstream drives here. Nothing fancy is needed. No power woes that I'm aware of.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
What I need advice on is the best setup for running FreeNAS virtualized under a hypervisor such as Proxmox.
FreeNAS does not work well in Proxmox. If you must virtualize, use VMware ESXi. If you have to ask how to virtualize, then you shouldn't be virtualizing FreeNAS.
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13
I am very familiar with virtualization, just not with respect to FreeNAS. I also have no interest in running a proprietary hypervisor (even if it is "free"). Given you clearly have the requisite knowledge and experience, m0nkey_, perhaps you could enlighten me as to what the issues are running FreeNAS within Proxmox? Is it a Debian-vs-FreeBSD issue or something else? Is Xen a better alternative? Do you have much experience with non-proprietary virtualization solutions, or is it the case that you only know VMware and so believe it is the only solution out there?
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13

Thanks, John.
I am indeed interested in the newfound virtualization capabilities of FreeNAS 10 and beyond (especially with respect to Docker, which I have quite a bit of experience with).
In the scenario you have described, given the original machine specs I posted above, how would you set this up and what hardware would you ideally select? Currently we are planning on 4x 6TB SAS drives as the main storage pool (RAID-Z2), and the chassis/mobo allows for another two 2.5" drives.
 
Joined
Jan 7, 2015
Messages
1,155
I would boot from a USB mirror, run two 500GB SSDs in a mirror to hold everything that isn't media, and run the remaining spinning drives in Z2. With only 4 drives it's a tough pill to swallow losing half of your total space, and it tempts people to run Z1, but don't fall for it. I personally would expand the spinning pool in whatever way possible to at least 6 drives to lessen the space lost, at least perceptibly. But if roughly 10-11TB is enough, and in a SOHO setting it most likely is, this is the way I would do it. Then, as I'm sure you are aware, three of the four drives would have to fail before you're up the creek. If you are even halfway on top of things, this should never happen.

The other option (with 4 disks) is to go with striped mirrors for the added IOPS. This is best suited for people running lots of changing files, multiple writers, VMs and such. Same usable space as Z2 but a little snappier. Personally I find Z2 (or Z3 in my case) plenty for what I am doing. This game is all about preference.
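
For reference, the two layouts being compared look like this at the command line. A sketch only, with hypothetical device names da0-da3; FreeNAS would build either one from the Volume Manager in the GUI:

  # option 1: RAID-Z2 across all four drives (any two can fail)
  zpool create tank raidz2 da0 da1 da2 da3

  # option 2: two striped 2-way mirrors (better IOPS, same usable space,
  # but losing both drives of one mirror loses the pool)
  zpool create tank mirror da0 da1 mirror da2 da3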
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13

Thanks again for the info, John!
So what you have proposed would be based on running everything within FreeNAS (i.e. virtualizing using bhyve etc.), or have I misunderstood?

Assuming I haven't misunderstood: could you please explain where the guest VMs would 'live' in this scenario? I am a complete noob to FreeNAS (and FreeBSD, tbh), so apologies if I am missing something really obvious. It was my understanding that after booting, everything FreeNAS OS-related runs from RAM (hence why you can boot from just a standard USB drive). If that is the case, then what is running from the SSDs?
 
Joined
Jan 7, 2015
Messages
1,155
Yes, FreeNAS as a host using bhyve. The guest VMs and associated data would live on the mirrored SSDs or on the platter pool, or both; anywhere but the FreeNAS USB, obviously. I'm at a bit of a disadvantage as I am still running FN 9.10.2, have not used 10, am not necessarily looking forward to using it, and am no pro at virtualization/containerization software. While I understand the theory, I don't need 20 VMs/containers when I can do everything I want using jails, often a single jail, like I'm at a Slack terminal. I'm pumped that I can continue using jails in 10, and am also looking forward to tinkering with the rest of it.

I imagine I will continue right on using 9.10 until it becomes completely obsolete. I have a second system and a stack of 1TB drives I may throw it on to learn. Or I may sell it; haven't decided.
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13

Thanks again, John!
Okay, I am getting more and more convinced that this (FreeNAS installed on bare metal and hosting everything via bhyve) is the way to go.

I guess where my lack of FreeNAS knowledge is still letting me down is how to configure the drives.

Assuming I have the four 6TB drives in a RAID-Z2 configuration, how do I configure the two mirrored SSDs? Also, do the SSDs (and the VM data contained therein) automatically get backed up / protected by the larger RAID-Z2 storage pool? If not, how do I ensure proper protection of the SSD data (over and above the fact that I have two drives in a mirrored configuration)? This might all be really trivial and something FreeNAS handles without issue, so I again apologise for my lack of knowledge.

Any advantage in getting Intel DC-grade SSDs for this purpose (e.g. something like a pair of S3520 Series 480GB)? Would this setup also allow me to run ZIL/SLOG duties on the same SSDs (i.e. use them for both VM storage *and* ZIL/SLOG)?

Lastly, I was also looking into using a SATA DOM for boot/OS purposes. Any advantage? Or maybe revert to using USB for boot/OS and then use the SATA DOM for ZIL/SLOG?

Sorry for all the questions... just getting ready to order the hardware today, so I'm under a bit of time pressure ;)
 
Joined
Jan 7, 2015
Messages
1,155
A ZFS log device is probably not needed in your case; you can add one later if you decide you would benefit from it. It would be a separate device from the mirrored 500GB drives, whatever they are. My guess is that, as specced, you will already be saturating a gigabit link with this machine. A SLOG is best used when there are piles of users making synchronous writes to many different files; in a SOHO use case it most likely isn't going to benefit you.
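
If it ever does become warranted, adding a log device after the fact is a one-liner. A sketch with a hypothetical device name ada6; on FreeNAS you would do this from the GUI against the real gptid labels:

  # attach a dedicated SLOG to an existing pool
  zpool add tank log ada6

  # log vdevs can also be removed later without harming the pool
  zpool remove tank ada6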

Intel drives are great, Samsungs are nice, SanDisk, PNY, Crucial, basically any major-brand SSD. Stick with the tried and true for SSDs. Anything with fast reads/writes that is well reviewed by the community will be fine. I use Samsung EVO drives; they are great. Intels are pricey, but if you have the coin, go for it.

A SATA DOM will work to boot from. Honestly, though, a mirrored USB setup will be fine to start with. You can always back up the config and move to a DOM or SSDs later on if you decide that would be better. Get in the habit early on of backing up your config every night, or at least once per week; then you will have no issues switching boot devices.
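
As a sketch of that habit: on the 9.x line the config is a single SQLite database, which (if I remember right) lives at /data/freenas-v1.db, so pulling a dated copy to another machine can be as simple as a cron job like this. Treat the path and the hostname "freenas" as assumptions to verify on your own box:

  # nightly pull of the FreeNAS config database to a dated file
  scp root@freenas:/data/freenas-v1.db \
      ~/freenas-backups/freenas-v1-$(date +%Y%m%d).db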

My opinion is to get the basics up and running and familiarize yourself with it. The machine as advised will be a great one and will most likely last you many years. Later you can move to some of the fancier, more advanced setups with a dedicated SLOG if it's warranted.

My 9 cents. Good luck!!
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13

Thank you, John.
So, to be clear, there is no need to use capacitor-backed SSDs for the VM storage pool? (My understanding was that this was a definite requirement if they were going to be used for ZIL.) If that's the case, I will probably also go with Samsung EVOs, as I too have never had a problem with them.

Are there any guides available on how to configure the SSD mirrored pair if they are to be used solely for VMs? I am still unclear how they interact (if at all) with the RAID-Z2 (platter) storage pool.
 
Joined
Jan 7, 2015
Messages
1,155
An absolutely great UPS system is, in my book, a requirement, and it should allow you to use non-enterprise SSDs such as the EVO. A power outage in the middle of a write = bad. In a non-enterprise system a SLOG is not a benefit but most likely a headache, though that one is up to you. I have never run a cache drive; while I understand the theory, in practice I am no help.

You configure it as a standard ZFS mirror (when you get to that part it will be obvious to you exactly how it's done). You put folders, AKA datasets, on it, and when asked where you want to put things, you make a decision: can this go on platters? Should this go on the SSDs? Everything to do with media storage, non-changing backups, ISOs, pictures, anything of that sort goes onto the platter pool, in a dataset (or datasets). Basically everything else is going to be on the mirrored SSDs, again in datasets. Then, when the VMs, containers, jails, or any of that software (programs, applications, databases, anything of the sort) needs to access any of that data, or just needs more space than you have on your SSDs, you pass storage to them from your platter pool in the form of mounts or links. I cannot elaborate on how this is done in "Corral", but in 9.10 and earlier it was very easy, and I'm sure it's as easy, if not easier, in Corral. It's all really very simple for anyone who is even half savvy; the system pretty much holds your hand, and when it doesn't, there are loads of documented use cases.
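
On your earlier question of protecting the SSD pool beyond the mirror itself: the usual ZFS answer is snapshots replicated to the platter pool. A minimal sketch, with hypothetical pool names "ssd" and "tank"; FreeNAS can schedule both halves from the GUI under Periodic Snapshot Tasks and Replication Tasks:

  # take a recursive snapshot of everything on the SSD pool
  zfs snapshot -r ssd@nightly

  # copy that snapshot, children included, onto the platter pool
  # (-F lets the target be rolled back to match the incoming stream)
  zfs send -R ssd@nightly | zfs receive -F tank/ssd-backup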

Just do it, bud. We can cross these bridges once you have spent a month burning in your hardware. By then I might be running Corral myself and can be of more help.
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13
Awesome! Thanks very much, mate!
We have specified an APC Back-UPS Pro 900 (http://www.apc.com/shop/au/en/products/APC-Power-Saving-Back-UPS-Pro-900-230V/P-BR900GI). Do you think this is up to the job?

Looking forward to getting going with this, and no doubt I will be back with more questions in due course (but a little more informed, I would hope :)

Great to know there is such a friendly and helpful community here to lend a hand if/when I need it.

Warm regards from down under!
 
Joined
Jan 7, 2015
Messages
1,155
Any UPS that is online with enough battery to last 5 minutes or so is OK in my book. A good one will tell you how long it can last with the current load attached, based on the battery and other factors it figures out (I don't know what they are). In a test mine ran for about 20 minutes, but that was when I first got it, with only half the load. I have everything related to my server attached to mine: switches, cable modem, my pfSense router, a small flatscreen emergency monitor, everything that needs to run while the server shuts itself down in a power outage. I figure 5 minutes is more than enough. I'm sure the APC will be just fine.
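
FreeNAS talks to the UPS through its built-in UPS service, which is NUT (Network UPS Tools) under the hood. Once that is enabled you can query the live runtime estimate from the shell; a sketch, assuming the default UPS identifier "ups":

  # estimated runtime in seconds at the current load
  upsc ups@localhost battery.runtime

  # or dump every variable the UPS reports
  upsc ups@localhost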

Haha, Australia, you lucky bastard. I'm smack dab in the middle of the ole USA, and it's cold as a MF here.

Cheers back at ya, bud. Rest easy. We got this.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
I am very familiar with virtualization, just not with respect to FreeNAS. I also have no interest in running a proprietary hypervisor (even if it is "free"). Given you clearly have the requisite knowledge and experience, m0nkey_, perhaps you could enlighten me as to what the issues are running FreeNAS within Proxmox? Is it a Debian-vs-FreeBSD issue or something else? Is Xen a better alternative? Do you have much experience with non-proprietary virtualization solutions, or is it the case that you only know VMware and so believe it is the only solution out there?

Those active on the forum who successfully virtualize FreeNAS, and who contribute, are using ESXi.
I know the more advanced users in that segment have tried and experimented with other hypervisors, but have found ESXi to be the most reliable in combination with FreeNAS. I won't go into specific reasons. If you are skilled enough with other hypervisors, you could perhaps figure out what mistakes others have made, or just hope you have better luck.

Since you are experienced in virtualization, I suggest you look through the stickies by jgreco. A few are particularly relevant. They are written in such a way that you should be able to pick up all the information specifically relevant to FreeNAS virtualization and translate it to your other hypervisor scenarios. If you can't, then stick with ESXi, or simply put FreeNAS on bare metal. As a historical note on why there is relatively little talk about virtualization here: mods have historically been reluctant to assist newbies, because all sorts of hell breaks loose when something fails due to poor virtualization practices, causing loss of data and unfair blame directed at FreeNAS. That said, there might be other places where you can find the specifics for other hypervisors. What you need to know from a FreeNAS perspective is located in these threads.

https://forums.freenas.org/index.ph...nas-in-production-as-a-virtual-machine.12484/
https://forums.freenas.org/index.ph...ative-for-those-seeking-virtualization.26095/

Broader terminology but still very useful in making hardware choices with regards to virtualization performance:
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Good luck.
 

FreeNASmike

Dabbler
Joined
Mar 13, 2017
Messages
13

Thanks for the info, much appreciated!
 