FreeNAS and ESXi lab with home storage

Status
Not open for further replies.

zer0fool

Cadet
Joined
Jun 2, 2015
Messages
4
Hey guys. This is my first post here so my apologies if this is not the right forum to post in or if it's already been answered elsewhere. Anyway, here goes:

My intent is to build 2 ESXi 5.5 servers in order to perform vMotion between them, using FreeNAS as my datastore. Once it's all stood up, I'll be setting up a Citrix farm. My goal is to become a backend and frontend virtualization engineer. I get a taste of both on a regular basis at work (Citrix VDI on top of VMware), but want to further my knowledge. Lastly, I'd also like to set up CIFS shares for the family and media content.

It's my understanding through research thus far that iSCSI is the best option for ESXi datastores on FreeNAS. Can someone confirm this? Secondly, will I be able to set up traditional shares over NFS for family users, or will I be limited to just iSCSI throughout? Lastly, does anyone know of any existing guides out there for FreeNAS as an ESXi datastore that don't suck? I'm planning on 3 vdevs of eight 3TB HDDs apiece in one zpool using zfs3.

Any input is appreciated in advance! Oh and feel free to slam me if I asked something dumb or didn't do enough research on something or am completely thinking this the wrong way. I'm here to learn!

Specs (thus far):
freenas box:
Intel E5-1620 V2 Quad Core 3.7Ghz 10MB L3 Turbo 130W LGA 2011 22nm Processor US
IBM M1015 SAS2 SATA3 PCI-e RAID Controller Card LSI SAS9220-8i x2
Supermicro SC846E16-R1200B 24 Bay 4U Chassis 2x 1200Watt PSU
16GB DDR3 PC3-14900 Registered ECC (will add more w/ each paycheck)
Seagate 3TB HDDs x10 (will add more w/ each paycheck)

esxi:
supermicro board
xeon e3 cpu
32gb ecc ram
supermicro 1u chassis
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If by "zfs3" you mean RAIDZ3, that's going to be rather bad for performance for a VM datastore. Consider instead mirror vdevs.
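As a rough sketch of the difference (FreeNAS builds pools through its GUI, but the equivalent zpool layouts look like this; device names are hypothetical):

```shell
# One RAIDZ3 vdev: good capacity efficiency, but roughly the random-I/O
# performance of a single drive -- poor for a VM datastore.
zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7

# Four mirror vdevs from the same 8 drives: ZFS balances I/O across
# vdevs, so random-I/O performance scales with the vdev count.
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
```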
 

zer0fool

Cadet
Joined
Jun 2, 2015
Messages
4
jgreco said: "If by "zfs3" you mean RAIDZ3, that's going to be rather bad for performance for a VM datastore. Consider instead mirror vdevs."
Thanks for your prompt response, jgreco. I wasn't expecting a celebrity to respond to my first post! Anyway, my apologies. Yes, I meant RAIDZ3. May I ask why I should use mirrored vdevs as opposed to RAIDZ3? Additionally, I already cross-flashed my RAID controllers in anticipation of using ZFS. Should I revert them back? With mirrored vdevs, am I still using ZFS?

I found this link that basically describes mirrored vdevs as the RAID10 equivalent: http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/. Can you verify? I know traditional RAID10 is normally what's recommended. In fact, we pretty much use it exclusively in our data center (with the exception of a couple of Dell PowerVault setups in RAID5). All that said, Google isn't returning much on how to configure mirrored vdevs within FreeNAS. Can you point me in the right direction? Finally, I'm going to have 24 3TB drives in my case when all is said and done. Can I do 3 groupings of 8, or should I do 6 of 4 or 2 of 12? Oh, and one last thing: will I still be able to make CIFS/Samba shares for the family on top of the vdevs? I only require about 2-4TB max for the VM datastore. The rest will be for media to be used with Plex. Thanks again!
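For reference, the raw-capacity arithmetic for those 24-drive layouts works out as below, assuming 3TB drives, RAIDZ2 or 2-way mirrors, and ignoring ZFS overhead and the usual keep-some-free-space guidance:

```shell
#!/bin/sh
# Rough usable capacity for 24 x 3TB drives (raw TB, before overhead).
DRIVE_TB=3

# 3 x 8-drive RAIDZ2: each vdev loses 2 drives to parity.
echo "3x8 RAIDZ2:    $(( 3 * (8 - 2) * DRIVE_TB )) TB"
# 6 x 4-drive RAIDZ2:
echo "6x4 RAIDZ2:    $(( 6 * (4 - 2) * DRIVE_TB )) TB"
# 2 x 12-drive RAIDZ2:
echo "2x12 RAIDZ2:   $(( 2 * (12 - 2) * DRIVE_TB )) TB"
# 12 x 2-way mirrors: each vdev yields one drive's worth of space.
echo "12x2 mirrors:  $(( 12 * 1 * DRIVE_TB )) TB"
```

So 3x8 RAIDZ2 gives 54TB, 6x4 RAIDZ2 gives 36TB, 2x12 RAIDZ2 gives 60TB, and full mirrors give 36TB; the tradeoff is capacity versus vdev count (and hence IOPS).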
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You have some options, in my opinion. But first things first: multiple vdevs give you improved performance (the more the better), and mirrors are faster to write to than RAIDZ1/2/3. These are all very easy to set up in the GUI. In fact, if you have a virtualization environment, I'd suggest testing it there and going through the process.

Yes, you can create the equivalent of RAID10, and it's easy with the GUI.

Flashing the controllers is done so the firmware version matches the FreeNAS driver version. The RAID aspect is irrelevant to that, so no need to revert.

If you only need 4TB for VMs, you could use eight 1TB drives for your VM pool (RAID10-style mirrors give you 4TB), and then use the remaining 16 slots for 8-drive RAIDZ2 vdevs. That way you maximize your storage and can even use your larger pool as a replication target for your VMs.

But even if you don't do that and go with 3 RAIDZ2 vdevs, you will be able to carve it up and provide iSCSI, NFS, and CIFS. It's very easy to do. One thing to keep in mind if you go iSCSI: you will pick a size (let's say 4TB), and that space will be dedicated to iSCSI. NFS and CIFS don't have that issue.

Hope this helps. Welcome!
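A sketch of that split layout at the command level (FreeNAS would do all of this through the GUI; device names here are hypothetical):

```shell
# VM pool: four 2-way mirrors (the RAID10 equivalent); with 1TB drives
# that's ~4TB usable before ZFS overhead.
zpool create vmpool \
    mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7

# Bulk pool: two 8-drive RAIDZ2 vdevs in the remaining 16 slots.
zpool create storage \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23
```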
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
RAID10 is a RAIDism that doesn't quite translate to ZFS cleanly. ZFS lets you have vdevs, and you can have multiple vdevs in a pool. A vdev is the basic unit of redundancy, so you can have a RAIDZ{1,2,3} vdev, but it will have the speed characteristics of a single drive. A mirror vdev also has the speed characteristics of a single drive for writing, but is somewhat faster for reading. If you add multiple vdevs to a pool, ZFS will do intelligent load balancing between them, which some people mistake for RAID10-style striping. It isn't. One nice consequence is that you can add additional vdevs to a pool, so you can easily grow a pool of mirror vdevs two disks at a time. That's a great strategy for ongoing expansion...

The rest, I think @depasseg answered well.
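That expansion path, sketched with hypothetical device names:

```shell
zpool create tank mirror da0 da1    # start with a single mirror vdev
zpool add tank mirror da2 da3       # later, grow the pool two disks at a time
zpool add tank mirror da4 da5       # ZFS balances new writes across all vdevs
```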
 

zer0fool

Cadet
Joined
Jun 2, 2015
Messages
4
depasseg said: "You have some options in my opinion. But first things first. ..."

Thanks for the awesome post, depasseg! It really did help. I'm not concerned with saving money by buying lower-capacity drives; Microcenter has an awesome deal on 3TB drives (see: http://www.microcenter.com/product/...ps_35_Desktop_Internal_Hard_Drive_STBD3000100). 35TB of usable storage is still plenty for most SOHOs, and assuredly for a home environment. Definitely a pro tip on using the ESXi server to virtualize a FreeNAS install so I can test basic functionality and experiment instead of blowing up my physical host. I should've thought of it myself, but was scared off by various posts about people blowing their stuff up doing so; I now realize those were about using virtualized instances to manage "production" storage. Replication is also an awesome idea that I hadn't thought of yet.

At any rate, can I set up mirrors throughout, create a dedicated iSCSI target for the VM datastore, and then create the remaining shares using CIFS for media sharing in a Windows/Linux environment? I'm a firm believer in using Windows for user interaction/domain management and having Linux servers do the real work. I'm really excited. Thanks again!
 

zer0fool

Cadet
Joined
Jun 2, 2015
Messages
4
jgreco said: "RAID10 is a RAIDism that doesn't quite transform to ZFS cleanly. ..."

Thanks, jgreco. I've reviewed cyberjock's PPT twice. It would seem I need to go over it once more. Hopefully the third time's a charm! I really appreciate your guidance.
 
Joined
Oct 2, 2014
Messages
925
I am interested in seeing how your homelab turns out, @zer0fool. Keep us posted!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Yes, you can create iscsi and CIFS shares from the same pool.

Think of it this way: vdevs combine to make a pool. The pool can have multiple datasets (like a folder with special properties, such as being a CIFS share). The pool can also contain a zvol (the proper word I was missing), which backs an iSCSI share and which you can think of as one really big file.

Do a quick search for a post by me. I created an OVA that you can import to quickly try out the GUI. The only advantage the OVA has over building your own from scratch is that there are ~12 virtual drives already in the VM.
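In zfs terms the dataset/zvol distinction looks like this (FreeNAS wraps both in the GUI; pool and dataset names here are hypothetical):

```shell
# A dataset: a filesystem within the pool, shareable over CIFS/NFS.
zfs create tank/media

# A zvol: a fixed-size block device within the pool, exported as an
# iSCSI extent ("a really big file" from the client's point of view).
zfs create -V 4T tank/vmstore
```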
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
To help visualize this, here's my pool and datasets:
[attachment: upload_2015-6-3_23-17-36.png]

And then I point my CIFS shares to a dataset or four:
[attachment: upload_2015-6-3_23-18-45.png]

And point my iSCSI shares to their zvols:
[attachment: upload_2015-6-3_23-19-41.png]

And in case you were wondering, here are the vdevs that make up the pool:
[attachment: upload_2015-6-3_23-21-12.png]
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
zer0fool said: "Thanks jgreco. I've reviewed cyberjock's ppt twice. ..."

Don't worry, there's a steep learning curve. Anyone who's trying, we can see that, and we'll get you sorted out in the end if you put in the effort.
 