Separate Pool for ESXi Storage?


cfgmgr

Cadet
Joined
Jan 9, 2015
Messages
9
Greetings everyone!

I'm currently in the process of a new build, moving off of a Synology DS1515. This system is currently used for storing music, movies, pictures, and commonly used software via NFS/CIFS. Additionally, it provides iSCSI targets for my 2-node ESXi cluster. MPIO split across different networks/VLANs is working very well!

At the moment I'm not using much of the storage in the Synology (1.26TB total) between ESXi and the files I have on there. That will more than likely change eventually, with heavier use on regular file storage than on ESXi storage.

Why am I moving away from the Synology? It's not a bad product, but I'm limited to only 5 bays, which leaves little room for future expansion. I'm also running RAID5, which I would really like to get away from, and I'm very interested in learning more about FreeNAS and ZFS. Someday I would love to have 10GbE NICs, but that is a little ways off...

What do I use my ESXi environment for? Not a whole lot, mostly learning. I run the software for my Ubiquiti wireless access point and one of their mFi power strips. I also have a Plex media server that points to the movies I have exported via NFS, and a small VM that donates some CPU cycles to World Community Grid just for fun. Plex and the grid work are by far the most resource-intensive items I run, but only from a CPU perspective. I'll probably get back into Minecraft and spin up my own server for fun. From an I/O standpoint, it is pretty quiet. I have 13 VMs total, including vCenter. The other VMs are a combination of lab boxes I've built for testing kickstarts, studying, or trying to learn something new; they are mostly idle. However, I can bury the Synology pretty quickly when I kick off a few dd's on some of the VMs.

Here is the layout/overview of my ESXi environment.

- Dell R710 x2 running ESXi 5.5 Update 2 (Dell Image) off of USB
- Each system has 7 NICs in use, configured as follows:
- 2 for iSCSI MPIO (each port is on a separate network and VLAN)
- 2 for VM Network (running LACP)
- 1 for Management Network (primarily used for my NFS datastore to share software/OS images)
- 1 for vMotion network (separate port and VLAN here as well)
- iDRAC6 for remote console

Here is the current storage setup:
- Synology DS1515
- 5x4TB Western Digital Reds
- 2 NICs running LACP for general data access
- 2 NICs providing iSCSI (separate network/VLAN for each port)

Future items ordered/acquired thus far:
- Supermicro 836TQ-R800 (Thanks eBay!)
- Supermicro X9SRH-7F-B
- Intel Xeon E5-1620 v2
- Samsung DDR3-1600 16GB x 2 (Total of 32GB for now)

I had planned on ordering another WD 4TB Red to bring the total up to 6. I planned on running those in RAIDZ2, providing storage for movies, music, pictures, etc. That should give me much better redundancy than the paper plate I'm covering myself with currently, and 6 seemed to be the ideal drive count for optimal RAIDZ2 performance.
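Quick back-of-the-napkin math on what that 6-drive RAIDZ2 should give me (ignoring ZFS metadata/padding overhead and the TB-vs-TiB difference, so treat the numbers as rough):

```python
# Rough usable capacity and fault tolerance for a 6x4TB RAIDZ2 vdev.
# Ignores ZFS metadata/padding overhead and TB vs. TiB differences.
drives, drive_tb, parity = 6, 4, 2   # RAIDZ2 spends two drives' worth on parity

usable_tb = (drives - parity) * drive_tb
print(f"RAIDZ2 {drives}x{drive_tb}TB: ~{usable_tb} TB usable, survives any {parity} drive failures")
# The current 5x4TB RAID5 gives roughly the same ~16 TB usable,
# but only tolerates a single drive failure.
```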

Now that you have an overview, here is my main question(s)...

Does it make sense to split my storage for ESXi into a separate pool? It would not have to be a large amount, but at least I could keep the zvols out of the other pool. I'm even debating what type of RAID I would run there; it would be nice to have at least 2-4TB to throw around in this area. Perhaps it's crazy/overkill, but I like to keep things neat and organized, and hopefully avoid causing issues later. The threads on iSCSI/ESXi performance have been wonderful reads. I did not see them touch on this point, but perhaps that's because it is really a non-issue.

Perhaps it would make more sense to bump the RAM up instead? Can never seem to have too much memory.

I also have a spare Samsung 840 PRO 120GB at my disposal. Reading up on SLOG, I thought perhaps I could use it for testing to see whether there could be any future benefit. It would be a fun experiment.

I would like to think I could easily saturate my 1Gb iSCSI connections with either setup.
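For what it's worth, the rough math on a single gigabit link (the overhead percentage is a ballpark assumption):

```python
# Rough throughput ceiling for one 1 Gb/s iSCSI link.
link_gbps = 1.0
raw_mb_s = link_gbps * 1000 / 8        # ~125 MB/s raw line rate
payload_mb_s = raw_mb_s * 0.94         # assumed ~6% TCP/iSCSI framing overhead
print(f"~{raw_mb_s:.0f} MB/s raw, ~{payload_mb_s:.0f} MB/s usable payload per link")
# A 4TB WD Red does roughly 150 MB/s sequential, so sequential I/O can
# saturate a gigabit link easily; random VM I/O is the harder case.
```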

If you made it this far, I apologize for being long winded!

Thanks!!
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Assuming you had 6 drives to work with and wanted a separate ESXi pool, the only performance-oriented config I can imagine is a 4-disk RAID10 (striped mirrors) for ESXi, which would leave 2 disks for your file shares, and those would have to be a mirror. So you would effectively be losing another drive's worth of capacity compared to RAIDZ2.

Given your workload, and only 6 drives, I'd stick with RAIDZ2.
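Rough numbers behind that comparison (assuming 4TB drives and ignoring ZFS overhead):

```python
# Usable capacity: split pools vs. one 6-drive RAIDZ2 (4TB drives, rough numbers).
drive_tb = 4

# Option A: 4-disk striped mirrors ("RAID10") for ESXi + 2-disk mirror for files
esxi_pool = (4 // 2) * drive_tb       # two mirror vdevs striped -> 8 TB
file_pool = (2 // 2) * drive_tb       # one mirror vdev -> 4 TB

# Option B: single 6-disk RAIDZ2
raidz2 = (6 - 2) * drive_tb           # -> 16 TB

print(f"Split pools: {esxi_pool + file_pool} TB usable vs. RAIDZ2: {raidz2} TB usable")
print(f"Difference: {raidz2 - (esxi_pool + file_pool)} TB, i.e. one drive's worth")
```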

The SSD could help, but I'd try it without and see what the performance is like. I seem to recall that the Samsung 840 Pro had too much latency and wasn't a good fit as a SLOG. I could be mistaken though.
 

cfgmgr

Cadet
Joined
Jan 9, 2015
Messages
9
Thanks depasseg!

I was thinking RAID10 was really the only combination that would make sense for a separate pool.

I think I'll stick with the six drives in RAIDZ2 for now, do some testing, and see how it goes. Appreciate the response!
 