BUILD Dual CPU ESXi - Advice Needed


Syris | Cadet | Joined: Dec 11, 2014 | Messages: 8
My current system is as follows:

Supermicro X8DTi
Corsair CX500 500W PSU
(2) Intel Xeon W5590 @ 3.33 GHz
24 GB G.SKILL DDR3-1333 (6 x 4 GB)
(1) 2TB WD HDD connected to the onboard controller (datastore to hold the VMs)
(5) 3TB Seagate ST3000DM001s connected to an LSI 9220-8i cross-flashed to an LSI 9211-8i in IT mode
(1) Intel EXPI9402PT dual-port gigabit PCIe NIC (both ports assigned to the pfSense VM)

Running ESXi 5.5 with the following VMs:
Windows Server 2012 R2 - runs 24/7 with Plex, FlexRAID, and Sickbeard (accesses the Seagate drives via physical RDM)
pfSense 2.1.5 - runs 24/7 as my router/firewall and hosts an OpenVPN server for tunneling into my home network
Windows Server 2012 R2 - test DC, runs occasionally
Windows 10 TP - runs occasionally
Ubuntu 14 - runs occasionally when I need to use Linux for something

For various reasons FlexRAID just isn't cutting it anymore as a solution for managing my data, so here is my new plan: move to a Supermicro 24-bay chassis (SC846E1-R900B) and either keep my existing motherboard or switch to an X8DAH+-F-O (more on this later), swap my non-ECC RAM for 48 GB of ECC RAM (6 x 8 GB HMT31GR7AFR4C-H9, listed in the motherboard specs as compatible), and change CPUs to (2) X5687s. The main reason I'm thinking of switching CPUs is that my W5590s don't support AES-NI and I'd like to encrypt my pool.
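As a sanity check before buying, I'll verify AES-NI from inside a guest. Here's a minimal sketch assuming a Linux VM (it just looks for the "aes" flag in /proc/cpuinfo; on FreeBSD/FreeNAS you'd check dmesg for AESNI instead):

```python
# Minimal sketch: check for the AES-NI instruction set from a Linux guest.
# Assumes a Linux VM (reads /proc/cpuinfo); on FreeBSD/FreeNAS look for
# "AESNI" in /var/run/dmesg.boot instead.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.lower().startswith("flags"):
                return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("AES-NI present" if has_aes_ni() else "No AES-NI (expect slow encryption)")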

Under my new setup I'd like to run the following VMs (rough resource tally after the list):
FreeNAS ------------------ 2 vCPUs - 16 GB
Linux w/ Plex/SB --------- 2 vCPUs - 4 GB
pfSense ------------------ 2 vCPUs - 1 GB
Linux XBMC distro -------- 1 vCPU  - 4 GB
Win8.1 ------------------- 4 vCPUs - 16 GB
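Quick tally to make sure the plan fits the hardware (rough sketch; numbers copied straight from the list above, ESXi's own overhead not counted):

```python
# Tally of the planned VM allocations vs. host resources
# (2x X5687 = 8 cores / 16 threads, 48 GB RAM planned).
vms = {
    "FreeNAS":          (2, 16),
    "Linux w/ Plex/SB": (2, 4),
    "pfSense":          (2, 1),
    "Linux XBMC":       (1, 4),
    "Win8.1":           (4, 16),
}

total_vcpu = sum(v[0] for v in vms.values())
total_ram = sum(v[1] for v in vms.values())

print(f"vCPUs allocated: {total_vcpu} (host has 8 cores / 16 threads)")
print(f"RAM allocated:   {total_ram} GB of 48 GB")
```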

For the Linux XBMC distro and the Windows 8.1 VM I'd like to pass through AMD 6450 GPUs: HDMI to a TV so the XBMC VM works as an HTPC, and HDMI to monitors so the Win8.1 VM can serve as my desktop.
However, my current board only has one x16 PCIe slot, so if I want to virtualize my desktop I'd need to change motherboards to the aforementioned X8DAH+-F-O (which has more x16 slots). I haven't decided quite yet if I want to take that dive or not.

If anyone can offer any suggestions or point out things I'm overlooking, I'd really appreciate it. I've also got some noob questions if anyone would care to take a stab at them :P

1.) For hard drives I want to get new ones and am taking all recommendations. I've seen that most people prefer 5400 RPM drives; do they actually make a noticeable difference in heat? Performance-wise I don't really care about individual drive speed as long as I can read/write to the pool at gigabit speeds (I usually get 112 MB/s with CIFS), which shouldn't be an issue, I'd think.
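For reference, my back-of-the-envelope math on why ~112 MB/s is about the ceiling for GbE (rough sketch; the ~6% framing overhead figure is an assumption, not something I've measured):

```python
# Back-of-the-envelope: 1 Gbit/s line rate vs. what CIFS actually delivers.
line_rate_MBps = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s raw
# Ethernet + IP + TCP framing eats roughly 5-6% of that; SMB adds a bit more.
typical_payload_MBps = line_rate_MBps * 0.94      # ~117 MB/s ceiling
print(f"Raw GbE:           {line_rate_MBps:.0f} MB/s")
print(f"Realistic ceiling: ~{typical_payload_MBps:.0f} MB/s (I see ~112 MB/s with CIFS)")
```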

2.) I was thinking of doing RAIDZ2 with 6 drives, but after reading this (site is down now, Google web cache link) it sounds like URE rates are greatly exaggerated and that RAIDZ is likely still safe. How does ZFS handle running into a URE during a resilver?
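For context, here's the standard worst-case back-of-the-envelope that the article pushes back on (a rough sketch, assuming the datasheet 1-per-1e14-bits URE spec for these consumer drives and a full read of the surviving disks):

```python
# Worst-case expected-URE math for a resilver of a 6x3TB RAIDZ2 vdev,
# assuming the datasheet rate of 1 unrecoverable read error per 1e14 bits.
drive_size_bits = 3e12 * 8     # 3 TB per drive, in bits
drives_read = 5                # surviving drives read during the resilver
ure_rate = 1 / 1e14            # errors per bit (datasheet worst case)

bits_read = drive_size_bits * drives_read
expected_ures = bits_read * ure_rate
print(f"Bits read during resilver:  {bits_read:.2e}")
print(f"Expected UREs (worst case): {expected_ures:.2f}")
```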

3.) In an ideal world I'd get a new dual-socket 2011-v3 motherboard and (2) E5-2623 v3s, but that would be at minimum $1,000 more. Does anyone know how older, faster (56xx) CPUs stack up against newer but lower-frequency CPUs (26xx v2 & v3)? I'm pretty sure it isn't worth the cost, but maybe.


4.) Does anyone know what kind of ballpark performance one can expect on non-AES-NI CPUs with encrypted pools? Does that performance scale better with higher frequency or with higher core count?
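In case it helps compare notes, the quick test I plan to run is `openssl speed` with and without the EVP code path (the EVP path picks up AES-NI when the CPU exposes it, so the gap between the two is a rough proxy). A small wrapper sketch, assuming `openssl` is on the PATH; the exact output layout varies by OpenSSL version:

```python
# Rough single-core AES throughput check via `openssl speed`.
# The plain "aes-256-cbc" test uses OpenSSL's C implementation, while
# "-evp aes-256-cbc" goes through the EVP layer, which uses AES-NI
# when available -- comparing the two gives a feel for the gap.
# (Assumes `openssl` is on the PATH; output format varies by version.)
import subprocess

def speed(*args):
    out = subprocess.run(["openssl", "speed", *args],
                         capture_output=True, text=True, check=True)
    # The last line of the report holds the throughput figures.
    return out.stdout.strip().splitlines()[-1]

print("software path:  ", speed("aes-256-cbc"))
print("EVP/AES-NI path:", speed("-evp", "aes-256-cbc"))
```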

5.) I know that for most ESXi builds everyone uses passthrough to hand their HBA to the FreeNAS VM, but is there any documented reason not to use physical RDM as I did previously with FlexRAID? This article documents the advantages of using physical RDM over VT-d passthrough.
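For anyone unfamiliar with the physical RDM route: the mapping files in my current setup were created with `vmkfstools -z` (physical compatibility mode) on the ESXi shell, one pointer .vmdk per Seagate drive. A rough sketch of the equivalent loop, wrapped in Python since ESXi ships an interpreter; the device names and datastore path below are placeholders, not my real ones:

```python
# Sketch of how the physical-mode RDM pointer files get created, one per
# drive, via `vmkfstools -z` on the ESXi shell.
# Device names and datastore path are placeholders.
import subprocess

DEVICES = [
    "/vmfs/devices/disks/t10.ATA_____ST3000DM001_EXAMPLE1",
    "/vmfs/devices/disks/t10.ATA_____ST3000DM001_EXAMPLE2",
]
RDM_DIR = "/vmfs/volumes/datastore1/rdm"

for i, dev in enumerate(DEVICES, 1):
    vmdk = f"{RDM_DIR}/seagate{i}-rdm.vmdk"
    # -z = physical compatibility RDM (SCSI commands pass through to the disk)
    subprocess.run(["vmkfstools", "-z", dev, vmdk], check=True)
    print("created", vmdk)
```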

6.) Is my understanding correct that the SC846E1 backplane accepts just one of the connections from my M1015 for all 24 drive bays?

7.) If I use an SSD for storing the VMs' .vmdks and not the pool itself, do I need a separate ZIL/SLOG or anything else for the kind of performance I'm looking for (around 115 MB/s read/write to the pool)?

Any help or guidance is greatly appreciated!
 