3 ESXi with FreeNAS in VM


moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
Hi,

I got my hands on 3 cheap HP ProLiant DL585 G7s.

Each:
- 96GB RAM
- 2x 4-port gig LAN cards
- 2x single-port FC cards

My plan is to build 3 IDENTICAL servers:
2 in production, 1 as hot standby.

ESXi boot from USB? Or internal SSDs (separate SATA controller)?

1st VM is FreeNAS (bare metal disks), expose iSCSI, then start all the other VMs

All 3 FreeNAS should replicate to each other over FC (each server directly connected to the other 2)
2x 4-port gig LAN bonded for data

on server 1 50% of the VMs up, on server 2 the other 50%, but EVERY VM is on EACH server (has to)

Is this possible?
Good/bad idea?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
My plan is to build 3 IDENTICAL servers:
2 in production, 1 as hot standby.
Why not use DRS/DPM to automatically power hosts on and off based on load? Just set DRS to pin your FreeNAS to one host (it wouldn't be allowed to vMotion with the HBA passthrough anyway).
ESXi boot from USB? Or internal SSDs (separate SATA controller)?
USB. ESXi boots 100% into memory. The only downside is that you have to configure logging to use a shared datastore, or it will complain with little yellow triangles.
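Something like this from the ESXi shell should do it (the datastore name and path are just placeholders for your environment):

Code:
# Point persistent logging at a hypothetical shared datastore and reload syslog
esxcli system syslog config set --logdir=/vmfs/volumes/shared-ds01/logs/esxi01
esxcli system syslog reload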
1st VM is FreeNAS (bare metal disks), expose iSCSI, then start all the other VMs
To do this, your "main" host will need at least a small datastore for the FreeNAS VM, and you need to be extremely careful about booting FreeNAS before your VCSA and shutting the VCSA off before FreeNAS. You can imagine why.
All 3 FreeNAS should replicate to each other over FC (each server directly connected to the other 2)
So each server is going to have an HBA and disks? AND you want replication?? Why not just use vSAN??? Also, FreeNAS will not replicate over FC. It uses TCP/IP to ZFS send over SSH (I think).
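Under the hood it's roughly this (just a sketch; the pool, zvol, snapshot, and host names are made up):

Code:
# On the sending FreeNAS: snapshot the zvol and send the delta to its peer over SSH
zfs snapshot tank/zvol01@repl-new
zfs send -i tank/zvol01@repl-old tank/zvol01@repl-new | ssh repl@freenas02 zfs receive -F tank/zvol01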
2x 4-port gig LAN bonded for data
For the FreeNAS VMs or the hosts? I think reviewing VMware networking may be in order. You don't get LACP in ESXi unless you're using dvSwitches, and even then it's not a great idea in most cases.
on server 1 50% of the VMs up, on server 2 the other 50%, but EVERY VM is on EACH server (has to)
I think you're missing the point of shared block storage. Also, I don't know what you mean by "(has to)" because it does not. That's the point of iSCSI. I think you're looking for HA on FreeNAS, and you won't get it unless you buy TrueNAS, and that means buying 2 or more physical servers from iXsystems.

Set up one server bare metal as FreeNAS and connect the other two with FC. Follow the KISS model (Keep It Simple, Stupid).
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Also this will cost you at least $20 per month in power, just FYI.
 

moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
Thanks for the input!

Maybe I missed some points in the first post.

It's for a non-profit organization...

Almost no $$$

No licenses. No vSphere. No DRS.

But high redundancy.

I know... KISS.

But with 1 FreeNAS and 2 ESXi, if FreeNAS fails, everything fails...

We want FreeNAS locally, no latency, no switches.

We don't want HA, but almost, with free tools.

If everything fails, ALL VMs should run on one server.

Replication: we can put some 10gig cards in, they're not so expensive...
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Please don't do anything like this for a business that depends on the data and reliability of the VMs. If they want a reliable and stable setup, they need to pay the price or risk everything.
You need at least 4 boxes. If you want HA storage, use ESOS compiled with ZFS and do the work to set it up safely and correctly: RAIDz2 vdevs, redundant replication links, redundant Fibre Channel, etc. Switches will NOT add any measurable latency in this small and simple a setup, and they are needed to ensure proper redundancy in the storage fabric (FC or Ethernet).

If you think 20 microseconds will affect the business, they need to spend several $100k on storage anyway, and a few grand won't make a difference.

The big question is: how much downtime is acceptable? Can it be down for a weekend? Is 10 minutes of downtime per year unacceptable? Fast/Cheap/Reliable: pick two, and keep in mind you have to do the work.

Edit: How much data (in hours) can you afford to lose?

Post the exact specs of the servers and the VM requirements (memory, IOPs, CPU, Network) and we will see what we can come up with.
 
Last edited:

moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
Thanks again!

I totally get the point.

But I have $10k and 6 weeks to make the most of it...

Specs: stock/options as described above
Each: 4 CPUs, 12 cores each, 2.1GHz
96GB RAM
2x 4-port gig LAN
2x 1-port FC HBA
Disks: 6x 1TB WD Red, 2 SSDs for cache/log

The total system has to run 40+ VMs as safely as possible.

MySQL/Memcache/IIS/DNS/AD

Backup is external

Each server max 50 percent load. Other VMs present but down.

Failover: everything on one server

Max downtime 5 min/day or better

(Man-)power is free, internet also

Any ideas?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Specs: stock/options as described above
Each: 4 CPUs, 12 cores each, 2.1GHz
96GB RAM
2x 4-port gig LAN
2x 1-port FC HBA
Disks: 6x 1TB WD Red, 2 SSDs for cache/log
I need to know the EXACT internal HBAs installed. The SSDs may or may not be useful; I need the EXACT model numbers to find out.
The total system has to run 40+ VMs as safely as possible.
Do you have all the VMs built? What are the specs/planned specs? How much space will you need and what kind of growth rate are you expecting?
MySQL/Memcache/IIS/DNS/AD
How do we get to 20 VMs per host?
Backup is external
How will you be backing up the VMs? What software? Does that support FC?
Each server max 50 percent load. Other VMs present but down.
It won't work like that. Failover will require a bit of planning and manual work once the time comes.
(Man-)power is free, internet also
I guess that's one upside!

What does the network look like? How will this stack tie into the rest of the network? Can we set up VLANs and simple routing/firewall rules?
 
Last edited:

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Project Outline
Implement three HP DL585 G7 servers with the following hardware configuration in the most resilient way possible while utilizing only free software and licensing. Hosts will run MySQL, AD, IIS, and other services. FreeNAS VMs will provide iSCSI storage from passed-through HBAs and will also provide cross-host replication for disaster recovery.

· 96GB RAM
· 4x AMD Opteron 6172 (12 core 2.1GHz)
· 2x quad port gigabit NICs
· 2x Fiber Channel single port HBAs (unused)
· 6x 1TB WD RED
· 2x SSD

Planned implementation
Two hosts will be active, with a third preconfigured for standby. Each host will provide its own storage via a FreeNAS VM with HBA passthrough to support direct disk access for ZFS. Each FreeNAS VM will have at minimum two zvols configured, one for each host and its respective VMs. The zvols will be cross-replicated and will utilize VMware snapshots to obtain application-consistent backups.

ESXi Host Networking
Each host will be configured with one vSwitch. The vSwitch and port groups are outlined below; a rough CLI sketch follows the outline.
· vSwitch01
o pgManagement – VLAN 99 – NIC 1 Port 1, NIC 2 Port 1
§ vmkernel (management traffic only)
§ FreeNAS management interface
o pgVirtualMachine – VLAN ?? – NIC 1 Port 2, NIC 2 Port 2
§ VM1
§ VM2
o pgReplication – VLAN 80 – NIC 1 Port 3, NIC 2 Port 3
§ FreeNAS replication interface
o pgISCSI01 – VLAN 81 – Internal only at this time
§ vmkernel (no services enabled; used for iSCSI to FreeNAS)
o pgISCSI02 – VLAN 82 – Internal only at this time
§ vmkernel (no services enabled; used for iSCSI to FreeNAS)
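Roughly, from the ESXi shell (the vmnic numbering is a guess and will differ on the DL585s; repeat the port group commands for the rest of the outline):

Code:
# Standard vSwitch with one uplink from each quad-port NIC
esxcli network vswitch standard add --vswitch-name=vSwitch01
esxcli network vswitch standard uplink add --vswitch-name=vSwitch01 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch01 --uplink-name=vmnic4

# Management port group on VLAN 99
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch01 --portgroup-name=pgManagement
esxcli network vswitch standard portgroup set --portgroup-name=pgManagement --vlan-id=99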

ESXi Host Storage

ESXi requires redundant paths to any external block storage, and as such we will be using two vmkernel ports, one on pgISCSI01 and the other on pgISCSI02. This will provide connectivity to the host's FreeNAS VM, allowing access to the host's assigned LUN via iSCSI. The iSCSI vmkernel ports should be configured with port binding to prevent unintended routing of iSCSI traffic.
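A port-binding sketch from the ESXi shell (the software adapter name vmhba64, the vmk numbers, and the portal IP are placeholders; check yours with esxcli iscsi adapter list):

Code:
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Bind both iSCSI vmkernel ports (pgISCSI01/pgISCSI02) to the adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Point dynamic discovery at the FreeNAS portal address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.0.81.10:3260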

FreeNAS Networking
Each FreeNAS instance will be configured with four interfaces of the VMXNET3 type: one for management traffic, one for backup/replication, and two for iSCSI to serve the underlying host.

FreeNAS Storage
Each FreeNAS VM will have the host's HBA and connected disks passed through, giving ZFS full control of and visibility into the status of the disks. Disks will be configured either as one RAIDz2 vdev or as striped mirrors, depending on space/performance requirements. The pool will be configured with a minimum of two zvols, one for each host and its respective VMs. This will provide some level of "high availability". Each FreeNAS will only serve one LUN at a time to prevent unintended changes in the replicated VMs. A command-line sketch follows the outline below.

· FreeNAS
o Pool (to be named the same as the hostname FreeNAS01/FreeNAS02)
§ zvol01 (corresponds to EXSi01)
§ zvol02 (corresponds to EXSi02)
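A rough shell equivalent of that layout (device names and sizes are placeholders, and in practice you would build the pool and zvols through the FreeNAS GUI so its config database stays in sync):

Code:
# Striped-mirrors variant: three mirror vdevs from the six 1TB disks
zpool create FreeNAS01 mirror da0 da1 mirror da2 da3 mirror da4 da5

# One sparse zvol per host
zfs create -s -V 1T FreeNAS01/zvol01   # served to ESXi01 over iSCSI
zfs create -s -V 1T FreeNAS01/zvol02   # replica target for ESXi02's zvol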

Just a start. I might go through and outline every setting with a basic explanation... I also need ALL the details from your side, and I need to do some review reading.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Basically this is what you're going for? This may look simple, but there are a lot of layers and moving parts to this.
[Attached diagram: cross rep.JPG]

NOTE: the dark orange represents the logical iSCSI links, not the passthrough, and the light orange would be over the switched network.
 

moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
WOW

exactly...

I will collect the detailed HW info (HBA/NIC), maybe today/tomorrow.
 
Last edited:

moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
FC_HBA: PLE12000
NIC: HP NC364T
SSD: to buy

I saw the board has one SD card slot, 2 internal USB ports, and a 10gig NIC slot (empty).


Do you have all the VMs built? What are the specs/planned specs? How much space will you need and what kind of growth rate are you expecting?
Cannot say exactly, but depending on the 50% "rule" we have 24 cores/48GB RAM per server with ~20 VMs running, plenty I think, everything thick provisioned and limited.

How do we get to 20 VMs per host?
Manual selection for even load, MySQL/Memcache/2-3 VMs clustered.

How will you be backing up the VMs? What software? Does that support FC?
Veeam (the server-housing company offers to let us use their solution)

It won't work like that. Failover will require a bit of planning and manual work once the time comes.
That's true, we are aware of that but it's OK, no license = manual work.

What does the network look like? How will this stack tie into the rest of the network? Can we set up VLANs and simple routing/firewall rules?
Redundant 1G uplink, big UPS, Fortinet FW, servers connected to a 24-port gig switch, VLANs possible.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
FC_HBA: PLE12000
I am referring to the hard drive HBA or RAID card. It may need to be replaced with a card suitable for FreeNAS.
Cannot say exactly, but depending on the 50% "rule" we have 24 cores/48GB RAM per server with ~20 VMs running, plenty I think, everything thick provisioned and limited.
I guess we just hope everything fits? I would advise against thick provisioning. With the newer VMFS and ESXi 6+ there is virtually no overhead. Also, with thin disks the blocks are not preallocated in ZFS, meaning that snapshots will be much smaller and transfer faster, allowing for shorter RPOs in the replication. CORRECTION: this only applies to the initial replication. Subsequent snapshots and replication tasks will only be concerned with new/changed data.
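If you want to see the difference on the FreeNAS side, the standard ZFS properties show how much of a zvol has actually been written versus its nominal size (zvol name taken from the outline above):

Code:
# Compare the nominal zvol size with the space actually used/referenced in the pool
zfs get -o name,property,value volsize,used,referenced FreeNAS01/zvol01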
Veeam (the server-housing company offers to let us use their solution)
Veeam is a fantastic product. You can have two Veeam servers or just one. This will depend more on how you want the hosts to handle the backup load and on licensing than anything else. Also keep in mind that Veeam can be a hefty VM and consume a lot of resources during backups.
That's true, we are aware of that but it's OK, no license = manual work.
Be sure to test this and DOCUMENT AND PRINT this entire process before ever placing production VMs on any of it.
Redundant 1G uplink, big UPS, Fortinet FW, servers connected to a 24-port gig switch, VLANs possible.
Just so you know, this is a single point of failure. You need to get a matching switch to have redundant links to your core. It also sounds like the Fortinet FW handles all the routing. This is fine, but I just want to verify what you have.
 
Last edited:

moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
Many, many thanks for all the help!

I will dive deep into it and will report in a few weeks.
 

moldof

Dabbler
Joined
Mar 26, 2017
Messages
15
With 3x HP Enterprise NC524SFP expansion cards (dual-port SFP+)


and 3x DAC cables (each server connected to the other 2)

it should be possible to use them for replication?

so the 2x 4-port gig NICs are free for normal IO traffic/iSCSI

the 2 remaining onboard NICs are free for management/backup

the 2 FC HBAs we can then take out or use later
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
it should be possible to use them for replication?
You didn't have this listed in your hardware. Yes, it can be done, but I'm not going to update my doc. I will not shoot for someone else's moving target. You still have not answered half of my questions.
You need to boot one of the boxes to FreeNAS (or any BSD) and provide the output of lspci.
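(For what it's worth, on a FreeBSD-based FreeNAS boot the native equivalent is pciconf if lspci isn't installed; either output works.)

Code:
# Verbose listing of PCI devices on FreeBSD/FreeNAS
pciconf -lv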
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
and 3x DAC cables (each server connected to the other 2)
This leaves you with no redundancy. It's a single point of failure. I thought you were trying to avoid this.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Also, using 10GbE for replication is a waste in this use case unless you have massive data change rates, and if that's the case, this is the worst architecture for it.
Just a heads up: you could license all three hosts and build a proper cluster for about $1,120 US plus storage (another $2-3k, or around $4k with 10GbE).
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
so the 2x 4-port gig NICs are free for normal IO traffic/iSCSI
The storage is "local", remember? The iSCSI was added to the spec on the off chance your company decides they want a stable and supported setup, and for Veeam SAN direct backups (which wouldn't use the iSCSI vmk port and would need the port group added for non-vmk traffic). Adding ports for iSCSI does not scale speed linearly (not even close) on this small a setup, as it will not have an efficient PSP, and no, RR does not count.
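For reference, the path selection policy is set per device; you can check and change it like this (the naa identifier is a placeholder):

Code:
# Show devices and their current path selection policy
esxcli storage nmp device list

# Force round robin on one LUN (it still won't aggregate 1GbE links the way people hope)
esxcli storage nmp device set --device=naa.60014050000000000000000000000001 --psp=VMW_PSP_RR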
the 2 remaining onboard NICs are free for management/backup
But you just said they would be for normal IO traffic/iSCSI.
the 2 FC HBAs we can then take out or use later
Good to keep if you want to use FC for proper external storage.
 