FreeBSD 9.2 will also contain virtio drivers. That should ease virtualisation on KVM or bhyve. (source: http://forums.freebsd.org/showthread.php?t=41246 )
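If you end up on a guest where those drivers aren't already compiled in, they can be loaded at boot. A minimal /boot/loader.conf sketch (module names per the FreeBSD virtio(4) manual; skip any you don't need, and if 9.2's GENERIC kernel already includes them this is unnecessary):

[CODE]
# /boot/loader.conf -- load the paravirtualized drivers at boot
virtio_load="YES"          # core virtio support
virtio_pci_load="YES"      # virtio PCI bus transport
virtio_blk_load="YES"      # paravirtualized block devices
if_vtnet_load="YES"        # paravirtualized network interface
virtio_balloon_load="YES"  # memory ballooning
[/CODE]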
"It becomes ever more challenging to figure out the whole VMware ecosystem as they keep adding and changing various features and modules. Has there been any estimate as to when 5.5 will actually hit FCS?"

I hear developers just got RC1 recently, so I'd say a few weeks or so depending on feedback. I agree the moving around of features and modules has become crazy; in fact I'm in a pissing match with VMware support over whether I should have a license for VSA. Tech support says I should have one, license support agrees, and sales would like a PO & money for a license. Between FreeNAS and now the new vSAN, I no longer see much of a future for VSA for myself.
"From my PoV, the real win isn't any of that but rather Flash Read Cache (people here, think of it as L2ARC for your ESXi). And I really have to say, about fscking time, VMware."

FRC looks very cool, but you need the vSphere Enterprise Plus edition in order to use it.
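Purely as an aside for the ZFS folks: the L2ARC side of that analogy really is a one-liner from the shell. The pool name "tank" and device "da6" below are placeholders, and on FreeNAS you would normally do this from the GUI instead:

[CODE]
# Attach an SSD to an existing pool as an L2ARC read cache
zpool add tank cache da6

# The cache vdev should now show up under the pool
zpool status tank
[/CODE]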
"I really can't say if your FC SAN might help or hinder your setup. Here is what I do for my production vSAN. I put ESXi onto a nice little SSD drive (though it could be just about anything) with enough space to have a small datastore to hold my FreeNAS OS volume. I then pass through my HBA (a high-end LSI SAS card) and build out FreeNAS using the physical drives off the HBA. I create a 2nd network on each ESXi box that is dedicated to vMotion and SAN traffic. I attach the FreeNAS to both virtual networks so I can manage it from the office net, and then serve NFS to the SAN network and host my ESXi VMs on that." (a rough CLI sketch of the NFS side of that setup appears after this exchange)

My questions are:
- Having a fiber SAN architecture and using NPIV, do I have a better (or different) chance of virtualizing successfully?
- VMware documentation says that I should use RDM to have vMotion; does that mean I can't follow jgreco's directives without sacrificing vMotion?
- In my server I have 2 HBAs connected redundantly to the storage through 2 fiber switches. Using PCI passthrough I would map the HBAs directly to a VM, but does this mean that my ESXi host cannot use them to access a LUN for the vSphere datastore?
- I guess the best solution is a third server with 2 other HBAs where I install FreeNAS directly, but then I need to purchase additional hardware and I don't have hardware redundancy...
I apologize for the long post and the bad English, but I am really anxious to put a fully open environment (except vSphere) into production!
thanks!
sincar

"Having a fiber SAN architecture and using NPIV, do I have a better (or different) chance of virtualizing successfully?"

To some extent, yes. Your SAN infrastructure is presumably monitored for faults, and since your non-NAS data is at risk too, it all shares the same fate. That compares favorably to SMART monitoring (assuming you actually ARE monitoring the SAN).

"VMware documentation says that I should use RDM to have vMotion; does that mean I can't follow jgreco's directives without sacrificing vMotion?"

Well, I think the question is how much data you are planning to store this way. If it isn't a huge amount (i.e. it could fit on a single VMware virtual disk) and you are prepared to rely on the SAN for reliability, then you probably could just make a conventional VM. Using RDM in a VMware-blessed manner may also be okay; maybe pbucher will stop in with some comments.
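To make the NFS leg of the setup quoted above concrete, here is roughly what mounting a FreeNAS NFS export as an ESXi datastore looks like from the ESXi shell. The IP 10.0.0.10, export path /mnt/tank/esxi, and datastore name freenas-nfs are made-up placeholders; in practice you would point this at the dedicated storage network, not the office net:

[CODE]
# On the ESXi host: mount the FreeNAS NFS export as a datastore
esxcli storage nfs add --host=10.0.0.10 --share=/mnt/tank/esxi --volume-name=freenas-nfs

# Verify it mounted
esxcli storage nfs list
[/CODE]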
"Using RDM in a VMware blessed manner may also be okay, maybe pbucher will stop in with some comments."

Since you have a real SAN, you should be able to use RDM in a VMware officially supported way, which will help things greatly. The key is that your SAN needs to give the LUNs a unique serial # and have it stick with the LUN. This usually isn't a problem, except with locally attached storage using certain HBAs and brands of hard drives (there isn't a list; VMware just forbids the whole setup to be safe).
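For reference, creating the RDM pointer for a SAN LUN is done with vmkfstools on the ESXi host. This is just a sketch; the naa ID, datastore, and file names below are placeholders:

[CODE]
# List the LUNs the host can see and note the naa identifier of the one to map
esxcli storage core device list | grep -i naa

# Create a physical-compatibility (passthrough) RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/naa.600XXXXXXXXXXXXX \
  /vmfs/volumes/datastore1/freenas/freenas-rdm0.vmdk

# (-r instead of -z creates a virtual-compatibility RDM, which is the variant
#  people here generally warn against for FreeNAS)
[/CODE]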
"Since you have a real SAN, you should be able to use RDM in a VMware officially supported way, which will help things greatly. The key is that your SAN needs to give the LUNs a unique serial # and have it stick with the LUN. This usually isn't a problem, except with locally attached storage using certain HBAs and brands of hard drives (there isn't a list; VMware just forbids the whole setup to be safe)."

Actually, it seems to be more complex than that. I will agree that your reasoning is very sound, but it actually looks like RDM'd disks in FreeNAS are being zeroed out. At least, for some people that's the case.
"I've used RDM physical disks successfully in the past, though once I got the HBA to pass through and work correctly I switched to PCI passthrough of the HBA (in fact I went back and forth a few times without having to rebuild the pool). I've heard other people have had some really bad problems with RDM setups when a disk had to be replaced; I suspect they had run-of-the-mill consumer hardware and had issues with the drive serial #s. I just brought up an RDM setup using ESXi 5.5 and FN 9.1.1 and it seems to be stable and is holding up OK so far (I hope I will be able to convince the company to buy a server that supports pass-through in the near future)."

Hehe. That last sentence has been many people's epitaph. The biggest problem with RDM (and with the whole ECC versus non-ECC RAM issue) is that things will appear to work fine. You'll be able to create it, performance will be good, disk failure testing will be just fine. There will be no indication that anything is wrong with the setup or that you should be concerned at all. It might work great for weeks or months (for some people, more than a year).
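If you do run one of these setups, it's worth doing more than a quick disk-pull test before trusting it; a regular scrub plus a look at the pool state from the FreeNAS shell at least gives the "appears to work fine" failure mode a chance to show up early. Minimal sketch (pool name "tank" is a placeholder):

[CODE]
# From the FreeNAS (FreeBSD) shell: force a full read/verify of all data in the pool
zpool scrub tank

# Check progress and look for checksum errors or degraded vdevs afterwards
zpool status -v tank
[/CODE]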
"Far too many folks using RDM are doing it via hacks and, worse, often choose the hack of doing a virtual RDM and not a physical one."

I know that every person I've tried to help did physical RDM. Note that I'm not implying that physical must be bad. I'm just saying that the 4 people I've helped via TeamViewer all did physical and none got their data back.
"The key thing is: does the drive serial # that VMware counts on for the mapping come from the physical drive or SAN controller, or is it a # assigned by a driver or some firmware sitting in front of the drive?"

My M1015 reflashed to IT mode provided the serial number in the device ID when I did an RDM map for a Linux VM. I can't confirm any other configurations myself. Currently I'm doing RDM via the onboard SATA III port on my Supermicro X9SCM-F motherboard with ESXi 5.1, and it definitely has the serial number too.
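A quick way to check which serial number actually reaches the guest is to compare what the drive reports from inside the FreeNAS VM against what the physical drive label or SAN management shows; da0 below is a placeholder device:

[CODE]
# From inside the FreeNAS VM: ask the disk for its serial number
camcontrol inquiry da0 -S

# smartctl shows the same information along with the drive model
smartctl -i /dev/da0
[/CODE]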
"Is it fool of SATA drives with some kind of FC convertor?"

Please don't fool around, tell us what you really think!
Oops, I meant to say full.... but only a fool would put SATA drives in a FC SAN........
"Generally we ('we' being some of the more knowledgeable people here) just say 'too bad, so sad' when we get cases of RDM gone bad, because recovery is pretty much impossible from what has been researched on the topic before."

Agreed.... RDM is indeed playing with fire... sometimes you can cook with it and other times you get burned bad.