ESXi 5.5 Datastore.

Status
Not open for further replies.

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
Guys.

Once my FreeNAS build is finished I plan to hook my two ESXi servers up to shared storage on the FreeNAS. I saw a post (which I can't find now) that said there are some things to know about ESXi datastores on ZFS. Can anyone provide any info, pitfalls, or things to definitely do or avoid?

Thanks in advance
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Use mirror vdevs, not RAIDZ. For VM storage, you need to maintain at least 40% free space on your pool to have any chance of good performance in the long term due to the fragmentation and the CoW nature of ZFS.
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
Thanks for that info, mate. So, just so I get this right: I have 7 x 2TB disks which I was going to make into one large pool of storage. If I am understanding right, I should take 2 of the disks out of the 7 and assign them as a mirror for VM storage. Have I got that right?
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
Is that not what you were suggesting? Have I misunderstood what you were explaining? Sorry, FreeNAS is very new to me right now.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's lots you could do. For example, you could make three sets of mirrors and have 3x the performance and a warm spare disk.
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
OK. So my home ESXi storage needs to be about 1.5TB; I can't see it ever growing past this. Keeping in mind I have 7 x 2TB SATA 3 disks in my rig, what would be the best approach? I am looking to use my FreeNAS as ESXi shared storage between two boxes, with the rest for movies, pictures, backup, and general storage.

What would be the best way of carving this up?

thanks in advance
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The big problem with CoW filesystems like ZFS is fragmentation. When you lay down a vmdk on a blank ZFS pool, the blocks can be laid down sequentially in the pool, meaning no seeks and fast write. However, when you go to update a block, ZFS allocates a different block and writes your file data block there instead. So now when you're reading what you might expect to be a contiguous set of blocks, you get read/seek/read/seek-back/read. ZFS needs lots of free space in the pool to have any chance of making semi-rational allocations.

So let me cut out all the crap in the middle. You want to store 1.5TB. You need an absolute minimum of 3TB exclusively for the VM storage. I've written about pathological ZFS fragmentation cases where you might actually need more than 15TB to semi-sanely store 1.5TB, but that's usually not the case in the real world unless you have a crazy VM like a very busy mail or database server.
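The arithmetic behind "1.5TB needs an absolute minimum of 3TB" can be sketched as a quick calculation. The fill percentages are the rules of thumb from this thread, not fixed FreeNAS parameters:

```shell
# Rough pool-sizing sketch: 1.5 TB of VM data, with the pool kept no more
# than 50% full (absolute floor) or 40% full (more comfortable).
vm_data_gb=1536   # ~1.5 TB expressed in GB
awk -v d="$vm_data_gb" 'BEGIN {
    printf "pool at 50%% full: need %d GB raw\n", d / 0.5
    printf "pool at 40%% full: need %d GB raw\n", d / 0.4
}'
# → pool at 50% full: need 3072 GB
# → pool at 40% full: need 3840 GB
```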

So if you're set on the idea of sharing the machine's purpose, the best thing I can suggest would be two mirrors of 2TB drives striped (4TB usable space), for your virtual disk storage, then take the remaining disks and make a RAIDZ1 out of them, or maybe a separate mirror pool and then maintain a warm spare drive. I'm sure it isn't as much space as you were hoping for.
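That split could look something like the following at the zpool level. Pool names (tank, media) and device nodes (da0–da6) are placeholders, not from the thread, and on FreeNAS you would normally build this through the GUI rather than the command line:

```shell
# Sketch of the suggested split, assuming the seven disks appear as da0-da6.

# VM pool: two mirror vdevs striped together (4 disks, ~4TB raw).
zpool create tank mirror da0 da1 mirror da2 da3

# Bulk pool: RAIDZ1 over the remaining three disks for movies/backups.
zpool create media raidz1 da4 da5 da6
```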

The other alternative is to accept extremely reduced performance and just put them all in a RAIDZ2. But this is also bad because beyond just being on RAIDZ2, user data typically expands to fill all available space (the UNIX sysadmin's ancient lament) and you'll end up compromising free space requirements and your pool will slowly get slower and slower as both fragmentation increases AND free space dwindles.

Don't shoot the messenger, he knows it sucks.
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
I think maybe FreeNAS is not right for ESXi then? Or am I just being paranoid?

When you say create two mirrors, do you mean take just two disks out of my 7, or four disks out of my 7?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
FreeNAS is fine for ESXi, but in order to get good performance, you have to look at what's needed to deliver good performance. If you throw resources at ZFS, ZFS will do all sorts of awesome. But the resources I'm talking about would probably be mind-numbing to the average home user.

In a VM scenario, that's typically lots of memory (64GB+), lots of mirrored vdevs, probably an L2ARC, a competent SLOG device, and some patience and tuning.
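For illustration, L2ARC and SLOG devices are attached to an existing pool like this. The pool and device names below are placeholders; a SLOG should be a power-loss-protected SSD and is usually mirrored:

```shell
# Illustrative only: attach an L2ARC (cache) and a mirrored SLOG (log)
# to an existing pool named "tank". Device names are placeholders.
zpool add tank cache nvd0
zpool add tank log mirror nvd1 nvd2
```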

The average hobbyist/home user who is hoping to transform his 7x2TB drives into a 10TB RAIDZ2 and get screaming fast performance while simultaneously storing 9.8TB of data is going to be in for a very rude shock. ZFS just doesn't work that way. It works that way even less for VM's.
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
OK. So my VM estate looks like this right now.

2 x Domain Controllers
1 x Exchange 2013 Server
1 x IIS Web Server
1 x Untangle firewall.

I will probably add a few more servers over time, but nothing really intensive like SQL or anything.

I don't need the environment to be lightning fast, but I need it to be usable and stable. However, I am still unsure about the mirrored vdev. When you say create two mirrors, do you mean take just two disks out of my 7, or four disks out of my 7?

Is there no way to address the fragmentation, like defrag in Windows?

Sorry for some very noob questions, but you're helping me to understand more as we go.
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't need the environment to be lightning fast, but I need it to be usable and stable. However, I am still unsure about the mirrored vdev. When you say create two mirrors, do you mean take just two disks out of my 7, or four disks out of my 7?

Sorry for some very noob questions, but you're helping me to understand more as we go.

Questions are great but I'm going to let(/make) you answer your own question.

You say you need 1.5TB of space for VM's. For general VM usage, about the absolute most you should ever fill a pool is around 60%, and even at that level you're still likely to wind up with significant fragmentation effects down the road. 30 or 40% is much more agreeable. Calculate your pool space requirements from that.

A vdev cannot be relied upon to perform better than its component members, so if you have a mirror vdev created out of two drives that are capable of 100 IOPS, then the mirror vdev should be considered to be capable of 100 IOPS (but might do more). Two mirror vdevs striped would then get you approximately 200 IOPS, bearing in mind that ZFS doesn't actually guarantee an evenly spread load.
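The rule above reduces to a back-of-envelope estimate: count each mirror vdev at one disk's worth of IOPS, and add the vdevs together. The numbers are illustrative:

```shell
# Conservative IOPS estimate: a mirror vdev is counted at one component
# disk's IOPS, and striped vdevs roughly add (ZFS does not guarantee an
# evenly spread load, so treat this as an upper-bound sketch).
disk_iops=100
mirror_vdevs=2
awk -v i="$disk_iops" -v n="$mirror_vdevs" 'BEGIN {
    printf "approx pool IOPS: %d\n", i * n
}'
# → approx pool IOPS: 200
```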
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
OK. So am I right in thinking that scrubbing is defragging? Should this be run on the VM datastore?

Also, just so I am understanding this right: say I have a pool of 100GB, then I should not fill it up more than 60GB, right?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OK. So am I right in thinking that scrubbing is defragging? Should this be run on the VM datastore?

No, there is no good way to defrag a pool. Scrubs simply verify consistency (by looking at the checksums), and will correct errors if found.
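For reference, a scrub is kicked off and monitored like this (pool name "tank" is a placeholder; FreeNAS can also schedule scrubs from the GUI):

```shell
# Illustrative: start and check a scrub on a pool named "tank".
# Scrubs verify checksums and repair what they can; they do not defrag.
zpool scrub tank
zpool status tank    # shows scrub progress and any repaired errors
```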

Also, just so I am understanding this right: say I have a pool of 100GB, then I should not fill it up more than 60GB, right?

For VM use, correct. Possibly even less. For normal use, the number's more like 80%, because regular files normally don't exhibit the fragmentation behaviours common to vmdk's.
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
OK, so I think I am starting to understand now. I will come back with some questions once I have digested this. Thanks for your help so far.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's a steep learning curve. Take your time and do it right.
 

WallaceTech

Dabbler
Joined
Apr 6, 2012
Messages
32
Very steep, mate, very steep. I am a VMware / Active Directory guy by day. It helps to have people like yourself explain stuff to me so that I can get to grips with it bit by bit. I don't want to go mental with FreeNAS; I just want to get one thing done correctly, such as the ESXi datastore, and then try something else with FreeNAS.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
ZFS can be just as easy or just as horrible as either VMware or AD. However, ZFS is less forgiving of mistakes made in the initial strategy because you typically have to live with some of them for the lifetime of the pool ...

There's lots of good information posted around here, be sure to take advantage of the search feature.
 