Major Performance Issues after 70 days of working great


Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
With VMware you're going to want 32GB of RAM for starters, maybe more depending on the guest VMs. Also, are you able to change your vdev from RAID-Z2 to striped mirrors? It will help with the random IO.

You can look into getting proper hardware later, and you should.
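If you want a rough check of whether the ARC is memory-starved before buying RAM, these sysctls on FreeNAS/FreeBSD show physical memory, the current ARC size, and the hit/miss counters (a sketch only; the exact OID names can vary a bit between versions):

    # Total physical memory and current ARC size, in bytes
    sysctl hw.physmem
    sysctl kstat.zfs.misc.arcstats.size

    # ARC hit/miss counters; a poor hit ratio under the VM load
    # is a hint that more RAM (a bigger ARC) would help
    sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses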
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yes, but that's pretty much the only similarity between the two. RAID-Z2 isn't "basically RAID6"; it's just similar.

Yeah, with a single RAID-Z2 vdev you get roughly the IOPS of one drive. That's why you need plenty of RAM: a bigger ARC ;)

And, if you can, use striped mirrors instead of RAID-Z vdev(s) in the future; your IOPS will be much better :)
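To put numbers on that, a quick random-read test before and after the layout change will show the difference. A rough sketch with fio (available from ports/packages; the /mnt/tank/test path is just an example):

    # 4k random reads against a test file on the pool; run it on the
    # current RAID-Z2 layout, rebuild as striped mirrors, then run it again
    fio --name=randread --directory=/mnt/tank/test \
        --rw=randread --bs=4k --size=4g \
        --ioengine=posixaio --iodepth=16 --numjobs=4 \
        --runtime=60 --time_based --group_reporting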
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
So would it be smart to add two more 3TB drives and redo the SAN like this?

8 disks total

4 disks for each vdev, striped mirrors (RAID10)

Then create a zpool with two vdevs

Usable space would be about 6TB, right?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
A mirror will be a vdev, so you'll have 4 striped vdevs of 2 mirrored drives each ;)

You'll have about 9.5 TB usable (after applying the 80% rule); look at the calculator in my sig if you want to simulate different setups ;)
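The arithmetic behind that figure, if you want to sanity-check it (sizes in TB, with 80% as the fill ceiling):

    # 8 drives of 3 TB, in mirrored pairs, then the 80% rule
    echo "8 * 3 / 2 * 0.8" | bc -l    # -> 9.6, roughly the 9.5 TB quoted above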
 

Stephen Jones

Dabbler
Joined
Mar 16, 2015
Messages
14
OK, none of this is making sense.

My understanding is that DISKS go into VDEVS. VDEVS are collections of disks arranged as raidz, raidz2, mirror, or stripe.
Then you use VDEVS to create zpools, which is where your data goes.


In my current setup we have created only one VDEV, which is not smart.


That's my understanding.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yes, so where is the problem? Everything you said agrees with what I said ;)

To reformulate: you'll have a pool of 4 vdevs striped together. Each vdev will be a mirror of 2 disks.
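In command form it would look something like the sketch below ("tank" and the da0 through da7 device names are placeholders, and on FreeNAS you would normally build this through the GUI rather than the shell):

    # One pool, four vdevs, each vdev a 2-disk mirror; ZFS stripes writes across the vdevs
    zpool create tank \
        mirror da0 da1 \
        mirror da2 da3 \
        mirror da4 da5 \
        mirror da6 da7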
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
You need to read cyberjock's guide to ZFS; it won't take long to get through it,
and you'll have a much better understanding after reading it. It rocks!

EDIT: here's the link
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yeah, I should have posted the usable space without the 80% rule and left the interpretation to the user, sorry.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Yeah, I should have posted the usable space without the 80% rule and left the interpretation to the user, sorry.

Even though I quoted your reply, my message was directed at the OP. All too often we see iSCSI users show up complaining of poor performance after using 80%+ of their pool.


Sent from my phone
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
No problem; if I can improve my answers next time, that's always a good thing... ;)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
poor performance after using 80%+.

Which, sadly, doesn't strike until they've gone through a bunch of write iterations on their virtual disk and fragmentation grabs them by the neck and starts shaking them.

8 disks total

4 disks for each vdev, striped mirrors (RAID10)

Then create a zpool with two vdevs

Usable space would be about 6TB, right?

The description is wrong but the math is right. Bidule0hm summarized:

a pool of 4 vdevs striped together. Each vdev will be a mirror of 2 disks.

This is 24TB (3TB * 8) of raw space, which gives you 12TB of pool space after the mirror tax. The actual amount of usable space is debatable and depends very heavily on the workload.

The rule of thumb I would use says that 60% is the maximum you should try to use on a pool with VM storage. If you truly had a VM workload that did almost no writes ever, you might be able to push that towards 80%. More likely, you need to trim down from 60%, with 50% being an intelligent guess for VMs that are doing modest writes. The problem is that if you get a VM that is doing truly heavy writes, fragmentation increases dramatically.
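A simple way to keep an eye on where a pool stands against that rule of thumb ("tank" is a placeholder; the fragmentation column needs a reasonably recent pool feature set):

    # Watch how full and how fragmented the pool is getting over time;
    # for VM storage, aim to keep capacity at or below roughly 60%
    zpool list -o name,size,allocated,free,capacity,fragmentation tank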

I present once again an interesting discussion of the issue from Delphix.

Looking at the graph of % Pool Full vs. Steady State Performance, you can see that keeping the pool less full has a dramatic positive impact on write performance. It is worth noting that all VM virtual disks will tend towards greater fragmentation over time as writes occur. Reducing unnecessary writes is an important tool in managing fragmentation. It may feel satisfying to do a buildworld on all your FreeBSD servers, but if it is being done on a VM with a ZFS-backed datastore, fragmentation++. Don't do that. And disable mostly-useless stuff like atime updates on your filesystems, because that results in a perpetual trickle of trite writes.
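For the atime part, that is a one-line change per dataset ("tank/vmware" is just an example dataset name):

    # Stop access-time updates on the dataset backing the VM storage
    zfs set atime=off tank/vmware
    # Confirm the setting took effect
    zfs get atime tank/vmware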

Fragmentation is best combated with large amounts of free space, which allows blocks to be allocated in contiguous regions. The only long-term fix for fragmentation is to periodically rewrite a VM's virtual disks into fresh regions of contiguous space (stop VM, copy, start VM), but having plenty of free space greatly lengthens the time it takes to reach the pain point.
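As a rough sketch of that stop/copy/start cycle for a file-backed datastore (the paths are examples only; an iSCSI zvol would need the equivalent done with a copy of the zvol instead):

    # With the VM powered off, copy its whole directory so every file is
    # rewritten into fresh contiguous free space, then swap the copies
    cp -Rp /mnt/tank/vmware/guest1 /mnt/tank/vmware/guest1-rewrite
    mv /mnt/tank/vmware/guest1 /mnt/tank/vmware/guest1-old
    mv /mnt/tank/vmware/guest1-rewrite /mnt/tank/vmware/guest1
    # Boot the VM and verify it before deleting /mnt/tank/vmware/guest1-old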

So while I won't say that 6TB is a correct amount of usable space, I think it is at least a ballpark figure.
 