FreeNAS storage slow

Status
Not open for further replies.

mrjoli021

Explorer
Joined
Dec 8, 2012
Messages
50
I am running FreeNAS 11.1 on VMware 6.5, using PCI passthrough to present the HBA to FreeNAS. This has been working for about a year without any issues. I had 12 1TB drives in a RAIDZ2 setup. I upgraded all my drives to 2TB, so now I have 12 2TB drives in the same RAIDZ2 config. I'm not sure if that is the cause, but shortly afterward the system started running extremely slowly, to the point that Linux VMs time out when I SSH into them. I have a Windows box, and on the console I move the mouse and it moves about 5 seconds later.

The storage was shared to VMware as NFS. I had sync turned off and it was still slow. I have since moved over to iSCSI to see if that was the issue, with no improvement. I also tried running only one VM to see whether a single VM was hogging the resources, but that wasn't it either.

What I am noticing is that VMware is complaining about FreeNAS memory. I started with 12GB and have increased it to 16GB, and FreeNAS is using all of it.
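(For what it's worth, ZFS's ARC cache will grow to consume most of the RAM you give it by design, so "FreeNAS is using all of it" is expected behavior rather than a leak. A minimal sketch for checking the ARC size from the FreeNAS shell; the sysctl name is the FreeBSD ZFS one, and the sample byte value here stands in for live output:)

```shell
# On a live FreeNAS box, ARC size in bytes comes from:
#   sysctl -n kstat.zfs.misc.arcstats.size
# Here a sample value is used so the snippet is self-contained;
# replace the echo with the real sysctl on your system.
arc_bytes=12884901888   # sample: what a 12 GiB ARC would report
echo "$arc_bytes" | awk '{ printf "ARC size: %.1f GiB\n", $1 / (1024^3) }'
# -> ARC size: 12.0 GiB
```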

Any suggestions?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
16GB is not enough memory for your storage, especially since you are providing VM storage. Based on your previous use case, 24GB might be just barely enough, but I would recommend at least 32GB. That would also explain your slowdown: with half as much storage you only needed about half as much memory, so 12GB was probably just enough before the upgrade.
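(The usual forum rule of thumb behind numbers like these is a baseline of roughly 8GB plus about 1GB of RAM per TB of raw pool storage. A quick sketch of that arithmetic for this pool; the rule is community guidance, not a guarantee:)

```shell
# Rule of thumb: ~8 GB baseline + ~1 GB RAM per TB of raw storage.
# This pool: 12 x 2 TB drives = 24 TB raw.
drives=12
size_tb=2
raw_tb=$((drives * size_tb))
ram_gb=$((8 + raw_tb))
echo "Raw capacity: ${raw_tb} TB -> suggested RAM: ${ram_gb} GB"
# -> Raw capacity: 24 TB -> suggested RAM: 32 GB
```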
 

mrjoli021

Explorer
Joined
Dec 8, 2012
Messages
50
I upgraded to 32GB and it seems to be working much better. Question: I have 2 CPUs on it. Is that enough, and is 32GB enough for 12 2TB drives on 2 vdevs?

thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
RAIDZ2 is really bad for VM storage. I explain this frequently enough that I'll just give you a pointer and a bit of a push to go read it...

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/

Also, for good performance with iSCSI or NFS block storage, we really suggest a minimum of 64GB, though you can sometimes get away with less depending on the specifics, especially if you don't care about performance. Two vCPUs may or may not be enough. Anecdotally, I've got a filer serving VM data here with 24 x 2TB drives in 3-way mirrors, so 16TB of pool size or around 8TB of usable datastore space. 128GB of RAM and 1TB of L2ARC make it pretty nice and zippy. It has an E5-1650v3 CPU, but I never see it hitting any significant amount of CPU (almost always 90-95% idle).
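(For the OP's 12 drives, a mirrored layout in the spirit of that linked thread could look like the sketch below. The pool and device names are placeholders, and this is only an illustration of the topology: IOPS in ZFS scale roughly with the number of vdevs, so six mirror vdevs can serve random VM I/O far better than one wide 12-disk RAIDZ2 vdev, at the cost of capacity.)

```shell
# Hypothetical layout: the same 12 x 2 TB drives as six 2-way mirror vdevs.
# Device names da0..da11 and pool name "tank" are placeholders; this must
# be run on a live ZFS system and will destroy data on those disks.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11
```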

You need to keep your pool less than 50% full for block storage, or the performance penalty is pretty severe. Like, VMs stop responding and mice get laggy.
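(Checking that is a one-liner. A small sketch that flags a pool over the 50% guideline; the tab-separated sample line stands in for real `zpool list -Hp -o name,size,alloc` output, which you would pipe into the awk script on a live system:)

```shell
# Flag pools over the ~50%-full guideline for block storage.
# Sample output (name, size in bytes, alloc in bytes) is inlined here;
# on a real box: zpool list -Hp -o name,size,alloc | awk -F'\t' '...'
printf 'tank\t24000000000000\t13200000000000\n' | \
  awk -F'\t' '{ pct = 100 * $3 / $2;
                printf "%s: %.0f%% full%s\n", $1, pct,
                       (pct > 50 ? " (over the 50% guideline)" : "") }'
# -> tank: 55% full (over the 50% guideline)
```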
 