ZFS performance with vSphere 5

Status
Not open for further replies.

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
So I've been using FreeNAS for a while, using CIFS for Windows shares and NFS for my VMware ESXi 5 boxes to connect to.

I found my read and write latency was too high using ZFS with ESXi, so I decided to rebuild my RAIDZ array as a ZFS RAID 10 (striped mirrors) array to reduce the RAID write penalty.
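
(For reference, a striped-mirror "ZFS RAID 10" layout like the one below is built by listing one mirror vdev per pair of disks; a minimal sketch using the same device names, leaving aside the 4K-alignment step covered further down:)

Code:
zpool create storage mirror ada1 ada2 mirror ada3 ada4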

Current configuration

Code:
zpool status
  pool: storage
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0

errors: No known data errors


All the drives are 4K drives, and when I run zdb storage | grep ashift I get:

Code:
zdb storage | grep ashift
                ashift=12
                ashift=12


So it's all aligned to use 4K sectors, running as ZFS RAID 10.
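
(For anyone wondering how the pool gets ashift=12 in the first place: on 4K drives that still report 512-byte sectors, the usual trick on FreeBSD/FreeNAS of this vintage is a temporary gnop device with a 4K sector size on one disk per vdev at creation time. A rough sketch, assuming the same device names:)

Code:
gnop create -S 4096 ada1 ada3
zpool create storage mirror ada1.nop ada2 mirror ada3.nop ada4
zpool export storage
gnop destroy ada1.nop ada3.nop
zpool import storage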

When I use iSCSI or NFS from my ESXi 5 boxes I still get huge latency. I've played around with ZFS tuning and currently have:

vfs.zfs.txg.timeout=5
vfs.zfs.write_limit_override=1073741824
vfs.zfs.cache_flush_disable=1
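
(For anyone following along: these are FreeBSD ZFS tunables/sysctls. On FreeNAS they are normally added through the System → Tunables/Sysctls pages, which is roughly equivalent to the following /boot/loader.conf entries; a sketch, not a recommendation:)

Code:
# /boot/loader.conf
vfs.zfs.txg.timeout="5"
vfs.zfs.write_limit_override="1073741824"
vfs.zfs.cache_flush_disable="1"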

but ZFS is just not fast enough. I'm tossing up going back to ext4, but I really like ZFS snapshots and data validation.

Would disabling the ZIL to get decent latency still be better for data protection than going back to ext4?
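
(For context: on the ZFS v28 pools in newer FreeNAS builds, "disabling the ZIL" is normally done per pool or dataset by turning off synchronous writes rather than via the old vfs.zfs.zil_disable tunable. A sketch; this only risks the last few seconds of acknowledged writes on a power loss or crash, while the on-disk pool itself stays consistent:)

Code:
zfs set sync=disabled storage
zfs get sync storage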

Specs are:

2.4 GHz quad-core Q6600, not using much CPU
4 GB RAM
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Hello,

I had a similar issue running ESXi5 to my FreeNAS box.

Initially I used NFS, then decided to use iSCSI as I felt NFS was not working very well. The speed increase from one technology to the other was not particularly big; however, the one thing that helped hugely was enabling the onboard NIC on the VMware box and creating a dedicated vSwitch just for iSCSI using that NIC. Both of my NICs are 1 Gb and I have a 1 Gb network hub. I only have one NIC on my FreeNAS box, though. I can quite easily run three or four VMs at the same time, one running SQL Server, another running Oracle (for example), and I don't get any latency issues, though this IS in a development environment in my home office.
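
(Roughly, the esxcli equivalent of that dedicated iSCSI vSwitch setup looks something like the sketch below; the vSwitch, NIC, port group and IP names are illustrative, and most people would simply do this in the vSphere Client:)

Code:
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.10 --netmask=255.255.255.0 --type=static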

My next test is to install another NIC in the FreeNAS system and try link aggregation. I have also bought (but not had a chance to install yet) a proper 1 Gb managed network switch. I hope this will let me run more VMs if I choose to.
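
(FreeNAS drives link aggregation from the GUI under Network → Link Aggregations, but underneath it is just FreeBSD lagg(4). A rough sketch with made-up interface names and address; note that LACP needs a managed switch on the other end:)

Code:
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport em0 laggport em1 192.168.1.20/24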

Hope this helps




onthax

Explorer
Joined
Jan 31, 2012
Messages
81
It doesn't seem to be network-based; I have 3 NICs in the ESXi box and 3 in the FreeNAS box, all using link aggregation into a gigabit layer 3 switch.

I'm thinking it's related to the cheap green drives I'm using in the array, or having only 4 GB of RAM. I'm going to upgrade to 8 GB and see how that helps.
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Have you tried a dedicated network connection to a vSwitch for iSCSI at all?
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
Yep, sure have. I made a private port-based VLAN on the switch just for storage, and I still get high latency on the datastores.
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
Ended up solving the latency by adding two 60 GB Vertex 3 SSDs in RAID 1: 2 x 30 GB partitions, using them as ZIL and L2ARC caches.
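
(For anyone curious, the rough shape of that setup is sketched below; device names and partition sizes are illustrative, assuming the two SSDs show up as ada5 and ada6:)

Code:
gpart create -s gpt ada5
gpart add -t freebsd-zfs -s 30G -l slog0 ada5
gpart add -t freebsd-zfs -l l2arc0 ada5
# same again on ada6, using labels slog1 and l2arc1
zpool add storage log mirror gpt/slog0 gpt/slog1
zpool add storage cache gpt/l2arc0 gpt/l2arc1

The log vdev is mirrored because losing an unmirrored SLOG on these FreeNAS versions can take the pool with it; cache devices can't be mirrored, so the two L2ARC partitions simply stripe.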
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Woah!!! That sounds like an expensive solution!! Has it made a noticeable difference?
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
It wasn't too expensive; I had a 60 GB Vertex 3 sitting around, so I just picked up another one. Worth it for the features though: snapshotting, and being able to zfs send and manipulate VM config files easily. Now I'm just looking forward to FreeNAS 8.3 so I get more resilience on my ZIL.

Latency went from spiking to 100+ ms, sometimes 1000 ms, down to no higher than 10 ms, with very stable, non-jaggy IOPS.
 

jah

Cadet
Joined
Sep 18, 2012
Messages
1
Onthax - I've been trying to figure out whether it's better to use a plain old NFS share or a zvol + iSCSI on top of the RAIDZ (or RAID 10) array with the ZIL/L2ARC cache to store VMs. Can you provide a little more detail about your end-state configuration?
 

onthax

Explorer
Joined
Jan 31, 2012
Messages
81
Using VMware?

If you're using VMware, I would recommend iSCSI over NFS. You seem to get a lot higher performance out of it without having to disable synchronous writes on low-IOPS arrays.

A good read for understanding IOPS and RAID arrays with VMware:
http://www.yellow-bricks.com/2009/12/23/iops/
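
As a rough back-of-the-envelope version of the write-penalty maths in that article (assuming roughly 80 random IOPS per 7,200 RPM SATA disk and the usual penalties of 2 writes per write for RAID 10 versus 4 for RAID 5-style parity): four disks give about 4 x 80 = 320 raw IOPS, which works out to roughly 320 / 2 = 160 random write IOPS as striped mirrors but only around 320 / 4 = 80 with parity RAID, before any ZIL/L2ARC help.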

I use:
4 x 2-3 TB drives with 4K sectors in RAID 10
2 x 60 GB Vertex 3s with 2 partitions each, 1 x 10 GB and 1 x 50 GB; one partition as ZIL and one as L2ARC, in a mirrored configuration across the disks

Remember that currently with FreeNAS, if you lose your ZIL you lose your array, so make sure to mirror it.

I also use iSCSI to a zvol to power my Hyper-V systems.
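
(The zvol side of that is just a fixed-size volume carved out of the pool, which the FreeNAS iSCSI service then exports as a device extent; the name and size below are illustrative:)

Code:
zfs create -s -V 500G storage/hyperv
zfs list -t volume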
 