Slow Performance over NFS on one volume


cmbaker82

Dabbler
Joined
Jun 22, 2012
Messages
19
Hello,

I have two volumes set up.
Volume1 consists of eight 300GB 15K SAS drives in RAIDZ2, a 240GB SSD read cache, and a mirrored pair of 50GB OWC Mercury Elite drives as a dedicated ZIL (SLOG). Volume1's performance is great: I am seeing 96 MB/s write speeds from ESXi 5 over NFS on a 1-gigabit connection, and far more locally:
/mnt/Volume2# dd if=/dev/zero of=/mnt/Volume1/testfile bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 284.522614 secs (377383649 bytes/sec)

Volume2 is a mirrored pair of 3TB Seagate SATA drives with a 120GB SSD read cache. From the same ESXi hosts (3 different hosts), over the same network link and NFS, I am getting 4 to 7 MB/s write speeds. A local dd test gives about 140 MB/s of throughput. gstat during the NFS writes shows both disks averaging 96% busy; during the local dd test it shows 100% busy.

local dd test:
dd if=/dev/zero of=/mnt/Volume2/testfile bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 713.323764 secs (150526574 bytes/sec)
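
The gap between the local dd result and the NFS write speed is what you would expect if ESXi is issuing synchronous NFS writes, which ZFS has to commit through the ZIL; Volume1 has a mirrored SLOG to absorb that, Volume2 does not. A minimal way to check the settings involved (assuming the pool is at ZFS v28 or later, so the per-dataset sync property exists):

# Show whether sync-write handling is at its default on the slow pool
zfs get sync,compression,recordsize Volume2

# Watch the two mirror members while an ESXi copy runs; lots of small,
# high-latency writes at ~96% busy points at sync/ZIL traffic hitting
# the data disks directly. If this gstat lacks -f, plain gstat works too.
gstat -f 'da15|da16'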

zpool iostat -v 1 during a local dd test

                 capacity     operations    bandwidth
pool          used  avail   read  write   read  write
----------   -----  -----  -----  -----  -----  -----
Volume1      1.63T   539G      3     12   509K   195K
  raidz2     1.63T   539G      3      0   509K      0
    da0p2        -      -      0      0  20.9K      0
    da1p2        -      -      1      0  42.8K      0
    da2p2        -      -      2      0  64.2K      0
    da3p2        -      -      3      0  85.6K      0
    da4p2        -      -      3      0  85.6K      0
    da5p2        -      -      3      0  84.6K      0
    da6p2        -      -      3      0  84.1K      0
    da7p2        -      -      1      0  41.8K      0
  mirror        8K  44.5G      0     12      0   195K
    da10p2       -      -      0     12      0   195K
    da9p2        -      -      0     12      0   195K
cache            -      -      -      -      -      -
  da12p1      224G     8M      1      0   255K      0
----------   -----  -----  -----  -----  -----  -----
Volume2       209G  2.51T      0  1.18K      0   151M
  mirror      209G  2.51T      0  1.18K      0   151M
    da15p2       -      -      0  1.22K      0   156M
    da16p2       -      -      0  1.19K      0   152M
cache            -      -      -      -      -      -
  da13p1      119G      0      0     63      0  7.96M
----------   -----  -----  -----  -----  -----  -----

During an NFS transfer, zpool iostat shows Volume2 getting 4 MB/s while gstat shows 96% busy on both disks. I've attached a screen capture of gstat during an NFS write.

The local dd test shows the results I expected, but NFS is extremely slow on writes. Read speed is fine.
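
One way to approximate over dd what ESXi is doing over NFS is to force synchronous semantics on the dataset and rerun the test. This is only a sketch and assumes the sync property is available on this FreeNAS/ZFS version; revert it afterwards:

zfs set sync=always Volume2      # push every write through the ZIL, like a sync NFS client
dd if=/dev/zero of=/mnt/Volume2/synctest bs=2048k count=2k
zfs inherit sync Volume2         # back to the default (sync=standard)
rm /mnt/Volume2/synctest

If that dd drops to single-digit MB/s, the bottleneck is sync-write latency on the bare mirror rather than NFS or the network.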
 

Attachments

  • San Disk Usage.jpg

Hexland

Contributor
Joined
Jan 17, 2012
Messages
110
Your local dd test on Volume2 still looks a bit on the slow side... did you create the volume using 4K sectors? I got about a 3x speed increase by changing from the default 512-byte sectors to 4096-byte sectors.
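
A quick way to check both the drives' reported sector size and the alignment the pool was actually created with; the zdb cache-file path below is the usual FreeNAS location, so treat it as an assumption:

diskinfo -v /dev/da15 | grep sectorsize
diskinfo -v /dev/da16 | grep sectorsize
# ashift: 9 means 512-byte alignment, 12 means 4K alignment
zdb -U /data/zfs/zpool.cache | grep ashift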
 

cmbaker82

Dabbler
Joined
Jun 22, 2012
Messages
19
I just put the drives in and created a new volume with the default options; I didn't do anything specific to format it at all. But even with the dd performance I am getting, shouldn't my NFS performance still be better?
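
One rough way to confirm the writes really are being forced synchronous is to compare the server-side NFS RPC counters before and after a copy from the ESXi host; this is just a sketch using stock FreeBSD tooling:

nfsstat -s     # note the Write and Commit counts
# ... run a copy from the ESXi host ...
nfsstat -s     # a large jump in Commit calls relative to Writes means the client is demanding stable storage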
 

cmbaker82

Dabbler
Joined
Jun 22, 2012
Messages
19
I tried again on a different system with the same specs, but with the 4K sectors option checked (the drives are 4K drives). This system has 3 pools of 10 drives each in RAIDZ2. Write performance over NFS was about 2 MB/s. I then disabled the ZIL with vfs.zfs.zil_disable="1" in the system tunables, put a check in "Asynchronous mode" under NFS, and rebooted. After the reboot I am seeing throughput of about 100 MB/s on some small copies, but sustained writes are still pretty slow at about 10 MB/s. "zpool iostat -v 1" also now looks choppy, with several seconds at 0 and then a large write every once in a while. I tested with a Storage vMotion of a 27GB (used) VM.

Is this result typical or is there something else going on that would cause this?
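
With the ZIL disabled and NFS set to asynchronous, ZFS only touches the disks when it flushes a transaction group, so several seconds of zero followed by a burst in zpool iostat is expected rather than a fault. A small sketch for watching that rhythm (the sysctl name is from stock FreeBSD ZFS of this era, so confirm it exists on your build):

sysctl vfs.zfs.txg.timeout     # upper bound, in seconds, between transaction-group flushes
zpool iostat Storage 1         # write bursts should line up roughly with that interval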
 

cmbaker82

Dabbler
Joined
Jun 22, 2012
Messages
19
Regarding my earlier comment that the zpool iostat output seemed weird: below is the output during a VM clone operation, copying from and to the same NFS datastore. The system has 24GB of RAM. This is with the ZIL disabled.

[root@sv-san] ~# zpool iostat -v 1 | grep "^Storage\b"
Storage 61.2G 27.2T 17 127 2.22M 14.9M
Storage 62.5G 27.2T 217 738 27.0M 64.9M
Storage 62.5G 27.2T 400 0 49.5M 0
Storage 62.5G 27.2T 113 0 14.2M 0
Storage 62.5G 27.2T 150 0 18.8M 0
Storage 62.5G 27.2T 91 0 11.3M 0
Storage 62.5G 27.2T 190 0 23.6M 0
Storage 62.5G 27.2T 176 0 21.9M 0
Storage 62.5G 27.2T 224 0 27.9M 0
Storage 62.5G 27.2T 278 0 34.6M 0
Storage 62.5G 27.2T 33 0 4.11M 0
Storage 62.5G 27.2T 310 0 38.5M 0
Storage 62.5G 27.2T 0 0 0 0
Storage 62.5G 27.2T 255 0 31.7M 0
Storage 62.5G 27.2T 306 0 38.0M 0
Storage 62.5G 27.2T 14 0 1.86M 0
Storage 62.5G 27.2T 174 0 21.8M 0
Storage 62.5G 27.2T 223 0 27.8M 0
Storage 62.5G 27.2T 255 0 31.7M 0
Storage 62.5G 27.2T 20 0 2.61M 0
Storage 62.5G 27.2T 123 0 15.3M 0
Storage 62.5G 27.2T 223 0 27.7M 0
Storage 62.5G 27.2T 206 0 25.7M 0
Storage 62.5G 27.2T 337 0 41.8M 0
Storage 62.5G 27.2T 97 0 12.1M 0
Storage 62.5G 27.2T 207 0 25.8M 0
Storage 62.5G 27.2T 193 0 23.9M 0
Storage 62.5G 27.2T 224 0 27.8M 0
Storage 62.5G 27.2T 130 0 16.2M 0
Storage 62.5G 27.2T 240 0 29.8M 0
Storage 62.5G 27.2T 205 0 25.6M 0
Storage 62.5G 27.2T 0 2.54K 0 318M
Storage 62.5G 27.2T 0 1.73K 0 219M
Storage 62.5G 27.2T 0 1.79K 0 221M
Storage 62.5G 27.2T 0 1.64K 0 183M
Storage 63.8G 27.2T 0 727 0 68.6M
Storage 63.8G 27.2T 0 0 0 0
Storage 63.8G 27.2T 0 0 0 0
Storage 63.8G 27.2T 0 0 0 0
Storage 63.8G 27.2T 0 0 0 0
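
The pattern above, a long stretch of reads followed by one big block of writes, is consistent with the clone reading through the source VMDK while the incoming data accumulates in memory until a transaction group commits. A hedged way to see how much of the 24GB of RAM ZFS is actually using (sysctl names assumed from stock FreeBSD ZFS):

sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes
sysctl vfs.zfs.arc_max                 # configured ARC ceiling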
 

cmbaker82

Dabbler
Joined
Jun 22, 2012
Messages
19
Output from top shows:

  PID USERNAME   VCSW  IVCSW  READ  WRITE  FAULT  TOTAL  PERCENT  COMMAND
 2130 root      26497    500     0      0      0      0    0.00%  nfsd
 2709 root          2      0     0      0      0      0    0.00%  python
 2992 root         17      2     0      0      0      0    0.00%  collectd
 3554 root          0      0     0      0      0      0    0.00%  python


The VCSW number for nfsd seems really high; I'm not sure if that is normal or not.
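
A high VCSW count for nfsd is not unusual by itself: the daemon sleeps and wakes on every RPC and on every disk wait, so slow synchronous writes tend to show up as many voluntary context switches rather than CPU time. If it helps, per-thread I/O can be broken out with top's thread display (assuming these FreeBSD top options are present on this build):

top -m io -H     # I/O mode, one line per thread instead of per process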
 