Just got an iSCSI target set up and connected from my Windows 7 box for some testing. When I use IOMeter to run some tests, I get horrible performance. I've tried both 8K random 100% writes and 100% sequential writes; both results are awful.
It goes well, then stalls, rinse repeat...
zpool iostat during a local dd run (dd if=/dev/zero of=/mnt/Data/tmp.dat bs=2048k count=50k):
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Data        94.8G  6.72T      0  1.15K      0   141M
Data        96.9G  6.72T      0  1.03K      0   129M
Data        99.0G  6.72T      0  1.84K      0   230M
Data         100G  6.71T      0  1.72K      0   214M
Data         103G  6.71T      0  1.14K      0   139M
Data         105G  6.71T      0  1.03K      0   123M
Data         106G  6.71T      0  1.39K      0   172M
Data         108G  6.71T      0  1.30K      0   161M
Data         110G  6.71T      0  1.09K      0   134M
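(For scale: bs=2048k count=50k works out to 50k × 2 MiB = 100 GiB written, and the pool sustains roughly 120-230 MB/s the whole way through, so local write throughput looks healthy.)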
zpool iostat during the iSCSI IOMeter test (100% write, 100% sequential, 4K IOs):
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Data         170G  6.65T     14    713   114K  34.9M
Data         170G  6.65T      0    838      0  6.35M
Data         170G  6.65T      0    842      0  6.37M
Data         170G  6.65T      0    805      0  6.06M
Data         170G  6.65T      0    749      0  5.60M
In addition, while the iSCSI test is running, zpool iostat -v shows no writes to the ZIL log SSD at all.
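My understanding is that the ZIL/SLOG only sees traffic for synchronous writes, so one sanity check I plan to run is whether the log device is actually attached and how the sync property is set (this assumes the target sits on the pool's root dataset, Data, and that this ZFS version already has the sync property):

zpool status Data    # confirm the SSD shows up under a "logs" section as a log vdev
zfs get sync Data    # sync=standard honors the client's flush requests; sync=always forces every write through the ZIL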
My last OpenSolaris ZFS implementation on this same hardware, over iSCSI, got very good results.
The contrast between the local dd run and the iSCSI IOMeter test is drastic.
Even if I just copy a file to the iSCSI storage, it copies fast... but then stalls. These are the zpool iostat results of a file copy from Windows 7 over iSCSI (Windows uses 64K blocks for this, IIRC):
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Data         171G  6.65T      0    736    819  5.57M
Data         171G  6.65T      0  1.49K  1.60K  11.6M
Data         171G  6.65T      0  1.66K  2.40K  13.0M
Data         171G  6.65T      0  1.67K    819  13.2M
Data         171G  6.65T      0  1.81K    819  14.3M
Data         172G  6.64T      0  1.62K      0  12.7M
Data         172G  6.64T      0  1.58K  1.60K  12.4M
zpool iostat -v here also shows no activity on the ZIL log.
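For reference, this is roughly how I'm watching the per-device numbers, at a one-second interval so the stalls are visible (-v breaks the stats out per vdev, including the log SSD):

zpool iostat -v Data 1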
zpool status shows all disks in good shape.
This is the result of an FTP transfer - still not great, but better than iSCSI. Still zero ZIL log use:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Data         173G  6.64T      9    565  79.4K  24.9M
Data         173G  6.64T      0      0      0      0
Data         173G  6.64T      0      0      0      0
Data         173G  6.64T      0     67      0  6.93M
Data         173G  6.64T      0    247      0  29.2M
Data         173G  6.64T      0     73    819  9.21M
Data         174G  6.64T      0    381    819  44.2M
Data         174G  6.64T      0      0      0      0
Data         174G  6.64T      0    706      0  87.2M
Data         175G  6.64T      0     75    819  7.90M
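Since iSCSI and FTP both show the same bursty stall pattern, one thing I plan to rule out is the raw network path with something like iperf (assuming it's available on both ends; <nas-ip> is a placeholder for the NAS address):

iperf -s                   # on the NAS
iperf -c <nas-ip> -t 30    # on the Windows box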
What should I check first? Where should I start?
Thanks!
Mark