I am new to FreeNAS but have been using Linux for years. I just upgraded my RAID 50 Linux server to FreeNAS (No hardware changes at all). I am now experiencing terrible hard disk I/O problems. At first I thought it was a network issue, but as I started to troubleshoot I realized it was drive related.
To start, I am running a Core 2 Quad Q6600, 4 GB of memory (8 GB is in the mail and should arrive any day), and six 1 Gbit Ethernet cards. My RAID controller is a 3Ware 9650SE-12ML configured as RAID 50 with twelve 3 TB disks.
Under Linux I was getting approximately 2.5 Gbit/sec of I/O from the array; now I am getting 80 Mbit/sec, with peak performance of 110 Mbit/sec.
I eliminated networking as an issue by running iperf. See results:
[jsimon@storage01 /mnt/raid0]$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.20.105 port 5001 connected with 10.0.20.100 port 46752
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-180.0 sec 19.6 GBytes 937 Mbits/sec
Next I checked write speed. See results:
[jsimon@storage01 /mnt/raid0]$ time sh -c "dd if=/dev/zero of=bigfile bs=8k count=1000000 && sync"
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 754.550775 secs (10856791 bytes/sec)
real 12m36.513s
user 0m0.383s
sys 0m22.595s
This is definitely the problem. To save everyone the math, that works out to roughly 82.83 Mbit/sec of local write throughput.
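For anyone checking the arithmetic, the conversion from dd's bytes/sec figure to the (1024-based) Mbit/sec quoted above is a one-liner:

```shell
# Convert dd's reported throughput (bytes/sec) to Mbit/sec, 1024-based.
awk 'BEGIN { printf "%.2f Mbit/sec\n", 10856791 * 8 / 1024 / 1024 }'
# → 82.83 Mbit/sec
```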
If I write a smaller file that can be cached the issue seems to disappear. See below:
[jsimon@storage01 /mnt/raid0]$ time sh -c "dd if=/dev/zero of=bigfile bs=8k count=100 && sync"
100+0 records in
100+0 records out
819200 bytes transferred in 0.002206 secs (371376333 bytes/sec)
real 0m0.121s
user 0m0.000s
sys 0m0.011s
Again, to save you the trouble, that is about 2833.38 Mbit/sec. I am fine with that figure, considering my RAID adapter is a 3 Gbit/sec controller.
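Since small writes land in the ARC, one way to see whether the pool itself is the bottleneck is to watch per-vdev throughput while the large dd test is running. This is a hedged sketch: it assumes the pool is named raid0 (matching the /mnt/raid0 mount point above); substitute your actual pool name.

```shell
# Print pool and per-vdev read/write bandwidth every 5 seconds.
# Run in a second terminal while the big dd test is in progress.
# Assumes the pool is named "raid0"; adjust to your pool name.
zpool iostat -v raid0 5
```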
I made zero changes to the actual RAID configuration. I did change the file system to ZFS, though. I originally had dedup enabled, but I thought that might be contributing to the problem, so I disabled it.
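For reference, the dedup state can be confirmed from the shell. One caveat worth knowing: setting dedup=off only affects new writes, so blocks written while dedup was on keep the dedup table (DDT) in play, which can still hurt performance on a box with only 4 GB of RAM. This sketch assumes the pool is named raid0; substitute your actual pool name.

```shell
# Assumes the pool is named "raid0"; adjust to your pool name.
zfs get dedup raid0     # should now report "off"
zpool list raid0        # DEDUP column shows the ratio from previously deduped data
zfs set dedup=off raid0 # disables dedup for new writes only
```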
I do think my swap partition may be a little small, but I do not remember seeing an option to manually partition the disk during install. See results:
[jsimon@storage01 /]$ swapinfo
Device 1K-blocks Used Avail Capacity
/dev/da0p1.eli 2097152 22588 2074564 1%
None of the disks are failing and they have passed any checks I have run on them.
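For anyone who wants to double-check disk health behind this controller, smartmontools can query SMART data through the 9650SE on FreeBSD. This assumes smartmontools is installed and the card shows up as /dev/twa0, which is typical for the twa driver; the port index shown is just an example.

```shell
# Query SMART data for the disk on 3ware port 0; repeat for ports 0-11.
# Assumes smartmontools is installed and the 9650SE is /dev/twa0.
smartctl -a -d 3ware,0 /dev/twa0
```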
I am trying to think of possible reasons for this to be happening and I thought maybe 3Ware drivers/utilities could be the culprit. I have not been able to find 3Ware software for this RAID adapter for FreeBSD 8.3.
Any help or input would be appreciated. I know a lot of people are having similar issues, but I have run through the tests I read in other threads and can't seem to find a fix for this.
Thank you very much for your help!
Jeff