Ok,
I am not sure where my bottleneck is. Any help would be greatly appreciated.
These are the drives I have:
ST31000528AS - 1TB
ST31000528AS - 1 TB
WDC WD15EADS-00P8B0 1.5 TB
WDC WD15EADS-00P8B0 1.5 TB
Corsair CSSD-F40GB2 40GB
This is the processor information:
[root@storage01] ~# sysctl -a | egrep -i 'hw.machine|hw.model|hw.ncpu'
hw.machine: amd64
hw.model: Intel(R) Xeon(R) CPU X3440 @ 2.53GHz
hw.ncpu: 8
hw.machine_arch: amd64
[root@storage01] ~#
All my boxes are connected through a Cisco 3750G-24 port switch. The network traffic between the storage box and the other boxes is all internal. Each machine has two non-bonded gigabit NICs.
Here is a network speed test between the two machines.
Storage server listening
[root@storage01] ~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.2 port 5001 connected with 10.0.0.11 port 33263
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec
Node Transmitting
[root@node01 ~]# iperf -c 10.0.0.2
------------------------------------------------------------
Client connecting to 10.0.0.2, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.11 port 33263 connected with 10.0.0.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.10 GBytes 942 Mbits/sec
[root@node01 ~]#
---------------------------------------------------------------------------------
Node Listening
[root@node01 ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.0.11 port 5001 connected with 10.0.0.2 port 54556
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.1 sec 1.10 GBytes 936 Mbits/sec
Storage Server transmitting
As you can see, the network speeds check out.
The disks are set up in a RAIDZ. I am only using two of the 1 TB drives and one of the 1.5 TB drives; the other 1.5 TB drive is connected but is now having problems.
The 40 GB SSD is set up as my ZIL device.
The server also has 4 GB of registered ECC RAM.
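For reference, this is roughly how I check the pool layout and health on the storage box (the pool name `tank` here is just a placeholder, substitute the real one):

```shell
# Show vdev layout, the separate log (ZIL) device, and any
# DEGRADED/FAULTED members -- a failing drive in the RAIDZ vdev
# will show up here.
zpool status tank

# Per-vdev bandwidth, refreshed every 2 seconds, useful while a
# write test is running to see which disk is the slow one.
zpool iostat -v tank 2
```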
Here are some dd tests.
DD Writing to the NAS
[root@node01 san]# time dd if=/dev/zero of=/san/test.file bs=1MB count=100
100+0 records in
100+0 records out
100000000 bytes (100 MB) copied, 16.5313 s, 6.0 MB/s
real 0m16.534s
user 0m0.000s
sys 0m0.128s
DD Reading from the NAS writing to the NODE
[root@node01 san]# dd if=/san/test.file of=/dev/null bs=1MB
100+0 records in
100+0 records out
100000000 bytes (100 MB) copied, 0.853853 s, 117 MB/s
[root@node01 san]# dd if=/san/test.file of=/root/testfile bs=1MB
100+0 records in
100+0 records out
100000000 bytes (100 MB) copied, 0.0817032 s, 1.2 GB/s
[root@node01 san]#
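In case it matters, here is the kind of local dd test I can run for comparison (the `/tmp/ddtest.file` path is just an example, not the NAS mount):

```shell
# conv=fdatasync makes dd flush data to stable storage before it
# reports the rate, so the number reflects actual write throughput
# rather than how fast the page cache absorbs the writes.
dd if=/dev/zero of=/tmp/ddtest.file bs=1M count=100 conv=fdatasync

# Remove the test file afterwards.
rm -f /tmp/ddtest.file
```

The 1.2 GB/s read above is presumably cache speed for the same reason, since the 100 MB file fits in RAM after the first read.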