For the past couple of weeks I've been playing around with a setup to get a small but decently reliable storage server up and running. I started with FreeNAS and ran through some other options to compare performance; so far FreeNAS has given me the best results.
Current NAS Setup:
Asus Mobo
Core i3, 3.1 GHz
32 GB DDR3 RAM
1x onboard 1 GbE (used for all management purposes)
Quad-port Intel server GbE NIC, PCIe (used for iSCSI)
ZFS pool: 2x RAIDZ1 vdevs, each containing 3x 2 TB Seagate Barracudas
(roughly the ZFS equivalent of RAID 50; see the sketch below)
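To make the layout concrete, the pool is two three-disk RAIDZ1 vdevs striped together, so its creation would have looked something like this (the device names here are placeholders, not my actual ones):
Code:
# Two 3-disk RAIDZ1 vdevs striped into one pool:
zpool create Pool raidz ada0 ada1 ada2 raidz ada3 ada4 ada5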
The drives used to be connected through an Adaptec 3805 I had lying around, but I found a post here suggesting I connect them directly to the mobo to avoid any double-caching issues.
My test server is a Dell PowerEdge 1850 with the same quad-port Intel gigabit NIC in it, running Windows Server 2012 with MPIO and the built-in iSCSI initiator.
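Nothing exotic on the Windows side; the MPIO/iSCSI setup follows the stock procedure, roughly like this (the IQN below is a placeholder):
Code:
# Claim iSCSI LUNs for MPIO using the built-in mpclaim tool (needs a reboot):
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"
# Then in PowerShell, one target portal per subnet/NIC pair:
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.3.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.4.10
# Connect with multipathing enabled (IQN is a placeholder):
Connect-IscsiTarget -NodeAddress "iqn.2011-03.example:target0" -IsMultipathEnabled $true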
Given the results I'm seeing when benchmarking the storage itself, I'm having trouble coming to terms with the performance I'm getting over iSCSI using all four NICs on each side. Obviously actual performance will never live up to theoretical, but I'm not seeing anything close to it.
My first issue is consistency: I can be copying a file from the mounted drive at, say, 105 MB/s, and it will dip to 30 MB/s for a stretch and then bounce back up. Writes do the same thing, bouncing up and down, and the network traffic graphs show the same pattern.
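The pattern is easy to see if you watch the pool on the FreeNAS side while a copy runs, something like:
Code:
# Per-second pool throughput while the transfer is going:
zpool iostat Pool 1
# Per-disk busy%/latency, to spot a single slow drive:
gstat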
The other issue is write performance. Since this topic has been covered time and time again I expect to take some heat for it, but I've been through the forums for the past two weeks trying different solutions, versions, and configurations, and I can't change it or figure out why. With the client write cache on, I can write to the disk at a maximum of about 70 MB/s, give or take a few MB/s; it doesn't matter whether the source is a RAM disk or a drive (each NIC gets up to 300-400 Mbit/s). I usually keep my transfers above 10 GB to get a good idea of where my speeds stand (an example is below). With the client write cache off, I can drop as low as 10 MB/s, and the network graphs show the same drop, never getting over 40-100 Mbit/s per NIC.
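For the record, the write tests are plain large sequential transfers, e.g. from the same Cygwin install I use for iperf (E: is the mounted iSCSI disk here; the path is just an example):
Code:
# ~10 GB sequential write straight to the iSCSI-backed disk:
dd if=/dev/zero of=/cygdrive/e/test.bin bs=1M count=10240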
I could live with roughly 100 MB/s reads and 70 MB/s writes if they were consistent, but they're not, and I would rather keep the client-side write cache off in my case. I'll post my tests below to show the speeds I'm getting between the boxes and on the NAS itself. I'll obviously keep looking around the forums for answers, but I've pretty much exhausted the search so far, and I'm not sure how much more I'll get from it.
DD Tests directly on NAS:
Code:
[root@localhost] /mnt/Pool# dd if=/dev/zero of=ddfile bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 26.479950 secs (395988664 bytes/sec)
[root@localhost] /mnt/Pool# dd if=ddfile of=/dev/zero bs=1M
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 15.434337 secs (679378717 bytes/sec)
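One caveat on these numbers: dd from /dev/zero writes all zeroes, so if compression is enabled on the dataset they will read higher than real disk throughput. A sanity check with incompressible data would look like this:
Code:
# FreeBSD's /dev/random doesn't block, but generating random data
# can itself bottleneck on CPU, so treat this as a rough lower bound:
dd if=/dev/random of=ddfile2 bs=1M count=10000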
iPerf Tests between NAS and test box:
Code:
[root@localhost] ~# iperf -w 65535 -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  4] local 10.0.1.10 port 5001 connected with 10.0.1.100 port 50017
[  4]  0.0-10.0 sec   858 MBytes   718 Mbits/sec
[  5] local 10.0.2.10 port 5001 connected with 10.0.2.100 port 50039
[  5]  0.0-10.0 sec   822 MBytes   688 Mbits/sec
[  4] local 10.0.3.10 port 5001 connected with 10.0.3.100 port 50067
[  4]  0.0-10.0 sec   780 MBytes   653 Mbits/sec
[  5] local 10.0.4.10 port 5001 connected with 10.0.4.100 port 50091
[  5]  0.0-10.0 sec   868 MBytes   726 Mbits/sec

C:\Users\Administrator\Downloads\iperf-2.0.5-cygwin>iperf -w 65535 -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  4] local 10.0.1.100 port 5001 connected with 10.0.1.10 port 44155
[  4]  0.0-10.0 sec  1.08 GBytes   932 Mbits/sec
[  4] local 10.0.2.100 port 5001 connected with 10.0.2.10 port 15695
[  4]  0.0-10.0 sec  1.09 GBytes   936 Mbits/sec
[  4] local 10.0.3.100 port 5001 connected with 10.0.3.10 port 58551
[  4]  0.0-10.0 sec  1.09 GBytes   931 Mbits/sec
[  4] local 10.0.4.100 port 5001 connected with 10.0.4.10 port 36130
[  4]  0.0-10.0 sec  1.09 GBytes   931 Mbits/sec
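Those runs are one stream per NIC pair, each started separately. To sanity-check the aggregate, the four clients can also be run simultaneously from the Windows box, one per subnet, e.g.:
Code:
# Run all four in parallel from Cygwin and add up the reported bandwidth:
iperf -c 10.0.1.10 -w 65535 -t 30 &
iperf -c 10.0.2.10 -w 65535 -t 30 &
iperf -c 10.0.3.10 -w 65535 -t 30 &
iperf -c 10.0.4.10 -w 65535 -t 30 &
wait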
Any suggestions or insight would be greatly appreciated. Thank you.