
FreeNAS 11.1-U6 10GigE writes, 150MB/sec reads (iperf is fine)

acquacow (Member, joined Sep 7, 2018, 51 messages)
Hi,

New to FreeNAS. I have it set up on a Supermicro Xeon D-1541 with 16GB of DRAM, four Intel S3500 480GB SSDs as one storage pool, and eight 7200RPM HGST 4TB drives as a second pool. The switch is a Netgear XS708T; jumbo frames were on, but I've shut them off everywhere while troubleshooting.

I had it set up and working and was getting great reads:

[screenshot]

and writes:

[screenshot]
I was using 30GB of ISOs as my test data and everything was great. I originally only had 5 HDDs in it until I could prove that I could max out speeds with an SSD tier. HDD speeds were good, at the expected ~150MB/sec per drive. Once I saw the SSD speeds were consistent over several hours of testing, I shut the box down and bought 3 more HDDs to max out the available storage. Once the box came back up, I deleted my HDD pool and re-created it as a single 8-drive raidz2 vdev. Writes were still great and I started moving data into it, but when I tested reads again, they were only 150MB/sec. And then I tested the SSD tier and the reads there were also 150MB/sec...
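(Back-of-envelope, assuming the ~150MB/sec per-drive figure above: an 8-disk raidz2 has 6 data disks, so large streaming reads should in theory be able to approach, ignoring overhead:)

```shell
# 8-disk raidz2 = 6 data disks; rough streaming-read ceiling at ~150 MB/s per drive
awk 'BEGIN { printf "%d MB/s\n", 6 * 150 }'
# → 900 MB/s
```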

I broke out iperf to try to rule out the network as the issue; things seem fine here:
#freenas as client
Code:
root@freenas:~ # iperf3 -c 192.168.2.200 -w64k -P6
Connecting to host 192.168.2.200, port 5201
[  5] local 192.168.2.28 port 63491 connected to 192.168.2.200 port 5201
[  7] local 192.168.2.28 port 56686 connected to 192.168.2.200 port 5201
[  9] local 192.168.2.28 port 47797 connected to 192.168.2.200 port 5201
[ 11] local 192.168.2.28 port 62493 connected to 192.168.2.200 port 5201
[ 13] local 192.168.2.28 port 35516 connected to 192.168.2.200 port 5201
[ 15] local 192.168.2.28 port 25206 connected to 192.168.2.200 port 5201
[ ID] Interval		   Transfer	 Bitrate		 Retr  Cwnd
[  5]   1.00-2.00   sec   158 MBytes  1.32 Gbits/sec	0   64.0 KBytes
[  7]   1.00-2.00   sec   156 MBytes  1.31 Gbits/sec	0   64.0 KBytes
[  9]   1.00-2.00   sec   132 MBytes  1.11 Gbits/sec	0   64.0 KBytes
[ 11]   1.00-2.00   sec   132 MBytes  1.11 Gbits/sec	0   64.0 KBytes
[ 13]   1.00-2.00   sec   154 MBytes  1.29 Gbits/sec	0   64.0 KBytes
[ 15]   1.00-2.00   sec   158 MBytes  1.32 Gbits/sec	0   64.0 KBytes
[SUM]   1.00-2.00   sec   889 MBytes  7.46 Gbits/sec	0


#windows as client
Code:
PS O:\Download\iperf-3.1.3-win64> .\iperf3.exe -c 192.168.2.28 -w64k -P6
Connecting to host 192.168.2.28, port 5201
[  4] local 192.168.2.200 port 51622 connected to 192.168.2.28 port 5201
[  6] local 192.168.2.200 port 51623 connected to 192.168.2.28 port 5201
[  8] local 192.168.2.200 port 51624 connected to 192.168.2.28 port 5201
[ 10] local 192.168.2.200 port 51625 connected to 192.168.2.28 port 5201
[ 12] local 192.168.2.200 port 51626 connected to 192.168.2.28 port 5201
[ 14] local 192.168.2.200 port 51627 connected to 192.168.2.28 port 5201
[ ID] Interval		   Transfer	 Bandwidth
[  4]   1.00-2.00   sec   161 MBytes  1.35 Gbits/sec
[  6]   1.00-2.00   sec   164 MBytes  1.37 Gbits/sec
[  8]   1.00-2.00   sec   225 MBytes  1.88 Gbits/sec
[ 10]   1.00-2.00   sec   248 MBytes  2.08 Gbits/sec
[ 12]   1.00-2.00   sec   161 MBytes  1.35 Gbits/sec
[ 14]   1.00-2.00   sec   159 MBytes  1.33 Gbits/sec
[SUM]   1.00-2.00   sec  1.09 GBytes  9.37 Gbits/sec


FreeNAS as a client is a tad slower, but it doesn't have 4.5GHz cores like my desktop does. Either way, it's still far more throughput than the ~1.2Gbit I'm seeing on reads from FreeNAS volumes.
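(For reference, the ~150MB/sec ceiling converts to almost exactly that 1.2Gbit figure; decimal units assumed here, though copy dialogs may report binary MB:)

```shell
# 150 MB/s expressed in Gbit/s (decimal: 1 GB = 1000 MB)
awk 'BEGIN { printf "%.2f Gbit/s\n", 150 * 8 / 1000 }'
# → 1.20 Gbit/s
```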

Compression is off, atime is off, dedupe is off...
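(One way to double-check those dataset properties from the console; pool names are the ones from this thread, and nested datasets would show up too with -r:)

```shell
# Recursively list compression/atime/dedup on both pools
zfs get -r compression,atime,dedup SSD Storage
```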

ix0 looked like this when jumbo frames were on:

ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
        options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
        ether ac:1f:6b:60:52:7c
        hwaddr ac:1f:6b:60:52:7c
        inet 192.168.2.28 netmask 0xffffff00 broadcast 192.168.2.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (10Gbase-T <full-duplex>)
        status: active

I've set the SSD pool up as a 1x4 stripe and as 2x2 striped mirrors, with no change in performance.

Same thing:

[screenshot]
I tried with SMB Multichannel on/off and there was no difference there.

I tried starting a second read while the first was going, only to have both speeds drop to 50%:

[screenshot]
I can still copy data at full speed across my network to other VMs just fine, just not FreeNAS.

Thoughts?

Thanks,

-- Dave
 

Chris Moore (Wizened Sage, joined May 2, 2015, 9,654 messages)
Did you enable Autotune?
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
Did you enable Autotune?
No. I left that off.

I did try the 10GigE tuning I've seen around here and on 45Drives' site, but no change. I reset the config and am currently at that state.

All C-states, P-states, power management, etc. are shut off on the box as well.

Sent from my Moto Z (2) using Tapatalk
 

c32767a (Senior Member, joined Dec 13, 2012, 362 messages)
No. I left that off.

I did try to do the 10gige tuning I've seen around here and 45drives's site, but no change. I reset the config and am currently at that state.

All c-states, p-states, power management, etc are shut off on the box as well.
First question: are you absolutely sure of what changed? Nothing beyond deleting and recreating the magnetic zpool?

Gating the write speed at exactly 150MB/s smells fishy. Can we see your pool layout? zpool status?

I'd also be curious about what the write behavior looks like from a window running "zpool iostat -v 1" when you try your speed test. Are the writes bursty or consistent when the copy is running?
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
First question, are you absolutely sure of what changed? nothing beyond deleting and recreating the magnetic zpool?

Gating the write speed at exactly 150MB/s smells fishy. Can we see your pool layout? zpool status?

I'd also be curious about what the write behavior looks like from a window running "zpool iostat -v 1" when you try your speed test. Are the writes bursty or consistent when the copy is running?
It's the reads from my freenas that are gated, but I'll get you that info in a few.

Sent from my Moto Z (2) using Tapatalk
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
root@freenas:~ # zpool status
  pool: SSD
 state: ONLINE
  scan: none requested
config:

        NAME                                          STATE     READ WRITE CKSUM
        SSD                                           ONLINE       0     0     0
          gptid/a30eebac-b386-11e8-afc0-ac1f6b60527c  ONLINE       0     0     0
          gptid/a364bc7b-b386-11e8-afc0-ac1f6b60527c  ONLINE       0     0     0
          gptid/a3dd9d43-b386-11e8-afc0-ac1f6b60527c  ONLINE       0     0     0
          gptid/a437ba06-b386-11e8-afc0-ac1f6b60527c  ONLINE       0     0     0

errors: No known data errors

  pool: Storage
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Storage                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/2b7bb6c7-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/2e346c93-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/30c7fc8b-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/33983372-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/36358982-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/3761807b-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/380a696b-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0
            gptid/38c890a7-b2d7-11e8-9953-ac1f6b60527c  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:01:39 with 0 errors on Mon Sep 3 00:46:39 2018
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da8p2       ONLINE       0     0     0
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
Writes to the SSD pool:

                                               capacity     operations    bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
SSD                                           24.0G  1.71T      0  2.12K      0   742M
  gptid/a30eebac-b386-11e8-afc0-ac1f6b60527c  5.99G   438G      0    529      0   185M
  gptid/a364bc7b-b386-11e8-afc0-ac1f6b60527c  5.98G   438G      0    525      0   187M
  gptid/a3dd9d43-b386-11e8-afc0-ac1f6b60527c  6.01G   438G      0    547      0   184M
  gptid/a437ba06-b386-11e8-afc0-ac1f6b60527c  6.00G   438G      0    569      0   187M
--------------------------------------------  -----  -----  -----  -----  -----  -----


Reads from the SSD pool:

                                               capacity     operations    bandwidth
pool                                          alloc   free   read  write   read  write
--------------------------------------------  -----  -----  -----  -----  -----  -----
SSD                                           25.4G  1.71T  1.21K      0   157M      0
  gptid/a30eebac-b386-11e8-afc0-ac1f6b60527c  6.36G   438G    315      0  40.0M      0
  gptid/a364bc7b-b386-11e8-afc0-ac1f6b60527c  6.34G   438G    304      0  38.7M      0
  gptid/a3dd9d43-b386-11e8-afc0-ac1f6b60527c  6.37G   438G    315      0  39.9M      0
  gptid/a437ba06-b386-11e8-afc0-ac1f6b60527c  6.36G   438G    304      0  38.8M      0
--------------------------------------------  -----  -----  -----  -----  -----  -----


That reflects what I'm seeing in Windows.
Bandwidth: ~740M write, 157M read
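(Worth noting from that iostat output: the 157M of reads is spread almost perfectly evenly across the four SSDs, so this doesn't look like one slow disk:)

```shell
# 157 MB/s total read, split across 4 SSDs
awk 'BEGIN { printf "%.2f MB/s per disk\n", 157 / 4 }'
# → 39.25 MB/s per disk
```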

Thanks,

-- Dave
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
Testing file copies from the console as the same user:

I get what I was previously seeing from Windows: ~900MB/sec writes to the SSD tier, similar reads from the HDD tier, and ~550MB/sec writes to the HDD tier.
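(The console copies were plain file copies between the pools; a hypothetical equivalent with dd, using made-up paths, would be:)

```shell
# Copy a large file from the HDD pool to the SSD pool, no network involved
# (file paths are examples only, not from the thread)
dd if=/mnt/Storage/isos/test.iso of=/mnt/SSD/test.iso bs=1M
```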

Reading from HDD, writing to SSD from console

                                                 capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
SSD                                             19.3G  1.72T      0  1.55K      0   890M
  gptid/a30eebac-b386-11e8-afc0-ac1f6b60527c    4.82G   439G      0    415      0   223M
  gptid/a364bc7b-b386-11e8-afc0-ac1f6b60527c    4.80G   439G      0    394      0   223M
  gptid/a3dd9d43-b386-11e8-afc0-ac1f6b60527c    4.82G   439G      0    396      0   222M
  gptid/a437ba06-b386-11e8-afc0-ac1f6b60527c    4.82G   439G      0    379      0   222M
----------------------------------------------  -----  -----  -----  -----  -----  -----
Storage                                         19.7G  29.0T  6.53K      0   835M      0
  raidz2                                        19.7G  29.0T  6.53K      0   835M      0
    gptid/2b7bb6c7-b2d7-11e8-9953-ac1f6b60527c      -      -  3.85K      0   113M      0
    gptid/2e346c93-b2d7-11e8-9953-ac1f6b60527c      -      -  2.88K      0   121M      0
    gptid/30c7fc8b-b2d7-11e8-9953-ac1f6b60527c      -      -  3.57K      0   113M      0
    gptid/33983372-b2d7-11e8-9953-ac1f6b60527c      -      -  3.77K      0   112M      0
    gptid/36358982-b2d7-11e8-9953-ac1f6b60527c      -      -  2.52K      0   126M      0
    gptid/3761807b-b2d7-11e8-9953-ac1f6b60527c      -      -  3.99K      0   113M      0
    gptid/380a696b-b2d7-11e8-9953-ac1f6b60527c      -      -  4.42K      0   108M      0
    gptid/38c890a7-b2d7-11e8-9953-ac1f6b60527c      -      -  4.17K      0   110M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    2.51G  4.68G      0      0      0      0
  da8p2                                         2.51G  4.68G      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----

Reading from the SSDs, writing to the HDD pool from console

                                                 capacity     operations    bandwidth
pool                                            alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
SSD                                             25.4G  1.71T  4.00K      0   536M      0
  gptid/a30eebac-b386-11e8-afc0-ac1f6b60527c    6.37G   438G  1.00K      0   134M      0
  gptid/a364bc7b-b386-11e8-afc0-ac1f6b60527c    6.34G   438G   1017      0   133M      0
  gptid/a3dd9d43-b386-11e8-afc0-ac1f6b60527c    6.37G   438G   1017      0   134M      0
  gptid/a437ba06-b386-11e8-afc0-ac1f6b60527c    6.36G   438G  1.01K      0   134M      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
Storage                                         16.5G  29.0T      0  4.44K      0   546M
  raidz2                                        16.5G  29.0T      0  4.44K      0   546M
    gptid/2b7bb6c7-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.98K      0  93.2M
    gptid/2e346c93-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.73K      0  93.3M
    gptid/30c7fc8b-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.73K      0  93.4M
    gptid/33983372-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.81K      0  93.3M
    gptid/36358982-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.79K      0  93.2M
    gptid/3761807b-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.35K      0  93.9M
    gptid/380a696b-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.27K      0  95.4M
    gptid/38c890a7-b2d7-11e8-9953-ac1f6b60527c      -      -      0  1.37K      0  93.9M
----------------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                                    2.51G  4.68G      0      0      0      0
  da8p2                                         2.51G  4.68G      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
 

c32767a (Senior Member, joined Dec 13, 2012, 362 messages)
It's the reads from my freenas that are gated, but I'll get you that info in a few.
Yeah, sorry, it was late and I didn't proofread what I typed. :)
It's still very odd behavior.

So the reads are not bursty; it sustains a consistent read rate?

This is CIFS as well, correct? I assume so, since you included some Windows progress dialogs.
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
Yeah, sorry, it was late and I didn't proof what I typed.. :)
It's still very funny behavior.

So the reads are not bursty, it sustains a consistent read rate?

This is CIFS as well, correct? I assume so since you included some windows progress windows?
Yup, FreeNAS to Windows 10. All transfers max out and flatline at the numbers provided.

Sent from my Moto Z (2) using Tapatalk
 

Chris Moore (Wizened Sage, joined May 2, 2015, 9,654 messages)
That reflects what I'm seeing in windows.
Bandwidth: 740M write, 157M read
I don't get that problem, and I'm not sure why it's happening to you. I have a regular SATA SSD in my Windows system and I haven't configured a RAM disk to do any testing that way, so I always assumed my limitation was the SSD in my desktop. I usually get close to 500MB/s, with some fluctuations.
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
The main SSD in my Windows system (it only has SSDs) maxes out around 2.5GB/sec read/write, so that definitely isn't it.
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
FTP reads also max out around ~155MB/sec:

[screenshot]

FTP writes are nice and fast, ~655MB/sec:

[screenshot]

So I'm guessing this isn't an SMB issue, since FTP is equally affected.
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
Alright, so out of frustration from swapping ports/cables, I plugged in the 2nd 10GigE port on the board (ix1). Bummed I can't do static IPs on both, but I'll settle for DHCP on one for now.

I put the 2nd port in DHCP mode and tested service connectivity on that port vs. the main one, and saw a substantial improvement:

[screenshots]
I then disabled the prior port and speeds dropped back down to 155MB/sec. :(

It seems like whichever port FreeNAS decides is the main admin port slows down to ~1Gbit sends.

Is there any way to dig into this from the console and see if there's a service quota or something in a packet filter, etc.?
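(Places one could look for shaping from a FreeBSD console; these are standard stock tools, and none of them are confirmed to be in play on this box:)

```shell
# Any ipfw rules loaded? (errors out if the ipfw module isn't loaded)
ipfw list
# Any pf rules? (pf is usually disabled on a stock install)
pfctl -s rules
# dummynet sysctls present would suggest traffic shaping is loaded
sysctl net.inet.ip.dummynet 2>/dev/null
```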

Also, off-topic, but what is with the random text replacement on this forum?

I typed "A-l-l-r-i-g-h-t" in this original post, but the forum is displaying it as lowercase "all right".
In the edit dialog:

[screenshot]

Shown on page:

[screenshot]

Thanks,

-- Dave
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
Moving along to netcat between FreeNAS and a CentOS VM I have on the network.

Both 10gig ports configured in FreeNAS get me 838MB/sec.
A single 10gig port configured in FreeNAS gets me 761MB/sec.

I've repeated the test just to verify. Something is definitely up when only a single interface is configured within FreeNAS. It's that, or there's a hardware limitation where the device doesn't get enough power when only one port is "up" and enabled at the kernel level.

Not sure how to debug this further... going to LACP both ports into my switch and see how that goes.

Both 10GigE configured in GUI, 2nd port just set for a different subnet:
root@freenas:/tmp # dd if=/dev/zero bs=1M count=100000 | nc 192.168.2.55 4444
104857600000 bytes transferred in 124.896474 secs (839556129 bytes/sec)


Single 10GigE port configured in web gui, no other nics displayed/configured:
root@freenas:/tmp # dd if=/dev/zero bs=1M count=100000 | nc 192.168.2.55 4444
104857600000 bytes transferred in 137.645763 secs (761793155 bytes/sec)


Both 10GigE configured in GUI, 2nd port just set for a different subnet:
root@freenas:/tmp # dd if=/dev/zero bs=1M count=100000 | nc 192.168.2.55 4444
104857600000 bytes transferred in 125.052519 secs (838508499 bytes/sec)
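(For completeness, the receiving end of those tests would be a netcat listener on the CentOS VM along these lines; the exact invocation isn't shown in the thread, and some netcat builds want `-p` before the port:)

```shell
# Hypothetical receiver on the CentOS VM (192.168.2.55): accept and discard
nc -l 4444 > /dev/null
```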
 

c32767a (Senior Member, joined Dec 13, 2012, 362 messages)
Moving along to netcat between freenas and a centos VM I have on the network.

Both 10gig ports configured in freenas gets me 838MB/sec
Single 10gig port configured in freenas gets me 761MB/sec

I've repeated the test just to verify. Something is definitely up when only a single interface is configured within freenas. It's that, or there's a hardware limitation where the device doesn't get enough power when only one port is "up" and enabled at a kernel level.

Not sure how to further debug this...going to LACP both ports into my switch and see how that goes.
LACP shouldn't affect the behavior you're seeing. If it does have an effect, that would point to some sort of configuration problem.

We use almost exactly the same disk configuration in one of our pod configs. They don't use the same motherboard, but it's close enough. General performance from the SSD pool is north of 7Gb/s on reads and writes over the network.
The only other differences are that we use an SLOG on an NVMe board with the volume, and we run 11.1-U5. The SLOG shouldn't be relevant to reads. Feel like trying U5 instead of U6? This is the list of tunables we use, but otherwise it's a stock setup for CIFS/Samba.

It might rule out an OS bug. I assume you did your whole config from scratch when you reinstalled?

[attachment: Screen Shot 2018-09-11 at 02.14.36.png]
 

acquacow (Member, joined Sep 7, 2018, 51 messages)
LACP shouldn't affect the behavior you're seeing. If it does have an effect, then that would point to some sort of configuration problem..

We use almost exactly the same disk configuration in one of our pod configs. They don't use the same mb, but it's close enough. General performance from the SSD pool is north of 7Gb/s on reads and writes over the network.
Only other difference is we use an SLOG on a NVMe board with the volume. And we run 11.1U5. The SLOG shouldn't be relevant to reads. Feel like trying U5 instead of U6? this is the list of tunables we use, but otherwise it's a stock setup for CIFS/samba..

It might rule out an OS bug.. I assume you did your whole config from scratch when you reinstalled?

View attachment 25607
Yeah, I'm currently on a fresh install. Nothing configured other than IP addresses in the web GUI.

I'm fine with shuffling versions.

Sent from my Moto Z (2) using Tapatalk
 

c32767a (Senior Member, joined Dec 13, 2012, 362 messages)
Yeah, I'm currently on a fresh install. Nothing configured other than IP addresses in the web gui.

I'm fine with shuffling versions.
The parameters are in that image I posted above. Give it a shot, it can't hurt. :)
 