SOLVED! Very Slow Disk Performance

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Trying FreeNAS for the first time. I built a low-end system (just a bit better hardware than a low-end Synology that can be a Plex server) using an ASRock J4105-ITX and 16 GB of RAM. I have two 4 TB HGST drives in a mirror holding my Plex media dataset, and a cheap WD SSD for my jails. It took a bit of study and YouTube, but Plex is installed and running fine. I'm using NFS between it and my Ubuntu MATE 18.04 build; I have the media folder mounted and am able to copy things to and from the FreeNAS fine.

The problem is that I noticed the transfer speeds were abysmal. At first I thought the problem was the Realtek NIC, but then I noticed the first few gigs of a file go very fast, saturating the 1G network at 145 to 110 MB/s, and then things start to drop off. (I'm guessing the fast speeds are from Ubuntu to the RAM of the FreeNAS.) They drop all the way to 10 to 15 MB/s for the remainder of the transfer, and then, when the transfer box in Ubuntu says 0 time remaining, it takes several minutes (for 3 GB files) for the progress box to close. (I assume it is waiting for word back from the FreeNAS that the operation is complete.)

I thought maybe it was something to do with my HDD mirror pool, so I made a share on the SSD: same result. Copying movies back from either the mirror or the SSD gives about the same slow speed throughout, only without the big delay at the end of the transfer. So both reads and writes, from the SSD and the mirror, are very slow. The CPU usage is very low, usually under 10 percent, and it is using up the memory like it should when receiving files.

I did an iostat -v; the mirror drives run about the same, but the numbers seem low. The mirror had 17 write ops, 34.4K read bandwidth and 1.18M write (it worked up to those numbers; they were highest while moving a 3.2 GB folder with 8 short TV episodes). I have not found other examples of this problem... any ideas what it could be?
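(For reference, the numbers above came from something along these lines, run while a copy was in progress; the pool name here is just a placeholder:)

Code:
# Watch per-vdev activity, refreshing every 5 seconds ("tank" is a placeholder pool name)
zpool iostat -v tank 5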
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
Typically you're going to want to get a bit more detailed with your hardware here. You can add it in your signature like a lot of people do. Check out mine for an example. Also, it would be easier to parse what you write if you use paragraphs.

That said, what I would probably start with is running some iperf3 tests. On your FreeNAS box run
iperf3 -s
and on your Linux box run:

Code:
apt-get update && apt-get install iperf3

iperf3 -c YOUR_FREENAS_SERVER

# Reverse the test.
iperf3 -R -c YOUR_FREENAS_SERVER



Without doing anything else I'm able to nearly saturate a gig connection (940 Mbps).

Start by adding some more info about your setup and running the iperf3 tests; posting the output of "zpool status" would help too.
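From the FreeNAS shell that's simply (run as root):

Code:
# Show pool layout and health
zpool status
# Show pool capacity and usage
zpool list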
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I think I see where the problem is...
At first, I thought the problem was the Realtek NIC..
That is absolutely a part of the problem, but not all of it. You might want to get an Intel or other better supported NIC. Realtek NICs are known to be slower.

Hardware Recommendations Guide Rev. 1e) 2017-05-06
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

Hardware Recommendations by @cyberjock - from 26 Aug 2014 - and still valid
https://forums.freenas.org/threads/hardware-recommendations-read-this-first.23069/
I assume it is waiting for word back from the FreeNAS that the operation is complete
Right. FreeNAS RAM acts as a cache and the transfer goes fast until RAM is filled, then it goes slow because it is waiting for the disk to catch up. The total process can't finish until everything has flushed from RAM to disk on the FreeNAS side. What kind of disks are you using?
You will probably also need a SLOG in the FreeNAS box, because this is NFS.
Here are some basics about how ZFS works and the terminology of ZFS:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://forums.freenas.org/index.php?threads/terminology-and-abbreviations-primer.28174/

Why not to use RAID-5 or RAIDz1
https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/
so made a share on the SSD
What SSD, the one for jails? Hardware description? See the guide here:

Forum Guidelines
https://www.ixsystems.com/community/threads/forum-guidelines.45124/

I did an iostat -v; the mirror drives run about the same, but the numbers seem low... any ideas what it could be?
First, understand that the maximum speed you can attain is the speed of the slowest drive in your pool of drives. You only have two, but it is worth knowing just for reference.
Then, ZFS uses a log called the ZFS Intent Log, or ZIL, which lives inside the pool (the collection of disks used for storage). NFS uses synchronous writes, so data to be committed to the pool is copied (via the network) into the RAM cache (called ARC, for Adaptive Replacement Cache), and the ARC is flushed to disk every five seconds. On synchronous writes, however, everything is first written to the ZIL and then written again to the actual permanent storage, so everything going into the disk pool is written twice to the physical disks. In your situation, you will need what is known as a SLOG (Separate LOG) device so the ZIL data can be written to a very fast location (an NVMe SSD, for example), where it can be acknowledged back to the source system as committed to disk. The data can then be spooled from ARC (in RAM) to disk at the speed of the mechanical disks, and it only gets written once because the ZIL has been moved to the separate device, the SLOG.
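To put rough commands to that (the pool and device names below are only placeholders, so substitute your own):

Code:
# See how the dataset currently handles synchronous writes
zfs get sync tank/media

# A SLOG is added to the pool as a "log" vdev, for example (ada3 is a placeholder device)
zpool add tank log ada3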

Is that all about clear?
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Meyers said:
That said, what I would probably start with is running some iperf3 tests...


Here is the result from the Mate iperf run:
Reverse mode, remote host 192.168.50.35 is sending
[ 4] local 192.168.50.94 port 56500 connected to 192.168.50.35 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 10.9 MBytes 91.4 Mbits/sec
[ 4] 1.00-2.00 sec 10.9 MBytes 91.7 Mbits/sec
[ 4] 2.00-3.00 sec 10.9 MBytes 91.8 Mbits/sec
[ 4] 3.00-4.00 sec 10.9 MBytes 91.4 Mbits/sec
[ 4] 4.00-5.00 sec 10.8 MBytes 91.0 Mbits/sec
[ 4] 5.00-6.00 sec 10.9 MBytes 91.4 Mbits/sec
[ 4] 6.00-7.00 sec 10.9 MBytes 91.5 Mbits/sec
[ 4] 7.00-8.00 sec 10.9 MBytes 91.3 Mbits/sec
[ 4] 8.00-9.00 sec 11.0 MBytes 91.9 Mbits/sec
[ 4] 9.00-10.00 sec 10.9 MBytes 91.2 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 109 MBytes 91.6 Mbits/sec 189 sender
[ 4] 0.00-10.00 sec 109 MBytes 91.6 Mbits/sec receiver

iperf Done.

I'm not able to copy and paste from the browser shell when it runs on the FreeNAS, so here are the first few lines (one per second):

Transfer   Bitrate         Retr   Cwnd
11.1 MB    93.1 Mbits/sec  17     90.9 KBytes
10.9 MB    91.7 Mbits/sec  21     85.2 KBytes
11.0 MB    91.9 Mbits/sec  15     80.9 KBytes
10.9 MB    91.4 Mbits/sec  21     76.6 KBytes
10.8 MB    91.0 Mbits/sec  21     72.4 KBytes

The rest were similar; I edited the post and attached a screenshot.

I updated my signature with system information
Thank you.
 

Attachments
  • Screenshot at 2019-03-16 17-57-54.png

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Chris Moore said:
I think I see where the problem is... In your situation, you will need what is known as a SLOG (Separate LOG) device... Is that all about clear?

Thank you for your response. Yes, there is just one SSD, which I have my Plex jail on, plus a share I created on it for testing.

I have been studying up on ZFS for a while now, but I still had not grasped that the data going to my mirror is being written twice to each disk. I have one open SATA connector on the board, so not a lot of room to expand there, but I could add an SSD for a SLOG if that would solve the issue. Other than the first big load of the system, normal writes would be movie or TV files from 750 MB to a few GB each.

I've added my hardware description to my signature.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Running iperf3 -s on the Ubuntu box and running the test the other way, I get this on the Ubuntu side:

Accepted connection from 192.168.50.35, port 31825
[ 5] local 192.168.50.94 port 5201 connected to 192.168.50.35 port 27972
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 5] 0.00-1.00 sec 8.41 MBytes 70.6 Mbits/sec 0 80.6 KBytes
[ 5] 1.00-2.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 2.00-3.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 3.00-4.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 4.00-5.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 5.00-6.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 6.00-7.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 7.00-8.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 8.00-9.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 9.00-10.00 sec 11.2 MBytes 93.8 Mbits/sec 0 80.6 KBytes
[ 5] 10.00-10.28 sec 3.17 MBytes 94.1 Mbits/sec 0 80.6 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 5] 0.00-10.28 sec 112 MBytes 91.6 Mbits/sec 0 sender
[ 5] 0.00-10.28 sec 0.00 Bytes 0.00 bits/sec receiver


Looks like the NIC on the FreeNAS is the problem because of all the Retrs?
 

Attachments
  • Screenshot at 2019-03-16 18-23-41.png

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
I noticed the screenshots attached to the other posts were hard to read. This is one of the FreeNAS machine while running iperf3 -s on it.

I think Retr means retry? The number ranged from 13 to 22, so if that is so, that is the problem, correct?

Thank you.
 

Attachments
  • Screenshot at 2019-03-16 18-30-28.png

Stux

MVP
Joined
Jun 2, 2016
Messages
4,358
You could try disabling sync writes on the target pool to see if sync write performance is the source of your bottleneck.
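A minimal way to try that from the shell, assuming a dataset called tank/media (set it back to standard when you're done testing):

Code:
# Turn off sync writes on the dataset for the test
zfs set sync=disabled tank/media

# ...re-run the NFS copy, then restore the default behaviour
zfs set sync=standard tank/media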
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
Hrmm yeah that's the problem then. You're only getting 100Mbps.

I see Retr values of 300+ too and can still saturate 1 gig. You could have a bad cable.

Try running ifconfig on FreeNAS and look for the media line to see if your link negotiated to 100Mbps. One of my boxes says "media: Ethernet autoselect (10Gbase-T <full-duplex>)", for example (10gig interface). You'll need to look at the physical interface, e.g. ix0 or ix1.

On Linux, run "ethtool eth0 | grep -i speed" (run ifconfig first to see what your primary interface is and replace eth0 with it). Mine shows "Speed: 1000Mb/s" for example.

If both show 1gig, try swapping cables anyway.
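Roughly, the two checks look like this (interface names are just examples; on FreeNAS a Realtek NIC usually shows up as re0, and on Linux substitute whatever ifconfig reports):

Code:
# FreeBSD/FreeNAS: check the negotiated media line
ifconfig re0 | grep media

# Linux: check the negotiated speed
ethtool eth0 | grep -i speed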
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
BTW if you don't have ethtool simply "apt-get install ethtool".

Once you swap one or both cables re-run the iperf tests and see if you get gig.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
For my motherboard I can only go with an x1 PCIe card. I've narrowed it down to two: one has an Intel 82576 controller, the other is the EXPI9301CTBLK. Would one be better than the other, or since they are both Intel is it a "can't go wrong either way" kind of thing?
 

Meyers

Patron
Joined
Nov 16, 2016
Messages
211
Rule out other stuff first. I've never had ANY trouble getting gig with Realtek cards with cheap switches.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Meyers said:
Hrmm yeah that's the problem then. You're only getting 100Mbps... If both show 1gig, try swapping cables anyway.

Ok... great information. I'll try to get all that done tomorrow. Thank you.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
not able to copy and paste from the browser showing run in FreeNAS..
You need to set up the ability to SSH into your FreeNAS instead of trying to use the shell through the GUI.
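For example, once the SSH service is turned on under Services in the FreeNAS GUI, connecting from the Linux box is just (using the FreeNAS IP from the iperf output above):

Code:
ssh root@192.168.50.35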
I updated my signature with system information
Just keep in mind that anyone viewing from a mobile platform will not be able to see what is in your signature, so if someone asks, that is probably why.
You could have a bad cable.
More likely it is that Realtek NIC. FreeNAS really doesn't like those.
Rule out other stuff first. I've never had ANY trouble getting gig with Realtek cards with cheap switches.
On FreeNAS? On Windows they are fine because Realtek makes a Windows driver. The BSD driver is created without any input from the hardware vendor.

qxotic, do try what Stux suggested with regard to disabling sync writes. It will allow you to determine whether that is indeed part of the problem. If you find that it is, then you know that a SLOG device is the answer.
EXPI9301CTBLK
If you don't get any other solution, either of those Intel cards should be fine.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Chris Moore said:
You need to set up the ability to SSH into your FreeNAS... do try what Stux suggested with regard to disabling sync writes... either of those Intel cards should be fine.
OK.. I had to learn SSH this morning for the first time. Here is the data both ways from the FreeNAS system:

root@freenas[~]# iperf3 -R -c 192.168.50.94
Connecting to host 192.168.50.94, port 5201
Reverse mode, remote host 192.168.50.94 is sending
[ 5] local 192.168.50.35 port 33271 connected to 192.168.50.94 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 11.3 MBytes 94.4 Mbits/sec
[ 5] 1.00-2.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 2.00-3.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 3.00-4.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 4.00-5.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 5.00-6.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 6.00-7.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 7.00-8.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 8.00-9.00 sec 11.2 MBytes 93.9 Mbits/sec
[ 5] 9.00-10.00 sec 11.2 MBytes 93.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 112 MBytes 94.2 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 112 MBytes 93.9 Mbits/sec receiver

iperf Done.
root@freenas[~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.50.94, port 38276
[ 5] local 192.168.50.35 port 5201 connected to 192.168.50.94 port 38278
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 11.4 MBytes 96.0 Mbits/sec 13 109 KBytes
[ 5] 1.00-2.00 sec 11.2 MBytes 94.1 Mbits/sec 26 55.2 KBytes
[ 5] 2.00-3.00 sec 11.2 MBytes 94.1 Mbits/sec 19 62.3 KBytes
[ 5] 3.00-4.00 sec 11.2 MBytes 94.1 Mbits/sec 20 68.0 KBytes
[ 5] 4.00-5.00 sec 11.2 MBytes 94.1 Mbits/sec 21 70.8 KBytes
[ 5] 5.00-6.00 sec 11.2 MBytes 94.2 Mbits/sec 21 73.7 KBytes
[ 5] 6.00-7.00 sec 11.2 MBytes 94.1 Mbits/sec 20 78.0 KBytes
[ 5] 7.00-8.00 sec 11.2 MBytes 94.1 Mbits/sec 22 79.4 KBytes
[ 5] 8.00-9.00 sec 11.2 MBytes 94.1 Mbits/sec 21 85.1 KBytes
[ 5] 9.00-10.00 sec 11.2 MBytes 94.1 Mbits/sec 20 90.8 KBytes
[ 5] 10.00-10.00 sec 4.24 KBytes 118 Mbits/sec 0 90.8 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 112 MBytes 94.3 Mbits/sec 203 sender


And here are the results from the mate system -
mate:~$ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.50.35, port 13148
[ 5] local 192.168.50.94 port 5201 connected to 192.168.50.35 port 33271
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 5] 0.00-1.00 sec 8.43 MBytes 70.7 Mbits/sec 0 60.8 KBytes
[ 5] 1.00-2.00 sec 11.3 MBytes 94.8 Mbits/sec 0 60.8 KBytes
[ 5] 2.00-3.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 3.00-4.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 4.00-5.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 5.00-6.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 6.00-7.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 7.00-8.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 8.00-9.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 9.00-10.00 sec 11.2 MBytes 93.8 Mbits/sec 0 60.8 KBytes
[ 5] 10.00-10.27 sec 3.16 MBytes 97.0 Mbits/sec 0 60.8 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 5] 0.00-10.27 sec 112 MBytes 91.7 Mbits/sec 0 sender
[ 5] 0.00-10.27 sec 0.00 Bytes 0.00 bits/sec receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
^Ciperf3: interrupt - the server has terminated
mate@mate:~$ iperf3 -R -c 192.168.50.35
Connecting to host 192.168.50.35, port 5201
Reverse mode, remote host 192.168.50.35 is sending
[ 4] local 192.168.50.94 port 38278 connected to 192.168.50.35 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 1.00-2.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 2.00-3.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 3.00-4.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 4.00-5.00 sec 11.2 MBytes 94.2 Mbits/sec
[ 4] 5.00-6.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 6.00-7.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 7.00-8.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 8.00-9.00 sec 11.2 MBytes 94.1 Mbits/sec
[ 4] 9.00-10.00 sec 11.2 MBytes 94.1 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 112 MBytes 94.3 Mbits/sec 203 sender
[ 4] 0.00-10.00 sec 112 MBytes 94.2 Mbits/sec receiver

I will work on running more of the tests today.
Thank you.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Rule out other stuff first. I've never had ANY trouble getting gig with Realtek cards with cheap switches.
Today, before I ran the iperf tests I posted, I moved the FreeNAS machine near the router and used a different cable for it (a Cat 5e). I made sure that both it and the Mate system were connected directly to the router, so neither of them goes through the attached switch now. I also reconnected a Dell machine running Mint 18 to where the FreeNAS had been. When I ran iperf between the Mint and Mate machines, all the Retr values were 0.

I will work on running more tests today.
Thank you.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Meyers said:
Try running ifconfig on FreeNAS and look for the media line to see if your link negotiated to 100Mbps...

Here is the ifconfig information -
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet autoselect (1000baseT <full-duplex>)
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Meyers said:
On Linux, run "ethtool eth0 | grep -i speed"... If both show 1gig, try swapping cables anyway.

ethtool returned the following on my mate machine -
ethtool enp0s31f6 | grep -i speed
Cannot get wake-on-lan settings: Operation not permitted
Speed: 100Mb/s
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
You could try disabling sync writes on the target pool to see if sync write performance is the source of your bottleneck.
How do I do this? I tried setting sync to Disabled, but I get a call error when I try to save it; it says to set it to "none", which is not one of the three options. It is currently set to Standard, and the only other option is Always.

Thank you.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080