SOLVED! Very Slow Disk Performance

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Ok.. I don't think the call error kept it from changing. When I went into the legacy UI, it was set to disabled... I enabled and disabled it again just in case. I've re-run the tests, but no change.

I had the drives before I had the parts to build the new system. I connected them to my Ubuntu machine and ran the in-depth SMART tests on them, then created a ZFS mirror on that machine and copied a bunch of media over to it. (I was hoping I'd be able to just import the pool on the new system.. but it never saw it.) On that system the transfer rates to the drives were normal. I did not blank the drives or anything before putting them into the FreeNAS build... I just created them as a mirror there when it did not see them for import. Could not wiping them cause this trouble? The SSD had been formatted as ext4 prior to this install.. I had not wiped it either.

Thank you everyone for your time and ideas.. even if not successful I have learned some things and that is always a benefit.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This is a thing you might want to look at to test the performance of the pool:

solnet-array-test (for drive / array speed) non destructive test
https://forums.freenas.org/index.php?resources/solnet-array-test.1/

It should tell you something about how your disks are doing. As to the question of if the disks are configured properly, you might want to look at the output of gpart list which should show you something like this:
Code:
Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 4000786939904 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: af7c42c6-bf05-11e8-b5f3-0cc47a9cd5a4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 4000786939904
   offset: 65536
   type: freebsd-zfs
   index: 1
   end: 7814037119
   start: 128
Consumers:
1. Name: da4
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

My disks only have one partition because I don't have swap space on my storage drives, but you should have two partitions with the first being a 2GB swap partition and the second being the rest of the drive as a ZFS partition.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
This is a thing you might want to look at to test the performance of the pool:

solnet-array-test (for drive / array speed) non destructive test
https://forums.freenas.org/index.php?resources/solnet-array-test.1/

It should tell you something about how your disks are doing. As to the question of if the disks are configured properly, you might want to look at the output of gpart list which should show you something like this:
Code:
Geom name: da4
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da4p1
   Mediasize: 4000786939904 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: af7c42c6-bf05-11e8-b5f3-0cc47a9cd5a4
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 4000786939904
   offset: 65536
   type: freebsd-zfs
   index: 1
   end: 7814037119
   start: 128
Consumers:
1. Name: da4
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3

My disks only have one partition because I don't have swap space on my storage drives, but you should have two partitions with the first being a 2GB swap partition and the second being the rest of the drive as a ZFS partition.


Thanks.. I will see if I can figure out the array test. I tried to toggle this as code.. is that the right way? Here is the result of my gpart list -
Code:
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 234454999
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r0w0e0
   efimedia: HD(1,GPT,5fbb3e91-45e1-11e9-a7d5-7085c281a37f,0x80,0x400000)
   rawuuid: 5fbb3e91-45e1-11e9-a7d5-7085c281a37f
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada0p2
   Mediasize: 117893410816 (110G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r1w1e2
   efimedia: HD(2,GPT,5fc0ae5b-45e1-11e9-a7d5-7085c281a37f,0x400080,0xdb97f58)
   rawuuid: 5fc0ae5b-45e1-11e9-a7d5-7085c281a37f
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 117893410816
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 234454999
   start: 4194432
Consumers:
1. Name: ada0
   Mediasize: 120040980480 (112G)
   Sectorsize: 512
   Mode: r1w1e3

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037127
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(1,GPT,514cca33-45e8-11e9-ac5a-7085c281a37f,0x80,0x400000)
   rawuuid: 514cca33-45e8-11e9-ac5a-7085c281a37f
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada1p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,5162f500-45e8-11e9-ac5a-7085c281a37f,0x400080,0x1d180be08)
   rawuuid: 5162f500-45e8-11e9-ac5a-7085c281a37f
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: ada1
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5

Geom name: ada2
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 7814037127
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: ada2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(1,GPT,53111c57-45e8-11e9-ac5a-7085c281a37f,0x80,0x400000)
   rawuuid: 53111c57-45e8-11e9-ac5a-7085c281a37f
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada2p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,5324e516-45e8-11e9-ac5a-7085c281a37f,0x400080,0x1d180be08)
   rawuuid: 5324e516-45e8-11e9-ac5a-7085c281a37f
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: ada2
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 62656607
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 272629760 (260M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,ae313846-4610-11e9-aa98-7085c281a37f,0x28,0x82000)
   rawuuid: ae313846-4610-11e9-aa98-7085c281a37f
   rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
   label: (null)
   length: 272629760
   offset: 20480
   type: efi
   index: 1
   end: 532519
   start: 40
2. Name: da0p2
   Mediasize: 31792824320 (30G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 272650240
   Mode: r1w1e1
   efimedia: HD(2,GPT,ae441822-4610-11e9-aa98-7085c281a37f,0x82028,0x3b38000)
   rawuuid: ae441822-4610-11e9-aa98-7085c281a37f
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 31792824320
   offset: 272650240
   type: freebsd-zfs
   index: 2
   end: 62627879
   start: 532520
Consumers:
1. Name: da0
   Mediasize: 32080200192 (30G)
   Sectorsize: 512
   Mode: r1w1e2  
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

here is the result of my gpart list -
These partitions look normal. I don't think that is creating any problems for you.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
It is early on in the test, but looks ok so far.

Code:
Selected disks: ada1 ada2
<HGST HMS5C4040BLE640 MPAOA5D0>    at scbus2 target 0 lun 0 (pass1,ada1)
<HGST HMS5C4040BLE640 MPAOA5D0>    at scbus3 target 0 lun 0 (pass2,ada2)
Is this correct? (y/N): y
Performing initial serial array read (baseline speeds)
Sun Mar 17 18:03:53 CDT 2019
Sun Mar 17 18:08:23 CDT 2019               
Completed: initial serial array read (baseline speeds)

Array's average speed is 125.7 MB/sec per disk

Disk    Disk Size  MB/sec %ofAvg
------- ---------- ------ ------
ada1     3815447MB    129    103
ada2     3815447MB    123     97

Performing initial parallel array read
Sun Mar 17 18:08:23 CDT 2019
The disk ada1 appears to be 3815447 MB.       
Disk is reading at about 129 MB/sec        
This suggests that this pass may take around 493 minutes
                                           
                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
ada1     3815447MB    129    129    100
ada2     3815447MB    123    125    102   
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Pool test update. Here are the numbers from the overnight test. I don't see any indication of the problem. Actually... I see it says the parallel seek-stress read is only 26 MB/sec... The serial read is over 120 MB/s... is this common for a seek-stress pass? If not, how can that be fixed?
Code:

Awaiting completion: initial parallel array read
Mon Mar 18 04:52:32 CDT 2019
Completed: initial parallel array read

Disk's average time is 38350 seconds per disk

Disk    Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
ada1        4000787030016   38052     99
ada2        4000787030016   38649    101

Performing initial parallel seek-stress array read
Mon Mar 18 04:52:32 CDT 2019
The disk ada1 appears to be 3815447 MB.     
Disk is reading at about 26 MB/sec       
This suggests that this pass may take around 2435 minutes
                                         
                   Serial Parall % of
Disk    Disk Size  MB/sec MB/sec Serial
------- ---------- ------ ------ ------
ada1     3815447MB    129     26     20
ada2     3815447MB    123     30     25

Awaiting completion: initial parallel seek-stress array read
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
I did a test tonight using a 9.7GB movie. Using the file manager on my MATE system, I copied it from the SSD on the FreeNAS to the mirrored pool on the FreeNAS... it took forever... over 9 minutes. I then logged into the FreeNAS by SSH from the MATE computer and used the copy command to copy the file from the folder one way.. and then back the other way. It took about a minute and twenty seconds each way.

This is what I'm thinking.. tell me what I'm missing if I have it wrong..
1. The drives are performing normally, because via the command line they transfer quickly.
2. Things I try to do over the network are extremely slow.... reasons?
Reason 1... the Realtek NIC
Reason 2... the FreeNAS OS does not work well with my MB or memory... (I don't know enough about the OS and how it copies things... is it using the same commands behind the scenes that I used from the command line?)
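One way to separate those two suspects is to time a copy locally on the FreeNAS box, taking the network out of the picture entirely. A rough sketch with dd (the `/mnt/tank` dataset path is hypothetical; substitute your own pool's mount point):

```shell
# Write a 4 GiB test file straight to the pool, bypassing the network.
# /dev/zero compresses perfectly under lz4, so for a realistic number
# use /dev/random or an existing media file as the source instead.
dd if=/dev/random of=/mnt/tank/ddtest.bin bs=1M count=4096

# Read it back; /dev/null just discards the data.
dd if=/mnt/tank/ddtest.bin of=/dev/null bs=1M

# Clean up the test file.
rm /mnt/tank/ddtest.bin
```

dd prints bytes transferred and elapsed time when it finishes; if those local numbers are healthy while network copies crawl, the disks are cleared as suspects.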

I had top running under all tests.. the system was using very little resources on memory or CPU from what I could tell. I did take a few screen shots.

Screenshot at 2019-03-18 18-40-19.png
The one above is from when requesting over the network.

Screenshot at 2019-03-18 18-46-26.png
This one is from when doing it using the command line.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Did the networking get figured out? If I read correctly you are going from Freenas to the router (you removed the switch) and from the MATE system to the router. And you have 1gb/s hardware on the Freenas. And if you have 1gb/s on the MATE system, then why is the network running 100mb/s? What is the router model? Does it have 1gb/s ports or 100mb/s ports? If the router is 100mbps then problem (potentially) solved.

I would try direct connecting Freenas to the MATE system. You should be getting 100 MB/s file transfers using 1gbe ports on both systems.
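The direct-connect test above pairs naturally with iperf3, which measures the raw network path with no disks involved. A sketch (the IP address is just an example; use the FreeNAS box's actual address):

```shell
# On the FreeNAS box: start an iperf3 server (Ctrl-C to stop).
iperf3 -s

# On the MATE box: run a 10-second TCP test against the FreeNAS IP.
iperf3 -c 192.168.50.50 -t 10

# Rule of thumb for converting link speed to file-transfer speed:
# divide Mbit/s by 8. A healthy gigabit link (~936 Mbit/s) gives:
echo "$((936 / 8)) MB/s"   # prints "117 MB/s"
```

If iperf3 reports ~94 Mbit/s instead of ~940, some hop on the path has negotiated 100 Mbps, and no NIC swap will help until that is found.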
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
2. Things I try to do over the network are extremely slow.... reasons?
Reason 1... the Realtek NIC

"Realtek are known for poor performance", https://www.ixsystems.com/community/threads/freenas-and-realtek®-rtl8111h.57506/ . That's more due to the FreeBSD drivers for Realtek than anything else.

Reasonable idea: Stick an addon Intel NIC into a PCIe port and try again. Specific models are in the guide at https://www.ixsystems.com/community/resources/hardware-recommendations-guide.12/

Not entirely reasonable idea: Learn kernel coding; compare FreeBSD Realtek drivers to Linux Realtek drivers; code and test and submit Realtek drivers that work great; wait for those to show up in FreeBSD; wait for them to show up in FreeNAS. Definitely quite the windmill to be tilting at. So much easier to spend 10 bucks and put an Intel NIC in there.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
You need to setup the ability to SSH into your FreeNAS instead of trying to use the shell thought the GUI.

Just keep in mind that anyone viewing from a mobile platform will not be able to see what is in your signature, so if someone asks, that is probably why.

More likely it is that Realtek NIC. FreeNAS really doesn't like those.

On FreeNAS? On Windows they are fine because Realtek makes a Windows driver. The BSD driver is created without any input from the hardware vendor.

qxotic, do try what Stux suggested with regard to disabling sync write. It will allow you to determine if that is indeed part of the problem. If you find that it is, then you know that a SLOG device is the answer.

If you don't get any other solution, either of those Intel cards should be fine.

I bought the intel NIC - Intel Gigabit CT PCI-E Network Adapter EXPI9301CTBLK
I installed it and rebooted... both it and the on-board are connected to the LAN... I don't see any sign of the new NIC.. not in the system or running ifconfig. Does the on-board have to be disabled? I tried taking it out and putting it back in, but still not seen. How do I change the system over to the new pcie NIC?
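A quick way to check whether FreeBSD detected the card at all, before worrying about configuration, is to look for it on the PCI bus (a sketch; device names vary by card and slot):

```shell
# List PCI devices with vendor/device descriptions; an Intel CT
# (82574L) card should appear attached to the em(4) driver as em0.
pciconf -lv | grep -B4 -i ethernet

# Interfaces the kernel actually attached:
ifconfig -l
```

If the card shows in pciconf but not in ifconfig, it's a driver issue; if it shows in neither, the slot, BIOS settings, or the card itself are suspect.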
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Did the networking get figured out? If I read correctly you are going from Freenas to the router (you removed the switch) and from the MATE system to the router. And you have 1gb/s hardware on the Freenas. And if you have 1gb/s on the MATE system, then why is the network running 100mb/s? What is the router model? Does it have 1gb/s ports or 100mb/s ports? If the router is 100mbps then problem (potentially) solved.

I would try direct connecting Freenas to the MATE system. You should be getting 100 MB/s file transfers using 1gbe ports on both systems.
I wish it were running at 100... (that's about what I'd expect for reads from my spinning HDDs). It drops to about 10 pretty quickly... trying to get a new Intel NIC up and running in it to see if that was the problem.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
I wish it were running at 100... (that's about what I'd expect for reads from my spinning HDDs). It drops to about 10 pretty quickly... trying to get a new Intel NIC up and running in it to see if that was the problem.

Right, because your network is running at 100mb/s (according to those iperf tests).

I don't understand the setup. You have a freenas box with a 1 gb/s nic. You have a linux box with a 1 gb/s nic. Both boxes are connected to your router.

So at what speed is the switch in the router running? Unless you verify (a) the router has 1 gb/s ports and (b) your links on your freenas and linux boxes are connected at 1 gb/s, then you still have a network problem.

If you can't verify the router directly (but you should be able to see the spec given the model), direct connect your freenas to your linux box and test the network speed.

The realtek nic is not going to be the issue with your network running 100 mbps when you have 1 gbps nics. (It may cause other issues under heavier loads, but this is not the issue you are seeing. In my opinion.) For example, If the router has 100 mbps ports then put whatever card you want into the freenas and linux boxes. Your network is gonna run 100 mbps.
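The arithmetic behind that point shows why a 100 Mbit/s link lines up with the ~10-11 MB/s transfers reported in this thread:

```shell
# Line-rate ceilings in MB/s (bits to bytes: divide by 8).
awk 'BEGIN { printf "100 Mbit/s  -> %.1f MB/s ceiling\n", 100/8 }'
awk 'BEGIN { printf "1000 Mbit/s -> %.1f MB/s ceiling\n", 1000/8 }'
# TCP/SMB overhead trims roughly another 10%, so ~11 MB/s observed
# on a 100 Mbit/s link is about what you'd expect.
```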
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Right, because your network is running at 100mb/s (according to those iperf tests).

I don't understand the setup. You have a freenas box with a 1 gb/s nic. You have a linux box with a 1 gb/s nic. Both boxes are connected to your router.

So at what speed is the switch in the router running? Unless you verify (a) the router has 1 gb/s ports and (b) your links on your freenas and linux boxes are connected at 1 gb/s, then you still have a network problem.

If you can't verify the router directly (but you should be able to see the spec given the model), direct connect your freenas to your linux box and test the network speed.

The realtek nic is not going to be the issue with your network running 100 mbps when you have 1 gbps nics. (It may cause other issues under heavier loads, but this is not the issue you are seeing. In my opinion.) For example, If the router has 100 mbps ports then put whatever card you want into the freenas and linux boxes. Your network is gonna run 100 mbps.

The router is an Asus RT-AC66U B1 - the ports are all gigabit ports.. it definitely seems to be a problem with the FreeNAS OS talking to the Realtek NIC.. trying to get an Intel PCIe card to work.. but I suspect I got a dead one... going to test it at work on a different machine today to verify. If so.. there will be a delay in testing while it gets returned. I get a decent transfer between the MATE system and my Mint system.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
I got to work early today to test the Intel NIC... it worked with no problems on the Win10 system here.. didn't even have to load drivers. I can think of two reasons... 1. The ASRock MB has a bad PCIe slot... or bad BIOS support... (I checked their website prior to install for BIOS updates.. surprised there were not any) or 2. My FreeNAS install was messed up so badly because of the Realtek NIC. Any other reasons I am missing?

Even though I was able to connect to the system over the Realtek... my testing last night leads me to believe it was never really set up... for two reasons.. I noticed in the browser GUI, under Network - interfaces - it was not listed.. nothing was. If I went to add interface (as I tried to do to find the Intel NIC) only the re0 would show.. with no message scrolling at the bottom as referred to in the 11.2 user guide as follows: "An interface can only be added when there is a NIC that has not already been configured. Clicking ADD when there are no NICs available will display a message across the bottom of the screen that All interfaces are already in use.. "

I also tried option one to set up network with the monitor and keyboard attached to the NAS... It did not list anything, only the q option to quit.

I am thinking tonight of disabling the Realtek in Bios.. installing the Intel in the PCIe slot and doing a fresh install of FreeNAS... wouldn't that give it a better chance of seeing it and setting it up correctly? Is there a better process/approach?

Thank you all.
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Ok.. the Intel NIC worked fine at work... came home and installed it, disabled the Realtek in BIOS, and did a clean install of FreeNAS.. better.. but still only getting 11 MB/s at the end of large file transfers.

There is nothing listed under network - interfaces... shouldn't the Intel NIC show there?
Here is what ifconfig gives me -
Code:
ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
        ether 00:1b:21:12:05:03
        hwaddr 00:1b:21:12:05:03
        inet 192.168.50.50 netmask 0xffffff00 broadcast 192.168.50.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        groups: lo

I noticed the dashboard shows the correct name of the CPU, but the console menu after boot refers to it as an unknown Intel CPU.. is that normal?
Should I try adding the Intel to network interfaces? (I still have a feeling that is where the problem lies.)
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Yes, you expect your Intel NIC to show up in the UI under Network - Interfaces. That it does not could just be a UI bug. I stand corrected :). I have a LAG, for no good reason other than that it was an option, and that counts as "manually configured" and shows up here.

I'm going through the thread and don't see your zpool list, or any mention of capacity. ZFS can have rather drastic performance drops once it's over 80% full.

Hang in there! Surely there's an explanation. For testing purposes, trying a transfer from a host directly attached to FreeNAS, bypassing your network, also seems prudent. Just so you know for certain where to point the finger: If transfer is still slow, the issue is somewhere with FreeNAS or the FreeNAS/host interaction such as sync writes. If transfer is then fast, it's time to look at what your network is doing to the transfer.
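The capacity point above can be checked in seconds with two read-only commands:

```shell
# Pool-level capacity and fragmentation; watch the CAP column --
# ZFS performance can drop sharply past roughly 80% full.
zpool list

# Per-dataset breakdown of space used and available.
zfs list -o name,used,avail,refer,mountpoint
```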
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,924
Yes, you expect your Intel NIC to show up in the UI under Network - Interfaces. That it does not could just be a UI bug.
Only if it's been manually configured - read the docs...
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
Yes, you expect your Intel NIC to show up in the UI under Network - Interfaces. That it does not could just be an UI bug. I stand corrected :). I have a LAG, for no good reason other than that it was an option, and that counts as "manually configured" and shows up here.

I'm going through the thread and don't see your zpool list, or any mention of capacity. ZFS can have rather drastic performance drops once it's over 80% full.

Hang in there! Surely there's an explanation. For testing purposes, trying a transfer from a host directly attached to FreeNAS, bypassing your network, also seems prudent. Just so you know for certain where to point the finger: If transfer is still slow, the issue is somewhere with FreeNAS or the FreeNAS/host interaction such as sync writes. If transfer is then fast, it's time to look at what your network is doing to the transfer.

The pool is two HGST 4 TB drives in a mirror... not much of anything on them yet.
I also have a 120GB WD Green SSD I plan to use for the Plex jail and transcoding.

It doesn't seem like it could be the network... the progress bar on the MATE machine, for a smaller file (2GB or under), goes right over (because it is going into memory on the FreeNAS initially), then takes several minutes to clear.... A large file (9.7GB) starts off the same... then at about 3GB you see the transfer speed steadily decrease until it's only around 11 MB/s.. so it seems to indicate a ZFS slowdown... I'm trying turning off sync again after I get things set up.. and will do some more iperf3 tests. It may just be that the hardware doesn't play nice.. but I'm not ready to give up on it yet. I've got some time to work on it before I need to start using it.
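The sync-write test suggested earlier in the thread can be sketched like this ("tank/media" is a hypothetical dataset name; substitute your own, and restore the default when done):

```shell
# Check the current sync setting on the dataset.
zfs get sync tank/media

# Temporarily disable sync writes, then rerun the file-copy test.
zfs set sync=disabled tank/media

# Restore the default afterward regardless of the result.
zfs set sync=standard tank/media
```

If the copy is fast with sync disabled, sync writes are the bottleneck and a SLOG device is the usual answer; if it's still slow, the problem lies elsewhere.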
 

qxotic

Dabbler
Joined
Mar 16, 2019
Messages
28
SOLVED! I'm not sure if it was the router re-boot or just removing and re-plugging cables.. but iperf now tests at near a gigabit... adding the Intel NIC didn't hurt, because the retransmits are now zero. The 9.7 GB test file finished in a minute and forty seconds... so much better; now I can get back to setting up Plex. Thank you all for the suggestions and your time. I did learn a few things I'm sure I'll need again sometime in the future.
 