Replication task running at 60MB/s on 100Gbit network

Joined
Sep 18, 2019
Messages
9
Hi, dear FreeNAS users !

I'm struggling with file transfers between two FreeNAS boxes, ServerA and ServerB, over NFS.

I started testing NFS after noticing that a snapshot replication was running at ~60 MB/s (on a 100 Gb network).

I will test other sharing protocols once I'm done with these tests.

I'm looking for help identifying the bottleneck.

Machines

ServerA:
Hostname ServerA
Build FreeNAS-11.1-U6
Platform Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz
Memory 195196MB
System Time Fri, 4 Oct 2019 18:22:30 +0200
Uptime 6:22PM up 290 days, 3:26, 5 users
Load Average 1.83, 2.31, 2.45
network: 2x 100 Gb in a bond (LAGG)

Storage: 32x NVMe SSDs
no SLOG/L2ARC


ServerB:
Hostname ServerB
Build FreeNAS-11.2-U6
Platform Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
Memory 65465MB
System Time Fri, 4 Oct 2019 18:23:13 +0200
Uptime 6:23PM up 2 days, 2:02, 5 users
Load Average 2.29, 2.01, 1.74
network: 04:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5] (100 Gb)

Storage: pool of 11 x 10TB drives in RAIDZ (not good, I know, but let's forget it for the sake of the current problem)
no SLOG/L2ARC

iperf (aggregate throughput 33 Gbit/s, i.e. > 4 GB/s)

Code:
root@serverA:~ # iperf -c serverB -P 8
------------------------------------------------------------
Client connecting to serverB, TCP port 5001
TCP window size: 64.2 KByte (default)
------------------------------------------------------------
[  9] local serverA port 54407 connected with serverB port 5001
[  5] local serverA port 54406 connected with serverB port 5001
[  7] local serverA port 54404 connected with serverB port 5001
[ 10] local serverA port 54408 connected with serverB port 5001
[  6] local serverA port 54405 connected with serverB port 5001
[  4] local serverA port 54403 connected with serverB port 5001
[  3] local serverA port 54401 connected with serverB port 5001
[  8] local serverA port 54402 connected with serverB port 5001
[ ID] Interval       Transfer     Bandwidth
[  9]  0.0-10.0 sec  3.46 GBytes  2.98 Gbits/sec
[  5]  0.0-10.0 sec  2.46 GBytes  2.12 Gbits/sec
[  7]  0.0-10.0 sec  6.27 GBytes  5.39 Gbits/sec
[ 10]  0.0-10.0 sec  4.32 GBytes  3.71 Gbits/sec
[  6]  0.0-10.0 sec  6.21 GBytes  5.33 Gbits/sec
[  4]  0.0-10.0 sec  5.55 GBytes  4.76 Gbits/sec
[  3]  0.0-10.0 sec  6.10 GBytes  5.24 Gbits/sec
[  8]  0.0-10.0 sec  4.09 GBytes  3.51 Gbits/sec
[SUM]  0.0-10.0 sec  38.5 GBytes  33.0 Gbits/sec


IOzone locally on ServerB (writes > 2 GB/s)

Code:
iozone -a -i 0 -s 128g -y 64k -q 1024k -Rb output.ods

The top row is record sizes (KB), the left column is file size (KB); values are throughput in KB/s.

Writer Report
                    64          128          256          512         1024
134217728      1860641      2533219      2661507      2804997      2899522
(in MB/s)     1860.641     2533.219     2661.507     2804.997     2899.522

Re-writer Report
                    64          128          256          512         1024
134217728      2088055      2594037      2656033      2853643      2799819


IOzone on ServerA against the NFS export mounted from ServerB (writes < 200 MB/s)

Code:
iozone -ac -i 0 -s 128g -y 64k -q 1024k -Rb output.ods

Temporary conclusion
  • Network is fast
  • Local writing is fast
  • Writing over NFS (< 200 MB/s) is roughly 60x slower than what the network can carry (~12.5 GB/s), and the CPUs are far from saturated (a quick cross-check is sketched just below this list)
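
A quick client-side cross-check of that NFS number (the mount point below is just an example path):

Code:
mount | grep nfs                                              # confirm the negotiated NFS options (tcp, nfsv3/v4, rsize/wsize)
dd if=/dev/zero of=/mnt/nfs_serverB/ddtest bs=1M count=8192   # simple streaming write through the mount; note that zeros
                                                              # compress to nothing on the server if compression is enabled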

If I can run any kind of test or give some info about the setup, please let me know.

Thanks for any help, ideas, testing tools, etc.

All the best,

Leo
 
Joined
Sep 18, 2019
Messages
9
After testing with another client on a much slower setup (10 Gb network: a 40 Gb card on one side, 10 Gb on the other), I see around 190 MB/s for snapshot replication.

It isn't as fast as I would like, but it's clearly faster than 60 MB/s.

Which leads me to this question: has anyone seen network file transfers between FreeNAS boxes on 10 Gb (or, even better, 100 Gb) reach around 1 GB/s? (Assuming the disk hardware can sustain those read/write rates.)
 
Joined
Sep 18, 2019
Messages
9
I've been reading a lot since last time and realized I still have a lot to learn and discover.

But first:
1) I removed all tunables on both servers (A & B)
2) I created a 1 GB test file using dd if=/dev/urandom of=/mnt/Pool/test/random.txt bs=1G count=1
3) Using an rsync module, I rsynced it from server A to server B, getting 138.3 MB/s
Code:
root@serverA:/mnt/Pool # /usr/bin/time -h rsync random.txt serverB::iozone/test/
        7.23s real              5.66s user              1.11s sys

4) Then from server B to server A, getting 137.4 MB/s (and this came entirely from ARC, as "zpool iostat" showed no reads while the transfer was running)
Code:
root@serverA:/mnt/Pool # /usr/bin/time -h rsync serverB::iozone/random.txt .
        7.28s real              5.65s user              0.71s sys

5) Server A's pool is an NVMe SSD array that wasn't under load at all during the copy (it can definitely write faster than 137 MB/s).
6) The very similar transfer rates in both directions make me think the local disks/filesystems are not the limiting factor.
7) I cannot believe this is the most I can get out of two 100 Gb cards (04:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]), especially since iperf shows very different results (though that run used 8 parallel streams; a single-stream comparison is sketched below).
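
A single-stream iperf run would be closer to what rsync, scp, and replication actually do, since each of those is one TCP connection (the -w value below is just something to try):

Code:
# single TCP stream, serverA -> serverB (the earlier test used -P 8, i.e. 8 parallel streams)
iperf -c serverB -t 30
# same, but requesting a larger socket buffer / TCP window
iperf -c serverB -t 30 -w 1M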

Could you tell me if I'm thinking about this wrongly or making wrong assumptions? Could you point me in a direction to search?

Thank you for any answer.
 
Joined
Sep 18, 2019
Messages
9
After creating an SMB share on both servers (A & B) and copying a 5 GB ISO from A to B via a Windows machine, I get around a 400 MB/s transfer rate.

The Windows machine has the same 100 Gb Mellanox card and is connected to the same switch.

Code:
root@serverB[/mnt]# zpool iostat storage 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     24.9T  75.1T      0  2.01K  12.0K   109M
storage     24.9T  75.1T      0  3.58K      0   192M
storage     24.9T  75.1T      0  4.37K      0   380M
storage     24.9T  75.1T      0  6.10K      0   543M
storage     24.9T  75.1T      0  4.75K      0   477M
storage     24.9T  75.1T      0  4.12K      0   349M
storage     24.9T  75.1T      0  5.25K      0   524M
storage     24.9T  75.1T      0  5.24K      0   490M
storage     24.9T  75.1T     45  4.24K   181K   393M
storage     24.9T  75.1T      0  4.98K      0   488M
storage     24.9T  75.1T      0  5.24K      0   521M
storage     24.9T  75.1T      0  4.76K      0   431M
storage     24.9T  75.1T      1  4.30K  7.53K   384M
storage     24.9T  75.1T     20  5.41K  82.8K   467M
storage     24.9T  75.1T     20  2.41K  83.7K   164M


SCP directly between the two machines: 162.9 MB/s
Code:
root@serverA:/mnt # scp Pool/benchmark/Win10_1809Oct_v2_EnglishInternational_x64.iso root@serverB:/mnt/storage/benchmark_smb/
root@serverB's password: 
Win10_1809Oct_v2_EnglishInternational_x64.iso    100% 5137MB 162.9MB/s   00:31


Rsync directly between the two machines: 147.6 MB/s
Code:
root@serverA:/mnt # rsync -v --progress Pool/benchmark/Win10_1809Oct_v2_EnglishInternational_x64.iso serverB::benchmark_smb/
Win10_1809Oct_v2_EnglishInternational_x64.iso
  5,386,489,856 100%  144.34MB/s    0:00:35 (xfr#1, to-chk=0/1)

sent 5,387,805,033 bytes  received 35 bytes  147,611,097.75 bytes/sec
total size is 5,386,489,856  speedup is 1.00


Yet I cannot get more than 60 MB/s when replicating the pool between the two machines (A & B), which are both running FreeNAS...
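
A way to time the replication stream in pieces, taking the network and SSH out of the path one at a time (the snapshot name is only an example):

Code:
zfs snapshot Pool/test@manual-test
# source-side send speed only (no network, no SSH); dd prints the rate when it finishes
zfs send Pool/test@manual-test | dd of=/dev/null bs=1M
# the same stream pushed over SSH, which is roughly what the replication task does
zfs send Pool/test@manual-test | ssh serverB "cat > /dev/null"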
 

purduephotog

Explorer
Joined
Jan 14, 2013
Messages
73
Hey-
I'm sorry you're going through this. I've had similar issues with a 40GbE network and not breaking 100-200 MB/s. I've since been removed from that project, so I don't know if anyone ever got my screw-up fixed, but (despite some folks' evidence here) I just never saw the performance I was told it could reach. The overall installation cost was around 400K, so we weren't exactly cheaping out on hardware.

I saw you ran iperf. I would look to that as your 'baseline' speed. If you can't saturate iperf, FreeNAS won't do it for you either. Make sure you can saturate it going over AND coming back (use the reverse flag), and then test on the other machines.
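
Something like this, assuming iperf3 is installed on both boxes (its flags differ a bit from the older iperf2 used above):

Code:
# on serverB:
iperf3 -s
# on serverA:
iperf3 -c serverB -P 8        # serverA -> serverB
iperf3 -c serverB -P 8 -R     # reverse direction: serverB -> serverA
# (or simply run the client from serverB instead)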

In my case I wanted multiple smaller files transferred concurrently as well as single massive files; it wouldn't be unusual to have several 'DVDs' worth of data per 'chunk' of work, and multiple tens of terabytes of data used per instance.

Keep at it, and I wish you well. If you DO figure it out, please let me know too.
 

radambe

Dabbler
Joined
Nov 23, 2014
Messages
10
I’ve been fighting with almost the exact same thing over the past couple of weeks.

FreeNAS server
  • Supermicro X10DRi-T4+ in a model 847 36-bay chassis (12G SAS3 expanders on a 24-slot backplane up front and a 12-slot backplane in the rear)
  • 2x Xeon E5-2620 v3 6-core/12-thread 2.4 GHz CPUs
  • 128 GB DDR4-2400 ECC
  • LSI 9300-8i
  • 24x Seagate Constellation 6TB SATA spinners deployed as one zpool of three 8-disk RAIDZ2 vdevs (local dd tests around 3 GB/s)
  • 12x Samsung EVO 1TB SATA SSDs, currently unused
  • Chelsio 62100-LP-CR dual-port QSFP28 100GbE PCIe NIC installed in a 16-lane PCIe 3 slot; direct connection to the TapeOP/utility machine over an FS.com 100GbE DAC
  • 2x dual-port Intel X540 10GBase-T NICs built into the motherboard (one port configured for the house 1G LAN/internet, two ports configured as an LACP lagg on a Netgear switch for 10G artist workstation clients)


    TapeOP/Utility machine - Windows 10 for Workstations
  • Supermicro 7047GR-TRF workstation (X9DRG-Q motherboard)
  • Dual Xeon E5-2650 v2 8-core/16-thread 2.6 GHz CPUs
  • 128 GB DDR3-1600 ECC
  • LSI MegaRAID 9361-8i 12G RAID controller
  • 8x Samsung EVO 850 1TB SATA SSDs directly connected to the MegaRAID via MiniSAS-SATA breakouts, configured as RAID5 (local AJA and Blackmagic disk tests show ~3 GB/s)
  • Chelsio 62100-CR dual-port 100GbE NIC installed in a PCIe 3 16-lane slot
  • Chelsio T520-BT dual-port 10GBase-T NIC


I too have not been able to achieve an iperf test result above 37 Gbps, even at maximum, and I did not notice any CPU core pegging. iperf results usually show only around 20 Gbps.

The best I've been able to achieve in an actual file transfer is around 1 GB/s, and that has been over the 10G link. Doing the same over the 100G direct connection usually looks worse.

I've tried the tunables posted on the 45drives blog and servethehome.com. I've also tried configuring the BIOS of both machines per the Chelsio user guide and per the system configuration notes published in multiple Chelsio white papers.

Someone recently mentioned to me that iXsystems told them they would expect to be able to achieve something around 80 Gbps in iperf using Chelsio 100G NICs.

The only accounts I've been able to find anywhere on the internet that claim to achieve anything like 80 Gbps of throughput using Chelsio NICs and FreeBSD or FreeNAS coincidentally come from either Chelsio or iXsystems.

Performance greater than what's possible with 10GbE is starting to seem more like a myth than anything else. If this is actually possible, I'd love to see even one simple demonstration of such a feat, or a published guide on what it actually takes to make it work (something yet to be seen outside of a Chelsio white paper).
 

colmconn

Contributor
Joined
Jul 28, 2015
Messages
174
If I recall correctly, there was a thread in the past couple of weeks where someone was having throughput issues on a 10G network. I think the issue ended up being that the MTU on the 10G network was not set appropriately. Have you got the MTU set appropriately on your 10G connections? I don't have a 10G network, so other than this jogging my memory I can't provide much help beyond that.
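
Something along these lines is a quick check (the interface name is only an example, and the MTU has to match on both servers and on the switch ports):

Code:
ifconfig mce0 | grep mtu     # current MTU on the high-speed interface (your interface name will differ)
ifconfig mce0 mtu 9000       # jumbo frames for a quick test; I believe making it permanent is done via
                             # the interface's Options field in the FreeNAS GUI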
 

radambe

Dabbler
Joined
Nov 23, 2014
Messages
10
Yes. Surprisingly, I found that using jumbo frames over the 40 and 100G connections does actually increase performance by at least a couple of Gbit/s. In general, over the past few years I have not found much use for jumbo frames on 10GBase-T networks. I'm not sure whether they are a bigger deal with other types of 10G, but using recent/modern 10GBase-T equipment like Intel X540/X550, Aquantia, Chelsio T520-BT, etc., I have found jumbo frames unnecessary; 1500 MTU gives basically the same performance with less hassle. This is just my opinion, based purely on the anecdotal evidence of first-hand experience.

I would really love for someone from iX or Chelsio to chime in and confirm that the iperf results they have published for 40 and 100G networks (especially using FreeBSD/FreeNAS) are in fact real and have been replicated, by anyone, anywhere.
 

purduephotog

Explorer
Joined
Jan 14, 2013
Messages
73
Yeah, I gave up. $100K systems down the drain, with performance I could've gotten from a cheaper, but integrated, SAN provider. A frustrating experience to say the least.
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
Same issue on my system: 60 MB/s on a 1 Gbit network.

Disabling encryption BEFORE running the task maxed out my network.

I think it could be related to SSH.
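
A rough way to see how much the SSH cipher costs between the two boxes (host name and transfer size are just examples; dd prints the achieved rate when it finishes):

Code:
# raw SSH throughput with the default cipher...
dd if=/dev/zero bs=1M count=4096 | ssh serverB "cat > /dev/null"
# ...and with a cheaper AES-GCM cipher for comparison
dd if=/dev/zero bs=1M count=4096 | ssh -c aes128-gcm@openssh.com serverB "cat > /dev/null"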
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi,

When testing NFS speed, you should always test both with write sync ON and with write sync OFF. That will show you a gigantic difference...

Any compression on your pools? Here, I used max compression on one of my servers for testing and the speed dropped to basically handwriting pace!
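
You can check and flip both from the shell (the dataset name is just an example; re-enable sync after testing!):

Code:
zfs get sync,compression Pool/test
zfs set sync=disabled Pool/test     # for testing only
zfs set sync=standard Pool/test     # put it back afterwards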

In all cases, performance is one of those things that is always hard to diagnose...

Good luck with your case,
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
radambe said:
I too have not been able to achieve an iperf test result above 37 Gbps, even at maximum, and I did not notice any CPU core pegging.

As a general thought, that sounds like it might be hitting the limits of the PCIe slot the card is in.

Have you checked the actual PCIe link speed being used for the card?

To check, run "lspci" (from the command line) to show the list of pci devices on the system. Then run it again with the address of your 100G network card (from the first list), but with "-vv" as the command line argument to show verbose info.

The items to look for in the returned list are "LnkSta" (Link Status) and "Width".

For example, here's a random PCIe device in my desktop:

LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

2.5GT/s means it's running at PCIe v1, and the Width of x1 means it's using a single PCIe lane. Yours should be much higher. ;)
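
In practice that's just something like this (the 04:00.0 address is only an example; use whatever the first command shows for your card):

Code:
lspci | grep -i ethernet            # find the card's PCI address, e.g. 04:00.0
lspci -vv -s 04:00.0 | grep -i lnk  # LnkCap = what the slot/card can do, LnkSta = what was actually negotiated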
 

radambe

Dabbler
Joined
Nov 23, 2014
Messages
10
JustinClift said:
As a general thought, that sounds like it might be hitting the limits of the PCIe slot the card is in. Have you checked the actual PCIe link speed being used for the card?

Yes, this is definitely a good thought, and it was my first step in troubleshooting when I was still attempting to get the Chelsio 62100 cards to work. In fact, at first I was maxing out around 12-13 Gbps and found via “lspci -vv” that the Chelsio was connecting over only 2 lanes. Clearing the CMOS/BIOS by pulling the watch battery off the motherboard and then reconfiguring the BIOS (per Chelsio’s published performance-test white papers) solved that problem.

Once lspci -vv began reporting the expected 16 lanes / 8 GT/s, I was able to get iperf3 up to ~37 Gbps. But “real world” tests (a drag’n’drop file transfer, command-line FIO tests, the command-line Blackmagic TestIO test, the GUI Blackmagic Disk Test, AJA System Test, and the ATTO disk test) all continued to show very volatile, unstable, and relatively slow results (never more than 3 to 4 Gbps on average) that were completely unacceptable for our use case, and on average they still fell well short of the Intel X540 10GbE interface’s performance.

Our temporary solution to this problem has been to simply swap out the Chelsio 62100 for a much older, much less expensive Mellanox ConnectX-3 VPI, a dual-port 40GbE card. After swapping out the Chelsio, we immediately saw the expected iperf3 result (39.5 Gbps) and real-world performance on workstation clients exceeding 2 GB/s (more than double the actual throughput of the Intel X540 10G interface).

From what we can tell, we’ve either got three bad Chelsio cards, or there is something very wrong with the current Chelsio driver or firmware for the 62100 model. The issue persisted (the card behaved the same and gave the same test results) in the latest release of FreeNAS + FreeBSD 11.3, Windows 10 for Workstations, CentOS 7, and CentOS 8.

If you scour this forum, you may also find what I did: several accounts of users being unable to get Chelsio 40G and 100G NICs to work properly, and an almost equal number of accounts of users having success when switching to Mellanox hardware instead.
 
Joined
Dec 29, 2014
Messages
1,135
I have had some challenges getting the most out of my T580s. I am trying to interpret the output of "lspci -vv" to determine how many lanes I am using.
Code:
83:00.0 Ethernet controller: Chelsio Communications Inc T580-LP-CR Unified Wire Ethernet Controller
        Subsystem: Chelsio Communications Inc Device 0000
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx+
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 64
        Region 0: Memory at fb900000 (64-bit, non-prefetchable)
        Region 2: Memory at fb880000 (64-bit, non-prefetchable)
        Region 4: Memory at fbc0c000 (64-bit, non-prefetchable)
        Expansion ROM at fb800000 [disabled]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 2048 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported-
                        RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported
                        ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF Disabled
                         AtomicOpsCtl: ReqEn-
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
        Capabilities: [b0] MSI-X: Enable- Count=34 Masked-
                Vector table: BAR=4 offset=00000000
                PBA: BAR=4 offset=00001000
        Capabilities: [d0] Vital Product Data
                Not readable
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq+ ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr+ BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
                        MultHdrRecCap- MultHdrRecEn- TLPPfxPres- HdrLogCap-
                HeaderLog: 00000000 00000000 00000000 00000000
        Capabilities: [140 v1] Virtual Channel
                Caps:   LPEVC=0 RefClk=100ns PATEntryBits=1
                Arb:    Fixed- WRR32- WRR64- WRR128-
                Ctrl:   ArbSelect=Fixed
                Status: InProgress-
                VC0:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                        Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                        Ctrl:   Enable+ ID=0 ArbSelect=Fixed TC/VC=01
                        Status: NegoPending- InProgress-
                VC1:    Caps:   PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
                        Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
                        Ctrl:   Enable- ID=1 ArbSelect=Fixed TC/VC=00
                        Status: NegoPending- InProgress-
        Capabilities: [170 v1] Device Serial Number 00-00-00-00-00-00-00-00
        Capabilities: [190 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 1
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [1a0 v1] #19
        Capabilities: [1c0 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-
                IOVSta: Migration-
                Initial VFs: 16, Total VFs: 16, Number of VFs: 0, Function Dependency Link: 00
                VF offset: 8, stride: 4, Device ID: 5810
                Supported Page Size: 00000553, System Page Size: 00000001
                Region 0: Memory at 00000000fbc3e000 (64-bit, non-prefetchable)
                Region 2: Memory at 00000000fbb00000 (64-bit, non-prefetchable)
                Region 4: Memory at 00000000fbbe0000 (64-bit, non-prefetchable)
                VF Migration: offset: 00000000, BIR: 0
        Capabilities: [200 v1] Transaction Processing Hints
                Interrupt vector mode supported
                Steering table in MSI-X table

Can you point me at where that would be in the above output? I know I picked an x8 slot to try and give this card the maximum available resources.
 
Joined
Dec 29, 2014
Messages
1,135
It is getting a little late in the day for me to fully parse this, but this is an interesting link: https://calomel.org/freebsd_network_tuning.html
I am curious whether these values seem right, or if I run the risk of completely borking my system if I start messing with these values.
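
For reference, these are the kinds of knobs that page covers; the values below are purely illustrative, not recommendations (in FreeNAS they would normally go in as Tunables rather than be run by hand):

Code:
sysctl kern.ipc.maxsockbuf=16777216         # maximum socket buffer size
sysctl net.inet.tcp.sendbuf_max=16777216    # ceiling for TCP send buffer autotuning
sysctl net.inet.tcp.recvbuf_max=16777216    # ceiling for TCP receive buffer autotuning
sysctl net.inet.tcp.mssdflt=1448            # default MSS (1500 MTU minus IP/TCP headers and timestamps)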
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
radambe said:
... an almost equal number of accounts of users having success when switching to Mellanox hardware instead.

Yeah, Mellanox makes good hardware. EMC, Isilon, etc. use them in their storage solutions for a reason.

It's just a shame Mellanox's business practices have historically been so dodgy and community-unfriendly. At least they're now making their old Windows drivers available for people to download, so the average person picking up cards from eBay isn't left out in the cold. :)
 

JustinClift

Patron
Joined
Apr 24, 2016
Messages
287
... if I run the risk of completely borking my system if I start messing with these values.

Worst-case scenario, you'd have to reinstall the FreeNAS OS. As long as you've backed up your config beforehand, you can then re-apply it. And your storage should be safe to import too.

That being said, borking your installation beyond just needing to change the "tuned" settings back is unlikely. Not impossible, but unlikely. You might as well give it a shot (after backing up your config, etc.). ;)
 
Joined
Dec 29, 2014
Messages
1,135
The card in that slot is using 8 lanes (Width x8). :)
Excellent, thanks!
I am curious about the part that seems to say this is a PCIe 2.0 connection. I believe I have PCIe 3 enabled in the BIOS. The T580 is in slot 5.



Capabilities: [70] Express (v2) Endpoint, MSI 00
This part of the "lspci" output looks like it is saying PCIe 2.0. According to the Chelsio product brief, the card is capable of PCI Express Gen3 x8.
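
For reference, the GT/s figure on the LnkCap/LnkSta lines is what maps to the PCIe generation (2.5 GT/s = 1.x, 5 GT/s = 2.0, 8 GT/s = 3.0); the "(v2)" in "Express (v2) Endpoint" refers to the capability structure version rather than the link speed. A quick way to pull just those lines (address taken from the output above):

Code:
lspci -vv -s 83:00.0 | grep -E "LnkCap|LnkSta"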
 