NFS dies under load

Status
Not open for further replies.

ideanatewi

Cadet
Joined
May 13, 2013
Messages
8
I've been using the latest FreeNAS 9.1 release as a CrashPlan backup target. Under load it seems to stop responding to NFS requests after a few hours. It used to help to stop and restart the NFS service, but with the latest version a full FreeNAS reboot is required.

Am I the only one experiencing this?

How can I help to debug this?
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
I use CrashPlan as well for my backups, and I've never seen NFS die completely. If I'm moving something large from my desktop to the NAS, NFS becomes very sluggish during the move for everyone here in the house, but once the move is complete everything ramps back up. I noticed this even with my previous NAS hosted on Ubuntu (it was an external enclosure). I'd like to find out why it's so slow during large moves, but I have yet to figure it out. My only thought is that my switch can't handle it? Not entirely sure that's the case, though - it's a large enough switch that it should do just fine.
 

ideanatewi

Cadet
Joined
May 13, 2013
Messages
8
This is the enterprise version of CrashPlan, with about 30 clients backing up to the CrashPlan server, which in turn mounts the storage via NFS from the FreeNAS 9.1-RELEASE server running on a Dell R720xd with twelve individually-addressed 3TB drives. When all the clients connect in the morning it can be quite a load, as the CrashPlan server has four GbE interfaces and the FreeNAS server has a 10GbE interface.

Once this happens, there's no fix other than rebooting the FreeNAS server - unmounting and remounting the NFS export does not work; as a matter of fact, you can't unmount it without the -l ("el") flag. I'm sure it's not a client problem, since once I reboot the FreeNAS server I can mount the NFS share again without touching anything on the client.
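For reference, this is what it looks like on the Linux client while the server is wedged (the mount point name is just an example):

umount /mnt/backup       # hangs or errors out while the server is unresponsive
umount -f /mnt/backup    # forced unmount - may still fail
umount -l /mnt/backup    # lazy unmount - the only thing that detaches it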

This happens regularly, and it has only gotten worse between the beta and the release (on the beta, stopping and then starting the NFS service would let the client remount the share; now it requires a full reboot).

FreeNAS is unusable in this state.

 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
I believe this is the same issue I had; I thought it was my NIC driver.

Instead of rebooting the server, try "ifconfig (NIC) down" and then "ifconfig (NIC) up" -
that resets the NFS transfers for me.

Best
Andreas
 

ideanatewi

Cadet
Joined
May 13, 2013
Messages
8
I've moved everything back off the server; it was just getting too bad. I was going to try OpenFiler or one of the Solaris clones next, since FreeNAS seems way too buggy.
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Been there, done that. FreeNAS has some great features, and once you get it working 100% you won't want to go back.

If you are looking for other distros, then check out SmartOS (http://smartos.org/)
and OmniOS with the napp-it plugin (http://omnios.omniti.com/) (http://napp-it.org/).

You will miss the ease of use FreeNAS has to offer, but I guess you will find out ;)

BTW, this NFS problem is not there on FreeNAS 8.3.1.
We are using FreeNAS as media storage for .DPX sequences in video production, for real-time 4K deliverables.
The share is NFSv4, on FreeNAS 8.3.1 with lots of RAM, plus a 10GbE NIC.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Your loss. You might have gotten more responses if you had included your hardware and all that stuff as mentioned in the forum rules.
 

ideanatewi

Cadet
Joined
May 13, 2013
Messages
8
dvc9 said:
I believe this is the same issue I had; I thought it was my NIC driver.

Instead of rebooting the server, try "ifconfig (NIC) down" and then "ifconfig (NIC) up" - that resets the NFS transfers for me.

Best
Andreas


The only interface plugged in is the 10GbE, and the web interface is responding fine over it. I initially thought the same and unplugged the lone 1GbE interface I was using for management, because the system was for some reason replying over it even though the clients were talking to the 10GbE.

I just tried "ifconfig ix1 down; sleep 5; ifconfig ix1 up" as a single command so I wouldn't lose the sole connection. Lo and behold, the client's "df" command, which was hung, suddenly completed!

Interesting. What does that say to you?

LOL, I haven't given up COMPLETELY. Just got frustrated for a while; I need my client backups to complete for a bit before I try something else.

As for FreeNAS 8, it wouldn't boot - it immediately panicked. Support said they wouldn't fix it and to wait for v9.

Is there an easy way to get a hardware dump from the FreeNAS web interface? I posted details about the Dell model (R720xd) and disk architecture above, but getting all the details about which particular PERC card, which Ethernet cards, drives, etc. is painful. I have a Linux background and am not so familiar with BSD - /proc seems to be empty. The output of dmidecode seems a bit too detailed for a forum post, but I'll include it as an attachment for completeness.
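For what it's worth, a handful of stock FreeBSD commands give a reasonably compact hardware summary from the shell (these are base-system tools, so they should be present on FreeNAS too, but I'm going from plain FreeBSD here):

sysctl hw.model hw.ncpu hw.physmem   # CPU model, core count, RAM
pciconf -lv                          # PCI devices: NICs, RAID/HBA (PERC), etc.
camcontrol devlist                   # disks as the system sees them
ifconfig -a                          # network interfaces and their drivers (ix, igb, ...)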
 

Attachments

  • dmidecode.txt
    32.5 KB

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Cyberjock - "Idea" published a bugreport some days ago, with all inc. and where told to go to the forum (http://support.freenas.org/ticket/2427)

Ideanatewi -

Yeah, I know about that problem; I made a small script to reset the NIC (rough sketch below).
Do you use the Intel 10GbE card?

I think this is the ixgbe driver in the 9.1 release, which somehow hangs with lots of smaller files in a big queue.
I have the X520 card; one port is in use for high-end video, and 3 or 4 times a day I have to reset that port with ifconfig.

The other port is "low" traffic, just audio editing in Pro Tools, and I don't need to reset that port at all =)
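Something along these lines, if anyone wants the same workaround (assuming the busy port is ix0 - adjust to your interface):

#!/bin/sh
# bounce the 10GbE port in one shot so the command survives the link drop
IF=ix0
ifconfig ${IF} down
sleep 5
ifconfig ${IF} up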
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Cyberjock - "Idea" published a bugreport some days ago, with all inc. and where told to go to the forum (http://support.freenas.org/ticket/2427)

Yeah, that's a typical response for people asking for support in the bug tracker. The bug tracker is for tracking bugs (or suspected bugs), not for technical support, which is what Idea was asking for.
 

ideanatewi

Cadet
Joined
May 13, 2013
Messages
8
Cyberjock,

A request for technical support would be something along the lines of "How do I do...?", not "Here's a problem - what information do you need so I can help you locate the bug?"

Look at post #3 on that bug report.

When something configured correctly doesn't work as advertised, I call that a bug, not a request for technical support.

And now we have two users with the same issue, so I'll update the bug report with a link to this topic. Maybe it's enough to go on now.

Dvc9,

Yes, I have the Intel X520 DP 10Gb DA/SFP+ Server Adapter in my Dell R720xd, that's the card that's apparently having issues.

Though, it's really odd that the Web GUI keeps responding on that same interface even when NFS stops working!
 

ideanatewi

Cadet
Joined
May 13, 2013
Messages
8
OK, so I'm supposed to "update the description of this bug with the data found" on the bug tracker... but I can't Edit the Description at all, only add to the discussion history.

Does anyone here know if that will get this ticket looked at again?
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Sorry Ideanatewi, FreeNAS is an open-source project, a collaboration, and it kind of doesn't work like that =)
If you want commercial support, or someone to "just fix" it, then I guess the iXsystems people are the best bet (ixsystems.com).
We don't use them, because I'm fixing this stuff on the side of what I'm doing at my job.

When I have a workaround for my part, and if no one has published a fix, then I'll happily share it with you and everyone here at the FreeNAS forum =)
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
About the network traffic, and why the web interface keeps working:

It could be the difference between UDP and TCP, because NFSv3 uses UDP all the way if you don't specify TCP.
I'll try that in the morning =)
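From the Linux client side, forcing NFSv3 over TCP is just a mount option - something like this (paths are placeholders):

mount -t nfs -o vers=3,proto=tcp server:/mnt/tank/share /mnt/share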

My belief is that there is some sort of ixgbe driver error,
because a quick search ( https://www.google.no/search?q=ixgb...69i57j69i62l3.4280j0&sourceid=chrome&ie=UTF-8 )
gives me lots of ideas.

Testing around =)

If I can get a guide to switch the driver, or if someone can recompile a newer ixgbe driver, that would be great!
 

freefan

Cadet
Joined
Sep 18, 2013
Messages
8
Hi there, I too am experiencing load / "box getting stuck" issues with the newest FreeNAS (9.1.1) that I wasn't experiencing in the 8.* branch (this is a long one, so please bear with me).

First off, both boxes' environments:
=============
====================
(Box 1.) FreeNAS box info 9.1.1
====================
=============
[root@Box00] ~# uname -a
FreeBSD Box00 9.1-STABLE FreeBSD 9.1-STABLE #0 r+16f6355: Tue Aug 27 00:38:40 PDT 2013 root@build.ixsystems.com:/tank/home/jkh/src/freenas/os-base/amd64/tank/home/jkh/src/freenas/FreeBSD/src/sys/FREENAS.amd64 amd64
=============
System load and memory stats of the FreeNAS box (16GB of memory - deduplication off)
=============
last pid: 19466; load averages: 0.54, 0.53, 0.51 up 9+06:48:57 10:39:48
26 processes: 1 running, 25 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 0.0% interrupt, 100% idle
Mem: 5368K Active, 381M Inact, 14G Wired, 763M Buf, 1598M Free
ARC: 11G Total, 1874M MFU, 7886M MRU, 32K Anon, 128M Header, 1730M Other
Swap: 8192M Total, 8192M Free
=============
Hard drive space usage stats of the FreeNAS box (1.46TB used out of 5.44TB)
=============
[root@filer00] ~# zpool list
myvol00 5.44T 1.46T 3.98T 26% 1.00x ONLINE /mnt

Error digging: no errors in /var/log/messages or dmesg when what I'm about to describe below happens, but I think the load increases on the FreeNAS box too - you'll see more in the description below.


=============
====================
(Box 2.) FreeNAS client info - KVM environment (tried both Ubuntu and CentOS)
====================
=============
[root@client] ~# uname -a
Linux myboxA01 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
=============
System load and memory stats (8GB of memory)
=============
top - 16:35:58 up 15:06, 2 users, load average: 0.06, 0.04, 0.08
Tasks: 147 total, 1 running, 145 sleeping, 1 stopped, 0 zombie
Cpu(s): 0.3%us, 0.2%sy, 0.0%ni, 98.6%id, 0.9%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7889784k total, 5393792k used, 2495992k free, 112964k buffers
Swap: 8028152k total, 0k used, 8028152k free, 2039944k cached
=============
Mounting the NFS FreeNAS share in /etc/fstab with the entry below.

I've also tried many different ways of mounting, but this issue still arises. Here is my latest mount config:


=============
10.95.1.40:/mnt/myvol00/mydir /mnt/mydirnfs nfs soft,timeo=900,retrans=3,tcp 0 0


Description of my problems (first part - rsync process getting "stuck" but running again when a user issues a command at the shell; I did end up getting through this).
(This first part may be related to the 'dying under load' part.)
OK, it all started when I first installed FreeNAS 9.1.1, coming from the previous 8.* branch. After I installed the latest version and everything was good, I began rsyncing data back onto FreeNAS from an external volume I had backed up to before the install, connected separately from the main zpool (i.e. an external hard drive / a separate volume on an LSI card) - when I noticed FreeNAS would get "stuck" during the rsync. Not completely hung, not frozen, but 'stuck' being the key word here. I would start the rsync, come back a few hours later and run df -ah at the shell expecting much progress to have been made, only to find maybe 4GB transferred. That's odd, I thought. I topped the box, and it looked like the rsync process was still alive and well, but its load was very low, as if it had gone to sleep or something. So I come back to the shell and df -ah a couple more times and see that the rsync has increased to 5GB, and the process load kicks up again. Strange, I thought... let me wander off for another couple of hours. Sure enough, I came back a couple of hours later, ran df -ah, and it's only at 8GB. Strange again. So I rinse, wash and repeat - same behavior! rsync only continues if the box gets some kind of user interactivity... a df in this case. So over a few days I painstakingly got all the data transferred by continually running df on the box to keep it 'awake' (weird, huh?).
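For what it's worth, a trivial keep-alive loop along those lines would at least save the typing (the five-minute interval is arbitrary):

# poke the filesystems periodically so the transfer keeps moving
while true; do
    df -ah > /dev/null
    sleep 300
done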

Once that was all done and my data was back on my fresh new FreeNAS 9.1.1, I chalked the rsync transfer issue up to maybe a fluke, or some weird rsync bug that had nothing to do with FreeNAS - anyway, I didn't really care, I finally had all my data back in place on a new version of FreeNAS (woohoo!). That was until my real, critical problem started happening:

Description of my problems (second part - client core dumping, with the following error messages and core dump):
So I install my client (the first time being Ubuntu 12, and later, to rule out any client-OS-specific issues, CentOS 6.4), get the client set up with nfs-utils, and everything mounts great - no problems the first time. "Sweet," I'm thinking. I configure my site and everything comes up - awesome, all the data is there! So I go away thinking "all in a day's work, everything is cool"... until I come back about two hours later. I load the site up - everything has come to a crawl, it doesn't load. I log into the client and top the box - the load is very high and climbing, yet I see no process causing this really high load. So I start to check the web server logs - nothing. Then I check kern.log on Ubuntu and the messages log on CentOS and see these:

Under load, and randomly, the FreeNAS client gets these:

Sep 16 02:18:46 myboxA0 kernel: [ 3600.648109] INFO: task wget:2373 blocked for more than 120 seconds.
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648250] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648347] wget D ffff88021fd14580 0 2373 2372 0x00000000
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648353] ffff880210509b98 0000000000000002 ffff880210509fd8 0000000000014580
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648357] ffff880210509fd8 0000000000014580 ffff880211b85dc0 ffff88021fd14e30
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648360] ffff88021ffb72e8 0000000000000002 ffffffff8113f130 ffff880210509c10
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648364] Call Trace:
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648375] [<ffffffff8113f130>] ? wait_on_page_read+0x60/0x60
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648381] [<ffffffff816f843d>] io_schedule+0x9d/0x130
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648384] [<ffffffff8113f13e>] sleep_on_page+0xe/0x20
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648387] [<ffffffff816f6180>] __wait_on_bit+0x60/0x90
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648390] [<ffffffff8113eeff>] wait_on_page_bit+0x7f/0x90
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648394] [<ffffffff81085560>] ? wake_atomic_t_function+0x40/0x40
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648399] [<ffffffff8114bac1>] ? pagevec_lookup_tag+0x21/0x30
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648402] [<ffffffff8113f011>] filemap_fdatawait_range+0x101/0x190
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648485] [<ffffffff8114ab0e>] ? do_writepages+0x1e/0x40
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648488] [<ffffffff811406e9>] ? __filemap_fdatawrite_range+0x59/0x60
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648491] [<ffffffff811407ff>] filemap_write_and_wait_range+0x3f/0x70
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648519] [<ffffffffa01425b8>] nfs_file_fsync+0x78/0x90 [nfs]
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648525] [<ffffffff811d52fd>] generic_write_sync+0x4d/0x60
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648528] [<ffffffff8114152e>] generic_file_aio_write+0x9e/0xc0
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648537] [<ffffffffa01428b1>] nfs_file_write+0xb1/0x1e0 [nfs]
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648542] [<ffffffff811a6ab0>] do_sync_write+0x80/0xb0
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648545] [<ffffffff811a71ed>] vfs_write+0xbd/0x1e0
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648549] [<ffffffff811a7c29>] SyS_write+0x49/0xa0
Sep 16 02:18:46 myboxA0 kernel: [ 3600.648554] [<ffffffff81702fef>] tracesys+0xe1/0xe6
Sep 16 02:19:25 myboxA0 kernel: [ 3639.264053] nfs: server 10.95.1.40 not responding, still trying
Sep 16 02:19:47 myboxA0 kernel: [ 3661.600063] nfs: server 10.95.1.40 not responding, still trying


So I'm like, "OK, is FreeNAS unresponsive? What gives?" So I ssh from the client to the FreeNAS box - and here's the interesting part: the second I initiate that SSH connection to FreeNAS, everything comes back! The web page loads right away and the client load goes back down to normal - but the box has already core dumped. Now I'm starting to think of shades of the df -ah thing that made the rsync continue. Could I replicate this? Sure enough, it's a continuous cycle: some process that accesses the FreeNAS share heavily (I've seen it crash with wget, nginx, and other processes; I did a lot of Google searching to make sure it wasn't process-specific) hangs because the NFS server 'goes away'. But the moment I try to connect to it over the network with SSH, everything comes back - literally the web browser goes from spinning and waiting to loading instantly.
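One thing that might help distinguish a dead NFS service from a dead link is poking the server from the client while it's hung (10.95.1.40 being the FreeNAS box from the fstab above; rpcinfo and showmount come with rpcbind / nfs-utils):

rpcinfo -t 10.95.1.40 nfs      # RPC ping of the NFS service over TCP
showmount -e 10.95.1.40        # ask mountd for the export list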

Summing it all up

I'm not sure if both problems are related, but it seems like FreeNAS is 'going to sleep' or something weird under heavy load. Having to run df to make the file transfer continue, and having to SSH in to bring the NFS server back from 'not responding', both look like the same kind of behavior. I tried Ubuntu and CentOS in hopes of ruling out an operating-system-specific issue, but this happens on both clients with the exact same setup.

Also, I've googled the "blocked for more than 120 seconds" message and results come up, but they are older.

Any ideas why FreeNAS is getting stuck in these weird scenarios? I really want to stick with FreeNAS, as it's awesome, and having to downgrade back to the 8.* branch would be a pain.
 

aufalien

Patron
Joined
Jul 25, 2013
Messages
374
Chiming in late, but in reference to other distros: SmartOS is a hypervisor and OmniOS has outdated hardware support, so let's not get ahead of ourselves.

And napp-it is ugly as hell - written in Perl, I think, so it was pretty slow. I hated it big time, whereas the FreeNAS UI is far superior.

I'm surprised you'd name-drop those; tacky indeed, dvc9.
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
aufalien said:
Chiming in late, but in reference to other distros: SmartOS is a hypervisor and OmniOS has outdated hardware support, so let's not get ahead of ourselves.
And napp-it is ugly as hell - written in Perl, I think, so it was pretty slow. I hated it big time, whereas the FreeNAS UI is far superior.
I'm surprised you'd name-drop those; tacky indeed, dvc9.


Well, that's not really the case here, is it?

NFS still dies under load, and the ixgbe driver in FreeNAS 9.1.1 is still weird.

I did, however, find a workaround.
Setting up my NICs in a LAG prevents them from killing NFS (rough CLI sketch below).

Dunno why.
If you want a debate about the other OSes for NAS, then go ahead and start your own thread.
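For anyone curious what the LAG looks like outside the GUI, the FreeBSD equivalent is roughly the following (FreeNAS normally does this from the network settings in the GUI, so treat this as illustration only; laggproto lacp assumes the switch supports LACP, and the address is made up):

ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
ifconfig lagg0 inet 192.168.1.10/24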
 

freefan

Cadet
Joined
Sep 18, 2013
Messages
8
Ahh, okay - a couple of questions then:

1. Is there already an official bug filed for the ixgbe driver crashing in FreeNAS 9.1.1, and is anyone exploring a possible software patch? Obviously other people will run into this problem and need updates to address the issue if FreeNAS is going to be used on production systems.

2. Does the previous version of FreeNAS suffer from the same bug?

I was hoping to go with the LAG setup as a quick fix, but my switch does not support it.

Like I said, I love FreeNAS, but something crashing in production every 6-12 hours (which is what I'm seeing now), no matter how cool it is, just can't be practical. Any other advice or ideas? Thanks, guys!
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Yeah, I know, it's crazy in production ;)
Well, I have seen a "beta" ixgbe update in the bug tracker. I don't know whether "issue resolved" meant resolved in 9.1.1 or resolved in the upcoming 9.1.2 ;)

I'm wondering about installing an alpha of 9.1.2 to test :p

I also noticed that my performance has decreased from 8.3 to 9.1.1, so there is something fishy here.
 

dvc9

Explorer
Joined
May 2, 2012
Messages
72
Here is my ixgbe sysctl output, which also shows the driver version :)

I think the current ixgbe is 2.5.15, and we are on 2.5.7.


[root@FRAG-SERVER] ~# sysctl dev.ix
dev.ix.0.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.7 - STABLE/9
dev.ix.0.%driver: ix
dev.ix.0.%location: slot=0 function=0
dev.ix.0.%pnpinfo: vendor=0x8086 device=0x151c subvendor=0x8086 subdevice=0xa03c class=0x020000
dev.ix.0.%parent: pci8
dev.ix.0.fc: 3
dev.ix.0.enable_aim: 1
dev.ix.0.advertise_speed: 0
dev.ix.0.dropped: 0
dev.ix.0.mbuf_defrag_failed: 0
dev.ix.0.watchdog_events: 0
dev.ix.0.link_irq: 23
dev.ix.0.queue0.interrupt_rate: 100000
dev.ix.0.queue0.irqs: 6620855
dev.ix.0.queue0.txd_head: 354
dev.ix.0.queue0.txd_tail: 354
dev.ix.0.queue0.tso_tx: 151567
dev.ix.0.queue0.no_tx_dma_setup: 0
dev.ix.0.queue0.no_desc_avail: 0
dev.ix.0.queue0.tx_packets: 7061274
dev.ix.0.queue0.rxd_head: 765
dev.ix.0.queue0.rxd_tail: 764
dev.ix.0.queue0.rx_packets: 1360637
dev.ix.0.queue0.rx_bytes: 1973685017
dev.ix.0.queue0.rx_copies: 46380
dev.ix.0.queue0.lro_queued: 1323642
dev.ix.0.queue0.lro_flushed: 1214009
dev.ix.0.queue1.interrupt_rate: 125000
dev.ix.0.queue1.irqs: 4476143
dev.ix.0.queue1.txd_head: 1262
dev.ix.0.queue1.txd_tail: 1262
dev.ix.0.queue1.tso_tx: 2727
dev.ix.0.queue1.no_tx_dma_setup: 0
dev.ix.0.queue1.no_desc_avail: 0
dev.ix.0.queue1.tx_packets: 4400923
dev.ix.0.queue1.rxd_head: 1944
dev.ix.0.queue1.rxd_tail: 1943
dev.ix.0.queue1.rx_packets: 1378200
dev.ix.0.queue1.rx_bytes: 2013721914
dev.ix.0.queue1.rx_copies: 37656
dev.ix.0.queue1.lro_queued: 1349833
dev.ix.0.queue1.lro_flushed: 1219201
dev.ix.0.queue2.interrupt_rate: 125000
dev.ix.0.queue2.irqs: 4614456
dev.ix.0.queue2.txd_head: 1491
dev.ix.0.queue2.txd_tail: 1491
dev.ix.0.queue2.tso_tx: 747
dev.ix.0.queue2.no_tx_dma_setup: 0
dev.ix.0.queue2.no_desc_avail: 0
dev.ix.0.queue2.tx_packets: 4518706
dev.ix.0.queue2.rxd_head: 1435
dev.ix.0.queue2.rxd_tail: 1434
dev.ix.0.queue2.rx_packets: 1750427
dev.ix.0.queue2.rx_bytes: 2567824639
dev.ix.0.queue2.rx_copies: 35485
dev.ix.0.queue2.lro_queued: 1720541
dev.ix.0.queue2.lro_flushed: 1561953
dev.ix.0.queue3.interrupt_rate: 125000
dev.ix.0.queue3.irqs: 4882004
dev.ix.0.queue3.txd_head: 902
dev.ix.0.queue3.txd_tail: 902
dev.ix.0.queue3.tso_tx: 867139
dev.ix.0.queue3.no_tx_dma_setup: 0
dev.ix.0.queue3.no_desc_avail: 0
dev.ix.0.queue3.tx_packets: 4316905
dev.ix.0.queue3.rxd_head: 979
dev.ix.0.queue3.rxd_tail: 978
dev.ix.0.queue3.rx_packets: 6781907
dev.ix.0.queue3.rx_bytes: 2697348910
dev.ix.0.queue3.rx_copies: 2645009
dev.ix.0.queue3.lro_queued: 6536050
dev.ix.0.queue3.lro_flushed: 3247281
dev.ix.0.queue4.interrupt_rate: 55555
dev.ix.0.queue4.irqs: 2141854
dev.ix.0.queue4.txd_head: 225
dev.ix.0.queue4.txd_tail: 225
dev.ix.0.queue4.tso_tx: 169871
dev.ix.0.queue4.no_tx_dma_setup: 0
dev.ix.0.queue4.no_desc_avail: 0
dev.ix.0.queue4.tx_packets: 2099209
dev.ix.0.queue4.rxd_head: 683
dev.ix.0.queue4.rxd_tail: 682
dev.ix.0.queue4.rx_packets: 1475243
dev.ix.0.queue4.rx_bytes: 2155923782
dev.ix.0.queue4.rx_copies: 38942
dev.ix.0.queue4.lro_queued: 1443537
dev.ix.0.queue4.lro_flushed: 1311422
dev.ix.0.queue5.interrupt_rate: 125000
dev.ix.0.queue5.irqs: 32197578
dev.ix.0.queue5.txd_head: 1141
dev.ix.0.queue5.txd_tail: 1141
dev.ix.0.queue5.tso_tx: 235165
dev.ix.0.queue5.no_tx_dma_setup: 0
dev.ix.0.queue5.no_desc_avail: 0
dev.ix.0.queue5.tx_packets: 32595976
dev.ix.0.queue5.rxd_head: 1043
dev.ix.0.queue5.rxd_tail: 1042
dev.ix.0.queue5.rx_packets: 39197715
dev.ix.0.queue5.rx_bytes: 55548278168
dev.ix.0.queue5.rx_copies: 2118708
dev.ix.0.queue5.lro_queued: 39056159
dev.ix.0.queue5.lro_flushed: 4864118
dev.ix.0.queue6.interrupt_rate: 100000
dev.ix.0.queue6.irqs: 3537958
dev.ix.0.queue6.txd_head: 1913
dev.ix.0.queue6.txd_tail: 1913
dev.ix.0.queue6.tso_tx: 476
dev.ix.0.queue6.no_tx_dma_setup: 0
dev.ix.0.queue6.no_desc_avail: 0
dev.ix.0.queue6.tx_packets: 3462373
dev.ix.0.queue6.rxd_head: 495
dev.ix.0.queue6.rxd_tail: 494
dev.ix.0.queue6.rx_packets: 1368559
dev.ix.0.queue6.rx_bytes: 2000549392
dev.ix.0.queue6.rx_copies: 35443
dev.ix.0.queue6.lro_queued: 1340135
dev.ix.0.queue6.lro_flushed: 1220697
dev.ix.0.queue7.interrupt_rate: 100000
dev.ix.0.queue7.irqs: 9065535
dev.ix.0.queue7.txd_head: 1060
dev.ix.0.queue7.txd_tail: 1060
dev.ix.0.queue7.tso_tx: 30748
dev.ix.0.queue7.no_tx_dma_setup: 0
dev.ix.0.queue7.no_desc_avail: 0
dev.ix.0.queue7.tx_packets: 8484192
dev.ix.0.queue7.rxd_head: 2006
dev.ix.0.queue7.rxd_tail: 2005
dev.ix.0.queue7.rx_packets: 13725654
dev.ix.0.queue7.rx_bytes: 19779182225
dev.ix.0.queue7.rx_copies: 496453
dev.ix.0.queue7.lro_queued: 13306901
dev.ix.0.queue7.lro_flushed: 10416614
dev.ix.0.mac_stats.crc_errs: 0
dev.ix.0.mac_stats.ill_errs: 10
dev.ix.0.mac_stats.byte_errs: 10
dev.ix.0.mac_stats.short_discards: 0
dev.ix.0.mac_stats.local_faults: 109
dev.ix.0.mac_stats.remote_faults: 8
dev.ix.0.mac_stats.rec_len_errs: 0
dev.ix.0.mac_stats.xon_txd: 0
dev.ix.0.mac_stats.xon_recvd: 0
dev.ix.0.mac_stats.xoff_txd: 0
dev.ix.0.mac_stats.xoff_recvd: 0
dev.ix.0.mac_stats.total_octets_rcvd: 89006001099
dev.ix.0.mac_stats.good_octets_rcvd: 89003018111
dev.ix.0.mac_stats.total_pkts_rcvd: 67054258
dev.ix.0.mac_stats.good_pkts_rcvd: 67037250
dev.ix.0.mac_stats.mcast_pkts_rcvd: 26171
dev.ix.0.mac_stats.bcast_pkts_rcvd: 9759
dev.ix.0.mac_stats.rx_frames_64: 2675059
dev.ix.0.mac_stats.rx_frames_65_127: 2467708
dev.ix.0.mac_stats.rx_frames_128_255: 3507889
dev.ix.0.mac_stats.rx_frames_256_511: 380037
dev.ix.0.mac_stats.rx_frames_512_1023: 290341
dev.ix.0.mac_stats.rx_frames_1024_1522: 57716216
dev.ix.0.mac_stats.recv_undersized: 0
dev.ix.0.mac_stats.recv_fragmented: 0
dev.ix.0.mac_stats.recv_oversized: 0
dev.ix.0.mac_stats.recv_jabberd: 10
dev.ix.0.mac_stats.management_pkts_rcvd: 0
dev.ix.0.mac_stats.management_pkts_drpd: 0
dev.ix.0.mac_stats.checksum_errs: 0
dev.ix.0.mac_stats.good_octets_txd: 50483663318
dev.ix.0.mac_stats.total_pkts_txd: 95404130
dev.ix.0.mac_stats.good_pkts_txd: 95404130
dev.ix.0.mac_stats.bcast_pkts_txd: 92
dev.ix.0.mac_stats.mcast_pkts_txd: 555
dev.ix.0.mac_stats.management_pkts_txd: 0
dev.ix.0.mac_stats.tx_frames_64: 21363202
dev.ix.0.mac_stats.tx_frames_65_127: 42600708
dev.ix.0.mac_stats.tx_frames_128_255: 685682
dev.ix.0.mac_stats.tx_frames_256_511: 42748
dev.ix.0.mac_stats.tx_frames_512_1023: 917088
dev.ix.0.mac_stats.tx_frames_1024_1522: 29794702
dev.ix.1.%desc: Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.7 - STABLE/9
dev.ix.1.%driver: ix
dev.ix.1.%location: slot=0 function=1
dev.ix.1.%pnpinfo: vendor=0x8086 device=0x151c subvendor=0x8086 subdevice=0xa03c class=0x020000
dev.ix.1.%parent: pci8
dev.ix.1.fc: 3
dev.ix.1.enable_aim: 1
dev.ix.1.advertise_speed: 0
dev.ix.1.dropped: 0
dev.ix.1.mbuf_defrag_failed: 0
dev.ix.1.watchdog_events: 0
dev.ix.1.link_irq: 3
dev.ix.1.queue0.interrupt_rate: 100000
dev.ix.1.queue0.irqs: 7438003
dev.ix.1.queue0.txd_head: 500
dev.ix.1.queue0.txd_tail: 500
dev.ix.1.queue0.tso_tx: 151363
dev.ix.1.queue0.no_tx_dma_setup: 0
dev.ix.1.queue0.no_desc_avail: 0
dev.ix.1.queue0.tx_packets: 7054681
dev.ix.1.queue0.rxd_head: 1777
dev.ix.1.queue0.rxd_tail: 1776
dev.ix.1.queue0.rx_packets: 17417969
dev.ix.1.queue0.rx_bytes: 21399095819
dev.ix.1.queue0.rx_copies: 2986177
dev.ix.1.queue0.lro_queued: 16073418
dev.ix.1.queue0.lro_flushed: 7581287
dev.ix.1.queue1.interrupt_rate: 125000
dev.ix.1.queue1.irqs: 4733276
dev.ix.1.queue1.txd_head: 541
dev.ix.1.queue1.txd_tail: 541
dev.ix.1.queue1.tso_tx: 2654
dev.ix.1.queue1.no_tx_dma_setup: 0
dev.ix.1.queue1.no_desc_avail: 0
dev.ix.1.queue1.tx_packets: 4395732
dev.ix.1.queue1.rxd_head: 54
dev.ix.1.queue1.rxd_tail: 53
dev.ix.1.queue1.rx_packets: 5584950
dev.ix.1.queue1.rx_bytes: 8246643561
dev.ix.1.queue1.rx_copies: 64102
dev.ix.1.queue1.lro_queued: 5558679
dev.ix.1.queue1.lro_flushed: 5064132
dev.ix.1.queue2.interrupt_rate: 125000
dev.ix.1.queue2.irqs: 4813892
dev.ix.1.queue2.txd_head: 1950
dev.ix.1.queue2.txd_tail: 1950
dev.ix.1.queue2.tso_tx: 768
dev.ix.1.queue2.no_tx_dma_setup: 0
dev.ix.1.queue2.no_desc_avail: 0
dev.ix.1.queue2.tx_packets: 4515812
dev.ix.1.queue2.rxd_head: 1525
dev.ix.1.queue2.rxd_tail: 1524
dev.ix.1.queue2.rx_packets: 5391861
dev.ix.1.queue2.rx_bytes: 7997525703
dev.ix.1.queue2.rx_copies: 38307
dev.ix.1.queue2.lro_queued: 5364846
dev.ix.1.queue2.lro_flushed: 4914468
dev.ix.1.queue3.interrupt_rate: 125000
dev.ix.1.queue3.irqs: 3858245
dev.ix.1.queue3.txd_head: 1457
dev.ix.1.queue3.txd_tail: 1457
dev.ix.1.queue3.tso_tx: 840439
dev.ix.1.queue3.no_tx_dma_setup: 0
dev.ix.1.queue3.no_desc_avail: 0
dev.ix.1.queue3.tx_packets: 4211144
dev.ix.1.queue3.rxd_head: 1669
dev.ix.1.queue3.rxd_tail: 1668
dev.ix.1.queue3.rx_packets: 2223749
dev.ix.1.queue3.rx_bytes: 3104096684
dev.ix.1.queue3.rx_copies: 142125
dev.ix.1.queue3.lro_queued: 2192341
dev.ix.1.queue3.lro_flushed: 1887496
dev.ix.1.queue4.interrupt_rate: 125000
dev.ix.1.queue4.irqs: 2171568
dev.ix.1.queue4.txd_head: 464
dev.ix.1.queue4.txd_tail: 464
dev.ix.1.queue4.tso_tx: 173249
dev.ix.1.queue4.no_tx_dma_setup: 0
dev.ix.1.queue4.no_desc_avail: 0
dev.ix.1.queue4.tx_packets: 2099905
dev.ix.1.queue4.rxd_head: 1141
dev.ix.1.queue4.rxd_tail: 1140
dev.ix.1.queue4.rx_packets: 2569333
dev.ix.1.queue4.rx_bytes: 2200640937
dev.ix.1.queue4.rx_copies: 1004247
dev.ix.1.queue4.lro_queued: 2378395
dev.ix.1.queue4.lro_flushed: 1721958
dev.ix.1.queue5.interrupt_rate: 125000
dev.ix.1.queue5.irqs: 32554383
dev.ix.1.queue5.txd_head: 1055
dev.ix.1.queue5.txd_tail: 1055
dev.ix.1.queue5.tso_tx: 239646
dev.ix.1.queue5.no_tx_dma_setup: 0
dev.ix.1.queue5.no_desc_avail: 0
dev.ix.1.queue5.tx_packets: 32610469
dev.ix.1.queue5.rxd_head: 1488
dev.ix.1.queue5.rxd_tail: 1487
dev.ix.1.queue5.rx_packets: 11609552
dev.ix.1.queue5.rx_bytes: 17214985575
dev.ix.1.queue5.rx_copies: 73532
dev.ix.1.queue5.lro_queued: 11582217
dev.ix.1.queue5.lro_flushed: 10616988
dev.ix.1.queue6.interrupt_rate: 100000
dev.ix.1.queue6.irqs: 3706600
dev.ix.1.queue6.txd_head: 921
dev.ix.1.queue6.txd_tail: 921
dev.ix.1.queue6.tso_tx: 454
dev.ix.1.queue6.no_tx_dma_setup: 0
dev.ix.1.queue6.no_desc_avail: 0
dev.ix.1.queue6.tx_packets: 3457802
dev.ix.1.queue6.rxd_head: 920
dev.ix.1.queue6.rxd_tail: 919
dev.ix.1.queue6.rx_packets: 4053912
dev.ix.1.queue6.rx_bytes: 5994396870
dev.ix.1.queue6.rx_copies: 43796
dev.ix.1.queue6.lro_queued: 4025510
dev.ix.1.queue6.lro_flushed: 3690984
dev.ix.1.queue7.interrupt_rate: 500000
dev.ix.1.queue7.irqs: 8447674
dev.ix.1.queue7.txd_head: 378
dev.ix.1.queue7.txd_tail: 378
dev.ix.1.queue7.tso_tx: 31344
dev.ix.1.queue7.no_tx_dma_setup: 0
dev.ix.1.queue7.no_desc_avail: 0
dev.ix.1.queue7.tx_packets: 8499167
dev.ix.1.queue7.rxd_head: 812
dev.ix.1.queue7.rxd_tail: 811
dev.ix.1.queue7.rx_packets: 1784620
dev.ix.1.queue7.rx_bytes: 2622527965
dev.ix.1.queue7.rx_copies: 37269
dev.ix.1.queue7.lro_queued: 1757009
dev.ix.1.queue7.lro_flushed: 1599316
dev.ix.1.mac_stats.crc_errs: 0
dev.ix.1.mac_stats.ill_errs: 0
dev.ix.1.mac_stats.byte_errs: 0
dev.ix.1.mac_stats.short_discards: 0
dev.ix.1.mac_stats.local_faults: 64
dev.ix.1.mac_stats.remote_faults: 1
dev.ix.1.mac_stats.rec_len_errs: 0
dev.ix.1.mac_stats.xon_txd: 0
dev.ix.1.mac_stats.xon_recvd: 0
dev.ix.1.mac_stats.xoff_txd: 0
dev.ix.1.mac_stats.xoff_recvd: 0
dev.ix.1.mac_stats.total_octets_rcvd: 68982671207
dev.ix.1.mac_stats.good_octets_rcvd: 68980159990
dev.ix.1.mac_stats.total_pkts_rcvd: 50642938
dev.ix.1.mac_stats.good_pkts_rcvd: 50634420
dev.ix.1.mac_stats.mcast_pkts_rcvd: 0
dev.ix.1.mac_stats.bcast_pkts_rcvd: 9414
dev.ix.1.mac_stats.rx_frames_64: 7232
dev.ix.1.mac_stats.rx_frames_65_127: 4369761
dev.ix.1.mac_stats.rx_frames_128_255: 207302
dev.ix.1.mac_stats.rx_frames_256_511: 719767
dev.ix.1.mac_stats.rx_frames_512_1023: 782307
dev.ix.1.mac_stats.rx_frames_1024_1522: 44548051
dev.ix.1.mac_stats.recv_undersized: 0
dev.ix.1.mac_stats.recv_fragmented: 0
dev.ix.1.mac_stats.recv_oversized: 0
dev.ix.1.mac_stats.recv_jabberd: 0
dev.ix.1.mac_stats.management_pkts_rcvd: 0
dev.ix.1.mac_stats.management_pkts_drpd: 0
dev.ix.1.mac_stats.checksum_errs: 0
dev.ix.1.mac_stats.good_octets_txd: 49962166130
dev.ix.1.mac_stats.total_pkts_txd: 94980204
dev.ix.1.mac_stats.good_pkts_txd: 94980204
dev.ix.1.mac_stats.bcast_pkts_txd: 97
dev.ix.1.mac_stats.mcast_pkts_txd: 571
dev.ix.1.mac_stats.management_pkts_txd: 0
dev.ix.1.mac_stats.tx_frames_64: 21364531
dev.ix.1.mac_stats.tx_frames_65_127: 42526187
dev.ix.1.mac_stats.tx_frames_128_255: 687565
dev.ix.1.mac_stats.tx_frames_256_511: 42895
dev.ix.1.mac_stats.tx_frames_512_1023: 890029
dev.ix.1.mac_stats.tx_frames_1024_1522: 29468997
 