Problems with 8.0.1 - kernel using a lot of CPU; AFP struggling

Status: Not open for further replies.

cheongi

Dabbler
Joined: Aug 10, 2011
Messages: 18
Previously running 8.0-RELEASE amd64 on an HP MicroServer N36L with 8 GB RAM and 4x 2 TB disks. Upgraded using the CD method yesterday. The system seems mostly functional and there were no serious errors on boot.

Since early this morning the CPU has been running at about 50-60% usage according to the graph, whereas on 8.0 it would sit at nearly nothing. AFP shares are having trouble bringing up a directory listing.

top
Code:
last pid:  3582;  load averages:  0.54,  0.68,  0.80      up 0+00:49:18  07:17:43
284 processes: 4 running, 265 sleeping, 15 waiting
CPU:  0.4% user,  0.0% nice, 17.4% system,  3.2% interrupt, 79.0% idle
Mem: 96M Active, 35M Inact, 1244M Wired, 4240K Cache, 141M Buf, 6414M Free
Swap: 8192M Total, 8192M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root          2 171 ki31     0K    32K CPU0    0  55:55 152.93% idle
    0 root        164  -8    0     0K  2608K -       1  26:17 34.47% kernel
  159 root         10  -8    -     0K   144K zio->i  1   9:05 12.45% zfskern
   12 root         15 -60    -     0K   240K WAIT    0   3:01  5.57% intr
    4 root          1  -8    -     0K    16K -       1   2:08  2.59% g_down
    3 root          1  -8    -     0K    16K -       1   1:23  1.17% g_up
 1774 root          6  44    0   115M 63560K uwait   1   0:11  0.00% python
    7 root          1  76    -     0K    16K ccb_sc  0   0:05  0.00% xpt_thrd
 1874 root          7  44    0 64020K  8188K ucond   0   0:02  0.00% collectd
 3390 root          2  66    0 31496K  4416K select  0   0:01  0.00% afpd
 2278 www           1  44    0 19324K  4076K kqread  1   0:01  0.00% lighttpd
 2367 root          1  76    0 64096K 24448K ttyin   1   0:01  0.00% python
 1515 root          1  44    0 11776K  2784K select  1   0:01  0.00% ntpd
   14 root         24 -64    -     0K   384K -       0   0:01  0.00% usb
   58 root          1  -8    -     0K    16K mdwait  1   0:00  0.00% md2
 2913 root          1  44    0  9224K  3284K CPU1    1   0:00  0.00% top
    2 root          1  -8    -     0K    16K -       0   0:00  0.00% g_event
 2583 root          1  44    0 33300K  5044K select  1   0:00  0.00% sshd



AFP service restart:
Code:
Oct  5 07:14:06 microserver freenas[1774]: Executing: /usr/sbin/service netatalk restart
Oct  5 07:14:06 microserver freenas: No matching processes were found
Oct  5 07:14:06 microserver freenas: No matching processes were found
Oct  5 07:14:06 microserver cnid_metad[3387]: Set syslog logging to level: LOG_NOTE
Oct  5 07:14:06 microserver [3389]: Set syslog logging to level: LOG_NOTE
Oct  5 07:14:06 microserver afpd[3390]: AFP/TCP started, advertising 192.168.0.62:548 (2.2.0)
Oct  5 07:14:07 microserver freenas[1774]: Executing: /bin/pgrep -F /var/run/afpd.pid afpd
Oct  5 07:14:11 microserver afpd[3390]: bind(fd, (struct sockaddr *)&address, address_length) failed: Address already in use


So I'm going to switch back to 8.0-RELEASE. What information can I provide to help debug this?
 

cheongi

Dabbler
Joined: Aug 10, 2011
Messages: 18
Looks like the kernel spent about 8 hours doing something; it now appears to be functioning normally. Any idea what that was? (All disks are configured as JBOD.)
 

ProtoSD

MVP
Joined: Jul 1, 2011
Messages: 3,348
How much data do you have on your disks? My guess is that your system was doing the automatic monthly scrub. Did you notice a lot of disk activity?
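
If you want to confirm it while it's happening, something like this from an SSH session should show it (standard FreeBSD tools, nothing FreeNAS-specific; Ctrl+C stops gstat):
Code:
# Is a scrub (or resilver) running, and how far along is it?
zpool status

# Per-disk I/O load; a scrub shows up as heavy reads across all pool members
gstat -a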
 

cheongi

Dabbler
Joined: Aug 10, 2011
Messages: 18
OK, I think you are right. "zpool status" from the command line reports that a scrub completed. I haven't managed to find any documentation of the "automatic scrubbing" process; the GUI and documentation only mention a manual scrub. The system log may have mentioned the scrub, but it is too short to see what happened at the time the CPU usage started.

It would be nice to have access to any scheduled periodic tasks in the GUI - is this planned?
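
For what it's worth, the pool history keeps a record of when scrubs were started, even after the system log has rotated. Something like this (replace "tank" with your pool name; it's just a placeholder here):
Code:
# Every administrative command run against the pool, with timestamps
zpool history tank

# Just the scrub entries
zpool history tank | grep scrub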
 

cheongi

Dabbler
Joined: Aug 10, 2011
Messages: 18
So I found configurable scrub frequency and archived scrub results requested in an accepted ticket: http://support.freenas.org/ticket/286.

And the scrub check runs daily via periodic(8), but a scrub is only triggered based on a default threshold of 30 days:
Code:
/etc# cat periodic.conf
[...]
daily_scrub_zfs_enable="YES"
[...]


See the script /etc/periodic/daily/800.scrub-zfs for the details.
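
If I'm reading 800.scrub-zfs correctly, the threshold also looks tunable through periodic.conf; the variable names below are what the script appears to use, so double-check them against the script itself (and note that FreeNAS may regenerate /etc on reboot, so the change might not persist):
Code:
# /etc/periodic.conf -- assumed knobs, verify against 800.scrub-zfs
daily_scrub_zfs_enable="YES"

# Days between scrubs for all pools unless overridden per pool
daily_scrub_zfs_default_threshold="30"

# Per-pool override; "tank" is just a placeholder pool name
daily_scrub_zfs_tank_threshold="14"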
 

Milhouse

Guru
Joined: Jun 1, 2011
Messages: 564
You could use the scrub script in this post and schedule it with cron (e.g. once a week, Sunday morning at 3am, which is when I run it). It will send an email with the zpool status at the end of the scrub.

I tend to create a "bin" directory in my storage pool and run scripts from there. On FreeNAS 7 I would copy bin to /var during boot to avoid spinning up the disks later when running scripts, but that isn't an option yet in FreeNAS 8, as there is no way to configure boot-up/pre-init scripts (at least, not easily, though it could be done with cron and some minor GUI changes...).

The daily periodic job checks the zpool history to determine when the last scrub took place, so running the linked script more often than every 30 days effectively supersedes the daily periodic scrub and stops it from having any effect. By design, the timing of the periodic scrub is a little unpredictable: it could start on a weekend, but it is most likely to start on a weekday.

Until there is more control over daily/monthly periodic scheduling, this script (or something similar) is the next best option; see the sketch below.
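
If you can't find that script, a minimal sketch of the same idea would look something like this (the pool name, path and mail address are placeholders, and this is not the exact script linked above):
Code:
#!/bin/sh
# scrub_report.sh -- minimal sketch: start a scrub, wait for it, mail the result
POOL="tank"
MAILTO="root"

zpool scrub "$POOL"

# Poll until zpool status no longer reports the scrub as in progress
while zpool status "$POOL" | grep -q "scrub in progress"; do
    sleep 300
done

zpool status "$POOL" | mail -s "zpool scrub report: $POOL" "$MAILTO"

Scheduled from cron with a line like 0 3 * * 0 /mnt/tank/bin/scrub_report.sh for Sunday at 3am (again, the path is just an example).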
 