All MD5 hashes on my system matched what you provided. I disabled the SMART service; those settings were at the system default, as I never changed them. I went ahead and configured it to match your settings just in case it gets re-enabled. For comparison, the original settings were: Check: 30, Power Mode: Never, Difference: 0, Informational: 0, Critical: 0.
So I changed the settings and let it sit all day. It never acted up, and I was even able to push some decent-sized NFS traffic. All the while, I was getting drive temperature reports back at my Graphite server. The moment I opened the WebUI Dashboard: immediate CPU spike again.

It feels more and more like something in the Web UI Dashboard code of my install kills collectd while it tries to summarize the system metrics for the front page. It is always either the Reporting tab or the front-page Dashboard, where the system metrics and hardware are summarized, that kicks off collectd, rrdcached, and python. Best I can figure, the reason it doesn't have a problem initially is that there isn't much data right after a fresh boot, even less so if the system was shut down for a while. But once enough time goes by, there are enough metrics that the code isn't handling them properly between collectd and the UI backend.

This is pure speculation on my part, and where it unravels is that I don't know why seemingly only I am having this problem. I have seen no indications elsewhere that match my symptoms of the UI backend running out of control, or at least to the point that collectd brings the system to its knees. And aside from socket timeouts (which I believe happen because collectd maxes out the system load), all the messages from collectd and others have been reported or asked about before, but the responses are almost always, "Yeah, I have it too, it doesn't do anything, don't worry about it."
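In case it helps anyone reproduce or rule this out, here is a rough sketch of how I could capture the spike from an SSH session while opening the Dashboard. This is just a generic `ps` sampling loop, nothing specific to my install; the process names (collectd, rrdcached, python) are simply the ones I see implicated:

```shell
# Sample the top CPU consumers a few times in a row.
# Run this, then open the WebUI Dashboard and watch whether
# collectd / rrdcached / python jump to the top of the list.
for i in 1 2 3; do
  date
  ps -axo pcpu,pid,comm | sort -rn | head -n 5
  sleep 2
done
```

If the hypothesis above is right, the samples taken right after the Dashboard loads should show those three processes pegging the CPU, while samples taken with the UI closed should look normal.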
This is so confusing.