Spin-down Disk Failed when Shutting Down

Status
Not open for further replies.

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
This morning my FreeNAS seemed very sluggish, and the HDD LED was constantly active.

I tried to shut down. On the last lines before the system powered off, there were messages like this:
(ada8:ata0:0:0:0): Spin-down disk failed
(ada9:ata0:0:1:0): Spin-down disk failed

What's wrong with them? Thanks.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
See this ticket, although your SYBA controller is based on the Sil3124 chipset, which is only slightly different from the Sil3114 that is reported as working. You might want to chase the ticket for an update, post shutdown results for FreeBSD 9 as requested by FreeNAS development (gcooper) in the ticket, or confirm that you see the same problem with your Sil3124 controller.
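If you want to confirm exactly which chipset FreeBSD is seeing on your SYBA card before you post, "pciconf" from the shell should show it; something like:

  # list PCI devices with their vendor/device strings; look for the Silicon Image entry
  pciconf -lv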
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
It's just a guess, but there is an automatic monthly scrub, and it may still have been running when you tried to shut down. There was probably a message for each disk.
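Next time you can check whether a scrub is still running before shutting down; from the shell, something like this should tell you (the pool name is just a placeholder):

  # shows pool health plus any scrub in progress (with % done and estimated time remaining)
  zpool status
  # if one is running and you really need to shut down, you can stop it first:
  # zpool scrub -s yourpool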
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
It's just a guess, but there is an automatic monthly scrub, and it may still have been running when you tried to shut down. There was probably a message for each disk.

I guess so... :) But what is the schedule of the automatic monthly scrub? Can I disable it or change it manually?

How long does the scrub take? It has already been running for about 5 hours.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I'm not sure what the schedule is or how to change it.

How long does the scrub take? It has already been running for about 5 hours.

It depends on how big your pool is and how much data you have. I have a 5.3TB raidz2 which is 3/4 full and it took 13.5 hours the other night.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I guess so... :) But what is the schedule of the automatic monthly scrub? Can I disable it or change it manually?

It's part of the daily periodic run that kicks off at 3am each day. It will scrub each pool once 30 days have elapsed since it was last scrubbed (it determines the date/time of the most recent scrub from "zpool history").
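You can see when a pool was last scrubbed yourself; something along these lines (the pool name "tank" is just an example):

  # show the most recent scrub recorded in the pool's command history
  zpool history tank | grep scrub | tail -1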

If you perform your own scrubs at a more frequent interval (i.e. weekly, which is recommended for non-enterprise disks) then periodic will never initiate its own scrub. If you leave it to periodic, it's harder to predict which day the automatic scrub will kick off, and it's very likely to start on a weekday. That may be highly undesirable if the scrub takes more than 5-6 hours and FreeNAS is being used in an office environment, as it will overrun into normal working hours.

There's no easy way to reconfigure periodic, other than to mess around with the default scripts, which will then be overwritten with each firmware upgrade, so that's not really ideal. It's better to schedule your own scrub; see here for a script I use to schedule a weekly scrub (on Sunday morning) using cron.
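The gist of it is just a single cron entry that kicks off "zpool scrub" once a week; a minimal sketch (the pool name "tank" is an example, and bear in mind that manual edits to /etc/crontab may be overwritten on upgrade, which is why I use a script):

  # /etc/crontab format: minute hour day-of-month month day-of-week user command
  # scrub the pool "tank" every Sunday at 02:00
  0  2  *  *  0  root  /sbin/zpool scrub tank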

How long does the scrub take? It has already been running for about 5 hours.
Depends. A 4x 1TB disk RAIDZ1 volume with 2TB of used data takes 3 hours 45 mins on an AMD Neo (Intel Atom-class CPU) with 4GB RAM.
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
Thanks all for your quick reply.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
How long does the scrub take? It has already been running for about 5 hours.
It depends on how big your pool is and how much data you have. I have a 5.3TB raidz2 which is 3/4 full and it took 13.5 hours the other night.
Depends. A 4x 1TB disk RAIDZ1 volume with 2TB of used data takes 3 hours 45 mins on an AMD Neo (Intel Atom-class CPU) with 4GB RAM.

"How long" has been asked a few times recently, so I thought I'd compare our figures. :)

I have a second AMD Neo (HP N36L) system (8GB RAM, LSI 9211-8i controller and two RAIDZ1 vdevs of 4 disks each), and the time it takes to scrub 2.46TB of used capacity (3h32m) is slightly improved over the lower-specified 4GB RAM system with 4 disks on motherboard SATA (1.92TB in 3h43m).

8.6GB/minute (143MB/s) scrubbed on the 4GB RAM/motherboard SATA system/4-disk 1-vdev RAIDZ1
11.6GB/minute (193MB/s) scrubbed on the 8GB RAM/LSI 9211-8i/8-disk 2-vdev RAIDZ1
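For anyone wondering where those numbers come from, it's simple arithmetic; for example, for the 8GB system, 2.46TB in 3h32m is 212 minutes:

  # throughput for the 8GB/LSI system (2460GB scrubbed in 212 minutes, 1000MB per GB)
  echo "scale=1; 2460 / 212" | bc          # ~11.6 GB/minute
  echo "2460 * 1000 / (212 * 60)" | bc     # ~193 MB/s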

Unfortunately I can't tell if the 8GB RAM system has better performance due to the extra RAM, LSI controller or the two vdevs. Or maybe it's a combination of all three. :)

@Protosd, I'm not sure of your setup, but by my calculations you're hitting 6.55GB/minute (109MB/s, based on scrubbing the full 5.3TB in 13.5 hours); no doubt this drop in scrub performance is due to the additional RAIDZ2 overhead, which would of course make perfect sense.

A mirrored vdev should see very little overhead, and it would be interesting to get a ballpark figure for a mirrored setup.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
A mirrored vdev should see very little overhead, and it would be interesting to get a ballpark figure for a mirrored setup.

I'm not sure if you were asking me to try making a mirrored vdev, or whether that was a general suggestion for someone to try and post their results.

I don't have the resources (disks) to try it with; I would if I could. It seems to me that the last time I scrubbed, just before upgrading to the 8.0.1 release, it finished an hour earlier. I don't think I've added that much since then, but since I've divided my pool up into datasets it's not obvious how much space is used. I have 1.3TB free across all datasets, and since the total is 5.3TB, I must have about 4TB in use.
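I suppose I could just check it directly instead of working it out by hand; something like this (as far as I know, zpool list counts raw space including parity, so it won't line up exactly with the zfs numbers):

  # used/available space per dataset
  zfs list -o name,used,avail
  # pool-level totals (raw space, including parity)
  zpool list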
 

Durkatlon

Patron
Joined
Aug 19, 2011
Messages
414
I have a mirrored setup with 2x1.5TB. The system runs an AMD E350 with 8GB of RAM, amd64 version. Scrub takes a little over an hour.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
I'm not sure if you were asking me to try making a mirrored vdev, or whether that was a general suggestion for someone to try and post their results.

Definitely the latter - just a general "throwing it out there" :)

I have 1.3TB free across all datasets, and since the total is 5.3TB, I must have about 4TB in use.

Based on that revised usage figure, you're scrubbing at about 4.94GB/minute (82.3MB/s, assuming the scrub completed in 13h30m). This is very roughly half the averaged performance of my two RAIDZ1 systems, presumably due almost entirely to the additional parity overhead. If/when RAIDZ3 becomes available in FreeNAS, presumably the increase in scrub/resilvering duration will be somewhat linear.

A quote from the linked article is worth pondering by anyone pining for 3TB+ drives! :)

Perhaps even more ominously, in a few years, reconstruction will take so long as to effectively strip away a level of redundancy.

I have a mirrored setup with 2x1.5TB. The system runs an AMD E350 with 8GB of RAM, amd64 version. Scrub takes a little over an hour.

Thanks Durkatlon - exactly how much of that 1.5TB usable storage is actually in use (and exactly how long did your last scrub take)?
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
I have 13.6TB total and still 4.3TB available across all datasets, so the total in use is 9.3TB. Scrubbing just finished; it took about 16 hours.

It's time to update to v8.0.1. :)
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Scrubbing just finished; it took about 16 hours. I have 13.6TB total and still 4.3TB available across all datasets, so the total in use is 9.3TB.

Thanks... that blows that theory :) You're hitting 9.7GB/minute (161MB/s), which is on par with the average of my two RAIDZ1 systems, despite your volume being RAIDZ2, and double what protosd is achieving with another RAIDZ2 volume. Do you have a single vdev for all 10x 2TB disks? I notice you have a shed load of RAM... your total potential IOPS must be pretty significant too.
 

ctantra

Dabbler
Joined
Aug 4, 2011
Messages
29
Yes, I have a single vdev for all 10x2TB disks. Perhaps the 16GB of RAM helps a lot. :)
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Yes, I have a single vdev for all 10x2TB disks. Perhaps the 16GB of RAM helps a lot. :)

You could always take half of it out and let us know in a day or so! :)

I suspect the high number of drives in the vdev may have had a bigger impact, though: you're hitting 16.1MB per disk per second, which is less than the 35MB/disk/sec my 4GB/4-disk Z1 volume achieved, but cumulatively your throughput is higher (161MB/sec vs. 143MB/sec).

Assuming I built a 4-disk Z1 volume with 9.0TB of usable storage (using 4x 3TB disks), extrapolating my current 4GB system scrub figure of 8.6GB/minute (143.5MB/sec, 35MB/disk/sec) would result in a scrub/resilver duration of 17h25m... plenty of time for a second disk to drop! :eek:
 

jfr2006

Contributor
Joined
May 27, 2011
Messages
174
Back on topic:

I too had the spin-down messages yesterday, when I shut down my system to swap in the USB stick with the new FreeNAS version. Also, from time to time when I do a shutdown, I get an error message saying that it failed to synchronize the cache on ada8. It doesn't bother me much, since volume2 is normally in standby.
 

Milhouse

Guru
Joined
Jun 1, 2011
Messages
564
Back on topic:

I too had the spin-down messages yesterday, when I shut down my system to swap in the USB stick with the new FreeNAS version. Also, from time to time when I do a shutdown, I get an error message saying that it failed to synchronize the cache on ada8. It doesn't bother me much, since volume2 is normally in standby.

You've got the same controller detailed in ticket #500 - maybe you can assist the developers with the information they're looking for?
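I don't know exactly what else gcooper is after beyond the shutdown output, but capturing something like this and attaching it to the ticket would probably be a good start (all standard FreeBSD commands, nothing FreeNAS-specific):

  # controller and disk inventory as FreeBSD sees it
  pciconf -lv
  camcontrol devlist
  # any kernel messages relating to the affected disk (ada8 from your post)
  dmesg | grep ada8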
 

jfr2006

Contributor
Joined
May 27, 2011
Messages
174
Well... what do I have to do?
 