Pending sector, how to force reallocate (and should I)?

Status
Not open for further replies.
Don't worry about that, it is very low actually.



In fact that's wrong. A drive draws about 30-35 W during spin-up. See https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/ and https://forums.freenas.org/index.php?threads/how-to-measure-the-drive-spin-up-peak-current.38885/ for more info ;)
Thanks for that, interesting info. Still... looking at my setup, does it look like I'd ever hit that? Even if all drives spun up at the same time, that's a 210 W spike, and from what I can tell the rest of the components in my server don't even come close to 100 W between them, so I don't see how the PSU could get overloaded, even in the spikiest circumstances. Plus this happened while all the other drives were already spun up.

EDIT: Just seen the bit about the fans, shall recalculate after pancakes.

So the bit about the fans on that page seems pretty excessive. Would it really be as high as 15-30 W per fan? Looking on the Noctua site, even the largest fan's maximum input power is 1.56 W. Are they really going to consume 10-20x what the manufacturer says on their spec sheets?
 
OK thanks for that link! I just got to the section with the pre-calculated guesses and this is the one that applies to me:

1) For an Avoton C2550/C2750 (18-35W board, 12W memory):
  • 1-2 Drives: 132W peak, 46W idle -> SeaSonic G-360
  • 3-4 Drives: 202W peak, 71W idle -> SeaSonic G-360
  • 5-6 Drives: 297W peak, 118W idle -> SeaSonic G-450
  • 7-8 Drives: 367W peak, 134W idle -> SeaSonic G-450
  • 9-10 Drives: 437W peak, 150W idle -> SeaSonic G-550
  • 11-12 Drives: 507W peak, 166W idle -> SeaSonic G-650 or X-650
So it looks like I should be fine with a G-360 (looking at the peak number), *but* it seems that I should maybe have put in a higher wattage PSU... I'll have to look into that. It would make my case neater too, as the G-450 is modular! For now I'm still running 5 drives, and until the other day the system was powered on for months without issue, so I think continuing the way I am with 5 drives is fine for now; if I get any more I'll need a higher wattage PSU. Whether or not that was the actual issue I'm not sure - it seems a bit coincidental that it would affect the same drive each time - but I understand now why that may well have been the cause.
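To sanity-check the table against this build, here's a rough back-of-envelope sum. The figures are assumptions, not measurements: ~30 W per drive at spin-up comes from the linked thread, and the ~100 W for everything else is a deliberately generous guess.

```shell
# Back-of-envelope peak draw; all wattages are assumed, not measured.
drives=5
spinup_w=30      # per-drive spin-up peak, per the linked forum thread
rest_w=100       # generous budget for board + CPU + RAM + fans
echo $((drives * spinup_w + rest_w))   # total peak in watts -> 250
```

250 W is comfortably under a G-360's 360 W rating, though the sizing guide's recommendations build in extra headroom, which is presumably why 5-6 drives maps to a G-450 in the table.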

My question about fans still stands though - is the power usage of fans really that much higher (10-20x) than the max input wattage the manufacturer specifies?
 

Stux

MVP

Noctua fans are very low power. The other thing to check is how you have power connected.

This is why I said to disconnect power to the other drives; now you're playing guessing games instead.
 

tenjuna

Dabbler
Since I had this same problem over the weekend, I did some searching, and in another thread on this forum I was able to dig up another helpful tutorial on the OP's original question. Posting this to be helpful; I don't pretend to be an expert on the subject:

http://daemon-notes.com/articles/system/smartmontools/current-pending


Stux

MVP
I think that guide is right, but if your drive uses 4096-byte LBA addressing then you have to set bs=4096 instead of 512.

And of course, after writing zeros over the block and running another long test to verify, you want to run a scrub.
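A sketch of the offset arithmetic behind the guide's dd step. The LBA 123456 and the device /dev/ada1 are placeholders I made up for illustration - substitute the failing LBA from your own SMART long-test log and your actual device. The destructive commands are left as comments.

```shell
# All device names and LBA values below are placeholders -- substitute your own.
lba=123456     # failing LBA from the SMART long-test log (illustrative)
bs=4096        # logical sector size; use 512 on a 512-byte-logical drive
               # (check with: smartctl -a /dev/ada1 | grep 'Sector Size')

# dd's seek= counts in units of bs, so bs must match the logical sector size:
#   dd if=/dev/zero of=/dev/ada1 bs=$bs count=1 seek=$lba
# then re-run a SMART long test to verify, and finally:
#   zpool scrub <poolname>

# Byte offset the write would land at, for sanity checking:
echo $((lba * bs))   # -> 505675776
```

If bs and the drive's logical sector size disagree, seek= lands the write at the wrong byte offset, which is exactly the 512-vs-4096 pitfall mentioned above.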
 

tenjuna

Dabbler
Ah yes sorry I forgot to mention that. Actually I should ask this as well, do we need to re-enable GEOM protection afterwards? If so, how? The article says nothing about it.
 

Stux

MVP

I think it resets on restart.
 

wblock

Documentation Engineer
That debugflags setting means "Yes, let me overwrite things that are in use". It should not be necessary. If something on that drive is still in use, it should be taken out of use, not just ignored.
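For reference, the setting being discussed is a sysctl. Shown as comments only, since it must be run as root on the box itself and is exactly the "overwrite things that are in use" switch described above; 16 (0x10) is the value such guides typically use, though whether you should set it at all is the point of the caveat.

```shell
# Allow writes to in-use GEOM providers (dangerous -- see the caveat above):
#   sysctl kern.geom.debugflags=16
# ...perform the single dd write...
# Put it back immediately afterwards (it also reverts to 0 on reboot):
#   sysctl kern.geom.debugflags=0
```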

Also, almost every 4K drive out there emulates fake 512-byte blocks. The block size setting to dd does not affect that, but if you are writing any amount of data, using at least bs=64k will save time. Think of it as "buffer size".
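A quick illustration of the "buffer size" point on a throwaway file rather than a real disk: the total written is bs × count, so a bigger bs just means fewer, larger writes of the same data.

```shell
# Write 1 MiB to a scratch file using 64 KiB buffers (16 x 64 KiB = 1 MiB).
dd if=/dev/zero of=/tmp/scratch.img bs=64k count=16 2>/dev/null
# File size in bytes (GNU stat first, BSD stat as fallback):
stat -c %s /tmp/scratch.img 2>/dev/null || stat -f %z /tmp/scratch.img   # -> 1048576
rm -f /tmp/scratch.img
```

The drive still sees whatever logical sector size it reports; the larger bs only changes how much data dd hands off per write call.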

Really, don't do any of this until a full backup has been made and a replacement drive is in the building.
 

tenjuna

Dabbler
Good to know, thank you.

If there's one thing I have learned from the FreeNAS forums, it's to have hot spares, cold spares, and to not even think about breathing without at least 6 world-class backups. ;-)
 