Why is my scrub taking so long?


gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
The WD Purples, which are similar to your drives but intended for surveillance systems with up to 8 drives, include this statement in their Recommended Use:

Not recommended for use in NAS environments, please consider using WD NAS hard drives

http://www.wdc.com/en/products/products.aspx?id=1310
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
That's because they want to sell you the Red and Red Pro drives - and I don't have a Purple drive, so that argument is moot.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
So it should take just over 7 hours to complete a long SMART test, and of course delays will be introduced if the drives are really active, but I'm not sure running over that by an hour or more is reasonable. I'd be curious to see how long a long SMART test takes if you shut down all other activity: basically start the test, unplug the LAN cable, wait 7 hours, and check the completion status.
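If it helps, this is roughly how I'd do it from the shell (a sketch - the device name ada0 is just an example, substitute each of your drives):

Code:
smartctl -t long /dev/ada0                                      # start the long self-test on one drive
smartctl -a /dev/ada0 | grep -A1 "Self-test execution status"   # check progress / completion later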

As you can tell, I'm shifting focus to the drives themselves because, in my opinion, the AV drives could be causing the problem. I'm not saying they are broken or anything, just that I suspect there is an issue with them when they are used in a NAS. I can't tell you why things looked good one minute and then all of a sudden became slow, but that is why we are trying to help you diagnose the issue. If I had the system with me for a few days I think I could figure it out, but that isn't the case (nor would I want it to be), so it's remote assistance to help you out.

Speaking of the AV drives you have, I've seen several benchmarks which show very good performance, but when put into a real-world environment they are just average.

I do have one Purple drive for my DirecTV DVR. It works fantastically. It's a 3TB model and I have 84% free space, so when I have to replace it I'm going for the 1TB model. It's a great drive for a DVR, but I would never put it in a normal computer - that's just me.

So I'm interested in the output of gstat both while the scrub is not running and while it is running.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
[Screenshots: gstat output with no scrub running]

I'll upload some more tomorrow during the scrub. I'm sure the scrub behaviour is normal for the number and sizes of files I have. Tomorrow will tell.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Ensure you use the "-I 5s" switch or those values will change too quickly to give you an accurate representation.
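For example, something like this (a sketch; the -p flag is optional and just hides the partition/label entries if the list gets noisy):

Code:
gstat -I 5s        # refresh every 5 seconds instead of the default
gstat -I 5s -p     # same, but only show physical providers (the whole disks)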
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
[Screenshot: gstat output during the scrub]

scrub in progress since Sun Apr 17 00:01:01 2016
1.87T scanned out of 12.7T at 50.9M/s, 62h1m to go
0 repaired, 14.72% done
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
scrub in progress since Sun Apr 17 00:01:01 2016
1.87T scanned out of 12.7T at 50.9M/s, 62h1m to go
0 repaired, 14.72% done

Looks like random access. The drives are doing 110-ish IOPS while being fairly busy (60%), but only returning 7 MB/sec.
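Rough arithmetic behind that, assuming an average I/O size of around 64 KiB (my guess from the numbers above): 110 IOPS x 64 KiB ≈ 7 MiB/s. That's why the disks can be 60% busy and still deliver so little throughput - the scrub is seeking all over the pool rather than streaming.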

Let it run and don't worry about it. Only really needs scrubbing every 2 weeks or so.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
I think you are correct - the schedule is weekly, which is probably too often, but it finished in about 21 hours.
I think I will change the scrub to every two weeks.
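For reference, a scrub can also be started and watched from the shell if I want to test outside the schedule (the pool here is called Storage):

Code:
zpool scrub Storage       # start a scrub by hand
zpool status Storage      # shows scan progress, speed and estimated time remaining
zpool scrub -s Storage    # stop a running scrub if it gets in the way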
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I think scrubbing every week is a bit excessive and would opt for once a month.

Out of curiosity, do you have anything set for APM and Acoustic Level other than Disabled?
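If you want to double-check what the drive firmware itself reports, camcontrol can show it (a sketch; ada0 is just an example device name):

Code:
camcontrol identify ada0 | grep -i "power management"   # shows whether advanced power management is supported/enabled
camcontrol identify ada0 | grep -i "acoustic"           # shows the automatic acoustic management state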
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
[Screenshot: drive options showing the APM and Acoustic Level settings]


I shall research an appropriate schedule. My original thinking was that, since the discs are not enterprise drives, weekly scrubs would be appropriate - but that was when the scrub was not a problem.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
You have one in the Misc section of my useful scripts thread if you want (link is in my signature) ;)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
FYI, these are the results of my scrub for comparison. Note that I too have a six-drive RAIDZ2 pool; the differences are that my drives are 2TB, my pool is only half full, and I'm running on ESXi, so I don't believe my system is as fast as it would be if I were not using ESXi. Additionally, the IOPS were a bit faster at the very beginning of the scrub, but I didn't snap a screenshot of that. The data below is from near the end of my scrub.

[Screenshots: gstat output and disk activity near the end of the scrub]


Lastly, looking at the gstat output for your system, it sure looks like it was taking a long time to read and write your data.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
scan: scrub in progress since Sun Apr 17 00:01:01 2016
3.02T scanned out of 12.7T at 45.7M/s, 61h38m to go
0 repaired, 23.81% done

I think I will move everything off of this pool, destroy it, and try again. I will do some experiments and try to get to the bottom of my problems.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Good idea. I think something is up too, but I'm not quite sure what it is either. I'm sure you will post your results when you get there.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Okay, I have nearly moved everything off Storage:

Code:
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Storage   16.2T   275G  16.0T         -     3%     1%  1.00x  ONLINE  /mnt

I turned compression off - I read that it skews the tests.
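For anyone following along, this is roughly what I did (a sketch; lz4 is what I assume the dataset was set to before):

Code:
zfs set compression=off Storage    # disable compression on the pool's root dataset before the dd tests
zfs get compression Storage        # confirm the setting took
zfs set compression=lz4 Storage    # turn it back on again once the tests are done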

[root@freenas] /mnt/Storage# dd if=/dev/zero of=testfile bs=1024000 count=50000
50000+0 records in
50000+0 records out
51200000000 bytes transferred in 92.741346 secs (552073075 bytes/sec)

[root@freenas] /mnt/Storage# dd if=testfile of=/dev/zero bs=1024000 count=50000
50000+0 records in
50000+0 records out
51200000000 bytes transferred in 94.608408 secs (541178116 bytes/sec)

The first test is the write (roughly 526 MiB/s) and the second the read (roughly 516 MiB/s).

These seem good to me. Of course, this is a sequential read and write test - things will change when I add my files again. Does anybody have any comments? Any other tests I should try?
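One more test that might be worth trying is a raw sequential read from each disk individually, to rule out a single slow drive (a sketch; the ada0-ada5 device names are examples, substitute your own, and it only reads from the disks):

Code:
for d in ada0 ada1 ada2 ada3 ada4 ada5; do
    echo "== $d =="
    dd if=/dev/$d of=/dev/null bs=1m count=10000    # ~10 GiB sequential read straight off the raw disk
done

If one drive comes back much slower than the others, that would point at the hardware rather than the pool layout.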
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Here are my results for comparison since we both have a six drive RAIDZ2 setup.

With Compression On (Normal)
Code:
[root@freenas] ~# dd if=/dev/zero of=testfile bs=1024000 count=50000
50000+0 records in
50000+0 records out
51200000000 bytes transferred in 12.545858 secs (4081028207 bytes/sec)
[root@freenas] ~# dd if=testfile of=/dev/zero bs=1024000 count=50000
50000+0 records in
50000+0 records out
51200000000 bytes transferred in 4.000649 secs (12797922853 bytes/sec)

With Compression Off
Code:
[root@freenas] /mnt/farm/Test# dd if=/dev/zero of=testfile bs=1024000 count=50000
50000+0 records in
50000+0 records out
51200000000 bytes transferred in 147.276916 secs (347644433 bytes/sec)
[root@freenas] /mnt/farm/Test# dd if=testfile of=/dev/zero bs=1024000 count=50000
50000+0 records in
50000+0 records out
51200000000 bytes transferred in 124.995366 secs (409615185 bytes/sec)


EDIT: So my system is a bit slower than yours but then again my FreeNAS is on top of ESXi.
 

ethereal

Guru
Joined
Sep 10, 2012
Messages
762
Thank you very much for your comparisons - they have helped me decide that everything is normal with the HDDs/pool.

I will see what my scrub is like on Sunday, and I'll definitely change the schedule to at least every 2 weeks - possibly monthly.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
No problem. Hope it works much faster when you are done.
 
Joined
Jul 13, 2013
Messages
286
AV drives operate much differently from normal hard drives in that they don't handle errors the same way a normal hard drive does. An AV drive will try to read your data, but after a few tries it gives up and moves on to the next piece of data.

But that's exactly what you want in a NAS! The desktop disk behavior, of trying forever to read a failed block, is what causes trouble in a NAS.
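If you're curious how a particular drive is configured, smartmontools can usually report the error recovery timeout, sometimes called TLER or ERC (a sketch; not every drive supports the command, and ada0 is just an example):

Code:
smartctl -l scterc /dev/ada0          # report the SCT error recovery control read/write timeouts
smartctl -l scterc,70,70 /dev/ada0    # example: set both timeouts to 7.0 seconds (units are 100 ms)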
 