SOLVED Scrub suddenly taking far longer

Status
Not open for further replies.

Blues Guy

Explorer
Joined
Dec 1, 2014
Messages
69
So, I just completed the scrub in 5 h 15 m, with an average read rate above 225 MB/s. I attached a screenshot; the system is working better than ever.

I must admit, I didn't think deduplication could be such a bitch. But that was indeed it!

Thanks all for your advice.
 

Attachments

  • scrub.JPG (106.1 KB)

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Dedup is something that can get out of control rapidly, which is why the harsh warnings are included and everyone will add you to an ignore list the second you mention you use dedup.

"Everyone"? What am I, chopped liver?

Dedup is like juggling chainsaws - it's incredibly awesome and wows people when you pull it off, but do it without a lot of prior research and planning, and you're going to have an incredibly messy death. Basically, unless you're running VMware View linked clones, don't.

@Blues Guy Glad I could help you get this sorted. You can always throw a "zfs get all | grep dedup" into a shell prompt to see if there are any other datasets with it lingering, or whether you have it set as a default somewhere that would cause inheritance down the line.
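A slightly more targeted variant of that one-liner, as a sketch — "tank" and "tank/some/dataset" are hypothetical names, substitute your own pool and dataset:

```shell
# Show dedup on every dataset in the pool, plus where each value comes from
zfs get -r -o name,value,source dedup tank

# Only datasets where dedup was set explicitly (local) or received —
# i.e. the places an unwanted setting could be inherited from
zfs get -r -s local,received dedup tank

# Clear a stray local setting so the dataset inherits from its parent again
zfs inherit dedup tank/some/dataset
```

The `source` column is the quick way to spot the misclicked dataset: anything showing "local" was set on that dataset directly rather than inherited.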
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@Blues Guy

See why we tell people what to do and not to do, and get so pissed when they do it anyway? Nothing personal, but we see people take our advice and ignore it, then spend stupendous amounts of time trying to justify our stance later because the end-user is 100% sure that it's not the problem. Many of us were riding this rodeo years ago. We've seen every weird and wacky way of trying to make stupid things slightly less stupid, and it almost always falls flat on its face. I've deliberately done some of the stupidest things you can do on the forum "just to see what would happen". I hate to be the guy who says "do this because I've heard it's bad" without having some first-hand experience.

I hope you've learned that next time you'll just do it the way that is already explained and save yourself the heartache and lost time. ;)

@HoneyBadger

You are chopped Badger, which tastes like chopped liver. ;)
 

Blues Guy

Explorer
Joined
Dec 1, 2014
Messages
69
@cyberjock :
I hope you've learned that next time you'll just do it the way that is already explained and save yourself the heartache and lost time.
You're acting like I deduped over 700GB of data on purpose. I intended to dedup only a fraction of it (as I explained above). I have no idea why the second dataset was configured to dedup; I must have misclicked. So I think a "next time, do it right" can only translate into "don't misclick".

But I'm not running a production system, just a home server that is specifically set up for tinkering and learning new things. So the time is *never* lost.

I really hope that another problem of mine is solved with this as well, but I can only test that when I get home.

However, I learned lots, and thanks for everything. @cyberjock : Also, nothing personal. I really appreciate what you do for the community. You spend lots of time writing documentation and how-tos and helping people. Forums and communities like this would be better places if we had more people like you. I just think you would get a lot more appreciation from the community if you could tone down the "I told you so" and "because I say so" a bit, even if it's exhausting to tell people the same things over and over again. ;-)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sorry, that wasn't meant as a personal attack, if you took it that way. It was just supposed to explain the logical thought process behind how we do things on the forum. We have 5 or 6 regular posters, and FreeNAS is a HUGE project, so trying to teach everyone how everything works is nearly impossible. The project has really taken off in the last 6 months or so, with lots of articles written about it, yet the number of regular posters in the forum hasn't increased in proportion to the size of the community.

It's really a situation where it ends up with a "because we say so", or we'd just close the forums due to so many unanswered questions. There will never be enough manpower to handle everyone trying to do things as a learning experience. We've lost many *amazing* people from the forum because they got tired of the constant "I don't care if you told me to do it, I want you to write me a book and explain why, and I want it for free, and I want it now". Thankfully some of those users leave after they find out they won't get books written for them on their topic. Some hate FreeNAS and never come back, and I'm okay with that if that's how they want to be. Even I became a moderator out of necessity, not because I was here so much that they wanted to promote me. In fact, iXsystems and I didn't have a particularly good relationship when I was made a moderator. Even the other mods sent me a "you have my condolences" PM because of how bad things were.

ZFS is *way* more complicated than people realize, and so we have to manage our resources. Even I'm almost to the point of just saying "screw it" and not answering people anymore; I don't answer posts nearly as often as I used to. Virtually everything there is to say without writing a 100+ page book on how ZFS or FreeNAS works has been documented in our stickies and presentations. If we had 50 regular forum posters, things would be a bit different. But we don't, and we never will, because everyone gets burned out after a year or two (assuming they even last that long). You can only tell people to "buy more RAM", "buy the right hardware", and "don't use that RAID controller" so many times before you decide it's a lost cause and just let people buy the wrong hardware because they couldn't read a sticky. They'll learn their lesson when they have a $1500 system that won't even boot FreeNAS without crashing. Unfortunately, there are probably 3-5 of those threads every day.

I answered your thread because it was a bit more complicated: you deduped only a small amount of data, so the logical thought process seemed reasonable for someone inexperienced with ZFS dedup. On my side of the house, I tried what you did once, "just to see how it would work", and it didn't go well. I actually think I used my music collection, because I had lots of duplicates by accident (I finally cleaned up my collection, though). When you have limited metadata space in RAM, it doesn't take much DDT to fill it. Then you have to decide whether you *really* want to override the metadata sizing (which then limits how much file data you can cache) or whether you should get more RAM. RAM is so inexpensive these days that it's almost always the way to go, except for those poor souls who think they can buy a board with 1 or 2 RAM slots and will never need more than 8 or 16GB of RAM.
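To see why the DDT eats metadata space so quickly, here's a back-of-the-envelope sketch. The numbers are illustrative, not from this thread's pool: it assumes the commonly cited rule of thumb of roughly 320 bytes of RAM per DDT entry, the default 128K recordsize, and the ~700GB figure mentioned above:

```shell
# One DDT entry per unique block; at 128 KiB recordsize,
# 700 GiB of deduped data is about 5.7 million blocks.
entries=$(( 700 * 1024 * 1024 / 128 ))   # 700 GiB expressed in KiB, / 128 KiB
ddt_bytes=$(( entries * 320 ))           # ~320 bytes of RAM per entry
echo "approx. ${entries} DDT entries, ~$(( ddt_bytes / 1024 / 1024 )) MiB of RAM"
```

That's roughly 1.7 GiB of RAM just for the dedup table — which on a small home box can crowd out the rest of the ARC metadata and tank scrub performance, exactly the symptom in this thread.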

I used to do contract work; I'd do almost anything you wanted, including just 3 hours of Q&A for people wanting to learn ZFS. I don't do that anymore, as I no longer have the time or inclination for that kind of work. There are the "FreeNAS lectures", which are free if you want a basic intro to FreeNAS. The lectures are okay; they make a good starter course, but they definitely don't go into serious detail about things like the DDT, metadata space, etc. That level of detail would require something like a week-long class.

Anyway, glad you figured out the problem, got that straightened out, and learned a little something in the process. :)
 