NAS is really slow.


morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
This is all I see of concern from the SMART results (please correct me if I am wrong).
/dev/ada6
Code:
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 1
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 1
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 2
I saw those too.

The LCC counts on the 2 Greens are quite high.
Yes, I am aware of this. The Green disks are really old, and my plan was initially to just let them run until they died and then replace them with Reds.
But if this is what's slowing everything down, it might be worth it to just replace them right now.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Yes, I am aware of this. The Green disks are really old, and my plan was initially to just let them run until they died and then replace them with Reds.
But if this is what's slowing everything down, it might be worth it to just replace them right now.
Yeah, would be a good idea to get them replaced.

Gotta run for a bit, "Honey Do" stuff... Will try to check back later to see how things are going. In the meantime, search the forums for "arc_summary.py" and you should find plenty of info on how to interpret its output.
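If you want a head start, the script should already be on the box and can be run straight from the shell; the path below is from memory and may differ on your build:
Code:
# print ARC statistics; the hit ratio and ARC size sections are the parts worth reading
python /usr/local/www/freenasUI/tools/arc_summary.py | less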
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
*I* think the storage issue (87% full) is your #1 problem. Another thought, how fragmented is your pool?

Upgrading to, say, 8x6TB drives might be cost-prohibitive for you. If you were to upgrade ALL the disks in your pool, it would expand automatically.

How big is your Lightroom library? Perhaps you could create a second pool (mirrored or 3-way mirror) and move your images to it, to free up space on the main pool.
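For reference, creating a second mirrored pool from the shell looks roughly like this (only a sketch; the pool name and device names are made up, and on FreeNAS you would normally do this through the Volume Manager in the GUI):
Code:
# two-way mirror from two spare disks (hypothetical device names)
zpool create fastpool mirror /dev/ada8 /dev/ada9

# or a three-way mirror for extra redundancy
zpool create fastpool mirror /dev/ada8 /dev/ada9 /dev/ada10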
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
*I* think the storage issue (87% full) is your #1 problem. Another thought, how fragmented is your pool?

Upgrading to, say, 8x6TB drives might be cost-prohibitive for you. If you were to upgrade ALL the disks in your pool, it would expand automatically.

How big is your Lightroom library? Perhaps you could create a second pool (mirrored or 3-way mirror) and move your images to it, to free up space on the main pool.

How do I check the fragmentation?

I have about 8,000 pictures/videos (220 GB total), so it's not that much. But I've been thinking of moving it locally instead, because I think Lightroom requires some pretty extreme speeds. I would probably need a NAS with SSDs and a 10 Gbps connection if I want it on the network.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Do a zpool list from the command line.

How do I check the fragmentation?

Obviously, you could create an SSD pool on FreeNAS for your pictures, etc. But at 220 GB it won't free up much space. I just *thought* it might be a way of potentially solving two issues: speeding up Lightroom and freeing up resources.
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
Do a zpool list from the command line.



Obviously, you could create an SSD pool on FreeNAS for your pictures, etc. But at 220 GB it won't free up much space. I just *thought* it might be a way of potentially solving two issues: speeding up Lightroom and freeing up resources.
I believe the fragmentation is 13%? I don't know what that means though.

Code:
[root@PandorasBox] ~# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  3.72G   945M  2.80G         -      -    24%  1.00x  ONLINE  -
pandora_vol0  21.8T  18.5T  3.29T         -    13%    84%  1.00x  ONLINE  /mnt
[root@PandorasBox] ~#
 

mjt5282

Contributor
Joined
Mar 19, 2013
Messages
139
Do you have snapshots enabled on your pool? If so, how many snaps are saved for your pool?
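If the GUI page is awkward to read, something like this from the shell should count them (a generic ZFS command, nothing FreeNAS-specific):
Code:
# list every snapshot by name (-H drops the header line) and count them
zfs list -H -t snapshot -o name | wc -l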
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
Do you have snapshots enabled on your pool? If so, how many snaps are saved for your pool?
Tbh, I am not very familiar with snapshots. I never understood how they work or where they are stored, but I do have snapshots enabled.
Here is a picture showing it:
[screenshot of the Snapshots page]
 

mjt5282

Contributor
Joined
Mar 19, 2013
Messages
139
At the bottom of that page you posted, it will say 1-10 of N items. What is the value of N?
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
At the bottom of that page you posted, it will say 1-10 of N items. What is the value of N?
No, the picture shows all the snapshots. There were only 3 of them.
Although now there are 4. It created another one last night:
[screenshot of the snapshot list]
 

Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
I would blame the pool usage as well. 87% is really cutting it fine. I have heard (don't quote me on this) that a few people couldn't delete files after exceeding a certain percentage of capacity (I think it was 90%). My first recommendation: don't write anything more onto the system. Make it your golden rule until you have a secondary strategy. I am myself hovering around the 78% mark on an 8x4TB RAIDZ2 system. I know it's tough, but it's totally worth it.
From here you have 3 options. I have thought long and hard about this myself, and these are the only ways out.
1) The simplest and cheapest is to delete. Yup, sorry to break it to you, but that is the best thing you can do right now. Downgrade a few movie files from Blu-ray to 480p (especially the comedy flicks) and you will save GBs with each file. Use MKV when ripping; that will save another few GBs per file.
2) Plan expansion. Plan on moving to a bigger case, buy a few HBA cards and expanders, and slap in a few new hard disks. To me this is the optimum proposition for the money spent. The Norco 4224 and Supermicro both have 24-bay 4U enclosures that are cheap and sturdy. I know people will have a lot of comments about the Norco being bad, but I am just handing you more than one option. From there I would recommend remapping your FreeNAS to have 4 vdevs of 6 HDDs each in RAIDZ2 for ultimate protection. Just my thoughts, though. Also be very careful when adding vdevs. They are like your wife: a bad choice can lead to a lifetime of trouble, including a complete (data) loss.
3) Plan replacement. Start by pulling out one of your disks (3TB in your case), replace it with a 6TB (good capacity) or a 4TB (good GB/$), and resilver. Repeat the process 8 times and you will have upgraded your pool size. Please note: the size will only expand once you have replaced the 8th hard disk in the pool. With this you keep the same hardware (i.e. case, PSU, mobo, RAM), just with 8 new hard disks.
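For reference, option 3 looks roughly like this from the shell (just a sketch with example device names; on FreeNAS you would normally do the replacement through the GUI, and pandora_vol0 is the pool name from your zpool list output above):
Code:
# make sure the pool can grow once the last disk has been swapped
zpool set autoexpand=on pandora_vol0

# replace one disk at a time and let each resilver finish before the next
zpool replace pandora_vol0 ada0 ada8   # old disk, new disk (example names)
zpool status pandora_vol0              # watch the resilver progress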

I am honestly considering the second option, but I am having some heating issues as well. The disks are reaching around 40-42°C during resilvering/scrubbing, since the case isn't in a cooled environment. I live in a tropical city with the sea on one side, so temperatures never go down. I can't afford to keep the AC running 24x7, and honestly I am not sure what to do to keep the system temps low. I have had to resilver 3 disks in under a year, mostly due to pending sectors on WD Reds. Putting them in a loud 4U case would stack more hard disks together and thus make them run hotter. So I am currently on the first option.
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
I would blame the pool usage as well. 87% is really cutting it fine. I have heard (don't quote me on this) that a few people couldn't delete files after exceeding a certain percentage of capacity (I think it was 90%). My first recommendation: don't write anything more onto the system. Make it your golden rule until you have a secondary strategy. I am myself hovering around the 78% mark on an 8x4TB RAIDZ2 system. I know it's tough, but it's totally worth it.
From here you have 3 options. I have thought long and hard about this myself, and these are the only ways out.
1) The simplest and cheapest is to delete. Yup, sorry to break it to you, but that is the best thing you can do right now. Downgrade a few movie files from Blu-ray to 480p (especially the comedy flicks) and you will save GBs with each file. Use MKV when ripping; that will save another few GBs per file.
2) Plan expansion. Plan on moving to a bigger case, buy a few HBA cards and expanders, and slap in a few new hard disks. To me this is the optimum proposition for the money spent. The Norco 4224 and Supermicro both have 24-bay 4U enclosures that are cheap and sturdy. I know people will have a lot of comments about the Norco being bad, but I am just handing you more than one option. From there I would recommend remapping your FreeNAS to have 4 vdevs of 6 HDDs each in RAIDZ2 for ultimate protection. Just my thoughts, though. Also be very careful when adding vdevs. They are like your wife: a bad choice can lead to a lifetime of trouble, including a complete (data) loss.
3) Plan replacement. Start by pulling out one of your disks (3TB in your case), replace it with a 6TB (good capacity) or a 4TB (good GB/$), and resilver. Repeat the process 8 times and you will have upgraded your pool size. Please note: the size will only expand once you have replaced the 8th hard disk in the pool. With this you keep the same hardware (i.e. case, PSU, mobo, RAM), just with 8 new hard disks.

I am honestly considering the second option, but I am having some heating issues as well. The disks are reaching around 40-42°C during resilvering/scrubbing, since the case isn't in a cooled environment. I live in a tropical city with the sea on one side, so temperatures never go down. I can't afford to keep the AC running 24x7, and honestly I am not sure what to do to keep the system temps low. I have had to resilver 3 disks in under a year, mostly due to pending sectors on WD Reds. Putting them in a loud 4U case would stack more hard disks together and thus make them run hotter. So I am currently on the first option.

Yes, I did some research yesterday on this whole don't-fill-it-all-the-way theory, and yes, there were some cases where users couldn't delete files, but that was when there were only a couple of bytes left in storage, i.e. not even 0.01% free space.
But anyway, I have considered all your opinions and am sitting here right now deleting stuff. So far I've deleted about 1TB, so I'm doing pretty well.

I also fixed the overheating problem yesterday. I opened up the case and placed a huge table fan next to it; now temps aren't going above 35°C on any of the drives. It is a bit loud, but it's manageable.
 

Bhoot

Patron
Joined
Mar 28, 2015
Messages
241
What case are you using? And are the temps you mentioned (35°C) during load conditions (scrubs)? I think it's time for me to get a table fan too, then. :P
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
What case are you using? And are the temps you mentioned (35°C) during load conditions (scrubs)? I think it's time for me to get a table fan too, then. :P
I'm using a Silverstone DS380. According to reviews it should have pretty good temps, but that's not my experience. As mentioned earlier, my HDDs stayed in the 40-50°C range, and that was with three regular 120mm fans.
Now with the case open and the table fan blowing through it, the temps are at about 32-35°C. And yes, that is during load (a scrub).
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
As morxy49 said, the issue occurs when someone completely fills their server.

One can generally resolve the problem, but it's not intuitive. And, being 100% full, performance tanks, so trying to clean it up can take a while.
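The workaround usually suggested is to free some blocks without doing a normal delete, for example by truncating a big file in place or destroying an old snapshot. Roughly (the path and snapshot name below are just placeholders):
Code:
# shrink a large file to zero length first (this frees its blocks), then remove it
truncate -s 0 /mnt/pandora_vol0/some_large_file
rm /mnt/pandora_vol0/some_large_file

# or reclaim space by destroying an old snapshot
zfs destroy pandora_vol0/somedataset@old-snapshot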

I have heard (don't quote me on this) that a few people couldn't delete files after exceeding a certain percentage of capacity (I think it was 90%).
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
Okay, so here's an update. I cleared up some space in the pool, so now I have 3TB (20%) of free space.
It is a bit faster, at least when moving files to and from the NAS. But there are still some things left to fix; for example, music playback is still laggy, and I've noticed it's not only in foobar2000 but also in VLC.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can you provide me with a debug file from your system? System -> Advanced -> Save Debug.

You *can* post it here, but it may contain personally identifiable info, so I wouldn't make it your first option. Please PM me the debug.
 

morxy49

Contributor
Joined
Jan 12, 2014
Messages
145
Can you provide me with a debug file from your system? System -> Advanced -> Save Debug.

You *can* post it here, but it may contain personally identifiable info, so I wouldn't make it your first option. Please PM me the debug.
Done!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thanks for the debug. Things I saw wrong or potentially "out of place" you should consider fixing:

1. Upgrade the OS. You're on FreeNAS-9.3-STABLE-201412090314 , which is hella-old.
2. Some of your networking stuff looks fine, some of it seems "not quite right", which I could attribute to potential bugs in a FreeNAS build as old as yours. So I'd do #1 for two reasons. ;)
3. ada6 isn't 100% healthy. Not terrible, but it is likely starting to head downhill. It's failing SMART tests like crazy though, and has been for a while.
4. You should disable the hostname lookups for CIFS. It's not working on your network (nothing wrong with that, my network doesn't either).
5. Your log.smbd file has a crapload of errors and other very bad behavior. If you have SMB max protocol set to SMB3 I'd set it to SMB2. If you have any auxiliary parameters for CIFS/Samba set, I would remove them.
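On point 3, you can confirm the state of ada6 yourself from the shell with smartctl, something like:
Code:
# full SMART report for the drive: attributes, error log, and self-test history
smartctl -a /dev/ada6

# or just the self-test log, to see the failing tests
smartctl -l selftest /dev/ada6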

I get the impression that there are 3 possibilities:

1. You've done some reading and found some "tweaks" that are supposed to make things better on the FreeNAS, but they aren't helping (and most likely hurting).
2. Your desktops have some kind of "tweaks" that are supposed to make CIFS faster/better, but they're creating problems of their own.
3. You've got some kind of hardware issue that is not making itself immediately obvious.

Keep in mind that I can saturate 1Gb LAN (and do about 350MB/sec on 10Gb) using the default settings on FreeNAS as well as on my desktop. You shouldn't need to do tweaks and other things to saturate 1Gb. Your hardware was chosen pretty well, so I don't think this is an issue of you spec'ing out a system that is inadequate. I'd expect that your hardware should be able to saturate 2x1Gb LAN without breaking a sweat. Of course, that's not what you are actually seeing.

If you look at the debug file you sent me and grab the file ixdiagnose\log\samba4\log.smbd, you'll see all the errors I'm talking about. I've never seen them before, but here are a few:


Code:
[2016/03/23 00:15:25.856414, 0] ../source3/smbd/oplock.c:335(oplock_timeout_handler)
Oplock break failed for file Foto & Video/Canon EOS 600D/2013-04-04 Sälen, Lindvallen/IMG_3368.JPG -- replying anyway

As an option (if you are pretty sure you didn't do #1 or #2), you could try this: https://forums.freenas.org/index.php?threads/cifs-directory-browsing-slow-try-this.27751/ I wouldn't necessarily expect it to fix the issue, but it's worth a try since it's easy to implement.

Also, you could make sure you don't have things like "green mode" enabled on your desktop NICs, try disabling (or uninstalling) any firewalls and antivirus on your desktop, and make sure you aren't using Wi-Fi by accident.
 