NAS is really slow.


morxy49
Thanks for the debug. Here are the things I saw wrong or potentially "out of place" that you should consider fixing:

1. Upgrade the OS. You're on FreeNAS-9.3-STABLE-201412090314, which is hella old.
2. Some of your networking stuff looks fine, and some of it seems "not quite right", which I'd attribute to potential bugs in a FreeNAS build as old as yours. So I'd do #1 for two reasons. ;)
3. ada6 isn't 100% healthy. Not terrible, but it is likely starting to head downhill. It's failing SMART tests like crazy though, and has been for a while.
4. You should disable the hostname lookups for CIFS. It's not working on your network (nothing wrong with that, my network doesn't either).
5. Your log.smbd file has a crapload of errors and other very bad behavior. If you have SMB max protocol set to SMB3, I'd set it to SMB2 (see the sketch just below this list). If you have any auxiliary parameters for CIFS/Samba set, I would remove them.
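
Roughly, those two settings (the hostname lookups from #4 and the max protocol from #5) end up in the generated smb.conf as something like the lines below; change them from Services -> CIFS in the GUI rather than editing the file, since FreeNAS regenerates it:

Code:
[global]
    # cap SMB negotiation at SMB2 instead of SMB3
    server max protocol = SMB2
    # skip reverse-DNS lookups of client addresses
    hostname lookups = no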

I get the impression that there are three possibilities:

1. You've done some reading and found some "tweaks" that are supposed to make things better on FreeNAS, but they aren't helping (and are most likely hurting).
2. Your desktops have some kind of "tweaks" that are supposed to make CIFS faster/better, but they're creating problems of their own.
3. You've got some kind of hardware issue that is not making itself immediately obvious.

Keep in mind that I can saturate 1Gb LAN (and do about 350MB/sec on 10Gb) using the default settings on FreeNAS as well as on my desktop. You shouldn't need to do tweaks and other things to saturate 1Gb. Your hardware was chosen pretty well, so I don't think this is an issue of you spec'ing out a system that is inadequate. I'd expect that your hardware should be able to saturate 2x1Gb LAN without breaking a sweat. Of course, that's not what you are actually seeing.
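
If you want to rule the network in or out before touching anything else, a raw TCP test with iperf takes the disks and SMB out of the picture entirely (assuming iperf is available on both ends; the address below is a placeholder for the NAS IP):

Code:
# on the NAS
iperf -s

# on the desktop, against the NAS address, for 30 seconds
iperf -c 192.168.1.100 -t 30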

If you look at the debug file you sent me and open the file at ixdiagnose\log\samba4\log.smbd, you'll see all the errors I'm talking about. I've never seen them before, but here's an example:


Code:
[2016/03/23 00:15:25.856414, 0] ../source3/smbd/oplock.c:335(oplock_timeout_handler)
Oplock break failed for file Foto & Video/Canon EOS 600D/2013-04-04 Sälen, Lindvallen/IMG_3368.JPG -- replying anyway

As an option (if you're pretty sure you didn't do #1 or #2), you could try this: https://forums.freenas.org/index.php?threads/cifs-directory-browsing-slow-try-this.27751/ (I wouldn't necessarily expect it to fix the issue, but it's worth a try since it's easy to implement).

Also, make sure you don't have things like "green mode" enabled on your desktop NICs, try disabling (or uninstalling) any firewalls and antivirus on your desktop, and make sure you aren't on Wi-Fi by accident.

1. OK, it's upgraded to FreeNAS-9.3-STABLE-201602031011 now. I think I'm going to hold off on 9.10 until I'm sure there are no serious bugs in it.
2. Yup, it's updated now.
3. Yeah, I'm well aware of this. Again, my plan was to let it run until it died, but if you think it may have a negative impact on performance, I'll probably replace it right away.
4. Done!
5. It was set to SMB3. There were _a lot_ of sub-alternatives for SMB2, but I took the one that said just SMB2. Hope that was correct. No auxiliary parameters were set.

About the three possibilities:
1. Nope, I haven't done that.
2. No, not that I'm aware of, anyway. I'm using Windows 10 x64, if that means anything.
3. I hope not. Well, it might be the failing drives, so yeah, I should consider replacing them.


That error message, by the way... it's really random, since I haven't accessed that file in forever. I have no idea why it's giving errors.

I followed your guide as well and added the auxiliary parameters for CIFS:
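# ea support = no: disable extended attribute support for the share
# store dos attributes = no: don't store DOS attributes in extended attributes
# map archive/hidden/readonly/system = no: don't map the DOS attribute bits
#   onto Unix permission bits
# (skipping this per-file attribute work is what makes directory listings faster)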
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no

And holy crap, those parameters were golden! Well, at least for browsing. I'm going to see if it helps with music and the other things as well.
But thank you! Geez... it made a real difference while browsing. Everything is instant now! I can't believe I haven't done this before.


EDIT: I see now that after the update this came up. The first two are expected, but the third one seems a bit scary.
[screenshot: the three alerts shown after the update]

This is the error message:
Code:
[root@PandorasBox] /data# vi update.failed
ps: cannot mmap corefile
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
Running migrations for api:
- Nothing to migrate.
 - Loading initial data for api.
Installed 0 object(s) from 0 fixture(s)
Running migrations for freeadmin:
- Nothing to migrate.
 - Loading initial data for freeadmin.
Installed 0 object(s) from 0 fixture(s)
Running migrations for vcp:
 - Migrating forwards to 0002_auto__add_vcenterconfiguration.
 > vcp:0001_initial
 - Migration 'vcp:0001_initial' is marked for no-dry-run.
 > vcp:0002_auto__add_vcenterconfiguration
 - Loading initial data for vcp.
Installed 0 object(s) from 0 fixture(s)
Running migrations for jails:
 - Migrating forwards to 0033_add_mtree.
 > jails:0031_jc_collectionurl_to_9_3
 - Migration 'jails:0031_jc_collectionurl_to_9_3' is marked for no-dry-run.
 > jails:0032_auto__add_field_jailtemplate_jt_mtree
 > jails:0033_add_mtree
 - Migration 'jails:0033_add_mtree' is marked for no-dry-run.
 - Loading initial data for jails.
Installed 0 object(s) from 0 fixture(s)
Running migrations for support:
 - Migrating forwards to 0003_auto__add_field_support_support_email.
 > support:0003_auto__add_field_support_support_email
 - Loading initial data for support.
Installed 0 object(s) from 0 fixture(s)
Running migrations for plugins:
- Nothing to migrate.
 - Loading initial data for plugins.
Installed 0 object(s) from 0 fixture(s)
Running migrations for directoryservice:
 - Migrating forwards to 0056_migrate_ldap_netbiosname.
 > directoryservice:0041_auto__add_field_ldap_ldap_schema
 > directoryservice:0042_auto__add_kerberossettings
 > directoryservice:0043_auto__chg_field_ldap_ldap_binddn
 > directoryservice:0044_auto__add_field_idmap_rfc2307_idmap_rfc2307_ssl__add_field_idmap_rfc23
 > directoryservice:0045_auto__add_field_activedirectory_ad_netbiosname_b
 > directoryservice:0045_auto__add_field_idmap_rfc2307_idmap_rfc2307_ldap_user_dn_password
 > directoryservice:0046_auto__add_kerberosprincipal
 > directoryservice:0047_migrate_kerberos_keytabs_to_principals
 - Migration 'directoryservice:0047_migrate_kerberos_keytabs_to_principals' is marked for no-dry-run.
 > directoryservice:0048_auto__add_field_activedirectory_ad_kerberos_principal__add_field_ldap_
 > directoryservice:0049_populate_kerberos_principals
 - Migration 'directoryservice:0049_populate_kerberos_principals' is marked for no-dry-run.
 > directoryservice:0050_auto__del_field_activedirectory_ad_kerberos_keytab__del_field_ldap_lda
 > directoryservice:0051_auto__del_field_kerberoskeytab_keytab_principal
 > directoryservice:0052_change_ad_timeout_defaults
 - Migration 'directoryservice:0052_change_ad_timeout_defaults' is marked for no-dry-run.
 > directoryservice:0053_auto__del_field_activedirectory_ad_netbiosname__add_field_activedirect
 > directoryservice:0054_auto__add_field_activedirectory_ad_allow_dns_updates
 > directoryservice:0055_auto__add_field_ldap_ldap_netbiosname_a__add_field_ldap_ldap_netbiosna
 > directoryservice:0056_migrate_ldap_netbiosname
 - Migration 'directoryservice:0056_migrate_ldap_netbiosname' is marked for no-dry-run.
 - Loading initial data for directoryservice.
Installed 0 object(s) from 0 fixture(s)
Running migrations for sharing:
 - Migrating forwards to 0034_fix_wizard_cifs_vfsobjects.
 > sharing:0032_auto__add_field_cifs_share_cifs_storage_task
 > sharing:0033_add_periodic_snapshot_task
 - Migration 'sharing:0033_add_periodic_snapshot_task' is marked for no-dry-run.
 > sharing:0034_fix_wizard_cifs_vfsobjects
 - Migration 'sharing:0034_fix_wizard_cifs_vfsobjects' is marked for no-dry-run.
 - Loading initial data for sharing.
Installed 0 object(s) from 0 fixture(s)
Running migrations for account:
 - Migrating forwards to 0023_auto__add_field_bsdusers_bsdusr_microsoft_account.
 > account:0023_auto__add_field_bsdusers_bsdusr_microsoft_account
 - Loading initial data for account.
Installed 0 object(s) from 0 fixture(s)
Running migrations for network:
 - Migrating forwards to 0018_auto__add_field_alias_alias_vip__add_field_alias_alias_v4address_b__ad.
update.failed: unmodified: line 1

 

cyberjock
Yep, everything you did sounds fine. That failing/failed disk could be creating problems, but it looks like it's got a single bad sector. Monitor it, and if worse comes to worst, just offline that disk. If your performance issues go away, it's pretty clear the disk was to blame (at least partly).
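
For reference, offlining a disk can be done from the GUI under Storage -> Volume Status, or from the shell with something like the following (the pool name and gptid are placeholders you would take from zpool status):

Code:
zpool status                      # note the gptid of the failing disk (ada6)
zpool offline tank gptid/xxxx     # 'tank' and 'gptid/xxxx' are placeholders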

That means something went wrong with the upgrade, and that file will tell you what went wrong. Can you SSH in and paste the output?
 

morxy49
Yep, everything you did sounds fine. That failing/failed disk could be creating problems, but it looks like it's got a single bad sector. Monitor it, and if worse comes to worst, just offline that disk. If your performance issues go away, it's pretty clear the disk was to blame (at least partly).

That means something went wrong with the upgrade, and that file will tell you what went wrong. Can you SSH in and paste the output?

Yes, it's in the code tags in my post.
 

cyberjock
Okay, I have no idea what went wrong. I would put in a bug ticket and provide the contents of the file in the bug ticket.
 

morxy49
Okay, I have no idea what went wrong. I would put in a bug ticket and provide the contents of the file in the bug ticket.

Hmm. Maybe I should try updating to 9.10 instead then, as they won't be fixing bugs in 9.3 anymore anyway.
 

cyberjock
I would put in the ticket with the info you have. I'm guessing there's some structure that is missing or extraneous in your database causing that issue. I would definitely get a dev's opinion before going to 9.10. More than likely they'll give you a command or two to fix the issue or ask for your config file so they can fix it for you.
 

morxy49
I would put in the ticket with the info you have. I'm guessing there's some structure that is missing or extraneous in your database causing that issue. I would definitely get a dev's opinion before going to 9.10. More than likely they'll give you a command or two to fix the issue or ask for your config file so they can fix it for you.
I can't seem to find where to create a new ticket... :/
Do you have a link?
 

morxy49
I thought about starting a new thread, but I'll just post here again, since it's related.

I have replaced ada6, the Green drive that gave me errors. I'm planning on replacing the other Green drive later tonight, after which I will only have Reds in the pool.

I did some testing just now to see if this sped things up.
Code:
# dd if=/dev/zero of=testfile bs=1024 count=50000
50000+0 records in
50000+0 records out
51200000 bytes transferred in 1.009274 secs (50729534 bytes/sec)

# dd if=testfile of=/dev/zero bs=1024 count=50000
50000+0 records in
50000+0 records out
51200000 bytes transferred in 0.421243 secs (121545060 bytes/sec)


If I interpret this correctly, I have a write speed of 48 MB/s and a read speed of 116 MB/s. That's not fast for a RAIDZ2 pool of 8 drives.

What can I do to improve these speeds?
 

cyberjock
I thought about starting a new thread, but I'll just post here again, since it's related.

I have replaced ada6, the Green drive that gave me errors. I'm planning on replacing the other Green drive later tonight, after which I will only have Reds in the pool.

I did some testing just now to see if this sped things up.
Code:
# dd if=/dev/zero of=testfile bs=1024 count=50000
50000+0 records in
50000+0 records out
51200000 bytes transferred in 1.009274 secs (50729534 bytes/sec)

# dd if=testfile of=/dev/zero bs=1024 count=50000
50000+0 records in
50000+0 records out
51200000 bytes transferred in 0.421243 secs (121545060 bytes/sec)


If I interpret this correctly, I have a write speed of 48 MB/s and a read speed of 116 MB/s. That's not fast for a RAIDZ2 pool of 8 drives.

What can I do to improve these speeds?

First, don't do a block size of 1024. Do bs=1M
Second, testing with "just" 51MB of test data isn't good. Try doing something like 100GB.
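
A sketch of that test (the path is a placeholder; also note that if the dataset has lz4 compression enabled, writing zeroes will give inflated numbers):

Code:
# write ~100 GB of zeroes with a 1 MiB block size
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=100000

# read it back; /dev/null is the usual sink for a read test
dd if=/mnt/tank/testfile of=/dev/null bs=1M

# clean up afterwards
rm /mnt/tank/testfile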
 

ttabbal
If I interpret this correctly, I have a write speed of 48 MB/s and a read speed of 116 MB/s. That's not fast for a RAIDZ2 pool of 8 drives.


How do you figure? RAIDZ* needs to read/write every disk in the vdev, so it's roughly as fast as the slowest disk, with some overhead. Those speeds aren't bad for a Green, especially since there are errors in the SMART log...

If you want more performance, replace any slow disks. Then add another vdev to spread the load out. That requires more disks, though, so price/space/power come into play...
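
For what it's worth, adding a second RAIDZ2 vdev would look roughly like this (the pool and disk names are placeholders; on FreeNAS you'd normally use the Volume Manager in the GUI, and keep in mind a vdev can't be removed from a pool once added):

Code:
# extend the pool with a second 8-disk RAIDZ2 vdev (names are illustrative)
zpool add tank raidz2 gptid/disk9  gptid/disk10 gptid/disk11 gptid/disk12 \
                      gptid/disk13 gptid/disk14 gptid/disk15 gptid/disk16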
 

morxy49
First, don't do a block size of 1024. Do bs=1M
Second, testing with "just" 51MB of test data isn't good. Try doing something like 100GB.

Okay, so now I have replaced the last Green drive, and I now have only 8 Reds in a RAIDZ2 pool.
I don't know if it's because of this or because of changing to bs=1M, but the speeds improved a lot.
Code:
dd if=/dev/zero of=testfile bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 161.200636 secs (325239412 bytes/sec)

dd if=testfile of=/dev/zero bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 187.589146 secs (279487386 bytes/sec)

If I interpret this correctly, I have a write speed of 310 MB/s and a read speed of 266 MB/s. That's a huge improvement, but isn't it kind of weird to have a faster write speed than read speed?
 

Bidule0hm
I don't know if it's because of this or because of changing to bs=1M, but the speeds improved a lot.

Probably mostly because of the change from 1k to 1M.

That's a huge improvement, but isn't it kind of weird to have a faster write speed than read speed?

No, we often see that when members do the same tests you did; writes get buffered in RAM and flushed asynchronously, while a read of a file larger than RAM has to come from the disks.
 

morxy49
Probably mostly because of the change from 1k to 1M.

No, we often see that when members do the same tests you did.

Actually, the real-world speeds didn't improve that much. Copying from my PC to the NAS is still really slow, at around 40 MB/s.
 

Bidule0hm
What are the NICs you use on the NAS and the PC?
 

SweetAndLow
igb0 on an ASRock C2550D4i, which I believe would be an Intel i210 Gigabit LAN.
What is the NIC on your client? And what are the other specs on the client? Where is your source file coming from; is that the bottleneck?
 

morxy49
What is the NIC on your client? And what are the other specs on the client? Where is your source file coming from; is that the bottleneck?

I'm not sure about the NIC on my client, but it's an Intel Gigabit at least.
Client specs: Asus P8P67 Pro Rev B3, i5 2500K @ 4.5 GHz, 16 GB RAM, 512 GB SSD.

I tried copying a 9 GB .iso file from my PC (SSD) to the NAS and got speeds varying from 90-110 MB/s, which is good.
However, right now I'm copying a bunch of .mkv files (~1 GB each, ~200 GB total) from dataset A on my NAS to dataset B on my NAS, through my PC, and that gives me a speed of ~40 MB/s.
 

pirateghost
I'm not sure about the NIC on my client, but it's an Intel Gigabit at least.
Client specs: Asus P8P67 Pro Rev B3, i5 2500K @ 4.5 GHz, 16 GB RAM, 512 GB SSD.

I tried copying a 9 GB .iso file from my PC (SSD) to the NAS and got speeds varying from 90-110 MB/s, which is good.
However, right now I'm copying a bunch of .mkv files (~1 GB each, ~200 GB total) from dataset A on my NAS to dataset B on my NAS, through my PC, and that gives me a speed of ~40 MB/s.
In this example you are downloading from the NAS and re-uploading it to another dataset on the NAS. Is there some reason you feel it should go as fast as a one-way transfer?
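
If the goal is just to move files between datasets, a copy run on the NAS itself (over SSH) avoids the round trip through the client entirely; the paths here are placeholders:

Code:
# run on the NAS over SSH; the data never crosses the network
cp -Rpv /mnt/tank/datasetA/movies /mnt/tank/datasetB/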
 

morxy49
In this example you are downloading from the NAS and re-uploading it to another dataset on the NAS. Is there some reason you feel it should go as fast as a one-way transfer?
I'm no expert. That's why I'm wondering whether these are normal speeds or not.
If something is bottlenecking, though, it would be the NAS, since the NICs (and cables) can handle 1 Gbps full duplex.
 