SOLVED: Terrible network performance when reading


SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I read them and I stand by my comment. You need to add more RAM, and if that means a new motherboard, then get a new motherboard. You will not have a fun time when that pool starts to fill up.
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
I don't think more RAM is going to help in this situation. The 1GB per TB is a recommendation for servers that have regular use. This one doesn't. It might get four files read a day.
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
I think I'm going to have to find a way to back up the 33-odd TB of stuff I have and start from scratch.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
It definitely sounds like a memory problem.

When the read speeds are sucking, check gstat. See if the drives are at or near 100% busy but doing very little total kB/sec. That would indicate they're random-read limited at that point.

Given that you're trying to read sequentially, if the drives are being hammered by random reads, then I assume it's reading ZFS metadata. If that's the case, it's probably because it can't keep all of the metadata in memory, so it has to pull it off the disks on demand.
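
A minimal way to watch for that while a copy runs (a sketch only, assuming stock FreeBSD/FreeNAS gstat and that the pool disks are da0 through da15 as in the outputs further down the thread):

# refresh once per second, physical devices only
gstat -p -I 1s

# or restrict output to the data disks (adjust the regex to your device names)
gstat -I 1s -f 'da[0-9]+$'

# being metadata/random-read bound looks like: %busy near 100 while kBps per disk stays tiny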
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Doing a read uses around 5-15 percent of the drives. When the random-access noise kicks in, it can spike up to 70-80 percent busy.
Often the first 5-10 seconds of the copy hammer along, then it flatlines for a bit, then there's a bit more random noise.

It does look like memory, doesn't it... I didn't think 32GB of RAM would limit the amount of metadata it can store. Any way to fix that, apart from more RAM? More RAM literally means a new Socket 2011 motherboard and CPU.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Did you ever test your local read speeds using dd?
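
If not, a rough sketch of the usual test (assuming the pool is mounted at /mnt/Riva; note that with compression enabled, the /dev/zero write numbers will be optimistic):

# sequential write test
dd if=/dev/zero of=/mnt/Riva/testfile bs=1m count=100000

# read the same file back, discarding the data
dd if=/mnt/Riva/testfile of=/dev/null bs=1m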
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Any way to fix that, apart from more RAM?
I seem to remember you have about 32TB of data right now. If so, you could try rebuilding your pool with a single 8x 8TB vdev, which could potentially improve performance. Then later, when you have a system that supports more RAM, you could add a 2nd 8x 8TB vdev to that pool.
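
A purely illustrative sketch of that layout, assuming RAIDZ2, a placeholder pool name of "tank", and raw da device names (FreeNAS would normally build this from the GUI using GPT labels rather than bare devices):

# initial pool: a single 8-disk RAIDZ2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# later, once more RAM is available, grow the pool with a second 8-disk vdev
zpool add tank raidz2 da8 da9 da10 da11 da12 da13 da14 da15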
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Well...

I spun up a FreeNAS instance in a VM to get some default values for CIFS and Samba, and put them back into the system.

And I have done some testing.
Current write performance:
[root@Riva] ~# dd if=/dev/zero of=/mnt/Riva/testfile bs=1m count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 71.059593 secs (1475629055 bytes/sec)

Read perf:
load: 1.31 cmd: dd 6274 [running] 31.86r 0.03u 9.13s 25% 2532k
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 85.654738 secs (1224189140 bytes/sec)




And while it's reading that test file, gstat shows:
dT: 1.001s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
3 993 993 80308 1.7 0 0 0.0 58.6| da0
3 980 980 80580 1.8 0 0 0.0 64.7| da1
3 955 955 79609 1.9 0 0 0.0 65.2| da2
3 1002 1002 80360 1.6 0 0 0.0 58.6| da3
2 965 965 80536 1.9 0 0 0.0 65.6| da4
3 986 986 80596 1.8 0 0 0.0 65.9| da5
3 928 928 79876 1.9 0 0 0.0 62.9| da6
3 973 973 80851 1.9 0 0 0.0 64.9| da7
3 956 956 80448 2.0 0 0 0.0 66.2| da8
3 997 997 80004 1.7 0 0 0.0 61.6| da9
3 963 963 80436 1.9 0 0 0.0 67.0| da10
3 1032 1032 79928 1.5 0 0 0.0 56.9| da11
3 998 998 80512 1.7 0 0 0.0 61.2| da12
2 977 977 80468 1.8 0 0 0.0 64.3| da13
3 976 976 80612 1.8 0 0 0.0 63.9| da14
3 962 962 80488 1.8 0 0 0.0 60.9| da15

And this is the gstat output when simply reading a file via CIFS:
dT: 1.001s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
0 85 85 7269 2.0 0 0 0.0 6.0| da0
0 86 86 7365 1.8 0 0 0.0 5.4| da1
0 84 84 7381 1.8 0 0 0.0 5.5| da2
0 85 85 7281 2.1 0 0 0.0 6.2| da3
0 85 85 7305 2.3 0 0 0.0 7.0| da4
0 85 85 7257 2.2 0 0 0.0 6.4| da5
0 84 84 7349 2.4 0 0 0.0 6.9| da6
0 85 85 7289 1.9 0 0 0.0 5.8| da7
0 85 85 7317 1.8 0 0 0.0 5.3| da8
0 83 83 7313 2.0 0 0 0.0 5.8| da9
0 83 83 7261 2.1 0 0 0.0 6.2| da10
0 85 85 7385 1.7 0 0 0.0 5.0| da11
0 85 85 7309 1.7 0 0 0.0 5.1| da12
0 83 83 7329 2.1 0 0 0.0 6.2| da13
0 84 84 7361 1.8 0 0 0.0 5.3| da14
0 84 84 7241 2.0 0 0 0.0 5.8| da15
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
HAH. Fixed this!! It's the drives!! DO NOT USE SEAGATE ARCHIVE DRIVES for the system dataset/logging. They do NOT like that kind of write access. This is now my current network performance with the system dataset pool set to the boot drive.

[Screenshot: network transfer graph]



Now I can make the upgrade to the new FreeNAS version; I was worried that I might have to rebuild the entire system if I couldn't solve the issue.
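
For anyone else chasing this: a quick way to confirm which pool is hosting the system dataset (a sketch assuming a FreeNAS 9.x-style layout, where it lives in a .system dataset and the boot pool is named freenas-boot):

# show where the .system datasets live; the name prefix is the hosting pool
zfs list -o name,used | grep '\.system'

# after moving the system dataset to the boot device (System -> System Dataset in the web UI),
# the entries should appear under the boot pool, e.g. freenas-boot/.system/...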
 
Joined
Oct 2, 2014
Messages
925
I've seen this documented already; there was a mega-ish thread about it.
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
Ah! I would like to find that thread and add my experience with the Seagate Archive drives.
 
Joined
Oct 2, 2014
Messages
925

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, it was running on 16GB, and fine for weeks. It's only just recently that it's gone down the drain.

That's *exactly* what happens when you starve the system too much. Everything is fine... then it jumps off a cliff and commits seppuku.

You almost certainly will need more than 32GB of RAM with a zpool of that size. I've got a 60TB zpool (10x6TB in RAIDZ2) and I'm walking a very fine line as the sole user of the server.

I'm very skeptical about your experience claiming it is the disks. I've done some pretty nasty tests on those disks, and unless you are going to tell me you are writing 20GB per disk times however many disks are in your zpool, and doing it nonstop for a long period of time (20+ minutes, which is more than your Windows LAN performance charts are showing), then I'm not really buying it.

Since your problem is with reading, that pretty much negates the whole discussion that it is the disks...

I will definitely put more faith in the RAM being a problem. If you don't have enough RAM, your metadata can't fit, and performance hits the floor. On my system I was getting over 1GB/sec (yes, 1 gigabyte/sec), and the next morning I couldn't do 10MB/sec. I had to add more RAM to get past the performance problems.
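
A way to sanity-check that on the box itself (sysctl names as on FreeNAS 9.x-era FreeBSD, so treat this as a sketch):

# total ARC size and how much of it is metadata
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.arc_meta_used
sysctl kstat.zfs.misc.arcstats.arc_meta_limit

# if arc_meta_used sits pinned at arc_meta_limit while reads crawl, metadata is being
# evicted and re-read from disk; more RAM (or raising vfs.zfs.arc_meta_limit) is the usual fix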
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
I am going to drop it back to 16GB and see what happens; I really want to know if it's a RAM issue or a disk issue.
Adding the 32GB didn't help my transfers at all; it didn't make any difference. Changing the system dataset made a HUGE difference. Will play with the RAM tonight and see what happens.
 

Rainwulf

Explorer
Joined
Jul 12, 2015
Messages
67
This is the transfer rate with only 16GB of RAM.
[Screenshot: network transfer graph]


Zpool size is 116TiB.

My actual stored data is at 28.1TB.

The machine is running 16GB of DDR3-1333 ECC.
I am going to keep the 32GB on hand; I'm going to start a long-term experiment: wait until performance goes wonky, then throw more RAM in and see what happens.
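
For the record, those numbers can be pulled straight from the pool (pool name assumed to be Riva, based on the /mnt/Riva path used earlier):

# raw pool capacity vs. space allocated (includes parity overhead)
zpool list -o name,size,allocated,free,capacity Riva

# per-dataset usage, which is where the 28.1TB figure comes from
zfs list -o name,used,available -r Riva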
 