Slow performance, 36MB/s, E350 w/ 8GB RAM

Status
Not open for further replies.

srwd

Cadet
Joined
Nov 22, 2012
Messages
5
Hi all,

This is my first FreeNAS system. I have been troubleshooting my performance issues all night by reading the forums, and I have made some progress, but I am still seeing what I think is slow performance for this hardware.

Here are my key system specs:

Motherboard/CPU: BIOSTAR A68I-350 Deluxe (AMD Fusion APU 350D)
http://www.newegg.com/Product/Product.aspx?Item=N82E16813138365

RAM: G.SKILL Ripjaws X Series 8GB
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231428

HDD: 3x Seagate Barracuda 3TB 7200RPM SATA 6Gb/s
http://www.amazon.com/gp/product/B005T3GRLY/?tag=ozlp-20


I first started out with the default FreeNAS/BIOS settings. This had the SATA configuration in "Native IDE" mode or something similar. With this setup, I got:
Code:
[root@apple] /mnt/MAINVOL/Media# dd if=/dev/zero of=10g.img bs=1000 count=1000000
1000000000 bytes transferred in 56.004517 secs (17855703 bytes/sec) = ~17MB/sec


I noticed in the dmesg log that the drives were only showing a max of 33MB/s. This didn't seem right, and after some Googling I decided to change my SATA mode to AHCI. This made a large improvement:

Code:
[root@apple] /mnt/MAINVOL/Media# dd if=/dev/zero of=10g.img bs=1000 count=1000000
1000000000 bytes transferred in 29.036828 secs (34439024 bytes/sec) = ~32.8MB/sec


and the drives now seemed to be listed correctly:

Code:
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada2: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
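In case it helps anyone checking the same thing, I believe you can confirm the negotiated mode and NCQ from the shell with something like this (ada0 as an example):

Code:
dmesg | grep '^ada'                               # transfer mode and queueing lines for each drive
camcontrol identify ada0 | egrep -i 'sata|queue'  # capabilities reported by the drive itself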


I then found a forum post that talked about the auto-tuning option. I enabled that and got slightly better results:

Code:
[root@apple] /mnt/MAINVOL/Media# dd if=/dev/zero of=10g.img bs=1000 count=1000000
1000000000 bytes transferred in 26.352285 secs (37947373 bytes/sec) = ~36MB/sec
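From what I can tell, the autotuner mostly just adds loader tunables and sysctls (things like vm.kmem_size and vfs.zfs.arc_max, if I understand it correctly), so you can see what it picked under System -> Tunables/Sysctls in the GUI, or from the shell:

Code:
sysctl vm.kmem_size vfs.zfs.arc_max    # a couple of the values the autotuner is said to adjust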



I'm not sure what to try next though. It seems like I am missing some key settings. I see forum posts from people with similar setups getting 100MB/s+. Anyone have some ideas on what else I should check or configure?

Thanks!
 

jeltok

Cadet
Joined
Nov 16, 2012
Messages
3
I have put together a similar FreeNAS 8.3.0-based server (AMD C60/8GB/3TB WD Red) and I am getting similarly poor results. CrystalDiskMark reports ~45 MB/s reads and ~25 MB/s writes over CIFS. When I transfer a huge file (~5GB), though, I get 70-90 MB/s speeds :confused:.
I have tried many different methods to improve performance (including tuning vfs.zfs.* options, etc). The last thing I noticed is that my 3TB hard drives are being treated as 512-byte-sector drives (instead of 4K sectors) by the system, which I will try to address tonight. I can see that your system shows the same. I will try to back up my data and redo my setup with 4K sectors and see if it makes any difference.
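The quick checks I am using for the sector sizes (assuming ada0 is one of the data disks) are roughly:

Code:
smartctl -i /dev/ada0 | grep -i sector    # logical vs physical sector size reported by the drive
diskinfo -v ada0                          # sectorsize/stripesize as the OS sees them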
Just for the record, I did import my ZFS volumes into a NAS4Free 9.1 system that I created for comparison (on the same hardware, just a different USB drive) and it shows full speed (~100 MB/s) in CrystalDiskMark. Could it be that the Samba implementation is better in FreeBSD 9.1?
Hope we can resolve this matter.
 

srwd

Cadet
Joined
Nov 22, 2012
Messages
5
Thanks for your reply and the extra data points. It is very interesting to hear that NAS4Free is full speed for you. It looks like it uses the same version of ZFS, v28. It seems like I could put NAS4Free on another thumb drive, boot into it, and import my ZFS volume without losing any data? If so, I might give that a try.
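I assume the sequence would be roughly this (MAINVOL is my pool; the GUI detach and auto-import should do the equivalent):

Code:
# on FreeNAS, before shutting down
zpool export MAINVOL
# on NAS4Free, after booting from the other stick
zpool import MAINVOL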
 

srwd

Cadet
Joined
Nov 22, 2012
Messages
5
Did you check 4k sectorsize upon volume creation?

# zdb |grep ashift


I did not check that; I used the defaults for almost everything:
Code:
[root@apple] ~# zdb |grep ashift
            ashift: 12


I assume that is bad? Is there a way to redo it without losing everything I've copied over? Or is it easier to just reformat?
Thanks for your help
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
ashift: 12 means you have 4K sectors; ashift: 9 means 512-byte sectors (ashift is the base-2 exponent of the block size: 2^12 = 4096, 2^9 = 512).
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
I would suggest you take ZFS out of the equation and test the disks individually, raw with dd, and maybe also try UFS; for example, a raw sequential read from each drive as sketched below.
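Something along these lines (read-only, so it will not touch your data) should give a baseline per disk:

Code:
dd if=/dev/ada0 of=/dev/null bs=1m count=4096    # sequential read of ~4GB straight from the disk
dd if=/dev/ada1 of=/dev/null bs=1m count=4096
dd if=/dev/ada2 of=/dev/null bs=1m count=4096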
 

jeltok

Cadet
Joined
Nov 16, 2012
Messages
3
Just FYI:
When I ran 'zdb | grep ashift' yesterday on my 3TB WD Red drive it showed

ashift: 12
ashift: 9

When I run 'smartctl' it shows the following for my drive:

Sector Sizes: 512 bytes logical, 4096 bytes physical

I guess I missed something while creating the volume, so I'll try to redo it tonight and be more careful this time.
 

jeltok

Cadet
Joined
Nov 16, 2012
Messages
3
OK
Yesterday I wiped and recreated the volumes using the 'force 4k' option in the GUI. That did not change anything. Then I reformatted the 3TB WD Red with 'gnop create -S 4096 /dev/ada0', and 'zdb | grep ashift' showed 12; the rough sequence I followed is sketched below.
However, after creating a new zpool with that drive I still get the same miserable results in CrystalDiskMark (~45 MB/s reads and ~25 MB/s writes). Importing exactly the same zpool into NAS4Free again shows ~55 MB/s reads and ~75 MB/s writes (a 3x difference for writes).
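For reference, the rough gnop sequence was something like this (ada0 assumed as the disk, 'tank' as a placeholder pool name; the GUI steps should be equivalent):

Code:
gnop create -S 4096 /dev/ada0      # 4K-sector overlay on the raw disk
zpool create tank /dev/ada0.nop    # create the pool on the .nop device so it gets ashift=12
zpool export tank
gnop destroy /dev/ada0.nop         # remove the overlay; the pool keeps ashift=12
zpool import tank
zdb | grep ashift                  # should now report 12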
I am puzzled.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I have heard that NAS4Free has better throughput. My throughput for copying a large file to the NAS from my Windoze 7 machine is ~90MB/sec, and reading it back is ~70MB/sec. That isn't real-world usage, and I'm fairly happy with those results, but small files take a hit.

If you want to do valid throughput tests then you need to download and install the Intel NAS Performance Toolkit (search for it on Google). You need to read the manual in order to use it properly, but once you have done one test and read the results, the others are a piece of cake. The tests take considerable time to run as well, possibly hours; it runs through many things if you select all the test types. Here is an example of the results from a test I ran yesterday...

Content Creation
Code:
Test generated with NASPT version 1.7.1
Device Manufacturer: FreeNAS 8.3.0
Model Name: 
Number of Disks: 5
RAID Level: 1
Notes: 
Test Mode: batch
Traffic generatated as fast as possible using single transactions

Bytes transferred:
 Reads: 12061650
 Writes: 142846419
 Total:   154908069

Run time: 15699.035ms
Average Throughput: 9.867MB/s

Average Transfer Sizes (Bytes)
 Reads:  1307
 Writes:  12304
 Overall: 7435

Average Service Times (µs)
 Reads: 3.1
 Writes: 855.3
 Opens: 19045.5

Maximum Service Times (µs)
 Reads: 228.0
 Writes: 2180725.0
 Opens: 704201.0

Number of read transactions completed in
 <1ms: 9225
 >1ms: 0

Number of write transactions completed in
 <1ms: 11292
 >1ms: 318

Number of open transactions completed in
 <1ms: 0
 >1ms: 98

Total transactions
 Reads: 9225
 Writes: 11610
 Opens: 98

Number of files accessed: 98

Percentage of sequential accesses: 38.60 %


HDVideo_4Play
Code:
Test generated with NASPT version 1.7.1
Device Manufacturer: FreeNAS 8.3.0
Model Name: 
Number of Disks: 5
RAID Level: 1
Notes: 
Test Mode: batch
Traffic generatated as fast as possible using single transactions

Bytes transferred:
 Reads: 1274475288
 Writes: 0
 Total:   1274475288

Run time: 21221.467ms
Average Throughput: 60.056MB/s

Average Transfer Sizes (Bytes)
 Reads:  256692
 Overall: 256692

Average Service Times (µs)
 Reads: 4214.6
 Opens: 75076.3

Maximum Service Times (µs)
 Reads: 28019.0
 Opens: 120577.0

Number of read transactions completed in
 <1ms: 1355
 >1ms: 3610

Number of open transactions completed in
 <1ms: 0
 >1ms: 4

Total transactions
 Reads: 4965
 Writes: 0
 Opens: 4

Number of files accessed: 4

Percentage of sequential accesses: 10.94 %


Compare my throughput results above with copying a single large file from Windoze; what a difference.

Did you ever state how your ZFS pool was created? RAIDZ1 with 3 drives?

If you do perform the tests I suggested, please post the results; for this discussion, all that is needed is the name of each test and the line stating the average throughput.
 

srwd

Cadet
Joined
Nov 22, 2012
Messages
5
I put NAS4Free onto a flash stick, booted it up, imported my ZFS volumes, and reran my dd test. I got the same slow ~30-35MB/s result.
I think the dd test is a better test than any Samba tests, since it removes the CIFS/Samba configuration from the equation and writes directly to the disks.

It is a RAIDZ1 configuration with 3 hard drives.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
That is very slow and you may have a problem. If your drives are close to full, then that will cause a slowdown as well; you need at least 10% free (if my memory is correct) for a RAID system to perform well. You also haven't stated how your pool is configured, which is another possible issue.

Here are the commands I ran and my results for comparison (writing and then reading back a 100GB file):
Code:
[root@freenas] /mnt/farm/data/Test# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 336.745898 secs (318858175 bytes/sec)
[root@freenas] /mnt/farm/data/Test# dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 287.611204 secs (373331014 bytes/sec)

Note that this is writing 304MB/sec and reading 356MB/sec respectively.

And we will not get into the difference between a K meaning 1024 or 1000; if someone wants to debate that, create another thread, as it's not worth the effort and has no bearing on this person's problem. I use 1024.
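If anyone wants to double-check the conversion, it's just the bytes/sec figure that dd prints divided by 1024*1024, e.g.:

Code:
echo "scale=1; 318858175 / 1048576" | bc    # roughly 304 MB/sec for the write test above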

You should run the same test commands and post the results. Maybe you are reading the values incorrectly?

As for dd being a better test: not really. It's great as a diagnostic tool, but aren't you more concerned with how well the NAS will perform while you are actually using it? I know I am. So I have 300MB/sec throughput internally; that definitely isn't what I see when I copy files to and from my NAS. If it were, I'd be one happy camper! Using the Intel software (used by many places that benchmark NAS devices, like Tom's Hardware) gives you an idea of how well it will perform in your environment. Let's say you actually do have fast dd performance; making tweaks to Samba to get faster throughput won't show up if you're using dd for testing. You need something that was designed for testing a NAS. I don't think CrystalDiskMark is appropriate for NAS testing either. It's up to you what tools you want to use, and we all know Samba is slow and ZFS v28 in FreeNAS is slower than v15, but hey, dd says I'm fast as hell. Okay, the horse is dead and I'm stepping off my soapbox.
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
I'd agree that there's probably something that can be tweaked. Looking at the reporting graphs after running the dd might help locate a cause inside the box. My results:
Code:
[root@freenas] /mnt/bollar/test# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 83.981580 secs (1278544442 bytes/sec)
[root@freenas] /mnt/bollar/test# dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 35.932821 secs (2988192411 bytes/sec)
[root@freenas] /mnt/bollar/test# 
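For what it's worth, the shell-side equivalents of the GUI reporting graphs, which I would watch in a second session while the dd runs, are roughly:

Code:
zpool iostat -v 5    # per-vdev bandwidth every 5 seconds
gstat                # live per-disk busy percentage and throughput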
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I'd agree that there's probably something that can be tweaked. Looking at the reporting graphs after running the dd might help locate a cause inside the box. My results:
Code:
[root@freenas] /mnt/bollar/test# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 83.981580 secs (1278544442 bytes/sec)
[root@freenas] /mnt/bollar/test# dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 35.932821 secs (2988192411 bytes/sec)
[root@freenas] /mnt/bollar/test# 
Now that is speedy! You must have SSD somewhere in there or a sweet RAID.
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
Now that is speedy! You must have SSD somewhere in there or a sweet RAID.
No SSD, but it's otherwise overpowered: 64GB RAM, dual 4-core Xeon E5-2609 2.4GHz (LGA 2011), and dual LSI 9207-8i HBAs across 2x RAIDZ2 vdevs. I'm going to take out half the RAM and one CPU next week and I'll run the tests again.
 

headconnect

Explorer
Joined
May 28, 2011
Messages
59
Hi,

Thought I'd add in my 2c:

Running an AMD E-350 (ASUS board), with 8GB RAM and 5x 2TB Horrible Crapolainen (tm) disks in RAID-Z1 (I think they're actually Seagate Barracuda Greens).

This is what I'm getting:

Code:
[root@Erebus] /mnt/moirae# dd if=/dev/zero of=ddfile.tmp bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 583.361055 secs (184061280 bytes/sec)
[root@Erebus] /mnt/moirae# dd if=ddfile.tmp of=/dev/zero bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 402.878505 secs (266517526 bytes/sec)


So I'm looking at ~175MB/s write speed and ~254MB/s read speed. This is definitely somewhat down from before, but still not 'bad'.

Not sure I did much either, save for ensuring these drives are set up as 4K.
 

fluxnull

Cadet
Joined
Nov 29, 2012
Messages
8
Almost the Same Setup

AMD E-350
ASUS E35M1-I
G.SKILL Ripjaws 8GB
4x HGST 7,200RPM SATA 6Gb/s 1TB (RAID10)


Code:
[root@freenas] /mnt/vault# dd if=/dev/zero of=10g.img bs=1000 count=1000000
1000000+0 records in
1000000+0 records out
1000000000 bytes transferred in 33.227760 secs (30095318 bytes/sec)


Code:
[root@freenas] /mnt/vault# dd if=/dev/zero of=ddfile.tmp bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 620.122893 secs (173149844 bytes/sec)


Code:
[root@freenas] /mnt/vault# dd if=ddfile.tmp of=/dev/zero bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 342.446489 secs (313550250 bytes/sec)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@srwd
Did you ever try the dd test using these values:

Code:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k

dd of=/dev/zero if=tmp.dat bs=2048k count=50k
 

srwd

Cadet
Joined
Nov 22, 2012
Messages
5
@srwd
Did you ever try the dd test using these values:

Code:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k

dd of=/dev/zero if=tmp.dat bs=2048k count=50k

A few days later, it just started performing better. I have no explanation why... as far as I know I didn't make any changes that would have affected it. It is now performing great...

Code:
[root@apple] /mnt/MAINVOL# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 533.769694 secs (201162006 bytes/sec)
[root@apple] /mnt/MAINVOL# dd of=/dev/zero if=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 309.721632 secs (346679635 bytes/sec)


I do have an interesting new "problem" though. I use the machine as a media server and backup server, and I don't want the drives running 24/7 since they are only active maybe 1-2 hours per day on average. I set the spindown to 30 minutes and it correctly spins down after inactivity. The "issue" I keep running into is with what I assume is the read cache. Let's say I want to watch a two-hour movie. It seems like it loads ~30-45 minutes of it into memory, which means that while I am watching the movie it will spin down the drives and spin them back up again several times. Each time it spins up all 3 drives to read in the next section, causing a 15+ second delay and more wear and tear on the drives than if it had just kept them spinning and cached less. Is there a way to tell it to do less read caching, even if that slightly affects performance? I guess I could increase the spindown time, but that isn't ideal.

Thanks for all of your help, guys!
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
I guess I could increase the spindown time, but that isn't ideal.
:confused: It's the intelligent thing to do; I would increase the timeout. There is a script somewhere around here to tell your drives to spin down, and I would run that script if you want the drives to spin down sooner. If you forget, they will just spin down a little later.

To limit the ARC, set the vfs.zfs.arc_max tunable, for example as sketched below.
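Something like this under System -> Tunables in the GUI (it ends up in /boot/loader.conf and needs a reboot; the 4GB value is just an example for an 8GB box):

Code:
vfs.zfs.arc_max="4294967296"    # example: cap the ARC at 4GB; pick a value that suits your RAM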
 