Notes on Performance, Benchmarks and Cache.

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Not sure if it makes a difference, but I have 10 x 2 TB drives and yours are 4 TB. Also, I have a mixture of disks with different speeds and different buffer sizes. For my specific usage I'm perfectly happy with the speed; it's a home/home-office system.

One other thing as well: my CPU is a lowly Pentium, not a Xeon. I just re-ran the "dd" command and system CPU usage shot up from 1% to 79% whilst the job was running; %user didn't budge.
 

ludamus

Cadet
Joined
Oct 13, 2014
Messages
4
Hi All,

After reading various posts about performance, I don't feel that my system is performing as well as it should, especially with a build like mine (please see signature).

Compression and dedup are off.
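A quick way to confirm both settings on the dataset being tested (a minimal sketch; "tank/test" is an example dataset name, substitute the real one):

Code:
# Confirm compression and dedup really are off on the dataset being written to
# ("tank/test" is an example name only)
zfs get compression,dedup tank/test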

Code:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 629.222620 secs (170645776 bytes/sec)


That's a 100 GB file, written at roughly 170 MB/s.

Attached is the output from "iostat da0 da1 da2 da3 1"; you can see that the write speed varies between 30 MB/s and 145 MB/s across all disks.
CPU is hardly touched, less than 1%.
All disks write at around the same speed at the same time, so I don't think it's a duff disk letting the whole RAID down (perhaps...).

What kind of performance should I expect to see? Are there any obvious numpty things I've overlooked?
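For comparison with the read figures quoted later in this thread, the matching read-back test would look something like this (same tmp.dat as in the write test above):

Code:
# Read the 100 GB test file back and discard it
dd if=tmp.dat of=/dev/null bs=2048k count=50k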


egrep 'da[0-9]' /var/run/dmesg.boot

ada0 at ata0 bus 0 scbus0 target 0 lun 0
da0 at mps0 bus 0 scbus2 target 4 lun 0
da0: <ATA WDC WD40EFRX-68W 0A80> Fixed Direct Access SCSI-6 device
da0: Serial Number WD-WCC4E5X4NCYZ
da0: 600.000MB/s transfers
da0: Command Queueing enabled
da0: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da0: quirks=0x8<4K>
da1 at mps0 bus 0 scbus2 target 5 lun 0
da1: <ATA WDC WD40EFRX-68W 0A80> Fixed Direct Access SCSI-6 device
da1: Serial Number WD-WCC4EE7P2ULV
da1: 600.000MB/s transfers
da1: Command Queueing enabled
da1: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da1: quirks=0x8<4K>
da2 at mps0 bus 0 scbus2 target 6 lun 0
da2: <ATA WDC WD40EFRX-68W 0A80> Fixed Direct Access SCSI-6 device
da2: Serial Number WD-WCC4E76XEZN9
da2: 600.000MB/s transfers
da2: Command Queueing enabled
da2: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da2: quirks=0x8<4K>
da3 at mps0 bus 0 scbus2 target 7 lun 0
da3: <ATA WDC WD40EFRX-68W 0A80> Fixed Direct Access SCSI-6 device
da3: Serial Number WD-WCC4E1294856
da3: 600.000MB/s transfers
da3: Command Queueing enabled
da3: 3815447MB (7814037168 512 byte sectors: 255H 63S/T 486401C)
da3: quirks=0x8<4K>



I am a newbie and appreciate all the help I can get!!

Many thanks
 

Attachments

  • iostat.txt
    75.3 KB

vikingboy

Explorer
Joined
Aug 3, 2014
Messages
71
I see 400 MB/s from ten 4 TB drives in RAIDZ2; your four drives in Z2 would, I'm guessing, turn in around 200 MB/s, so you're not a million miles off at 170 MB/s already.
Did you try a striped pair of mirrors? It may return more IOPS. Z2 with only 4 drives may not be optimal.
 

ludamus

Cadet
Joined
Oct 13, 2014
Messages
4
I see 400 MB/s from ten 4 TB drives in RAIDZ2; your four drives in Z2 would, I'm guessing, turn in around 200 MB/s, so you're not a million miles off at 170 MB/s already.
Did you try a striped pair of mirrors? It may return more IOPS. Z2 with only 4 drives may not be optimal.

Haven't tried that; I went for RAIDZ2 as I figured it's a happy medium between write performance and data-loss risk, etc. Is there a penalty for using only 4 drives, then? I know that writing to the pool is expensive as it has to work out parity, but I've seen equal if not better performance on Z2 rigs with lower-spec'd HP MicroServers. Understandably my bottleneck is network-side... but that's not the point. Grrr.
 

vikingboy

Explorer
Joined
Aug 3, 2014
Messages
71
I don't think it's about 4 drives as such, but more the overheads inherent in ZFS itself. However, this is what gives you all that lovely redundancy and bit-rot protection. Cyberjock's newbie guide explains the performance characteristics of the various stripe configurations and is worth a refresher read for sure. It doesn't necessarily prepare you for the hit of the ZFS overheads, but it does give a good indication of what performance you could expect from each array type.
CPU-wise you look well equipped, and more RAM could help slightly, but the only way to get significantly more throughput would be a different disk configuration, which, unless you add more disks, may reduce redundancy. For example, a striped pair of mirrored drives would possibly give you slightly more throughput but at slightly less redundancy (technically you can still lose two drives, but only one from each mirror).
Before you put the machine into production use, it would be worth experimenting with various configurations to see where the sweet spot is for performance vs redundancy. At least that way you would know for sure you've maximised your investment.
I'm pretty new to this stuff too, so I'm happy to be corrected by one of the more experienced posters here, but your performance looks about right to me for the spec and number of your drives in that particular config.

EDIT: I'm still on build 9.2.1.7 myself, but from memory I don't think 9.2.1.8 moved on from the v16 M1015 firmware, and I note from your sig you are on an M1015 HBA (IT-mode firmware 17.00.01.00-IT). Ideally you should match the two version numbers to prevent any future issues. Please just verify you have the right firmware version installed for 9.2.1.8.
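A quick way to compare the two, assuming the controller shows up as mps0 as in the dmesg output earlier in the thread, is to pull the firmware/driver line the mps driver normally prints at boot:

Code:
# The mps driver usually logs its firmware and driver versions at boot
grep -i 'mps0.*firmware' /var/run/dmesg.boot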
 

tanik1

Contributor
Joined
Mar 31, 2013
Messages
163
This is what I was getting with mine, and I thought it wasn't too bad.

Code:
dd if=/dev/zero of=/mnt/pool/test/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 401.069519 secs (267719628 bytes/sec)

dd if=/mnt/pool/test/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 264.887955 secs (405356984 bytes/sec)
 
Joined
Jan 8, 2015
Messages
5
Check out iostat, zpool iostat, and iozone for the physical performance stats.

You must all have 10 GBit NICs :)
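For anyone who hasn't used them, rough invocations look something like the following (pool and disk names are examples taken from earlier in the thread):

Code:
# Per-vdev throughput, refreshed every second ("tank" is an example pool name)
zpool iostat -v tank 1

# Per-disk throughput for the four drives from the OP's dmesg output
iostat da0 da1 da2 da3 1

# iozone sequential write (-i 0) and read (-i 1) on an 8 GB file with 128 KB records,
# run from a directory on the pool
iozone -s 8g -r 128k -i 0 -i 1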
 
Joined
Oct 2, 2014
Messages
925
Looking for a little input because my numbers seem off. I have a RAIDZ3 consisting of 7 x 500 GB HDDs (for testing before I dump a boatload of money, lol). I ran the 2 tests from the original post:

dd if=/dev/zero of=/mnt/Storage_Pool/tmp.dat bs=2048k count=50k

dd if=/mnt/Storage_Pool/tmp.dat of=/dev/null bs=2048k count=50k

And my outputs were as follows:

Write:

Code:
dd if=/dev/zero of=/mnt/Storage_Pool/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 37.615197 secs (2854542608 bytes/sec)

2.85 GB/s

Read:

Code:
dd if=/mnt/Storage_Pool/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 19.129861 secs (5612909738 bytes/sec)

5.61 GB/s

.......What am I doing wrong? These speeds seem insane....
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yep... /dev/zero compresses AMAZINGLY well with lz4 compression. Rumor is something like 1000000:1.
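An easy way to see the effect with the file from the test above: compare the file's apparent size with what it actually occupies on the pool.

Code:
# The zero-filled file claims to be 100 GB...
ls -lh /mnt/Storage_Pool/tmp.dat
# ...but with lz4 enabled almost none of it ever reaches the disks
du -h /mnt/Storage_Pool/tmp.dat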
 
Joined
Oct 2, 2014
Messages
925
Ha ha, and I'm an idiot, didn't even think of it.... Actual results now, lol.

Write
Code:
dd if=/dev/zero of=/mnt/Storage_Pool/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 307.437768 secs (349255015 bytes/sec)


349 MB/s

Read
Code:
dd if=/mnt/Storage_Pool/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 162.182518 secs (662057685 bytes/sec)


662 MB/s
 

Osiris

Contributor
Joined
Aug 15, 2013
Messages
148
Specs of succubus: see signature. Mainly 11 x 4 TB WD Red, RAIDZ3 (so 8+3), RAID controller, RAID expander.
Only 16 GB RAM though ...

with lz4 compression
Code:
[root@succubus] /mnt/succz12x4tbb# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 76.437976 secs (1404722995 bytes/sec)

[root@succubus] /mnt/succz12x4tbb# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 34.909019 secs (3075829270 bytes/sec)
write: 1.40 GB/sec
read: 3.08 GB/sec

with compression off
Code:
[root@succubus] /mnt/succz12x4tbb# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 279.851468 secs (383682755 bytes/sec)

[root@succubus] /mnt/succz12x4tbb# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 204.987403 secs (523808687 bytes/sec)
write: 384 MB/sec
read: 524 MB/sec

compression back on (lz4), random 10GB file.
Code:
[root@succubus] /mnt/succz12x4tbb# dd if=/dev/random of=tmp.dd bs=512k count=20480 conv=sync
20480+0 records in
20480+0 records out
10737418240 bytes transferred in 110.587841 secs (97094022 bytes/sec)

[root@succubus] /mnt/succz12x4tbb# dd if=tmp.dd of=/dev/random bs=512k count=20480 conv=sync
20480+0 records in
20480+0 records out
10737418240 bytes transferred in 22.209550 secs (483459510 bytes/sec)
write: 97 MB/sec
read: 483 MB/sec
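Worth noting: the ~97 MB/s write figure here may say more about how fast /dev/random can generate data than about the pool itself. A quick sketch for checking that ceiling:

Code:
# Measure the raw /dev/random generation rate, independent of the pool;
# if this also comes out around 90-100 MB/s, the write test above was source-limited
dd if=/dev/random of=/dev/null bs=512k count=20480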
 
Joined
Oct 2, 2014
Messages
925
Specs of succubus: see signature. Mainly 11 x 4 TB WD Red, RAIDZ3 (so 8+3), RAID controller, RAID expander.
Did I read that right? That and your system specs, a RAID controller? Oh boy.... hope it's in JBOD.
 

Osiris

Contributor
Joined
Aug 15, 2013
Messages
148
It is JBOD.
How did you hook up those 20+ drives?
 
Joined
Oct 2, 2014
Messages
925
It is JBOD.
How did you hook up those 20+ drives?
Just making sure :p. 20+ drives on which chassis? On my FreeNAS server the backplane is an expander that takes 1 or 2 SAS cables to 1 or 2 HBAs. My SAN server, which has 24 bays, uses 1 SAS cable per 4 HDDs, which required 3 HBAs as that backplane isn't an expander.
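If anyone wants to see how their own drives are attached (which controller and bus each da device hangs off), a quick check on FreeBSD/FreeNAS is:

Code:
# Lists every disk together with the SCSI bus and controller it sits behind,
# which makes expander vs. direct-to-HBA cabling easy to spot
camcontrol devlist -v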
 


X7JAY7X

Dabbler
Joined
Mar 5, 2015
Messages
20
Did some testing today with my new FreeNAS box:

Lenovo TS440
E3-1225 V3 CPU
24GB RAM
(4) - HGST 4TB 7200RPM NAS Drives
Volume Compression - None

Test 1 - RaidZ2 (4) disks
[root@NAS] ~# dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 380.987598 secs (281831175 bytes/sec)
[root@NAS] ~# dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 342.898519 secs (313136909 bytes/sec)
[root@NAS] ~#

Test 2 - RaidZ1 (4) disks
[root@NAS] ~# dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 260.383053 secs (412370088 bytes/sec)
[root@NAS] ~# dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 239.311809 secs (448678997 bytes/sec)

Test 3 - RaidZ1 (3) disks
[root@NAS] ~# dd if=/dev/zero of=/mnt/Data/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 369.173904 secs (290849871 bytes/sec)
[root@NAS] ~# dd if=/mnt/Data/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 336.783982 secs (318822118 bytes/sec)

Test 4 - Mirror (2) disks
[root@NAS] ~# dd if=/dev/zero of=/mnt/Data0/temp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 770.711529 secs (139318251 bytes/sec)
[root@NAS] ~# dd if=/mnt/Data0/temp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 432.971643 secs (247993568 bytes/sec)


I thought a mirror would have the best performance, but from my testing it is the slowest.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Would you consider testing a pair of mirrors, i.e. striped mirrors? That would be a fairer comparison with Test 1 and Test 2.
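For reference, a minimal sketch of that layout as a throwaway test pool, assuming the drives are da0-da3 (pool and device names are examples only, and creating the pool wipes them):

Code:
# Two mirrored vdevs striped together (striped mirrors)
zpool create testpool mirror da0 da1 mirror da2 da3

# Tear it down again after benchmarking
zpool destroy testpool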
 