
Notes on Performance, Benchmarks and Cache.

Mr_N

FreeNAS Experienced
Joined
Aug 31, 2013
Messages
289
Thanks
28
That's equal to...
3.72 GB/s or 29.79 Gb/s write
5.84 GB/s or 46.77 Gb/s read
You need to turn compression off :p

Code:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 168.356010 secs (637780513 bytes/sec)

dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 101.454100 secs (1058352324 bytes/sec)
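Since /dev/zero is almost perfectly compressible, a dd write test against an lz4-compressed dataset mostly benchmarks the CPU, not the disks. A sketch of a more honest variant using incompressible data (file names and sizes here are illustrative; run it inside the dataset under test, and use a file larger than RAM to defeat the ARC):

```shell
# /dev/zero compresses to almost nothing under lz4, so pre-generate
# incompressible data and then time only the write of that data.
# (Generating from /dev/urandom is CPU-bound, so don't pipe it
# straight into the timed dd. Sizes are small for illustration.)
dd if=/dev/urandom of=rand.src bs=2048k count=32
dd if=rand.src of=tmp.dat bs=2048k
```

Alternatively, simply turning compression off for the test dataset makes the /dev/zero numbers meaningful again.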
 

Visseroth

FreeNAS Experienced
Joined
Nov 4, 2011
Messages
505
Thanks
17
You need to turn compression off :p
and how would one go about doing that? Turn off compression in the storage pool or something else?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,715
Thanks
646
Yes, in the pool settings ;)
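For reference, the same thing can be done from the shell with `zfs set` (the `tank/test` dataset name here is only a placeholder):

```shell
# Check the current setting, then turn compression off for the test dataset.
zfs get compression tank/test
zfs set compression=off tank/test
# Remember to turn it back on after benchmarking; lz4 is nearly free:
# zfs set compression=lz4 tank/test
```

The GUI dataset options do exactly the same thing under the hood.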
 

Visseroth

FreeNAS Experienced
Joined
Nov 4, 2011
Messages
505
Thanks
17
That would make sense. I'll have to do some testing. Thank you! :)
 

Mr_N

FreeNAS Experienced
Joined
Aug 31, 2013
Messages
289
Thanks
28
You didn't really think your pool was that fast did you? :p
 
Joined
Jun 6, 2016
Messages
12
Thanks
5
Writing a 10GB file I get 100.6MB/sec write and 108.6MB/sec read

Dell t20 g3220 with 2x5400RPM drives in a mirror, 12GB RAM, encryption enabled, compression disabled.

When reading from disk it only seems to use 1 disk at a time - i.e. one disk shows 100% busy in gstat and it will occasionally swap to the other disk; zpool iostat shows the same. It just won't use both disks simultaneously for a read.

I made sure prefetch is enabled:
vfs.zfs.prefetch_disable=0

On a 20GB file it set both disks at around 60/40% or 50/50% for about 20 seconds but spent the rest of the time with one disk at 100% and the other at 0%, swapping every 20 seconds or so. 115MB/sec read

I guess this is caused by geli?

edit: when copying data to an external drive it does distribute load across the drives
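One way to check whether reads really are being served from a single side of the mirror is to watch the per-disk counters while a large sequential read runs (the pool name `tank` is a placeholder):

```shell
# Per-vdev I/O, refreshed every second; on a sequential read both
# mirror members should show read activity:
zpool iostat -v tank 1
# GEOM-layer view; -p restricts output to physical providers (the disks):
gstat -p
```

If both members show activity under gstat but throughput still looks like a single disk, the bottleneck is more likely CPU (geli) than the mirror layout.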
 
Last edited:
Joined
Jun 13, 2016
Messages
5
Thanks
0
First post so please forgive any lack of information or understanding.

I am currently trialing FreeNAS instead of a conventional NAS due to a mix of personal media and business data I need to securely store.

I'm doing tests using the following hardware:

Dell PowerEdge R510:
Dual Socket - Quad Core Intel(R) Xeon(R) CPU E5640 @ 2.67GHz
16GB of DDR3 ECC Ram
IBM ServeRAID M1015 (SAS9211-8i) in IT mode using firmware version 20.00.07.00
8 Bay SAS Backplane

Currently have:
4x HITACHI HUS156030VLS600 300GB 15k RPM SAS Disks in RaidZ1
2x WDC Black WD10JPLX-00MBPT0 1TB 7200RPM 2.5" SATA drives in Mirror

Both have encryption enabled and this CPU and motherboard have AES offloading.

I am seeing very slow write and read speeds on both datasets; this was tested with iSCSI mounts, NFS and SMB transfers.

On the RAIDZ1 I am seeing around 76 MB/s write and 63 MB/s read.
On the mirror I am seeing a max of 81 MB/s write and 72 MB/s read.

I can understand less than optimal speeds on the raidz1 considering the age of the disks, but the mirrored disks are exceptionally slow.

Wondering if someone can give any input, as I'd like to clear this up well before I take the plunge.

EDIT:

So after re-reading this whole thread and then testing the 100G file creation using dd (which bypasses the caching to avoid incorrect speed inflation) it seems the speeds are sufficient when testing directly:

raidz1 dataset:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 35.434391 secs (3030225163 bytes/sec)

mirror dataset:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 36.590565 secs (2934477313 bytes/sec)

My thought is that it's now down to the network connection, so I'll work on this. (I intend on doing 802.3ad / LACP with 4x Intel gigabit NICs.)
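Worth noting that 802.3ad/LACP hashes traffic per flow, so a single SMB or FTP transfer still rides one 1Gb link; the aggregate mainly helps with multiple simultaneous clients. On plain FreeBSD the lagg would be sketched in rc.conf roughly like this (interface names are illustrative; on FreeNAS the same thing is configured through the GUI, not rc.conf):

```shell
# rc.conf sketch of an LACP aggregation of two interfaces.
ifconfig_igb0="up"
ifconfig_igb1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport igb0 laggport igb1 DHCP"
```

The switch ports must also be configured as an LACP group for the lagg to come up.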
 
Last edited:

Artion

FreeNAS Experienced
Joined
Feb 12, 2016
Messages
330
Thanks
42
2x WDC Black WD10JPLX-00MBPT0 1TB 7200RPM 2.5" SATA drives in Mirror

I can understand less than optimal speeds on the raidz1 considering the age of the disks, but the mirrored disks are exceptionally slow.
Hi @Morphix ,
As for the mirrored one, AFAIK, the write speed is that of a single disk, the slowest of the two, because the same data is written to both of them. Read speed may be faster, though.

PS.: Consider opening a new thread for your case.
If compression is enabled on the volumes you're testing, dd with /dev/zero is not accurate at all for performance. If that's the case, try disabling compression and repeating the test.
 
Joined
Nov 6, 2013
Messages
5,900
Thanks
980
First post so please forgive any lack of information or understanding.
[full post quoted above; dd results snipped: ~3.0 GB/s writes on both the raidz1 and mirror datasets]
Please redo your dd tests. The ones you did are invalid because you had compression on. You need to test with compression off.

Sent from my Nexus 5X using Tapatalk
 
Joined
May 12, 2014
Messages
47
Thanks
2
I am running an R710, currently RaidZ2 w/ 6x WD 8TB Red, 64GB RAM and 2x L5630 Xeons.
Here are my results.

Write: 107374182400 bytes transferred in 197.954388 secs (542418803 bytes/sec) = 542 MB/s
Read: 107374182400 bytes transferred in 155.651580 secs (689836764 bytes/sec) = 689 MB/s

So here is my concern/issue.

I just upgraded this system to 64GB RAM and RaidZ2; prior to that it was 4x WD 8TB Red in a striped mirror setup.
I was able to easily saturate the 1Gb network at about 104 MB/s over FTP before.

After the upgrade, while moving all my data back to the pool, I can only get between 40 and 60 MB/s on transfers, almost half of what I had before.

Based on these tests, if they are true and accurate, I should hit 104 MB/s super easy, and I set everything up the same as before, so I have no idea where my speed issue is.
I almost moved back to mirrors just because dumping 7TB of data took forever.
 
Joined
Jun 13, 2016
Messages
5
Thanks
0
Since my prior post I've removed the very old SAS disks and moved my SATA disks over into the system in an 8-drive raidz2.

Along with that, the system is now using 16GB ECC reg RAM; I might bump that up to 32GB if I can see real-world benefits.

Currently getting around 300 MB/s on writes with compression off, using dd.

I'm currently awaiting my new 802.3ad switch, which will help things on the network side for FreeNAS and the other machines.

For 5400RPM NAS disks those are perfectly acceptable speeds for me.

I'll update further as I progress.

Sent from my LG-H990 using Tapatalk
 
Joined
Jun 13, 2016
Messages
5
Thanks
0
Well the failed disk's replacement is on its way (glad I did raidz2) and I just received the new switch today. I'll retest dd speeds when the array is no longer degraded, and then network speeds once I have the lacp/lagg set up.

Sent from my LG-H990 using Tapatalk
 
Joined
Oct 1, 2017
Messages
75
Thanks
0
Code:
[root@freenas ~]# dd if=/dev/zero of=tmp.dat bs=2048k count=50k				
51200+0 records in															
51200+0 records out															
107374182400 bytes transferred in 28.772643 secs (3731815056 bytes/sec)


Code:
[root@freenas ~]# dd if=tmp.dat of=/dev/null bs=2048k count=50k				
51200+0 records in															
51200+0 records out															
107374182400 bytes transferred in 12.876275 secs (8338916462 bytes/sec)


Compression is off.

I have a lot of RAM; how do I do better tests?
 
Last edited by a moderator:
Joined
Oct 13, 2017
Messages
6
Thanks
0
If OpenZFS honors dataset options the same way non-open ZFS does, the following option should prevent reads from being cached in the ARC:
primarycache=none

There is also a secondarycache option if you have an L2ARC.

Writes, unless you are using synchronous writes, will always hit RAM and be flushed out to the hard disks at ZFS's discretion (5-second intervals by default, I believe). If you have a SLOG, logbias=throughput should avoid your synchronous writes being satisfied by the (presumably flash-based) ZIL.
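Spelled out as commands (the dataset name `tank/test` is a placeholder), the suggestion above becomes:

```shell
# Stop cached reads from being served out of RAM for this dataset:
zfs set primarycache=none tank/test
# If an L2ARC device is attached, bypass it for the dataset too:
zfs set secondarycache=none tank/test
# Restore the defaults once benchmarking is done:
zfs set primarycache=all tank/test
zfs set secondarycache=all tank/test
```

With primarycache=none, a re-read of the dd test file hits the disks instead of the ARC, which is usually what the benchmark was meant to measure.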
 

Scharbag

FreeNAS Experienced
Joined
Feb 1, 2012
Messages
430
Thanks
66
Here is something funny:

Spinning rust pool is 2x6x4TB with good INTEL 16GB ZIL mirror.

SSD is mirrored 500GB cheapos with crap ADATA 16GB ZIL mirror.

Shares are NFS to ESXi and then 5GB partitions introduced to the VM.

Surprised to see how well the spinning rust performs with a good ZIL and a whole lot of ARC memory!!

Screenshot 2017-12-30 13.11.54.png
Screenshot 2017-12-30 13.18.44.png
Screenshot 2017-12-30 13.19.27.png
 
Joined
May 29, 2017
Messages
27
Thanks
3
Reviving an old thread, here're my system specs:

OS: FreeNAS 11.1-U4
RAM: 2 sets of Micron 16GB DDR4 2400MHz (ECC)
CPU: Intel Xeon E3-1240 v6 @ 3.7GHz (4 cores w/ hyperthreading enabled)
Motherboard: SuperMicro X11SSH-LN4F-O
Disk configuration: 2 HGST DeskStar NAS 3.5" 4TB 7200 RPM 128MB Cache SATA 6.0Gb/s
GELI Encryption: enabled
Pool: 3.62TB 2-way mirror, currently 58% filled
Filesystem: Compression off (all other settings default as provided by FreeNAS 11.0-RELEASE, i.e. when I created the pool, e.g. 128KB record size)

And here are my results:


-> dd if=/dev/zero of=tmp.000 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 710.367122 secs (151153086 bytes/sec)



-> dd if=tmp.000 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 557.808094 secs (192493052 bytes/sec)


I.e. 144.15 MB/s write & 183.57 MB/s read speeds. Do those numbers seem like acceptable performance for systems with specs similar to mine? I get the feeling that I'm not reaping all of the benefits my rig has to offer, but I have to admit I don't have much to base that expectation on...
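As a quick sanity check of the unit conversion: dd reports raw bytes/sec, and dividing by 1024² gives the MiB/s figures quoted (the read value rounds to 183.58; 183.57 is the truncated form):

```shell
# dd prints raw bytes/sec; convert to MiB/s (what the posts call MB/s):
awk 'BEGIN {
    printf "write: %.2f MiB/s\n", 151153086 / 1048576
    printf "read:  %.2f MiB/s\n", 192493052 / 1048576
}'
```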
 

Visseroth

FreeNAS Experienced
Joined
Nov 4, 2011
Messages
505
Thanks
17
Looks about right for a set of mirrored spinners to me.
More drives usually = more speed, but also = more power consumed, and you're basically going to get the speed of one drive since they are mirrored.
Maybe someone who knows more than I do will chime in.
 

anmnz

FreeNAS Experienced
Joined
Feb 17, 2018
Messages
160
Thanks
93
you're basically going to get the speed of 1 drive since they are mirrored.
In theory you'll get the write speed of 1 drive but reads can be faster because ZFS can read from both sides of the mirror at once.

I guess I'd be vaguely disappointed that reading the mirror is only 25% faster than writing according to these numbers. But I don't have anything solid to base that on.
 
Joined
May 29, 2017
Messages
27
Thanks
3
Thanks to @Visseroth & @anmnz for your replies!

I was looking at the manufacturer specs for my drive model, HGST HDN726040ALE614, https://www.hgst.com/sites/default/files/resources/DS_NAS_spec.pdf, and they claim a typical sustained transfer rate of 202 MB/s (if I'm reading that sheet correctly). So if I'm getting 144.15 MB/s write & 183.57 MB/s read speeds, I guess I'm indeed not doing that bad at all! (again, assuming I'm reading the specs sheet correctly).

SMB/CIFS speeds are a little different, though, as expected:


dd if=/dev/zero of=/Volumes/test/tmp.000 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 2227.664021 secs (48200349 bytes/sec)



dd if=/Volumes/test/tmp.000 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 1550.864058 secs (69235071 bytes/sec)


That translates into a write speed of 45.96 MB/s (approx. 37.11 minutes) and a read speed of 66.02 MB/s (approx. 25.85 minutes), from an SMB share of the same filesystem used in my first test, accessed over 802.11ac WiFi from my 2014 MacBook Pro.

But I don't expect to be able to improve those numbers all that much until I get serious about my home networking and at least get a better router and switch. My FreeNAS rig is 10Gbps enabled, but my current router/switch is a crappy RCN-provided 1Gbps Arris DG2470A, and something tells me I shouldn't have much higher hopes for it.
 
Joined
Jun 15, 2013
Messages
8
Thanks
0
Just found this post; this is the test on my main NAS with compression ON:

Code:
#WRITE
[root@nas2] /mnt/VOL01/TEMP# dd if=/dev/zero of=tmp.000 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 22.190906 secs (4,838,657,030 bytes/sec)
#READ
[root@nas2] /mnt/VOL01/TEMP# dd if=tmp.000 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 11.842084 secs (9,067,169,514 bytes/sec)
 