Shouldn't this run faster?

Status
Not open for further replies.

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I did the same test so I could give you a point for comparison.
With compression on:
Code:
root@Emily-NAS:/mnt/iSCSI # dd if=/dev/zero of=testfile bs=1m count=15000
15728640000 bytes transferred in 3.665079 secs (4291487416 bytes/sec) or 34.3 gigabit

root@Emily-NAS:/mnt/iSCSI # dd if=testfile of=/dev/null bs=1m count=15000
15728640000 bytes transferred in 2.157541 secs (7290078234 bytes/sec) or 58.3 gigabit

With compression off:
Code:
root@Emily-NAS:/mnt/iSCSI # dd if=/dev/zero of=testfile bs=1m count=15000
15728640000 bytes transferred in 15.289570 secs (1028716961 bytes/sec) or 8.2 gigabit

root@Emily-NAS:/mnt/iSCSI # dd if=testfile of=/dev/null bs=1m count=15000
15728640000 bytes transferred in 8.046245 secs (1954780030 bytes/sec) or 15.6 gigabit

So your numbers are not bad. I have 16 drives in my iSCSI pool, but they are older drives that don't transfer data as quickly as new drives would.
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
So this is better....
[Attached screenshots: AJA benchmark results]

I was using that AJA benchmark tool and noticed that my speeds shot up after choosing "16bit RGBA" vs. the default 8 bit (I think 8 bit was the default). So this means I was successfully reading at 1500 Mbps (1.5 gigabits). Is it just the different type of data which skews these tests? This is encouraging, as it validates that the switch and NICs aren't necessarily an issue.

P.S. Disregard the four different metrics from esxtop. One is for the physical NIC and the other is the same metric from the VMkernel adapter. Only two NICs are involved.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Did you add a SLOG device to the pool yet?
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Is it just the different type of data which skews these tests?

Possibly. The dd tests were using a continuous stream of zeros. Even with compression turned off in ZFS, there are other parts of the system that could recognize that and take advantage of it. Drives could (and do, I think) use compression to increase their write cache space, and a megabyte of zeros compresses down very well.
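
One way to take zero-detection out of the picture is to test with incompressible data. A rough sketch, reusing the /mnt/iSCSI path from the test above (the file name and size are arbitrary):
Code:
# Build a test file from random data once; this step is limited by the random
# number generator, so don't read it as a write benchmark.
dd if=/dev/random of=/mnt/iSCSI/randfile bs=1m count=15000

# Time reading it back; nothing in the stack can shortcut incompressible data.
dd if=/mnt/iSCSI/randfile of=/dev/null bs=1m

# Clean up afterwards.
rm /mnt/iSCSI/randfile

As comes up later in the thread, the file should also be larger than RAM so the reads aren't simply served from ARC.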
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
Did you add a SLOG device to the pool yet?

Not yet. After enabling sync on the zvol for iSCSI with my desktop-grade Samsung SSD as SLOG, my write speed went from 105 MB/s to 40 MB/s, so I assumed I needed a much better SLOG device. This wouldn't affect reads in any way, correct?

I was hoping I could get closer to 2 gigabit read performance even without a SLOG or an L2ARC device, assuming each drive should be able to provide a sustained 60 MB/s.
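
If it helps anyone reproduce this, the sync behavior is just a property on the zvol; the pool/zvol names below are placeholders:
Code:
# Check how synchronous writes are currently handled on the iSCSI zvol:
zfs get sync tank/iscsi-zvol

# Force every write to be synchronous (this is what exercises the SLOG):
zfs set sync=always tank/iscsi-zvol

# Revert to the default, honoring only initiator-requested sync writes:
zfs set sync=standard tank/iscsi-zvol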
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
So your numbers are not bad. I have 16 drives in my iSCSI pool, but they are older drives that don't transfer data as quickly as new drives would.

Thank you! Are you using VMware ESXi with this array at all?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Drives could (and do, I think) use compression to increase their write cache space, and a megabyte of zeros compresses down very well.
I did the test on my system using /dev/random and got very different results.
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
Not really helpful or related, but check these numbers out from our "Pure" flash array at work. I implemented it back toward the end of 2016 with a set of Nexus 10Gb switches. Wow, this is fast compared to what I am working with here at home. Just need $50k now... lol

[Attached screenshot: Pure flash array benchmark results]
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
Those who fail history are doomed to repeat it.

https://forums.freenas.org/index.php?threads/notes-on-performance-benchmarks-and-cache.981/

Literally, the top post.

-tm

Lol... Nice. I feel like this was a quote from history that I should know, but I don't, and I feel stupid. So, reading the first post from the referenced thread, it looks like the key is actually testing with a file larger than the memory you have on board? That way you aren't getting a memory speed test; you are actually hitting the drives...?

In my case, I have 8 WD Red 2TB drives in basically a RAID 10 configuration (I know ZFS doesn't call it that): 4 mirrors. I have 32 GB of RAM, so should I be testing with at least a 64 GB file..? Am I on the right track? Here's a quote for you, albeit a much less impressive one: "I want to learn the ways of the Force like my father."

Help me, Obi-Wan Kenobi, you're my only hope... ;)
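
Assuming that is the right approach (millst confirms it below), a rough sketch of such a test on this box might be the following; the dataset path is a placeholder, and 65536 x 1 MB is roughly 64 GB, twice the 32 GB of RAM:
Code:
# Write a ~64 GB file so ARC cannot hold all of it:
dd if=/dev/zero of=/mnt/tank/testfile bs=1m count=65536

# Read it back; most of the data now has to come from the disks, not the cache:
dd if=/mnt/tank/testfile of=/dev/null bs=1m

# Clean up:
rm /mnt/tank/testfile

With lz4 compression enabled on the dataset, the zeros will still be compressed away, so either test on a scratch dataset with compression off or generate the file from /dev/random as described earlier in the thread.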
 

Adam Tyler

Explorer
Joined
Oct 19, 2015
Messages
67
The drives don't report any errors, but that doesn't mean much because they've never had a single SMART test run on them. Run a long test on all drives using smartctl -t long /dev/adaX and wait a few hours for them to finish.

Closing the loop here. Output from each drive after running the long test. Think the drives are fine?

Code:
root@TYL-NAS01:~ # smartctl -a /dev/da0 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da1 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da2 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da3 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da4 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da5 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da6 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
root@TYL-NAS01:~ # smartctl -a /dev/da7 | grep "SMART overall-health self-assessment"
SMART overall-health self-assessment test result: PASSED
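
For completeness, the per-drive self-test log (shown here for just the first drive) would confirm whether each long test ran to the end without errors, rather than just the overall assessment:
Code:
smartctl -l selftest /dev/da0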
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thank you! Are you using VMware ESXi with this array at all?
That is the plan, but it is for my home-lab and progress has been slow due to competing priorities. For the moment, my ESXi system is using an internal SSD for storage as I have not had the time to touch it since I got my iSCSI pool configured.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
root@TYL-NAS01:~ # smartctl -a /dev/da0 | grep "SMART overall-health self-assessment"
The value of what the drive reports here is somewhere between zero and "worthless". I've seen disks report all sorts of errors while still saying "Yup, I'm healthy!".
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The value of what the drive reports here is somewhere between zero and "worthless". I've seen disks report all sorts of errors while still saying "Yup, I'm healthy!".
I second that assessment. If a drive fails the internal self-check, it never comes ready to begin with; that is when you get a drive that is connected but never 'shows up'. Drives can have all sorts of problems and still 'pass'. They are not SMART at all... They need to give us the data so we can be the judge.
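
For example, something along these lines pulls the usual red-flag attributes for one of the drives from the output above (repeat for each drive); non-zero raw values are what to worry about on a spinning disk:
Code:
smartctl -A /dev/da0 | egrep "Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count"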
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Not really helpful or related, but check these numbers out from our "Pure" flash array at work. I implemented it back toward the end of 2016 with a set of Nexus 10Gb switches. Wow, this is fast compared to what I am working with here at home. Just need $50k now... lol

View attachment 23634

My little living room server was not $50k.

[Attached photo: living room server build]


Note that more realistic performance is about 700 MB/s:

https://forums.freenas.org/index.php?threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/
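
If you want to see what a near-ideal SLOG would do before spending money on one, the linked thread benchmarks with a RAM disk. On FreeBSD/FreeNAS that looks roughly like the following; the size and pool name are placeholders, and a RAM-backed SLOG is strictly for testing, never for data you care about:
Code:
# Create a 4 GB memory-backed disk; the command prints the device name, e.g. md0:
mdconfig -a -t malloc -s 4g

# Attach it to the pool as a log device, then rerun the sync-write tests:
zpool add tank log md0

# Detach it when finished:
zpool remove tank md0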
 

millst

Contributor
Joined
Feb 2, 2015
Messages
141
So, reading the first post from the reference thread it looks like the key is actually testing a file larger than you have actual memory on board? That way you aren't getting a memory speed test, you are actually hitting the drives...?

Yeah, make sure that you are using the correct if/of, a large enough block size, and enough blocks to prevent the cache from interfering too much.

-tm
 