Drive/DD Testing - Clarification of odd results

Status
Not open for further replies.

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Greetings,

I've been doing basic performance testing on some newly acquired 6G SAS drives, and I'm not sure I understand my results.

Code:
dd if=/dev/da0 of=/dev/null bs=1M

This reports a read speed of around 135MBps, not as high as most 3.5" 7200RPM SATA drives, but tolerable given the slight seek-time advantage of SAS.

Code:
dd if=/dev/zero of=/dev/da0 bs=1M count=15000

This reports write speeds of 18MBps, yeah...18! Hmm...Terrible...

Hardware for this test is the same system as in my signature, but with an HP D2700 6G SAS enclosure added, connected via SFF-8088 to a 6G LSI 9205-8e (HP H221 HBA). It takes up to 25 2.5" disks.

I checked a few things out, thinking maybe the controller or new disk enclosure (HP D2700) sucked. I threw a consumer-grade SSD in one of the slots and got ~450MBps read and ~250MBps write speeds. So the controller and enclosure don't seem to be jacked, but who knows.
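For the record, the SSD check was just the same raw dd commands pointed at whichever device node the SSD came up as (daX below is a placeholder):
Code:
dd if=/dev/daX of=/dev/null bs=1M count=15000
dd if=/dev/zero of=/dev/daX bs=1M count=15000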

Next I'm thinking: my old 3G SAS drives totally suck (read ~80MBps and write ~30MBps), so why shouldn't these? But these disks are 6G and read much faster, so I'm a little puzzled why they perform worse on a dd write test. Maybe I'm the one screwing this up? Doing a wrong/stupid test?

I went ahead and threw two of these disks into a mirror to test writing in the pool; oddly, it shows better performance.
Code:
dd if=/dev/zero of=testfile bs=1M count=15000

This was run from inside a dataset called Test, and yes, both the dataset and the pool had compression turned off.

This time the result was ~250MBps write. That doesn't make a lot of sense to me, but that's why I'm asking: I must not understand what's going on, or maybe I'm totally doing something wrong.

I included a snip from the performance graph, which seems to show each drive writing at about ~150MBps. I can't understand why dd to the raw disk results in only 18, but the pool test results in ~150 per drive.
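In case it's useful, I believe the same per-disk numbers can be watched from a shell while the pool dd runs; something like this in a second session (assuming the two mirror members are da0 and da1):
Code:
zpool iostat -v Tier1-Group2 1
gstat -f 'da[01]'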

Thanks in advance



-Supporting Info Below-
Code:
[root@Carmel-SANG2] /mnt/Tier1-Group2/Test# zpool status Tier1-Group2
  pool: Tier1-Group2
state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        Tier1-Group2                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/318af5d4-52ac-11e5-8a24-0025902b18fa  ONLINE       0     0     0
            gptid/3228acb4-52ac-11e5-8a24-0025902b18fa  ONLINE       0     0     0

errors: No known data errors

Code:
[root@Carmel-SANG2] /mnt/Tier1-Group2/Test# zfs get compression Tier1-Group2
NAME          PROPERTY     VALUE     SOURCE
Tier1-Group2  compression  off       local

Code:
[root@Carmel-SANG2] /mnt/Tier1-Group2/Test# smartctl -a /dev/da0
smartctl 6.3 2014-07-26 r3976 [FreeBSD 9.3-RELEASE-p16 amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               HP
Product:              EG0300FAWHV
Revision:             HPDE
User Capacity:        300,000,000,000 bytes [300 GB]
Logical block size:   512 bytes
Rotation Rate:        10000 rpm
Form Factor:          2.5 inches
Logical Unit id:      0x5000c50043d00f43
Serial number:        6SE538AQ0000B214B9K0
Device type:          disk
Transport protocol:   SAS (SPL-3)
Local Time is:        Thu Sep  3 23:09:58 2015 EDT
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

The drive shows SPL-3 now that I've combined it with a SATA disk during testing, but no change in speeds has been seen between before and after (when there were no SATA drives present). It originally showed 600MBps (6G); I checked that it was properly detecting the link speed.
 

Attachments

  • Performance Graph.PNG (16.3 KB)

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Do you have ZFS in control of your drives, and are you then trying to test the disks while bypassing ZFS?

Did you try testing with different block sizes?
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Do you have ZFS in control of your drives, and are you then trying to test the disks while bypassing ZFS?
I'm not sure what you mean by this unless you're talking about them being in a pool. The drives were unused at the time, i.e. not added to any pools/volumes.

Did you try testing with different block sizes?
I didn't; I usually use 1M blocks as a standard for all the drives. Any pools I create with these drives are used for VMware datastores. Maybe I need to research a good block size for VMware further, but I thought 1M was right based on VMFS's block size of 1M.
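If it turns out to matter, I can repeat the raw write with a few different block sizes and compare; something along these lines (daX being one of the spare 6G disks, each run writing roughly 15GB):
Code:
dd if=/dev/zero of=/dev/daX bs=128k count=120000
dd if=/dev/zero of=/dev/daX bs=1M count=15000
dd if=/dev/zero of=/dev/daX bs=4M count=3750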
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Any other ideas or explanations of these results from anyone?
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
dd to raw disks may be inaccurate in some cases when the device has its caches disabled (which can happen with enterprise SAS disks). In that case each 128K block write will wait for a full platter revolution, which, with a track density of a few megabytes, will give you about 1/10 of normal throughput. ZFS, at the same time, normally sends several requests simultaneously, so it is much less affected by this issue.
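For example: a 10,000rpm drive makes about 167 revolutions per second, so waiting one full revolution per 128K write caps throughput at roughly 167 x 128KB ≈ 21MB/s, which lines up with the ~18MBps seen above.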

You can check caching for SAS disks with the `camcontrol mode daX -m 0x08` command. You should normally have RCD (Read Cache Disable) set to 0 and WCE (Write Cache Enable) set to 1.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
dd to raw disks may be inaccurate in some cases when the device has its caches disabled (which can happen with enterprise SAS disks). In that case each 128K block write will wait for a full platter revolution, which, with a track density of a few megabytes, will give you about 1/10 of normal throughput. ZFS, at the same time, normally sends several requests simultaneously, so it is much less affected by this issue.

You can check caching for SAS disks with the `camcontrol mode daX -m 0x08` command. You should normally have RCD (Read Cache Disable) set to 0 and WCE (Write Cache Enable) set to 1.
@mav, would using a "128K block write" change the dd test results?

P.S.
In my naivety, I was thinking that the read cache should be enabled, and that anywhere outside of ZFS the write cache should be disabled...
 
Last edited:

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
You can check caching for SAS disks with the `camcontrol mode daX -m 0x08` command. You should normally have RCD (Read Cache Disable) set to 0 and WCE (Write Cache Enable) set to 1.
Here's the result of checking the cache for a SAS disk
Code:
camcontrol mode da5 -m 0x08
IC: 0
ABPF: 0
CAP: 0
DISC: 1
SIZE: 0
WCE: 0
MF: 0
RCD: 0
Demand Retention Priority: 0
Write Retention Priority: 0
Disable Pre-fetch Transfer Length: 65535
Minimum Pre-fetch: 0
Maximum Pre-fetch: 65535
Maximum Pre-fetch Ceiling: 65535

EDIT:
I guess I can see how the grouping of commands might affect the raw performance; I just didn't notice it with other disks, but that's probably because they're all the same and it's affecting all of them.

I guess the best way to test a drive's functionality from that standpoint is to keep it in a pool and let ZFS do what it does best.

I just wanted to ensure these 6G disks and enclosure were going to be worth buying more of.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
@mav, would using a "128K block write" change the dd test results?
FreeBSD limits all raw I/O operations to 128K. That is why there is almost no difference whether you run dd with a 1M or 128K block size.

In my naivety, I was thinking that the read cache should be enabled, and that anywhere outside of ZFS the write cache should be disabled...
ZFS properly uses disk cache flushing, which means there is no need to disable the write cache of the underlying disk. With the cache disabled, the disk has to synchronize the cache after each write, while ZFS really needs that only at the edge of a transaction group, where it forces a cache flush anyway.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Here's the result of checking the cache for a SAS disk
Code:
WCE: 0
And here it is -- the write cache is not enabled! You can enable the write cache by editing this mode page, adding the -e argument to the command. Just be ready to edit it in vi. ;)
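Roughly, that edit would look like this (da5 as in the output above; check the camcontrol man page for the exact behavior of -e before relying on it):
Code:
# open the caching mode page (0x08) in $EDITOR (vi by default)
camcontrol modepage da5 -m 0x08 -e
# in the editor, change the line "WCE: 0" to "WCE: 1", then save and quit
# verify the change took
camcontrol modepage da5 -m 0x08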
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
@mav, thank you very much for the explanation!

May I ask one more question? :D

I know that the source would have the answer, it just might not be comprehensible to me... What is camcontrol mode? My man pages do not have it... Is there any other way of getting the same result? For example, to learn about caching on my disks I would use
Code:
[root@freenas /]# camcontrol identify 0:0
...
Feature      Support  Enabled  Value  Vendor
read ahead     yes      yes
write cache    yes      yes
flush cache    yes      yes
...
However, is camcontrol mode the only way to change the settings?

Thank you in advance!
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
I know that the source would have the answer, it just might not be comprehensible to me... What is camcontrol mode?
'mode' is an abbreviation of the 'modepage' subcommand.

My man pages do not have it... Is there any other way of getting the same result?
For SCSI disks, mode pages are the only way to control caching. You can use `camcontrol modepage` or maybe `sg_modes`.
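If it helps, camcontrol's -P page-control argument can also show the different copies of a mode page (da5 just as an example device):
Code:
camcontrol modepage da5 -m 0x08 -P 0   # current values
camcontrol modepage da5 -m 0x08 -P 2   # drive defaults
camcontrol modepage da5 -m 0x08 -P 3   # saved values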

For example to learn about caching on my disks I would use
Code:
[root@freenas /]# camcontrol identify 0:0
...
Feature      Support  Enabled  Value  Vendor
read ahead     yes      yes
write cache    yes      yes
flush cache    yes      yes
...
The `camcontrol identify` command works only for ATA disks. SCSI disks have a completely different command set, and so a different way to manage their settings.
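For a SCSI/SAS disk, the rough equivalents would be something like:
Code:
camcontrol inquiry da5            # vendor, product, revision, serial number
camcontrol modepage da5 -m 0x08   # caching settings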
 