HP DL180 G6 SAS Backplane with IBM ServeRAID M1015


DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Re-running for 100GB...
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Signature updated.

So it looks like 2.80 GB/sec write and 5.08 GB/sec read with this setup. The server has not been rebooted in 90 days either, but it does not get heavily utilized. It's a pretty basic file server.

Yeah, as mentioned above, that seems rather high. If accurate, I'm pretty excited.

What kind of drives are you using, and in what configuration?

I fear that with a 50GB test file on a system with 48GB of RAM, you may actually be testing RAM bandwidth to the RAM cache (ARC) instead of the actual array speed.
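
If you want to be sure the cache isn't skewing things, one option is to size the test file off the machine's actual RAM. A rough sketch, assuming a FreeBSD/FreeNAS shell and the pool path from your earlier post (adjust to yours):

Code:
# Make the test file at least twice the size of physical RAM so the
# ARC (RAM cache) can't hold all of it.
RAM_MB=$(( $(sysctl -n hw.physmem) / 1048576 ))
COUNT=$(( RAM_MB * 2 ))
dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=${COUNT}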
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
And I did delete the ddtest.bin file before re-running the test.

[root@GMC-NAS-01 /]# dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 36.675428 secs (2859069554 bytes/sec)
[root@GMC-NAS-01 /]# dd if=/mnt/ZFSVolume/ddtest.bin of=/dev/zero bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes transferred in 20.654575 secs (5076725105 bytes/sec)
[root@GMC-NAS-01 /]#
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Re-running for 100GB...

Just saw this.

Looking forward to seeing it!

Signature updated.

So it looks like 2.80 GB/sec write and 5.08 GB/sec read with this setup. The server has not been rebooted in 90 days either, but it does not get heavily utilized. It's a pretty basic file server.

With Linux/Unix systems I think you'll find that frequent reboots won't be necessary. My server essentially only reboots during long power outages or upgrades. These things are very stable.
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
The drives are pretty straightforward 7,200rpm SATA server drives. Not sure on the exact model.
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Heck...re-running for 500GB now. I want to see what happens.
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Still running, web interface is still very responsive. %CPU is sitting at no more than 20%, "System Load" is at 1.5.

Done...

[root@GMC-NAS-01 /]# dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=500000
500000+0 records in
500000+0 records out
524288000000 bytes transferred in 182.904821 secs (2866452599 bytes/sec)
[root@GMC-NAS-01 /]# dd if=/mnt/ZFSVolume/ddtest.bin of=/dev/zero bs=1M count=500000
500000+0 records in
500000+0 records out
524288000000 bytes transferred in 105.013656 secs (4992569733 bytes/sec)
[root@GMC-NAS-01 /]#
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
The drives are pretty straightforward 7,200rpm SATA server drives. Not sure on the exact model.

Nice,

Yeah, most people on here use consumer drives due to the cost and noise levels, so I'm not used to seeing those.

Performance is going to be very dependent on how they are set up.

How have you configured your pool/vdevs?

Are they all in the same vdev, or multiple vdevs in a pool? RAIDZ1, RAIDZ2, RAIDZ3, or mirrors?
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Heck...re-running for 500GB now. I want to see what happens.

I would expect speeds to keep getting slower the larger you make the test file, slowly approaching the true array speed as the RAM cache becomes a smaller and smaller fraction of the test.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Still running, web interface is still very responsive. %CPU is sitting at no more than 20%, "System Load" is at 1.5.

Done...

[root@GMC-NAS-01 /]# dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=500000
500000+0 records in
500000+0 records out
524288000000 bytes transferred in 182.904821 secs (2866452599 bytes/sec)
[root@GMC-NAS-01 /]# dd if=/mnt/ZFSVolume/ddtest.bin of=/dev/zero bs=1M count=500000
500000+0 records in
500000+0 records out
524288000000 bytes transferred in 105.013656 secs (4992569733 bytes/sec)
[root@GMC-NAS-01 /]#

Very nice!

Again, would be interested in knowing how you have your volume configured.
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Is there an easy way to dump the config to a text file? I don't remember exactly how I set it up, it was almost a year ago now. I can see the drives and volumes in the graphical interface, but I think a text file dump would make more sense on the forum.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
You can display basic info with the "zpool status" command.

If you want to dump it to a file (this is a unix thing) do "zpool status >/mnt/volumename/filename.txt"

You can then copy this file from the mounted network drive.

Just keep in mind that *nix uses a different line-break format than DOS/Windows text files, so it may look odd in Notepad unless you open it in WordPad or Word, which can interpret the Unix line breaks.

If you paste it here inside CODE tags it will print in a fixed-width font and should look right.
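
A minimal sketch of that sequence, assuming the pool from this thread is mounted at /mnt/ZFSVolume (substitute your own path and file name):

Code:
# Dump the pool layout to a text file on the pool itself
zpool status > /mnt/ZFSVolume/zpool-status.txt
# Sanity-check it on the server before copying it off
cat /mnt/ZFSVolume/zpool-status.txt
# Optional: convert to DOS/Windows (CRLF) line endings so plain Notepad is happy
awk '{ printf "%s\r\n", $0 }' /mnt/ZFSVolume/zpool-status.txt > /mnt/ZFSVolume/zpool-status-dos.txt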
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
You have to disable compression on the dataset under test if you use /dev/zero as a data source. All those 50 gigs of zeros will be compressed down nicely into just a handful of bytes, resulting in insanely high and misleading results.
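
A quick way to sanity-check this before benchmarking (the dataset name is the one from this thread; substitute your own):

Code:
# Show whether compression is enabled on the dataset under test
zfs get compression,compressratio ZFSVolume
# If compression is on, a /dev/zero test file shrinks to almost nothing,
# which is what makes the dd numbers come out unrealistically high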
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Well...that makes sense. Figuring out how to disable compression on the dataset...
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Well...that makes sense. Figuring out how to disable compression on the dataset...

You can do this easily from the web interface (Volumes, click your volume, then the wrench for settings), but I am not sure how to do it from the command line.

I finally got my DL180 G6 up and running.

My pool is now:
two 6-drive RAIDZ2 vdevs
a pair of mirrored Samsung 850 Pro SSDs as the SLOG (ZIL) device.

Looks as follows:
Code:
# zpool status
  pool: zfshome
 state: ONLINE
  scan: resilvered 1.28T in 10h9m with 0 errors on Mon Aug 25 05:36:39 2014
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfshome                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/85faf71f-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/86d3925a-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/87a4d43b-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/887d5e7f-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/89409ac9-2b00-11e4-bc04-d8d3855ce4bc  ONLINE       0     0     0
            gptid/3db34343-2bff-11e4-b231-001517168acc  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/56fb015b-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/576cde68-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/57dbbac1-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/584a4dcc-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/58f4ec2f-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/5a0a813f-2bfc-11e4-be49-001517168acc  ONLINE       0     0     0
        logs
          mirror-2                                      ONLINE       0     0     0
            gptid/0053fa01-2bfd-11e4-be49-001517168acc  ONLINE       0     0     0
            gptid/007bf444-2bfd-11e4-be49-001517168acc  ONLINE       0     0     0


I have the same compression issue you do, and I don't have remote access to the web interface, so I can't test my new speeds. My old pool didn't have compression enabled (it was created long ago, before compression was a default setting, and I never enabled it).

I did have an overheating issue during resilvering though.

I spent so much time during my fan mod (to make this thing quiet enough for my basement) worrying about CPU temps that I neglected cooling of the riser card in the back bay (where you have your two rear drives). I incorrectly assumed that since the IBM M1015 was in IT mode and not doing parity calculations, it wouldn't get hot. Turns out it does.

The solution to this is probably to move it to the low profile riser slot on the other side. I can easily get a low profile bracket for the M1015, but the problem is I have a low profile dual port Intel Pro/1000 PT adapter in that low profile slot, and I can't for the life of me find a full height bracket for it, to move it over :(
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Actually, it's pretty easy from the CLI as well:

# zfs set compression=off poolname

Once done, turn it back on with:

# zfs set compression=lz4 poolname
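
One general ZFS note (not specific to this setup): setting the property on the pool's root dataset also flows down to any child datasets that inherit it. If you only want to affect the dataset you are benchmarking, target it directly. A hedged sketch, with "poolname/dataset" as a placeholder:

Code:
zfs get compression poolname/dataset        # note the current value first
zfs set compression=off poolname/dataset
# ...run the dd benchmark...
zfs inherit compression poolname/dataset    # revert to the inherited value
# (or "zfs set compression=lz4 poolname/dataset" if it was set locally)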
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Alright,

With compression off, my dd bench of 500 gigs (to make sure I completely take the ARC (RAM cache) out of the picture) is:

680MB/s reads
340MB/s writes

This is a little bit disappointing for my setup; I was expecting a lot more, considering I was getting ~800MB/s reads and ~480MB/s writes on my old single 6-disk RAIDZ2. It could be due, in part, to the backplane slowing things down (the drives used to be attached directly to the controller), or to the M1015 getting a little hot and throttling.

It's a bit disappointing, but it will likely be sufficient (I hope).
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Also, would you mind sharing your drive temps? Mine are running rather hot crammed in this tight backplane. Want to make sure it's not just me.

Easy way to accomplish this is:

# smartctl -a /dev/da1 (or da2, da3, whichever device you are interested in) and look for anything related to temperature.
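
If you have a lot of drives, a small loop saves some typing. A rough sketch (on FreeBSD, "sysctl -n kern.disks" lists the whole-disk device names; adjust if yours differ):

Code:
# Print just the temperature lines from each disk's SMART output
for d in $(sysctl -n kern.disks); do
    echo "== /dev/${d} =="
    smartctl -a "/dev/${d}" | grep -i temperature
done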

I'd really appreciate it!
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Alright,

With compression off, my dd bench of 500 gigs (to make sure I completely take the ARC (RAM cache) out of the picture) is:

680MB/s reads
340MB/s writes

This is a little bit disappointing for my setup; I was expecting a lot more, considering I was getting ~800MB/s reads and ~480MB/s writes on my old single 6-disk RAIDZ2. It could be due, in part, to the backplane slowing things down (the drives used to be attached directly to the controller), or to the M1015 getting a little hot and throttling.

It's a bit disappointing, but it will likely be sufficient (I hope).

So for comparison, I have since moved away from the DL180 G6, primarily due to noise concerns. There was simply no way to make it even basement-compatible (I could hear it running in the basement while trying to sleep two floors up in the bedroom, with multiple closed doors in between).

In my new setup I kept everything identical, except that instead of using the HP SAS expander and backplane, I used a Norco backplane and two M1015 SAS controllers hooked up directly to the drives, with no expander.

I think my theory about the performance being held back by the integrated HP SAS expander was accurate. Performance with this otherwise identical system is as follows:

950MB/s reads
675MB/s writes

This was, of course, with compression temporarily set to "off".

The DL180 G6 seems great if you have a high noise tolerance and want a simple file server, but if you need high performance out of said file server, maybe something else is better.
 

MtK

Patron
Joined
Jun 22, 2013
Messages
471
Hey,
I'm looking into taking similar action and replacing the P410i on my HP DL180 G6 with a "normal" SAS controller, but looking at the M1015 I see it has the connectors (for the cable) on the top edge, while the P410 has them on the rear edge of the card.

Is that an issue?
Is the existing cable from the P410 to the backplane long enough?
 