HP DL180 G6 SAS Backplane with IBM ServeRAID M1015


DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Here is the question of the day... please accept my apology, as I have tagged along on a few other threads with this question but haven't gotten much of a response yet.

The chassis came with a P410 RAID controller. It only works with FreeNAS if you configure each drive as its own RAID0 array, and it turns out that relying on that configuration is a really bad idea.

My question is this... if I replace the HP P410 controller with an IBM ServeRAID M1015 controller, will the controller see all 14 drives via the single SAS cable that connects to the SAS backplane (which then has two additional SATA cables running to the two rear-facing drives)?

Simple explanation via a quick-and-dirty cell phone video clip: http://youtu.be/r9ITw7Rp21o
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
ANSWER: Yes! The IBM M1015 arrived today. I crossflashed it into IT mode and it recognizes all 14 drives via the single SAS cable. Good to go!
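
(For anyone else checking a similar setup: from the FreeNAS console you can confirm the HBA sees every drive behind the expander with camcontrol; each disk should show up as its own daX device.)

Code:
camcontrol devlist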
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
...now to figure out the best, most reliable way to configure these 14 drives...
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Hate to revive an older thread, but did you ever have any luck with this? What kind of performance do you see through that backplane? At what speed do the SATA drives connect?
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
The system has been performing beautifully, no issues to speak of but it does not get stressed hard. It's a pretty basic file server now only for my IT group at work. Let me know how or what you'd like me to test and I'll post the results.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
The system has been performing beautifully, no issues to speak of but it does not get stressed hard. It's a pretty basic file server now only for my IT group at work. Let me know how or what you'd like me to test and I'll post the results.

Thanks for sharing your experiences :)

How have you set up the drives?

I plan on only using the 12 bays up front (I need the back for a riser card, so that eliminates those bays). In my configuration there will be two 6 drive RAIDz2 vdevs in the pool (so I guess this is equivalent to RAID 60?).
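
(Just for illustration, a pool laid out like that could be created from the shell roughly like this; "tank" and the da0-da11 device names are placeholders, and the FreeNAS GUI would normally handle this for you.)

Code:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11

ZFS stripes writes across the two vdevs, which is why it behaves a lot like RAID 60.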

What kind of performance have you gotten out of it? Have you done DD testing?
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
How do I do DD testing?

How should I dump the config to post here on the forum? I can grab screenshots if you'd like.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
How do I do DD testing?

How should I dump the config to post here on the forum? I can grab screenshots if you'd like.

Don't go out of your way for me (unless you are curious yourself). I'm just trying to figure out what to expect, but all my parts are already en route, so my costs are sunk either way.

A best-case estimate of transfer speed can be obtained with dd: pull data from /dev/zero (essentially all zeros) and write it to the pool for write speed, then read it back from the pool to /dev/null for read speed.

It will be a best-case scenario, as all zeros are highly compressible and large block sizes approximate sequential transfer speeds, but it does help give an idea of what kind of performance you have.

Run the following command to test Write speed
Code:
dd if=/dev/zero of=testfile bs=1M count=30000


Run the following command to test Read speed
Code:
dd if=testfile of=/dev/null bs=1M count=30000


Run these tests directly from the FreeNAS console, otherwise you are just measuring your networking bottleneck if you do it remotely.

The testfile is any file name on your ZFS volume. Specify location as /mnt/volumename/testfile.bin (or whatever you want to call it)

You can change the sizes as you deem necessary. bs is the block size; 1M is large enough to approximate sequential transfers. The count is multiplied by the block size to determine the total size of the write/read. The 30,000 figure I used above will result in a ~30GB file and should be large enough to get a decent approximation on most systems. Just make sure you make this figure MUCH larger than available RAM, or all you are measuring is your write speed to the ARC (cache in RAM), not your disk speed.
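
(If you're not sure how much RAM the box has, this prints the physical memory size in bytes from the FreeNAS console.)

Code:
sysctl hw.physmem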

For obvious reasons you have to do the write test before you do the read test (or there won't be a file to read :p )

The output will be in bytes per second. As always, divide by (1024^2) to get MB/s.
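
(For example, a dd result of 383619038 bytes/sec works out to 383619038 / 1048576 ≈ 366 MB/s, as in the example below.)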

(Commence obligatory argument about base-10 vs. binary sizes, to which my obligatory response is: kibi, mebi, gibi, tebi, etc. is bullshit, and the standards bodies can suck it. 1KB is and always will be 1024B, 1MB is and always will be 1024KB, etc., etc.)

Oh, and you obviously want to do this when the server is not in active use, as users will be dramatically slowed down if they access it during the test (and the test won't be accurate as a result).
 
Last edited:

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
As an example, my output on my current ghetto homebuilt system with consumer parts looks like this: (I upped the counts to 60,000 to make it more accurate)

Write speed: (approx. 366MB/s)
Code:
# dd if=/dev/zero of=/mnt/RAIDz2-01/testfile.bin bs=1M count=60000
60000+0 records in
60000+0 records out
62914560000 bytes transferred in 164.002705 secs (383619038 bytes/sec)


Read Speed: (approx. 570MB/s)
Code:
# dd if=/mnt/RAIDz2-01/testfile.bin of=/dev/null bs=1M count=60000
60000+0 records in
60000+0 records out
62914560000 bytes transferred in 105.176740 secs (598179408 bytes/sec)


This is with an IBM M1015 SAS controller flashed with IT firmware on AMD FX-8350 with 24GB of RAM and a single 8 drive RAIDz2 vdev on my pool, made up of an even mix of 3TB WD Greens and 4 TB WD Reds.

My speeds with this setup are a little slower than they should be because I chose to build an 8 drive RAIDz2 which is inefficient (ideal size for RAIDz2 is 6 or 10 drives).
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
This is with an IBM M1015 SAS controller flashed with IT firmware on AMD FX-8350 with 24GB of RAM and a single 8 drive RAIDz2 vdev on my pool, made up of an even mix of 3TB WD Greens and 4 TB WD Reds.

My speeds with this setup are a little slower than they should be because I chose to build an 8 drive RAIDz2 which is inefficient (ideal size for RAIDz2 is 6 or 10 drives).


For my HP server I plan on doing it right.

2x 6-disk RAIDz2 vdevs, with a mirrored set of SSDs as a SLOG for the ZIL. I'm just hoping the expander/backplane in the DL180 doesn't sabotage my plans by slowing me down.
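
(For reference, attaching a mirrored SLOG later is roughly a one-liner from the shell; "tank" and ada0/ada1 are placeholder names for the pool and the SSDs.)

Code:
zpool add tank log mirror ada0 ada1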
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
ANSWER: Yes! The IBM M1015 arrived today. I crossflashed it into IT mode and it recognizes all 14 drives via the single SAS cable. Good to go!

Side note: You never updated your signature to reflect this :p
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
[root@GMC-NAS-01 ~]# dd if=/dev/zero of=testfile bs=1M count=50000
dd: testfile: Read-only file system
[root@GMC-NAS-01 ~]#

I want to run it for several reasons. I'd like to see how this platform performs, and I'm a Linux noob so I need to learn the how and why at the command line (like the above error message)...
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
[root@GMC-NAS-01 ~]# dd if=/dev/zero of=testfile bs=1M count=50000
dd: testfile: Read-only file system
[root@GMC-NAS-01 ~]#

I want to run it for several reasons. I'd like to see how this platform performs, and I'm a Linux noob so I need to learn the how and why at the command line (like the above error message)...

"testfile" needs to be /mnt/volumename/testfilename to make sure it is testing your volume, not the root file system (which is read only by default, as you just discovered).

Otherwise it just tries to write the file in the current directory, which, if you just SSH'ed in, is probably the /root folder.
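
(If you don't remember the volume name, this lists your ZFS datasets and their mount points under /mnt.)

Code:
zfs list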

Don't worry, it gets easier with time, and before you know it you'll be just like one of us, who prefer the command line and editing searchable text config files to digging through cumbersome GUI menus to try to set things up. :)

For what it's worth, while the Linux and BSD shells are very similar, there are a number of differences (particularly surrounding command syntax, which is usually VERY CLOSE but ever so slightly different), and I find the more recent Linux shells much easier to use, myself.

Never quite understood why the FreeNAS team decided to build FreeNAS around BSD, and not a choice like Solaris/OpenSolaris/OpenIndiana/OmniOS (which has the latest ZFS revisions AND hot swapping) or Linux.

BSD always struck me as an odd choice (due to hardware support, shell usability, ZFS revision, and lack of hot swap), but I like the HTTP config setup, so I keep using it rather than doing it all manually with ZFS on Linux.

After all, the team contributed their time to make it, not me, so I'm not going to complain about an awesome free platform :p
 
Last edited:

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
"testfile" needs to be /mnt/volumename/testfilename to make sure it is testing your volume, not the root file system (which is read only by default, as you just discovered)

:)

Oh, and don't forget to remove your testfile when done with the last test, or you'll be wasting (in your case) ~50GB of disk space:

Code:
rm /mnt/volumename/testfilename
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
[root@GMC-NAS-01 /]# dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 18.678116 secs (2806964030 bytes/sec)
[root@GMC-NAS-01 /]# dd if=/mnt/ZFSVolume/ddtest.bin of=/dev/zero bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 10.315550 secs (5082501734 bytes/sec)
[root@GMC-NAS-01 /]#
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
I can run more extensive tests for comparison too...I don't have any time to dig into what it takes to tune this setup either. Any suggestions would be wonderful.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
[root@GMC-NAS-01 /]# dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 18.678116 secs (2806964030 bytes/sec)
[root@GMC-NAS-01 /]# dd if=/mnt/ZFSVolume/ddtest.bin of=/dev/zero bs=1M count=50000
50000+0 records in
50000+0 records out
52428800000 bytes transferred in 10.315550 secs (5082501734 bytes/sec)
[root@GMC-NAS-01 /]#


Wow! Those numbers are very high. I'm excited about this setup now! The only thing is that a 50GB test is a little close to your 48GB of RAM, so what you are measuring is probably mostly RAM speed and not disk speed.
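
(If you ever feel like re-running it, something like this writes a ~150GB file, which should blow well past the ARC and get closer to actual disk speed, assuming you have the free space; the same caveat about zeros being compressible still applies.)

Code:
dd if=/dev/zero of=/mnt/ZFSVolume/ddtest.bin bs=1M count=150000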

What kind of drives are you using, and in what configuration?
 

DoulosGeek

Dabbler
Joined
Sep 16, 2013
Messages
34
Signature updated.

So it looks like 2.80 GB/sec write and 5.08 GB/sec read with this setup. The server has not been rebooted in 90 days either, but it does not get heavily utilized. It's a pretty basic file server.
 