BUILD Dell R720 with PERC H710P Mini - RAID Settings?

Vidis

Dabbler
Joined
Jan 25, 2017
Messages
21
First off, I'm new to FreeNAS, so please don't flame me if I say something wrong.

I have just inherited 3x Dell R720 servers that I have configured the following way.

2 x ESXi 6.5 hosts that are connected to shared iSCSI storage.
1 x spare server that I want to install FreeNAS on and share storage through FC.

I would like to replace my current slow iSCSI solution with the new FreeNAS server and at the same time learn about configuring and managing a Fibre Channel network.

Since I don’t have much money to spend on additional hardware I would like to use what I have in the servers.
This means that I will use the H710 RAID controller that is included in the server. I know it's bad practice to do it this way, but since this is just a test environment that I will use for learning purposes, I thought I would get away with it. :)
Also, the servers have iDRAC, which I will use to monitor the hardware for failures.

My question now is: how should I configure the RAID for the least evil configuration?
  • Should I create 1 large VD and present it to FreeNAS, or should I carve the available disks up into multiple smaller VDs?
  • Are there any pros and cons to doing it either way?
  • Should I disable the caching on the RAID controller or leave it on?
  • What is the recommended RAID level for this? I was planning to set this up with RAID 5 to make the most of the available disk space.
  • Are there any other settings I need to / should change with this config?
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
Welcome to the forums!

While I understand that you're setting up a test environment on a budget, I do want to point out that you could pull the H710, sell it on eBay, and probably purchase 2 LSI SAS2008-based cards (IBM M1015, LSI 9211-8i) with what you get for it... The H710 is a great RAID card, but it's just not a good choice for FreeNAS. There are a ton of reasons why, but they're covered in depth elsewhere on the forum, so I'm not going to repeat them all here...

If you're insistent on keeping the H710, you're going to want to set up a RAID-0 virtual disk for each of your physical hard drives. Do not, under any circumstances, set up RAID-5 on the H710 and pass that through to FreeNAS! Depending on how many physical drives you have, you'll then want to configure either a mirror or one of the RAIDZ levels from within FreeNAS. As you would already be making some pretty serious data-integrity compromises at this point, it's probably not going to be any more risky to leave the on-board cache enabled, although I'm not sure how much performance gain you're going to see...

This approach would be fine for testing, but do not put anything on it you wouldn't be OK with losing!
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
First off, I'm new to FreeNAS, so please don't flame me if I say something wrong.

I have just inherited 3x Dell R720 servers that I have configured the following way.

2 x ESXi 6.5 hosts that are connected to shared iSCSI storage.
1 x spare server that I want to install FreeNAS on and share storage through FC.

I would like to replace my current slow iSCSI solution with the new FreeNAS server and at the same time learn about configuring and managing a Fibre Channel network.

Since I don’t have much money to spend on additional hardware I would like to use what I have in the servers.
This means that I will use the H710 RAID controller that is included in the server. I know it's bad practice to do it this way, but since this is just a test environment that I will use for learning purposes, I thought I would get away with it. :)
Also, the servers have iDRAC, which I will use to monitor the hardware for failures.

My question now is: how should I configure the RAID for the least evil configuration?
  • Should I create 1 large VD and present it to FreeNAS, or should I carve the available disks up into multiple smaller VDs?
  • Are there any pros and cons to doing it either way?
  • Should I disable the caching on the RAID controller or leave it on?
  • What is the recommended RAID level for this? I was planning to set this up with RAID 5 to make the most of the available disk space.
  • Are there any other settings I need to / should change with this config?
Welcome to the (relatively flame-free) Forums!

I realize you're in Sweden and probably don't have access to the good, used, and relatively inexpensive HBA cards such as the LSI 9211/IBM M1015/Dell H200 units available here in the US on eBay... but would you please at least look for one before you do this horrible thing you're planning to do? :smile:

Also, instead of using the H710 RAID card, have you considered simply using the SATA ports on your motherboard? That would be preferable to using the RAID card, and I strongly suggest this alternative! While it may give you fewer drives and therefore less capacity, at least it will allow you to configure mirrors and so forth, and it doesn't require a cash outlay.

Your proposed configuration isn't going to tell you very much about the performance of FreeNAS as an iSCSI server, because you won't be able to configure it correctly. For block storage you basically want:
  • Mirrors (not any kind of RAID or RAIDZ array)
  • A SLOG device (Intel DC S3700, P3700, or similar)
  • Low space utilization (e.g., if you need 1TB of usable space for VMs then build a system with 2 or 3TB of capacity)
  • A minimum of 32GB of RAM
What is likely to happen is this: you build this misconfigured system; its performance is awful; you (erroneously) decide that FreeNAS is awful, too. :rolleyes:

Having said all that... regarding the least evil configuration...

If you can pass each individual drive to FreeNAS as a JBOD or RAID0 unit, do that. If you can't do that but can set up mirrors, do that. If you can't set up mirrors, RAID5 will be fine.

Write caching? Probably best to turn it off.

Again, I will make one last plea to you: please try to find a proper HBA or use your mobo's SATA ports and configure FreeNAS correctly! :D
 

Vidis

Dabbler
Joined
Jan 25, 2017
Messages
21
Thanks for the quick replies.
It almost sounds like you don't think it's a good idea to go down my suggested path :p

By the sounds of it, if I want to have any chance of a stable system, I guess I will need to get myself a new HBA.

Since I have Dell servers I would like to stick with Dell hardware, so I'm thinking of getting an H310 Mini to replace my H710 Mini and then putting the H310 Mini in JBOD mode.
This way I should be able to use all 16 of my drive slots, right? Or am I reading the specs wrong on http://www.dell.com/learn/us/en/04/campaigns/dell-raid-controllers.

Currently I only have 12 176GB SAS disks, so I won't be able to fill all the slots for now. If I get more disks further down the road, can I then extend my current volume without breaking it?

Since I actually have a 4th server with 196GB of RAM in it, I could technically move all of that RAM over to the R720 servers. But if I do that, I won't be able to use the 4th server for anything, since it will have no RAM left in it.
If you had this option, how much RAM would you move into the FreeNAS server?

I’m also thinking of installing the FreeNAS OS on the embedded SD card that is included in the server. What are your thoughts on that?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Thanks for the quick replies.
It almost sounds like you don't think it's a good idea to go down my suggested path :p
Ha! What gave you that idea? :rolleyes:
By the sounds of it, if I want to have any chance of a stable system, I guess I will need to get myself a new HBA.

Since I have Dell servers I would like to stick with Dell hardware, so I'm thinking of getting an H310 Mini to replace my H710 Mini and then putting the H310 Mini in JBOD mode.
This way I should be able to use all 16 of my drive slots, right? Or am I reading the specs wrong on http://www.dell.com/learn/us/en/04/campaigns/dell-raid-controllers.
Yes, sir, an H310 would work fine. Search the forum and you'll find instructions for flashing it to IT mode.
Currently I only have 12 176GB SAS disks, so I won't be able to fill all the slots for now. If I get more disks further down the road, can I then extend my current volume without breaking it?
Yes. Because you're setting up block storage, you'll want to use mirrors. With 12 drives, you'll start out with a pool of 6 vdevs, with each vdev made up of 2 mirrored drives. This will give you 50% space efficiency; only ~6 x 176GB = ~1056GB of storage. You can expand this pool by adding additional mirrored pairs, and the new drives don't have to be the same size as the relatively small drives you're starting out with. So adding a pair of 1TB drives will drastically increase your pool size. Also, keep in mind that you shouldn't ever use more than 40-50% of your available space when sharing out block storage via iSCSI. So right now, you need to limit your VM storage to 500GB or less.
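To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The drive counts and the ~50% iSCSI utilization guideline come from the paragraph above; the figures are approximations, and real usable space will be a bit lower after ZFS metadata and reservations.

# Pool sizing for 12 x 176GB drives arranged as 2-way mirrors.
DRIVE_GB = 176
DRIVES = 12
MIRROR_WIDTH = 2

vdevs = DRIVES // MIRROR_WIDTH        # 6 mirrored vdevs
usable_gb = vdevs * DRIVE_GB          # ~1056 GB after mirroring (50% efficiency)
iscsi_budget_gb = usable_gb * 0.5     # ~528 GB at the ~50% utilization guideline

print(f"{vdevs} vdevs, ~{usable_gb} GB usable, ~{iscsi_budget_gb:.0f} GB for iSCSI")

# Expansion: a mirrored pair of 1TB drives later adds ~1000 GB of usable space.
print(f"after adding a 1TB mirror: ~{usable_gb + 1000} GB usable")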

Also note that you should plan on either installing a SLOG device or turning off synchronous writes on your iSCSI dataset. Performance will be awful unless you do one of these two things. The latter isn't recommended for production systems, but you get a pass 'cause you're setting up a lab.
Since I actually have a 4th server with 196GB of RAM in it, I could technically move all of that RAM over to the R720 servers. But if I do that, I won't be able to use the 4th server for anything, since it will have no RAM left in it.
If you had this option, how much RAM would you move into the FreeNAS server?
FreeNAS loves RAM...
I’m also thinking of installing the FreeNAS OS on the embedded SD card that is included in the server. What are your thoughts on that?
I have no experience with running FreeNAS from an SD card; search the forum for information about this, or perhaps someone who does have such experience will share it with us.

Good luck!
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
Yes, sir, an H310 would work fine. Search the forum and you'll find instructions for flashing it to IT mode.
Good luck!

Crossflashing a standard H310 is fairly easy; the Mini poses a problem. As soon as you use MEGAREC to clean the flash and reboot the server, you'll get an error from the Dell BIOS that an invalid card has been found in the storage controller slot (or something to that effect). And since it's a proprietary-format card, you're going to have a hell of a time figuring out how to install it in something else to complete the flashing process. I'm also still not sure the card would function in the storage slot with IT firmware on it. All of the guides I've seen for crossflashing an H200 or H310 specifically warn against trying it on a Mini.

Either an H200 or an H310 in standard PCIe format should work - you'll just use one of your expansion slots for the HBA...
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Crossflashing a standard H310 is fairly easy; the Mini poses a problem. As soon as you use MEGAREC to clean the flash and reboot the server, you'll get an error from the Dell BIOS that an invalid card has been found in the storage controller slot (or something to that effect). And since it's a proprietary-format card, you're going to have a hell of a time figuring out how to install it in something else to complete the flashing process. I'm also still not sure the card would function in the storage slot with IT firmware on it. All of the guides I've seen for crossflashing an H200 or H310 specifically warn against trying it on a Mini.

Either an H200 or an H310 in standard PCIe format should work - you'll just use one of your expansion slots for the HBA...
Good catch! All I saw was 'H310' and completely missed the 'mini'. :eek:
 

Vidis

Dabbler
Joined
Jan 25, 2017
Messages
21
Wow, I'm glad I posted my questions before I started buying stuff, that's for sure :rolleyes:

iSCSI is my current solution, and I plan to replace it with FC.
The main reason for moving to FreeNAS is to be able to use my server as block-level storage and share it to my ESXi servers over FC.
I have a Brocade switch here at home, all the servers have QLogic FC cards in them, and I want to learn how to set up and configure an FC network.

Why do I have to use mirrors and not some kind of RAIDZ so that I can get more disk space out of the solution in question?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Wow, I'm glad I posted my questions before I started buying stuff, that's for sure :rolleyes:

iSCSI is my current solution, and I plan to replace it with FC.
The main reason for moving to FreeNAS is to be able to use my server as block-level storage and share it to my ESXi servers over FC.
I have a Brocade switch here at home, all the servers have QLogic FC cards in them, and I want to learn how to set up and configure an FC network.

Why do I have to use mirrors and not some kind of RAIDZ so that I can get more disk space out of the solution in question?
You can use other topologies besides mirrors for block storage -- I use RAIDZ2 in my home lab (see 'my systems' below) -- but you will get the best performance from mirrors. Why? Because IOPS scale with vdevs, so a pool made up of 6 mirrored vdevs will have 6 times the IOPS of a pool with a single 12-drive RAIDZ2 or RAIDZ3 vdev.
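To make the vdev scaling concrete, here's a rough sketch in Python. The per-drive IOPS figure is an assumption for a single 10K SAS drive, not a measured number.

# Random IOPS in a ZFS pool scale roughly with the number of vdevs,
# since each vdev delivers about the random IOPS of one member drive.
PER_DRIVE_IOPS = 150  # assumed figure for one 10K SAS disk

def pool_random_iops(vdevs):
    return vdevs * PER_DRIVE_IOPS

# The same 12 drives, two different topologies:
print("6 x 2-way mirrors:   ~", pool_random_iops(6), "IOPS")  # 6 vdevs
print("1 x 12-drive RAIDZ2: ~", pool_random_iops(1), "IOPS")  # 1 vdev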

It's a lab... so you could try a RAIDZ3 pool first, to find out if it becomes the bottleneck in your system. I suspect a RAIDZ-based iSCSI datastore will be the bottleneck, given your high-speed network. But it won't hurt anything to find out.
 

Vidis

Dabbler
Joined
Jan 25, 2017
Messages
21
My data on this is not critical, but I would rather try to keep it safe and not have to rebuild all the VMs I have in my lab environment. Hence, I would like to have both some reliability and speed in the solution.

I'm trying to get hold of an HBA that can present the disks as JBOD right now. However, you also mentioned that I should go for a SLOG device. I have read a bit about this, and from what I understand I should not use a regular consumer-grade SSD like the Samsung 840, since they lack power-loss protection.

I read an interesting post where a guy is using the battery-backed cache of a RAID card as a ZIL.
https://forums.servethehome.com/index.php?threads/poor-mans-diy-zeusram.2712/
Not sure if this is a viable path or not.

I really don't want to spend my hard-earned money on an enterprise-grade SSD for my home lab, which won't see that many IOPS anyhow.
The current iSCSI solution I have in place involves a Windows 2012 R2 server delivering iSCSI disks from a 6-disk RAID 5 array, so I just have to beat the performance of that.

Another question that popped into my head: do you know if it's possible to mix SAS and SATA disks in a vdev, and also on the backplane/HBA of an R720 server? If so, it would be easier to get hold of a couple of larger disks.

You also mentioned that I should keep half of the total storage empty, since block storage needs that. Why is that?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
My data on this is not critical, but I would rather try to keep it safe and not have to rebuild all the VMs I have in my lab environment. Hence, I would like to have both some reliability and speed in the solution.

I'm trying to get hold of an HBA that can present the disks as JBOD right now. However, you also mentioned that I should go for a SLOG device. I have read a bit about this, and from what I understand I should not use a regular consumer-grade SSD like the Samsung 840, since they lack power-loss protection.

I read an interesting post where a guy is using the battery-backed cache of a RAID card as a ZIL.
https://forums.servethehome.com/index.php?threads/poor-mans-diy-zeusram.2712/
Not sure if this is a viable path or not.
That's not an approach I would take... For lab purposes you can do without the SLOG device if you disable synchronous writes on the iSCSI datasets. There's a small risk in doing this, and you wouldn't want to do this on a production system. But I ran my lab systems this way for a year without incident before adding a SLOG.
I really don't want to spend my hard-earned money on an enterprise-grade SSD for my home lab, which won't see that many IOPS anyhow.
The current iSCSI solution I have in place involves a Windows 2012 R2 server delivering iSCSI disks from a 6-disk RAID 5 array, so I just have to beat the performance of that.
You don't need a large-capacity SSD for this. I use 100GB Intel DC S3700 SSDs, which are more-or-less designed to be SLOG devices: they have power-loss protection, low latency, fast writes, and high durability. These can be purchased on eBay in new or good used condition for ~$100US here in the USA, but I understand you may not have access to these kinds of deals since you're in Sweden. Again, you should consider forgoing a SLOG device and simply disabling synchronous writes on your block storage datasets.
Another question that popped into my head: do you know if it's possible to mix SAS and SATA disks in a vdev, and also on the backplane/HBA of an R720 server? If so, it would be easier to get hold of a couple of larger disks.
Well... Yes, you can mix the drives from different controllers, and you can use the motherboard's SATA ports along with an HBA's SAS ports. FreeNAS doesn't care if drives in a vdev are connected to different controllers. But in your case, I'm not sure how well this would work if some of the disks are really JBOD or RAID0 'devices' while others are directly connected to SATA ports. If I were you, I'd keep the mirrored pairs on the same controller, i.e., if you have 4 available SATA ports on the motherboard, then use them to connect 2 mirrored pairs, separate from your 176GB SAS drives configured as mirrored pairs on the HBA/RAID controller. It would be fine to stripe all of these dissimilar mirrored vdevs together to form your pool.
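If it helps to picture that, here's a toy sketch in Python. The device names and the controller assignments are hypothetical, purely to illustrate "keep each mirrored pair on one controller, then stripe all the mirrors into a single pool".

# Hypothetical layout check: both members of each mirror should sit on
# the same controller; the pool then stripes across all the mirrors.
disk_controller = {
    "da0": "HBA", "da1": "HBA",      # 176GB SAS pairs on the HBA
    "da2": "HBA", "da3": "HBA",
    "ada0": "SATA", "ada1": "SATA",  # a larger pair on motherboard SATA ports
}

mirrors = [("da0", "da1"), ("da2", "da3"), ("ada0", "ada1")]

for a, b in mirrors:
    ok = disk_controller[a] == disk_controller[b]
    print(f"mirror({a}, {b}) on a single controller: {ok}")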
You also mentioned that I should keep half of the total storage empty, since block storage needs that. Why is that?
Because FreeNAS is a copy-on-write system, and needs free space in order to deliver performance. The documentation states "For performance reasons and to avoid excessive fragmentation, it is recommended to keep the used space of the pool below 50% when using iSCSI." Here are a couple of good articles about this written by an expert here on the forum:

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/
https://forums.freenas.org/index.ph...res-more-resources-for-the-same-result.28178/

Good luck!
 

Vidis

Dabbler
Joined
Jan 25, 2017
Messages
21
Thanks for all the help so far.
Now I will have a serious think about whether I want to go down the FreeNAS path or stick with Windows 2016, which I already know. :confused:
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Thanks for all the help so far.
Now I will have a serious think about whether I want to go down the FreeNAS path or stick with Windows 2016, which I already know. :confused:
I didn't intend to scare you away! :p

FreeNAS is a great system; I can't praise it highly enough. It's just that you have to pay attention to the details when you're interested in getting the best performance out of it.
 

Vidis

Dabbler
Joined
Jan 25, 2017
Messages
21
Naaaah, it's not that. It's more the lack of flexibility of the ZFS file system that concerns me, plus some other things that I don't really feel comfortable with.
However, I think I have found a good solution.
I will install Windows 2016 on the R720 server, put the majority of the disks in a RAID 6, and share it through iSCSI to my ESXi servers.
Then I will install FreeNAS on the 4th server, the R710, with a couple of smaller disks in it.
This way I can have all my VMs up and running and still have a test environment where I can learn about FreeNAS without worrying about breaking my VMware environment.

One more thing.
Since the main reason for me to do this was to learn about Fibre Channel and Storage Area Networks: do I need to activate iSCSI to be able to work with FC? I have read that in a few forum posts here and there.
 

datashadow

Cadet
Joined
May 6, 2018
Messages
3
Hello Vidis,

You may well have moved on since you first encountered the joy of Dell H310 and H710 controllers in the R720xd, but in case you're still working with them...

Check out the YouTube channel for ArtOfServer. He's figured out how to apply the LSI IT-mode firmware to the Dell H310 and H710 Mini Monolithic (AKA "Mini Mono" or "mm") cards. As RickH pointed out above, they will be bricked if you try the "standard" flashing technique (which does work for regular PCIe cards). ArtOfServer also sells pre-flashed cards on his eBay store (I'm not affiliated in any way).

Here's a link to his channel:
https://www.youtube.com/channel/UCKHE9DEep52XlmwLbZUKvyw

Take care!
 