Hardware Configuration Question

Status
Not open for further replies.

techknight1

Cadet
Joined
Jul 13, 2016
Messages
9
I have chosen hardware based on what I have on hand and what I can get for a good price. My question is about SSDs: I am not really sure what sizes I need or where they should be placed. The following is a list of my hardware.
2x Dell PowerEdge R610s
Dell PERC H200 (in each R610)
LSI 9200-8e (in each R610)
Supermicro 24-bay 2.5" SAS II JBOD units (from iXsystems) (one connected to each R610)
Intel X520-SR2 (in each R610)
48x 900GB SAS II 10K HDDs

I had planned to use the H200s and the hot-swap bays in the R610s for any SSDs that would be needed. By the way, this will all be connected to a Dell PowerEdge M1000e with six to eight M710HD blades. I will be using two Dell PowerConnect M8024-k I/O modules or two Dell Force10 MXL I/O modules.

I would prefer to use iSCSI for the storage, but one thing I am concerned about is that I will have a couple of SQL 2005 clusters running on these systems. How would I go about taming the fragmentation issue, or will there even be one?

Any guidance in these matters would be greatly appreciated.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
How would I go about taming the fragmentation issue, or will there even be one?
You need to search the forum for the name jgreco and fragmentation. He has posted a bunch of times about this. He'll suggest you keep the pool at 50% or less. You'll have to decide if you care more about wasting the space or the performance.
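If it helps, pool utilization and free-space fragmentation are both visible from the shell. A quick sketch, assuming a pool named tank (the numbers are made up):

zpool list tank
# NAME   SIZE  ALLOC   FREE  FRAG   CAP  HEALTH
# tank  19.5T  9.30T  10.2T   11%   47%  ONLINE

CAP is how full the pool is and FRAG is how fragmented the remaining free space is, which is what jgreco's posts are about.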

Just my opinion (and I'm familiar with HP, not Dell), but I'm not sure the PERC is the right controller for any drives in a FreeNAS system. I wanted to use HP's equivalent controller for the same thing once, and it only offered RAID modes. Everyone will tell you that even a RAID controller with an HBA mode is just a fake attempt at imitating an HBA and isn't good enough, and I'm inclined to trust their experience. You might think about getting a regular HBA and swapping it in for the PERC. If the PERC is integrated, I don't know whether you have any expansion slots left in that 1U chassis for another HBA.

Since you're doing iSCSI and running what I assume is a decent enterprise workload, make sure you have insane amounts of RAM. People with real systems seem to have anywhere from 128GB all the way to more than 256GB.
 

techknight1

Cadet
Joined
Jul 13, 2016
Messages
9
I had already seen that advice here, to keep pool utilization down to 50%. Unfortunately, that is not a realistic expectation for any storage provider in the IT industry. Granted, I am building all of this at home for personal use, but if I went to a Fortune 500 company and told them, "Here is your storage, but you can only use 50% of it," I would be out on my butt, and quickly. My current plan is to keep each of the two pools at or below 80% utilization, possibly a little less.

I might try adding another zvol, target, and LUN, and move the database storage to that separate iSCSI target and LUN to see if it helps when performance becomes an issue.
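Something along these lines is what I have in mind from the shell side; the pool and zvol names are just placeholders, and the 16K block size is only a guess for the SQL workload:

zfs create -s -b 16K -V 500G tank/sql-data
# then create a new extent pointing at /dev/zvol/tank/sql-data
# and attach it to its own iSCSI target/LUN in the GUI

The -s makes it a sparse zvol so it doesn't reserve the full 500G up front.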

The Dell PowerEdge R610 will only accept Dell hardware in the dedicated storage slot. Plus, I will have an Intel X520-SR2 and an LSI 9200-8e HBA. The R610 has two PCIe x8 slots and a dedicated storage card slot. I know it is limiting, but it works. I have never had any issues using the PERC H200, though I have never used it as a RAID controller because that is not really what it was designed for; it was primarily designed to be used as a SAS II HBA. I have forced several failure and recovery situations with the H200 and FreeNAS and thus far haven't found any issues.

The R610s max out at 192GB of RAM and the M710HDs max out at 288GB. When the system goes into production, everything will have 96GB of RAM installed, with extra RAM on hand.

My alternative storage option is Quadstor, but in my opinion it doesn't have the features or capabilities of FreeNAS. Granted, the hardware cost difference between the two is minimal.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
The under-50% free space requirement is specific to iSCSI, because it is block storage. I'm pretty sure every Fortune 500 company has SSD arrays for providing iSCSI, which works much better. Hell, at work we have six 25-bay Solaris ZFS arrays that are killer. Guess what: they have the same limitations as FreeNAS's OpenZFS format, with of course a few more improvements. And not only do our iSCSI datastores reside on SSDs, so do all of our server logs. Everything else is on spinning disks, which consist of 25 striped mirrors. Needless to say it's very fast and can saturate a 10G link. This is only a small server farm, costing about 10 million; we recently spent 1 million just on server cooling, and we are not a Fortune 500 company. That means we already lose 50% of disk space to mirrors, and on top of that the spinning disks are not allowed to go over 80% full.
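Nothing exotic about the layout either; a striped-mirror pool is just a bunch of two-way mirror vdevs in one pool, roughly like this (device names made up):

zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5
# keep adding "mirror daX daY" pairs; every pair adds capacity and IOPS
zpool add tank spare da48 da49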

Once, a bad switch generated a log file that filled an entire array. We were down for 3 days; the backup was also full. That was when the logs moved over to the SSD arrays and off our data storage arrays.

H200s are fine. You need to flash them to IT firmware. I think you have a good handle on this, it seems.
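If you want to double-check what firmware a card is actually running, the LSI utility in the FreeNAS shell will show it; the controller index 0 below is just an example:

sas2flash -listall
sas2flash -c 0 -list
# the firmware product ID / version lines tell you whether it's IT firmware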


 

techknight1

Cadet
Joined
Jul 13, 2016
Messages
9
Has anyone had any experience with, or used, the Intel XL710-QDA2? I am seriously considering trying out the Dell MXL I/O modules and a Dell Force10 S6000. I found a Supermicro motherboard (X9DRD-7LN4F-JBOD) that has PCIe 3.0 x8 slots and an onboard LSI 2308 chipset flashed to IT mode. I haven't had any issues with the configuration I was using; I was just looking into something that might offer a better upgrade path down the road. With the Intel XL710-QDA2 I could go up to 40GbE for my storage interconnects, and possibly in the future move to an all-flash storage array with PCIe 3.0 x8 SSDs for caching.

As a side note, I have been able to achieve a maximum of 9.3 to 9.8 Gb/s with the current configuration. That's with 24x 1.2TB 10K SAS II HDDs and 5x small SSDs. So I know that FreeNAS is a very capable option, for anyone who might have doubts. I think the most important factor is the correct hardware.
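For anyone who wants to sanity-check their own network path before blaming the pool, a rough sketch with iperf (the address and run time are placeholders):

iperf -s                          # on the FreeNAS box
iperf -c 10.0.0.10 -P 4 -t 60     # on a blade or VM; -P 4 runs four parallel streams

A single stream usually won't fill a 10GbE link, so the parallel streams matter.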
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
+1 for Perc H200 being just fine. I use them as well as the LSI 9200-8e in my Dell C2100/FS12-TY Servers. I do cross-flash my H200s to LSI 9211-8i though.

Currently I am running iSCSI for both Hyper-V and ESXi VMs. I am not running a ton (it's my small office), but for reference I have 10x 4TB SAS drives (5 mirrors) with 2x 4TB hot spares and 1x 4TB cold spare. I am also using an Intel DC S3710 200GB as a SLOG. I intend never to use more than 8TB of space.

I don't think you mentioned how much RAM each system is going to have (sorry if I missed it). Also, you would want a fast SLOG (and maybe even an L2ARC).
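For reference, adding them after the fact is a one-liner each from the shell (the pool and device names below are placeholders; the GUI volume manager does the same thing):

zpool add tank log da10      # SLOG
zpool add tank cache da11    # L2ARC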
 

techknight1

Cadet
Joined
Jul 13, 2016
Messages
9
Unfortunately, I am unable to flash the H200 because it is in the storage slot of an R610. If it is flashed with anything other than Dell firmware it will not be accepted in the storage slot and will halt the BIOS. But I have not had any stability or corruption issues with the H200, even after two months of harsh testing.

Currently, I have two of these systems operating. Both have 96GB of RAM, an H200 for the L2ARC and SLOG drives, an LSI 9200-8e for external storage, two Intel Xeon X5650 processors, and an Intel X520-SR2. I use two Intel DC S3710 200GB SSDs, partitioned and mirrored at 50GB, for the SLOG, and three Intel DC S3610 SSDs, partitioned and striped, for a total of 102GB of L2ARC.
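Roughly, that layout looks like this from the shell (the pool and device names here are only examples):

gpart create -s gpt da1
gpart add -t freebsd-zfs -s 50G da1        # repeat both steps for da2
zpool add tank log mirror da1p1 da2p1      # mirrored SLOG partitions
zpool add tank cache da3p1 da4p1 da5p1     # cache devices stripe automatically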

All this storage is used for vSphere 6 VMs that run on up to 16x Dell PowerEdge M710HD blades. Right now my networking consists of M6348s and M8024-ks, but I am thinking about going with Dell Force10 MXL switches and a Dell Force10 S6000. With the MXL and S6000, and a motherboard in the Supermicro enclosures, I could possibly go to 40GbE for my storage. But I am unsure whether FreeNAS will support the Intel XL710-QDA2 dual-port 40GbE card.
 

techknight1

Cadet
Joined
Jul 13, 2016
Messages
9
I am not sure whether I have missed the answer to this question, but I will ask it anyway: if you have both a SLOG and an L2ARC in a FreeNAS storage server, which do you want to be faster? I have Intel DC S3710s for the SLOG and Intel DC S3610s for the L2ARC, but I was thinking about trying something like NVMe for the L2ARC. Wouldn't that actually make cached reads faster?
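For whoever answers, here is the sort of thing I could watch to compare how busy each one actually gets under load (the pool name is a placeholder):

zpool iostat -v tank 5     # per-vdev I/O, including the log and cache devices
gstat                      # per-disk busy% and latency on FreeBSD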
 