Question: Mixing SATA controllers on single zpool?


RedBear

Explorer
Joined
May 16, 2015
Messages
53
I'll try to keep this simple. The basic question is: is there any reason it would be unwise, or somehow detrimental to the performance or long-term stability of a zpool, to have some drives on one SATA controller and others on a different SATA controller?

More info:

I recently purchased a new ThinkServer TS440 70AQ000YUX from Amazon. The YUX model comes with two four-bay SATA hot-swap backplanes already installed, connected to a "RAID 500" card (a rebranded LSI 9240-8i, apparently). After a day and a half of trying umpteen different online "recipes" for cross-flashing the LSI 9240-8i, I couldn't get any version of sas2flash, SAS2FLSH, MegaCLI, or MegaRec, in either DOS or an EFI shell, to even acknowledge that an LSI card was installed in the machine. So I gave up, removed the card, and replaced it with an IBM ServeRAID M1015 I picked up from eBay, already pre-flashed to IT mode. It seems to be working fine.

That gives me eight drives directly attached to the eight available SATA ports on the M1015, and my storage needs are such that I really need to put a minimum of 4TB drives in all eight bays. I'm planning to use at least raidz-2. I have a second M1015, but the TS440 motherboard has only one expansion slot capable of supporting x8/x16 cards. So, without investing in a SAS expander card, adding any more SATA devices means using one or more of the five onboard SATA ports on the TS440 motherboard.

There is space in the case for a single-bay hot-swap module in the top 5.25" bay, or, if I remove the optical drive, a three-bay hot-swap module, bringing the total number of drives in the pool to 9, 10, or 11. (I have read that it isn't recommended to go beyond 11 or 12 drives in a single zpool.) I would move to raidz-3 if I go to 9, 10, or 11 drives. By my calculations it's too economically inefficient to do raidz-3 with fewer than 9 drives.
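For reference, here's the back-of-envelope math behind that last claim (a rough Python sketch of my own; names and numbers are purely illustrative, and it ignores ZFS metadata and padding overhead, so treat the output as raw-capacity ballpark figures only):

```python
# Back-of-envelope raidz capacity math (illustrative sketch only; ignores ZFS
# metadata, padding, and allocation-alignment overhead).

DRIVE_TB = 4  # assumed drive size

def usable_tb(drives, parity):
    """Raw usable space of a single raidz vdev: data disks times drive size."""
    return (drives - parity) * DRIVE_TB

for drives in range(6, 12):
    for parity, label in ((2, "raidz-2"), (3, "raidz-3")):
        cap = usable_tb(drives, parity)
        efficiency = (drives - parity) / drives * 100
        print(f"{drives:2d} x {DRIVE_TB}TB {label}: ~{cap} TB usable "
              f"({efficiency:.0f}% of raw)")
```

An 8-drive raidz-3 gives up 37.5% of the raw capacity to parity, while 9 drives or more keeps that at a third or less, which is where the economics start to make sense to me.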

I'm also looking at whether it makes sense to add a good fast small SSD drive (e.g., Samsung 850 Pro 128GB) to the mix as a ZIL device. Again, with the eight bays filled I would need to connect the ZIL to one of the onboard SATA ports, so even if I add no more 3.5" drives I will still be mixing SATA controllers if I add the ZIL.

So, what are the cons of having two different SATA controllers managing a single pool of devices? Besides the most obvious one, that there are now two separate SATA controllers that could fail, and how likely is that, really? Would I face the possibility of losing the entire array if one of the two controllers died? Or would it be a simple matter of replacing the motherboard and/or M1015 and having the array come right back up?

Will a speed difference (SATA-II vs SATA-III) between the two SATA controllers cause any performance degradation or reliability issues with the zpool?

I appreciate any input anyone can provide on this issue.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Don't worry about it. Having drives connected to both the onboard SATA ports and an HBA is commonplace. You won't see any performance degradation from using SATA II vs SATA III.

You didn't say how much RAM you'll be using. In general, add more RAM before adding a SLOG.
 

RedBear

Explorer
Joined
May 16, 2015
Messages
53
Don't worry about it. Having drives connected to both the onboard SATA ports and an HBA is commonplace. You won't see any performance degradation from using SATA II vs SATA III.

You didn't say how much RAM you'll be using. In general, add more RAM before adding a SLOG.

I am maxing out the system with 32GB of RAM. It's my understanding that ZFS needs 1GB of RAM per TB of storage. It just occurred to me that this system may be incapable of supporting larger disks, since 8 x 4TB is already 32TB of raw storage. Or can that 32GB be stretched to handle a few more TB on a low-usage, mostly-static-data home file server?
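Here's the simple arithmetic I'm worrying about (just the rule of thumb, nothing authoritative; the numbers and names below are my own illustration, and people seem to count raw or usable capacity inconsistently):

```python
# The 1GB-of-RAM-per-TB-of-storage rule of thumb applied to this build.
# Purely illustrative; it's a guideline, not a hard requirement.

ram_gb = 32
drives, drive_tb, parity = 8, 4, 2    # eight 4TB drives in raidz-2

raw_tb = drives * drive_tb                  # 32 TB raw
usable_tb = (drives - parity) * drive_tb    # 24 TB before ZFS overhead

print(f"Rule of thumb vs raw capacity:    ~{raw_tb} GB RAM for {raw_tb} TB")
print(f"Rule of thumb vs usable capacity: ~{usable_tb} GB RAM for {usable_tb} TB")
print(f"Installed: {ram_gb} GB")
```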
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
That's a rule of thumb. Once you get to 32GB, you can probably get away with a bit more storage. I can't say where that line is.
 

RedBear

Explorer
Joined
May 16, 2015
Messages
53
Now I'm reading a thread where someone says this:

Optimal for 4K hard drives:
Mirror: all are optimal
Stripe: all are optimal
RAID-Z: 3, 5 or 9 disks (or 17, but that is not safe with RAID-Z)
RAID-Z2: 4, 6 or 10 disks (or 18, but that is not very safe with RAID-Z2)
RAID-Z3: 5, 7 or 11 disks (or 19)
If this is true, it would indicate that I'll use my dollars most effectively by setting up an 11-disk raidz-3, but with 4TB drives that's 44TB of raw storage.
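If I'm reading that list right, the pattern is just a power-of-two number of data disks plus the parity disks. Here's a quick sketch (my own interpretation of the quote, not an official rule) that reproduces the counts:

```python
# The quoted "optimal" widths appear to be a power-of-two number of data
# disks plus parity, so records divide evenly across 4K-sector data disks.
# This is my reading of the list, not an official rule.

for parity, name in ((1, "RAID-Z"), (2, "RAID-Z2"), (3, "RAID-Z3")):
    widths = [2 ** k + parity for k in range(1, 5)]  # 2, 4, 8, 16 data disks
    print(f"{name}: {widths} total disks")
```

That reproduces the 3/5/9/17, 4/6/10/18, and 5/7/11/19 counts from the quote.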

Is there a good thread around here talking about the limitations of RAM vs Terabytes?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's a rule of thumb. Once you get to 32GB, you can probably get away with a bit more storage. I can't say where that line is.

The rule is softer (in BOTH directions!) at higher amounts of RAM. On some systems you might be able to manage 60TB with 32GB, while on others, having only 32GB for a 30TB pool might mean lots of suffering.

Now I'm reading a thread where someone says this:

Optimal for 4K hard drives:
Mirror: all are optimal
Stripe: all are optimal
RAID-Z: 3, 5 or 9 disks (or 17, but that is not safe with RAID-Z)
RAID-Z2: 4, 6 or 10 disks (or 18, but that is not very safe with RAID-Z2)
RAID-Z3: 5, 7 or 11 disks (or 19)
If this is true, it would indicate that I'll use my dollars most effectively by setting up an 11-disk raidz-3, but with 4TB drives that's 44TB of raw storage.

Is there a good thread around here talking about the limitations of RAM vs Terabytes?

Don't panic too terribly much about optimal sizes. It's significant in a high-performance environment, but not so much for the average home user, especially with compression, which is now the default. I can tell you that I like 11-drive RAIDZ3 because it means I can put 12 drives in a 12-drive chassis and get a warm spare, and the 12/24-drive form factor is pleasant to work with. It works out to be very ZFS-compatible.

We don't suggest going wider than 12. In practice there appear to be some performance problems, especially when rebuilding an array.

Sadly, there's no truly reliable guide to the amount of RAM you need. The amount of RAM you need is the amount of RAM you need. For some workloads, it is going to be a LOT of RAM. VMs, heavy access, and large file manipulation all favor large RAM. Fuller pools require more RAM. Which leads to one of those "huh???!" things: a 60TB pool that's only 30TB full (50%) on a 32GB system is probably going to work better than a 30TB pool that's 24TB full (80%) on that same 32GB system.
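To put that example in plain numbers (just arithmetic, nothing ZFS-specific; the figures are the ones from the example above):

```python
# The two pools from the example above, in plain numbers: what matters for
# comfort is how much free space is left, not the headline pool size.

pools = [
    ("60TB pool, 30TB used", 60, 30),
    ("30TB pool, 24TB used", 30, 24),
]

for name, size_tb, used_tb in pools:
    free_tb = size_tb - used_tb
    print(f"{name}: {used_tb / size_tb:.0%} full, {free_tb} TB free")
```

The bigger pool still has 30TB of free space to work with; the smaller, fuller one has only 6TB free, which is where allocation gets harder and the RAM squeeze shows up.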
 

RedBear

Explorer
Joined
May 16, 2015
Messages
53
Alright. That's good solid info, thanks. A bit vague, but I guess that's the nature of the beast. Since this will be basically a single-user archival storage system, I'll go with the 4TB drives, maybe even 5TB if the prices come down a bit, and we'll see how it goes. Hopefully it will still be significantly better than my Gen1 Drobo even if it ends up being relatively sluggish by FreeNAS users' standards. That Drobo has always been a dog.

Guess you really have to go to a dual-CPU board in order to get past the 32GB barrier, at least in the PC world, eh? I've been surprised that even a supposed "server" system with a Xeon processor like the TS440 is still limited to 32GB.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Naw. But you do need a Xeon E5 to break the 32GB barrier (for now). Something like the E5-1620 or 1650, which are both single-socket CPUs. Avoton will also do 64GB, but only on the unobtainium 16GB DDR3 DIMMs, and the new Xeon-D is looking to be a real killer platform.

It pays to remember that ten years ago, 4GB was a fair bit of RAM in a server and we were all dancing with glee when 32GB Sandy Bridge stuff came along in 2010. Now the 32GB feels not-so-big. Here, for example, I'd rather run a smaller number of bigger hypervisors, even though I could get by with a bunch of smaller 32GB hypervisors... but for the ZFS side of things, yeah, it's all about how much RAM you can cram.

As long as you're not trying to go stupid-cheap, FreeNAS is probably going to be very pleasant. Most modern boxes can potentially touch gigabit on large file transfers. Our poor little N36L (eater-of-the-backups) often sustains 400 megabits per second of NFS for hours on end, and it is about the slowest thing out there. A better box with more resources is usually just fine.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
And by "modern" I probably mean any Sandy-Bridge-or-more-recent Xeon, though there may be some suitable i3s out there as well.
 

RedBear

Explorer
Joined
May 16, 2015
Messages
53
Ah, I see. I didn't realize that's a limitation of the E3. I thought all 64-bit CPUs inherently supported a couple of TB of RAM, at least theoretically.

The 32GB barrier just seems weird after dealing with Mac Pros already several years old that support at least 48GB with a single quad-core CPU, while the 8-core models support up to 128GB. Then again, Mac Pros and XServes were never meant to be "low end" in any way.

Thanks for the help.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The Mac Pro was never an entry-level platform, which is what the E3 is targeted at. While a 64-bit CPU may be able to ADDRESS lots of memory, whether or not that is electrically supported is a totally different issue. I don't recall the exact history of the Mac Pro, but if you roll back you'll find it using the equivalent class of what is now the E5. The fastest CPU you can throw in the Pro is the Ivy-EP E5-2697 v2, now one generation old, and Apple doesn't offer it in configurations greater than 64GB. The CPU is capable of 768GB of RAM if done right, which Apple doesn't seem to want to do ... but the nice Supermicro hypervisor here has a 2697 with a "mere" 128GB of RAM and it's a great box.
 