Expanding Storage Question


ACGIT

Cadet
Joined
Sep 15, 2012
Messages
8
I'm pretty new to FreeNAS 8. I tried it a few years ago and decided to roll my own instead. Having looked at the new version 8, I believe it's something I can offer to my business clients. In any case, I have a question:

In testing in my VM environment I set up a machine that initially had 1 x 4GB drive for the OS and 2 x 10GB drives for storage, set up as a ZFS mirror. I then added a 3rd 10GB drive to try out ZFS drive expansion. When adding the new volume, I chose to add it to the 2 x 10GB storage pool I had already set up. However, when I did this and looked under "Volume Status", it showed my 2-drive mirror set, and below that it showed the new 10GB drive as "Stripe". The overall pool capacity did increase as expected, but suppose that 3rd drive were to go bad after some data had landed on it, and I replaced it: would I lose data, like in a traditional RAID-0 stripe?
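From the shell, I believe the pool now looks roughly like this (the pool and device names are just whatever my test VM uses, trimmed to the relevant part):

```
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada2    ONLINE       0     0     0
          ada3      ONLINE       0     0     0
```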

Thanks in advance; I look forward to the answers.
 

toddos

Contributor
Joined
Aug 18, 2012
Messages
178
The third drive is "striped" with itself (in this case "striped" basically just means "not mirrored"). If that drive dies, it's dead and there's no way to recover it. Worse, if that drive dies, the rest of your zpool dies with it, and the data on your mirrored drives will be lost too. If you want to expand this way (adding vdevs to existing pools), you need to at least do it in mirrored groups. So rather than adding a single 10GB drive to a zpool with an existing 2 x 10GB mirrored vdev, you should add another 2 x 10GB mirrored vdev.
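To put it in command-line terms, this is a rough sketch of the difference (pool and device names are made up, and the GUI may do things slightly differently under the hood):

```
# What effectively happened: a single, unprotected vdev was added
# (from the CLI, zpool even refuses this without -f because the
# redundancy doesn't match the existing mirror):
zpool add tank ada3

# The safer ways to grow the pool:
zpool add tank mirror ada3 ada4     # add a second mirrored vdev
zpool attach tank ada3 ada4         # or turn the lone disk into a mirror
```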

Ideally, you should properly size your pool up front based on future expectations, if you can afford it.
 

ACGIT

Cadet
Joined
Sep 15, 2012
Messages
8
Toddos, thanks. I pretty much thought that was the case. So in reality it does act like a RAID-0 with no redundancy/parity, but I did not know that it would effectively kill the remaining RAID-1 array too. Interesting. Thanks for the fast reply. :cool:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Read the presentation in my sig. It should explain exactly what happened when you added the 3rd 10GB drive.
 

ACGIT

Cadet
Joined
Sep 15, 2012
Messages
8
Thanks, cyberjock. I had a look at it; very informative.

I guess I will wind up doing a RAID-10: two mirrored drives in RAID-1 per set. So if I had 6 x 1TB drives, I could make 3TB of storage. This would mean that I could lose up to 3 drives before all data is lost, correct? I understand that there is a loss of space, etc. I could also do a RAID-50, I suppose.
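If I understand it right, the whole thing could be created in one go, something like this (made-up device names):

```
# "RAID-10"-style pool: three mirrored pairs of 1TB disks, ~3TB usable
zpool create tank mirror ada1 ada2 mirror ada3 ada4 mirror ada5 ada6
```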
 

toddos

Contributor
Joined
Aug 18, 2012
Messages
178
In a configuration like that (in ZFS-speak, multiple mirrored vdevs), you can lose one drive per mirror group. In your configuration, that means you can lose three drives, but they have to be the right three. If you lose two drives from one mirror, you're done: that mirror, and with it every other vdev in the pool, is now invalid. Because of that, it's best to minimize the number of vdevs (within reason, obviously, as adding vdevs can be useful for growing space). In a 6-drive setup, you should probably go with a RAID-Z2 configuration (4x storage drives, 2x parity). You will only be able to survive the loss of two drives simultaneously, but they can be any two drives. And of course when a drive fails you should replace it ASAP.

Edit: Meant to add that in ZFS-speak, "RAID-50" would be "multiple RAID-Z1 vdevs". It's not exactly the same thing, but for comparison purposes it's close enough.
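For comparison, roughly what those two layouts look like at creation time (device names are placeholders):

```
# 6-drive RAID-Z2: any two drives can fail at once
zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6

# "RAID-50"-ish: two RAID-Z1 vdevs in one pool; one drive per vdev can
# fail, but losing two drives in the same vdev kills the whole pool
zpool create tank raidz1 ada1 ada2 ada3 raidz1 ada4 ada5 ada6
```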
 

ACGIT

Cadet
Joined
Sep 15, 2012
Messages
8
Toddos, thanks for the fast reply again.

I thought that made sense. Of course I am just using the six-drive scenario as a simple way to understand the system.

What I envision is rolling these machines out to my clients and having one back here at the office data center for off-site replication via snapshots and rsync. Growing storage at the clients concerns me a little, but that should be easy enough by doing in-place upgrades of the drives to bigger sizes, or by adding vdevs to the pools. What really has me concerned is the storage here at the office: I need to grow it on the fly as demand increases. I don't know how well an external JBOD enclosure would work with FreeNAS. I'm thinking along the lines of a RAID-6-type setup here (RAID-Z2, if I'm not mistaken?). Do you have any experience with JBOD enclosures and FreeNAS? What RAID level would you use for my scenario?
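For the replication piece, what I have in mind is roughly the following (dataset, path, and host names are just placeholders; I assume FreeNAS would drive this through its periodic snapshot and rsync/replication tasks rather than by hand):

```
# Ship a ZFS snapshot of the client data to the off-site box
zfs snapshot tank/clients@2012-09-15
zfs send tank/clients@2012-09-15 | ssh backup-host zfs receive backup/clients

# Or plain rsync of the live filesystem over SSH
rsync -az /mnt/tank/clients/ backup-host:/mnt/backup/clients/
```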

Thanks in advance!
 

toddos

Contributor
Joined
Aug 18, 2012
Messages
178
First, major caveats: I use FreeNAS as a home server, nothing more. I'm not in the storage business, I've never built enterprise-grade servers, and I can't guarantee any of this would work. Also, I don't know your requirements. That said, here goes.

For clients, depending on the amount of storage needed and the level of resiliency required (will you be able to get them a new drive within ~24 hours if one fails?), I'd probably go with a RAID-Z2 configuration just for the added parity. A 4 x 3TB Z2 setup would give 6TB usable with resiliency against two drive failures, or you could go 6 x 3TB for 12TB usable and still two-drive resiliency. Or go with 2TB drives. Just keep in mind that more drives means a higher likelihood of failure. For upgrading client storage pool sizes, do in-place drive upgrades when you can (keep in mind all drives have to be upgraded before the pool grows, so if you do 6 x 3TB, your only upgrade option is 6 x 4TB), or take downtime, do a backup, rebuild the pool with more drives, and restore the data. Increasing storage should be seen as a major operation from the customer's viewpoint, so they can either size correctly up front or pay money and time later.
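A rough sketch of what the in-place upgrade path looks like from the command line (hypothetical pool/device names; the FreeNAS GUI wraps this, and older ZFS versions may need an export/import instead of autoexpand):

```
zpool set autoexpand=on tank      # let the pool grow once all disks are bigger
zpool replace tank ada1 ada7      # swap the first drive, wait for resilver
zpool status tank                 # check resilver progress, then repeat
                                  # for ada2, ada3, ... until all are replaced
```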

For the server, it sounds like you need to get into real enterprise-grade hardware. Don't mess with consumer-grade eSATA or USB JBOD enclosures. Get a rack. Get SAS controllers and either SAS backplanes or SAS-to-SATA backplanes in rack-mountable cases. You can get 4 drives per SAS connection and 2-4 SAS connections per controller card, so with the right hardware it would be quite easy to have 32 or more drives available. As for setting up those drives, you might be best off waiting for 8.3 (or running the beta) so you can use RAID-Z3 (three-drive parity), which is better for really large pools. Or you can create multiple RAID-Z2 vdevs and put them together in one pool (two-drive parity per vdev, but remember that one vdev failing completely will take down the entire pool). Or you can create multiple RAID-Z2 vdevs in individual pools and manage them by mount points and rsync jobs. For example, if you had 32 drives available, you could make four 8 x 3TB RAID-Z2 pools at 18TB per pool (72TB total) and split your customers into four groups, with rsync jobs spread across the different pools. That way, if one pool had three drives die catastrophically, you'd only lose a quarter of your data. Of course, you'll also need some sort of backup plan for all of this.
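To make the two layouts concrete, something like this with made-up device names (da0 through da31):

```
# One big pool, four 8-drive RAID-Z2 vdevs (losing any one vdev loses everything)
zpool create bigtank \
    raidz2 da0  da1  da2  da3  da4  da5  da6  da7  \
    raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
    raidz2 da16 da17 da18 da19 da20 da21 da22 da23 \
    raidz2 da24 da25 da26 da27 da28 da29 da30 da31

# Or four independent pools, so a catastrophic failure only takes a quarter of the data
zpool create pool1 raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool create pool2 raidz2 da8 da9 da10 da11 da12 da13 da14 da15
# ...and likewise for pool3 and pool4
```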
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For commercial applications, I'd suggest that the Norco stuff has a history of being a little dodgy. It's great for a home media server, I'm sure, but that's an environment where if a backplane goes bad, you aren't hurt by needing to offline it for a week or two. By the time you finish outfitting that Norco with a power supply, you're up around $500. For around $900, you can move up to something like the Supermicro CSE-846TQ-R900B, which includes a quality 900 watt redundant power supply and the rack slides too.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, though realistically it's about the same thing. The big question is whether the additional power is necessary or just a waste. We've been building a lot of stuff around here based on Sandy Bridge, with SuperMicro X9SC* boards and E3-1230s. With an efficient power supply, they'll idle around 50 watts and run around 100 at full tilt. Extra controllers etc. can add a little, but basically you need to figure out what your drives will do to the system.

For the 900, the 5V rail is rated at 50A and the 12V at 75A, while a Hitachi Deskstar 4TB takes 1.2A and 2.0A respectively; 24 of them means about 29A and 48A, so the 900W supply is potentially able to hard-crank an entire system full of these. Note: this is just back-of-the-napkin engineering; system builders are still required to do the actual research.
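Quick sanity check of that multiplication, if anyone wants to rerun it:

```
echo "24 * 1.2" | bc    # ~28.8 A on the 5V rail (rated 50 A)
echo "24 * 2.0" | bc    # 48.0 A on the 12V rail (rated 75 A)
```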

The benefits of an 80-plus gold certified supply that's 30% bigger than a smaller, non-80+ supply might be a wash. I really don't know.
 