zpools

Status
Not open for further replies.

bvasquez

Cadet
Joined
May 29, 2013
Messages
7
Good afternoon,

I have a new 45-drive-bay system and would like to use ZFS on it. I have another 45-drive-bay system (2 TB drives) that currently uses RAID6 + LVM. In our current environment, we have 5 x 8-drive md's (md0, md1, etc.) with 5 spares. All the RAID6 arrays are then striped together (RAID0) to form one large system = 60 TB, with LVM on top.

What I'm confused about is: say I create a zpool of 10 drives in RAIDZ2 = 16 TB, and later I want to add another 10 drives to expand the system. Do I create another zpool and then the zpools get joined together, or do I add the new 10 drives to the existing zpool? It doesn't make much sense to me to create one zpool of 40 drives in RAIDZ2, where if 3 drives fail I'm screwed.

Thank you for your answers / explanations and sorry for the noob questions.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
You might want to read cyberjock's guide

You are missing the vdev term in your message. Pools are constructed out of vdevs. So, the first 10 drives go in one RAIDZ2 vdev, and that forms your pool. To expand, create another 10-drive RAIDZ2 vdev and add it to the existing pool.
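From the shell, the equivalent steps would look roughly like this (the pool name "tank" and the ada device names are placeholders; in FreeNAS you'd normally do this through the GUI instead, so the middleware stays in sync):

```shell
# Create a pool with a single 10-disk RAIDZ2 vdev
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7 ada8 ada9

# Later, expand the pool by adding a second 10-disk RAIDZ2 vdev;
# ZFS stripes new data across both vdevs
zpool add tank raidz2 ada10 ada11 ada12 ada13 ada14 ada15 ada16 ada17 ada18 ada19
```

Note that vdevs can be added to a pool but not removed, so plan the layout before adding.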
 

bvasquez

Cadet
Joined
May 29, 2013
Messages
7
gpsguy said:
You might want to read cyberjock's guide

You are missing the vdev term in your message. Pools are constructed out of vdevs. So, the first 10 drives go in one RAIDZ2 vdev, and that forms your pool. To expand, create another 10-drive RAIDZ2 vdev and add it to the existing pool.

Good afternoon,

I appreciate your reply. I have read through the cyberjock guide and am now going through the FreeNAS 8.3.1 manual. I am setting up a VM to test it out first and have a quick question. Say I have 12 disks and I want to create 2 RAIDZ2s. Under Storage -> Volume Manager -> Add Volume -- this would essentially be the 'zpool', correct? Then I would select 6 of the drives as RAIDZ2 -- this would be the vdev, correct? Please excuse my confusion between 'volume manager' and 'zpool'.

Thanks again for your help.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Yes, that's correct. Volume = ZFS pool.

bvasquez said:
Under Storage -> Volume Manager -> Add Volume -- this would essentially be the 'zpool', correct? Then I would select 6 of the drives as RAIDZ2 -- this would be the vdev, correct?

Once you create the first vdev (6 disks in RAIDZ2), return to "Volume Manager", put your volume/pool name in the "Volume to extend" box, select the next set of 6 disks (again in RAIDZ2), and extend the volume.

To validate your work, go to the shell and type "zpool status -v". You'll see the pool (volume), your vdevs (for example, raidz2-0 and raidz2-1), and the individual disks listed below each of the vdevs.
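For a 12-disk test pool named "tank" (the name here is just an example), the output looks roughly like this:

```
  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ...
          raidz2-1  ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ...
```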

BTW, if you're testing in a VM, don't make the disks too small. Each disk has overhead, including 2 GB for swap. For my test, I made them 10 GB each.
 

Ytsejamer1

Dabbler
Joined
May 28, 2013
Messages
28
Here's what I found most valuable from the Docs (Section 1.4.6): http://www.freenas.org/images/resou...s8.3.1_guide.html#1.4.6 RAID Overview|outline

This has a good, older rough example you can follow: http://www.solarisinternals.com/wik...Configuration_Example_.28x4500_with_raidz2.29. The Thumper has 48 disks, but hey...45 is close and you'll get the idea.

One of the things that really bugs me is that in the ZFS volume creation process, I can't select/add disks as spares. You can specify log and cache drives, but not spares. The legacy UFS volume manager (I'm running 9.1 Alpha) used to display it. Somewhere in the thumper docs, Sun mentioned that for a 40-disk pool, having 2 hot spares is recommended. You are able to do it via the command line once you have your ZFS volume (pool) created with multiple vdevs.
Code:
# zpool add zfsvolname spare disk44 disk45

The disk names will be shown in the list of disks. Mine for example are ada0-ada47.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
So if all that came straight out of the manual, why does it have to exist here? It just means there's another copy of things sitting around that won't get updated if the manual gets changed.

Isn't the link to the manual enough?

Ytsejamer1 said:
Somewhere in the thumper docs, Sun mentioned that for a 40-disk pool, having 2 hot spares is recommended. You are able to do it via the command line once you have your ZFS volume (pool) created with multiple vdevs.

The disk names will be shown in the list of disks. Mine, for example, are ada0-ada47.

Spares on FreeNAS are (currently) not hot spares. They're more like 'warm' spares: you have to manually initiate the use of a spare.
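Manually putting a spare into service is done with "zpool replace" (the pool name "tank" and the device names here are placeholders):

```shell
# Rebuild the failed disk's data onto the spare
zpool replace tank ada5 ada44

# Watch the resilver progress; the spare shows as INUSE until
# the failed disk is replaced and the spare is detached
zpool status tank
```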

That being said, the GUI should give you the option to add spares to the pool. I don't have a 9.1 VM handy to test with.
 

Ytsejamer1

Dabbler
Joined
May 28, 2013
Messages
28
I agree...it should give you the option to add spares to the pool. Perhaps it's just a slight oversight in Alpha. It would be a nice feature to have (or have back).

It's interesting that you have to manually initiate the use of a spare. Is that a difference between Oracle's ZFS and what is currently in place in open-source ZFS? When Sun was shipping the open-source ZFS code (before Oracle shut the doors), it supported hot spares.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I don't see an option in 9.1 (in the particular version I'm running) to add spares. It's still a work in progress though.

Solaris has hot spares as a result of (I think) some userland tools that are not in FreeBSD. That functionality will (I think) be provided by zfsd in FreeBSD. I understand it will still be a while before it's 'production ready' in FreeBSD, and a little while longer before it's integrated into FreeNAS.
 