2+2 mirror w/o 2 HDDs

Status
Not open for further replies.

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
Good day guys!

I have a problem during NAS initial setup.
I have two 2 TB HDDs and plan to buy another pair in the near future. Before buying them, I'd like to organize the storage like this (4 TB of usable space, nothing reserved for redundancy yet):

Code:
tank
    mirror-0
        ada0
    mirror-1
        ada1


And after buying the new HDDs, I would like to add them to mirror-0 and mirror-1, respectively. How can this be arranged through the WebGUI (or how can I force the OS to understand what happened after building such a structure with command-line tools), without rebuilding the array?

PS: sorry for my english...
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi,

Once created, the number of disks in a vdev can't be modified - you can only replace them.

You can create a 2-drive mirror with your 2 TB disks now, and then add a second 2-drive vdev once you buy the others, though.

Code:
tank
    mirror-0
        ada0
        ada1
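The suggested two-step approach could be sketched with the following commands (a sketch only - the pool name "tank" and device names ada0-ada3 are taken from this thread, and on FreeNAS you would normally let the WebGUI run these for you):

```shell
# Now: create the pool as a single 2-disk mirror (2 TB usable)
zpool create tank mirror ada0 ada1

# Later, after buying the second pair: stripe in a second mirror vdev
zpool add tank mirror ada2 ada3
```

`zpool add` appends a new top-level vdev to the pool; ZFS then stripes writes across both mirrors.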
 

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
I mean something like this:
Code:
zpool create tank ada0 ada1

That way, each mirror will eventually contain one old and one new HDD.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi Yrwn,

So you can actually do this by creating two single-drive vdevs as you originally laid out.

Code:
tank
    mirror-0
        ada0
    mirror-1
        ada1


Converting a single-drive vdev to a two-drive mirror is the sole exception to the "can't change a vdev" rule. I think the commands in your situation, assuming you add two more disks as ada2 and ada3, would be:

Code:
zpool attach tank ada0 ada2
zpool attach tank ada1 ada3


Those commands tell ZFS to look at the pool named "tank", find the disk recognized as "ada0", and attach "ada2" as a mirror of it; likewise for ada1+ada3.

You should then end up with:

Code:
tank
    mirror-0
        ada0
        ada2
    mirror-1
        ada1
        ada3
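After each attach, ZFS copies the existing data onto the new disk (a resilver). A quick way to confirm the resulting layout and watch resilver progress (pool name "tank" assumed, as above):

```shell
# Shows the vdev tree, resilver progress, and any read/write/checksum errors
zpool status tank
```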
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
That is correct but not recommended.. You are better off mirroring the first 2 drives and waiting until you can get another 2.. A failure in either of the 2 single-disk vdevs will destroy the entire pool..
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
That is correct but not recommended.. You are better off mirroring the first 2 drives and waiting until you can get another 2.. A failure in either of the 2 single-disk vdevs will destroy the entire pool..

Correct. But if for some reason the short-term capacity need (OP did say "near future") outweighs the reliability requirement (and the time to resilver 4 TB later), then he can technically do this.

I don't recommend it either.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Yeah, if he's got backups and doesn't mind potentially having to recreate the pool after a failure, maybe the performance advantage of the stripe vs. the risk of disk failure is worth it to him.
 

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
Thanks for answers!

So, as I understand it, it's impossible to do this through the Web GUI. Am I wrong?
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Not sure if you can do it from the GUI. I know you can do it from the command line, but it's a little involved. You'd have to properly set up the disk(s) added later (aligned partitions, etc.). If you know how to do that, then it's pretty easy to add the drive (partition) with zpool attach as mentioned above. I've done it, so I know it's possible.
 

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
Yeah, I know how to create a zpool with the required configuration (and add/manage disks), but I don't know anything about proper partition/label setup :(
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
Yeah, I know how to create a zpool with the required configuration (and add/manage disks), but I don't know anything about proper partition/label setup :(

I think the "easy" way would be this :

(1) put new drives into system
(2) From GUI create a "temp" pool as stripe or mirror with the new drives (this creates the aligned partitions for you)
(3) From GUI destroy "temp"
(4) From the command line, attach the partition(s) FreeNAS created on the new disks to the existing vdevs via "zpool attach", using their gptid (the rawuuid; you can find it with "gpart list" - run that on your existing drives first to see what it looks like, then make sure the uuids of the partitions appear in /dev/gptid)
(5) check it worked with "zpool status"
(6) reboot

You'll note that FreeNAS creates two partitions per drive: p1 is created for swap, with the size you specify in the advanced settings, and p2 is basically the remainder of the drive. (Unless you set the swap size per disk to 0, in which case only one partition is created per disk.)
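Step (4) hinges on finding the right gptid for each partition. A minimal sketch of pulling the rawuuid out of "gpart list" output with awk - the here-doc below is an assumed, abbreviated sample of what "gpart list" might print for a FreeNAS-created data partition (rawuuid is a real gpart field, but the uuid value itself is made up for illustration):

```shell
#!/bin/sh
# Abbreviated sample of `gpart list ada2` output (assumed for illustration)
sample=$(cat <<'EOF'
2. Name: ada2p2
   Mediasize: 1998251364352 (1.8T)
   rawuuid: 5f8c2a1e-9b7d-11e3-a3f6-08606e6e53d5
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
EOF
)

# Extract the rawuuid field and build the gptid path zpool attach expects
uuid=$(printf '%s\n' "$sample" | awk '/rawuuid:/ {print $2}')
echo "gptid/$uuid"
```

On a real system you would replace the here-doc with `gpart list ada2` itself and pass the resulting `gptid/...` path to "zpool attach".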
 

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
Thank you very much! Works fine :) (playing with attach/detach with unused disks =)))

PS: there are 4x 2 TB disks attached. The first 2 are for the mirror, and the second 2 are for data that doesn't require redundancy - a striped pool.
PPS: about "near future": these 4 disks have been running for about 9 months. I just believe none of them will fail in the next 2 months :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Those zpool attach commands are correct, but not complete. They are appropriate and valid attach commands, but there are more steps than he included. You also should NOT be attaching the raw disks themselves. If ada0 is suddenly ada1 tomorrow, there's a chance your pool will be unmountable. Surely you wouldn't like that. ;)
 

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
Yeah, it was just an example. We attach ada0p2; ada0p1 is used for swap. That's how FreeNAS does it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, it was just an example. We attach ada0p2; ada0p1 is used for swap. That's how FreeNAS does it.

That's still wrong then. Referencing pool devices by partition name is not how FreeNAS does it - it uses GPTIDs. There's more, but this is just a mistake waiting to happen. There's a reason we tell people they are out of their mind if their data is important and they insist on using the CLI. You should use the WebGUI, or you should just go to full-fledged FreeBSD. Anything less is a disservice to yourself and your data.

The manual doesn't say ANYTHING about doing any of this from the CLI, and there's no good reason to, either.
 

toadman

Guru
Joined
Jun 4, 2013
Messages
619
How do you attach a drive to an existing single drive vdev or mirror vdev from the GUI? (I'm just not aware how this works.)

Agree on gptid, as post #11 told him to use gptid.

- In step 3 there from the GUI he'd actually do a "detach volume" from the active volumes tab (not mark disks as new).
- Specifically, step 4 would be "zpool attach -f [poolname] gptid/xxxxxx gptid/yyyyyyy" as the temp pool name has to be blown away. xxxxx is the uuid of the existing partition, yyyyy is the uuid of the partition being attached. (gptid labels can be matched to the drive with "glabel list")
- Step 5 - can check this in the GUI as well, as FreeNAS will pick up the changes.

I ended up playing with all this for a backup system. I actually had a set of drives left with mismatched sizes. Not wanting to buy another drive for that system, but wanting the most efficient use of drive space, I ended up partitioning one large drive and using the partitions in different vdevs. (No, don't do this in production, this is a home backup system, and not my only backup.) Looks like below with partitions from drive 4 appearing in both vdevs:

Code:
         Drive #
        1 2 3 4 5 6
vdev 1: x x x x
vdev 2:       y y y
 

Yrwn

Cadet
Joined
Mar 20, 2014
Messages
9
1. Create partitions on the new disks (create and destroy a new zpool through the GUI) and obtain their gptids.
2. Attach the new disks' partitions to the existing zpool through the console and check availability through the GUI:
Code:
zpool attach pool gptid/old-disk-1-partition gptid/new-disk-1-partition
zpool attach pool gptid/old-disk-2-partition gptid/new-disk-2-partition


All as toadman said.
 