Add another HD

Status
Not open for further replies.

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
This is a bad idea.

Why not use the 3 TB disk to hold the data just for the time it takes to destroy the RAID-Z1 and create the RAID-Z2?

And you don't have external backups? If your data is valuable enough to need RAID-Z2 then you really should have external backups; RAID doesn't replace backups ;)
 

stranger

Dabbler
Joined
Apr 11, 2014
Messages
31
This is a bad idea.

Why not use the 3 TB disk to hold the data just for the time it takes to destroy the RAID-Z1 and create the RAID-Z2?

And you don't have external backups? If your data is valuable enough to need RAID-Z2 then you really should have external backups; RAID doesn't replace backups ;)

As I mentioned, there's a backup on an external USB disk (via rsync on a Linux system).

There are only two spare disks - 1x 2 TB and 1x 3 TB. In order to migrate/preserve the snapshots I need both pools running at the same time.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Ah ok, you want to preserve the snapshots, I missed that, sorry.

Well, you can always temporarily put the 3 TB drive alone in a pool to replicate your snapshots to, then replicate them back to the new RAID-Z2 pool ;)
 

stranger

Dabbler
Joined
Apr 11, 2014
Messages
31
Can you actually access the data on a Z1 pool that only has a single disk?
Or do you mean to add the 3 TB disk as two disk slices and then remove all the remaining disks to put them in the Z2 pool? The problem with that is that the current Z1 pool is 2 TB in size and so would have to be reduced in size first to accept the new disk (slices/partitions).

I'm a bit worried about this stage as it's putting all your eggs in a single basket. I know that this will be dangerous to my data but I'm hoping to find the safest way through it.

Thanks for your comments in any case.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I mean use the 3 TB drive alone in a new pool, no RAID-Zx or partitioning or whatever :) you'll be able to use the full 3 TB.

It's far safer than using a degraded pool with files as devices... Just burn in the 3 TB drive first with badblocks (there is a thread or two on this subject) ;)
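
Something like this, for example (it's a destructive write test, so only run it on the empty new drive; the device name here is just an example, check yours first):

# 4-pass destructive write/read test; -s shows progress, -b 4096 matches a 4K sector size
badblocks -ws -b 4096 /dev/ada0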
 

stranger

Dabbler
Joined
Apr 11, 2014
Messages
31
I was doing a bit of playing around and created a RAID-Z2 pool with 3 devices. I didn't think that this was possible. It's not even showing up as degraded. Seems weird.
ada0 is the 3 TB
ada3 is the 2 TB

zpool create -f -m /mnt/mypool mypool raidz2 /dev/ada0p3 /dev/ada0p2 /dev/ada3p2
zpool export mypool


Then I auto-imported it in the GUI.

zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            ada0p3    ONLINE       0     0     0
            ada0p2    ONLINE       0     0     0
            ada3p2    ONLINE       0     0     0

errors: No known data errors

It's a bit freaky that it actually worked. If this is OK then I'll do it like this and maybe degrade the Z1 pool by adding a Z1 disk to the Z2 pool ASAP.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
If you continue down this path you'll do something really wrong and you'll lose all your data; don't say afterwards that I didn't warn you...
 

stranger

Dabbler
Joined
Apr 11, 2014
Messages
31
If you continue down this path you'll do something really wrong and you'll lose all your data; don't say afterwards that I didn't warn you...
Actually I don't think you've warned me. You've said this is bad without much of an explanation. You've suggested a single-disk stripe, which is a single point of failure, so that's clearly not the way to go.

So what is bad? The use of a file as a device in a zpool? Well, I won't use that, as I had thought I needed 4 disks for a RAID-Z2 pool when in fact I can get away with a 3-disk Z2 pool. It's certainly better not to use a degraded pool for copying data. Obviously the capacity is lower, but thankfully in this case it's enough. It seems like I've made the decision to migrate just in time.

I'm doing the migration using zfs send/receive and I'm keeping snapshots (it seems to me to be outright dangerous to just throw away your snapshots). There will obviously be points where I have to degrade the pools but that's just what happens when you are swapping disks and you don't have enough.
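
Roughly like this (pool and snapshot names are just placeholders for my real ones):

# take a recursive snapshot of the old pool, then replicate the whole tree,
# existing snapshots included, to the new pool
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool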

What I'm considering is to take a single disk out of the Z1 pool, use that as a single disk and copy everything from the Z1 pool to it. Then take the 2 remaining disks in the Z1 pool and add them to the Z2 pool. Finally, remove the 3 TB disk (2 partitions) and release it for its intended use.

Thanks for the feedback. Even when I don't agree, it helps me consider all the possibilities.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I was doing a bit of playing around and created a RAID-Z2 pool with 3 devices. I didn't think that this was possible. It's not even showing up as degraded. Seems weird.
ada0 is the 3 TB
ada3 is the 2 TB

zpool create -f -m /mnt/mypool mypool raidz2 /dev/ada0p3 /dev/ada0p2 /dev/ada3p2
zpool export mypool


Then I auto-imported it in the GUI.

zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            ada0p3    ONLINE       0     0     0
            ada0p2    ONLINE       0     0     0
            ada3p2    ONLINE       0     0     0

errors: No known data errors

It's a bit freaky that it actually worked. If this is OK then I'll do it like this and maybe degrade the Z1 pool by adding a Z1 disk to the Z2 pool ASAP.
Don't use the CLI; that type of thing isn't supported with FreeNAS. You should create your pools using the GUI. If you don't, FreeNAS will run into problems because you are doing things behind its back.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
As an example of what SweetAndLow said, you created the pool using device names, whereas FreeNAS expects gptids.
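
You can see the labels FreeNAS uses with something like this (pool name taken from your example above):

# list the gptid labels on the system
glabel status | grep gptid
# a GUI-created pool lists its members as gptid/xxxxxxxx-... instead of adaXpY
zpool status mypool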
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Actually I don't think you've warned me. You've said this is bad without much of an explanation. You've suggested a single-disk stripe, which is a single point of failure, so that's clearly not the way to go.
Yeah, and using multiple partitions on one drive isn't a single point of failure? Moreover, you put a RAID-Z1 (which is already not a safe option in itself...) in a degraded state to do what you want to do. So it's definitely not safer than what I recommended.

Actually I explained why:
It's far safer than using a degraded pool with files as devices... Just burn in the 3 TB drive first with badblocks (there is a thread or two on this subject) ;)

This is the warning:
If you continue down this path you'll do something really wrong and you'll lose all your data
 

stranger

Dabbler
Joined
Apr 11, 2014
Messages
31
Yeah, and using multiple partitions on one drive isn't a single point of failure? Moreover, you put a RAID-Z1 (which is already not a safe option in itself...) in a degraded state to do what you want to do. So it's definitely not safer than what I recommended.

Actually I explained why:

This is the warning:
The idea is that the Z1 pool isn't in a degraded state until the Z2 pool is ready and has all the data and snapshots. As for using the CLI, well, it's simply not possible to do this via the GUI. Of course I'll export it and import it to get it back into the GUI.
As for gptids, is this applicable in 9.2 or just 9.3? When I look at my existing pool I see the device names rather than gptids. I think that I created it in 9.1 or possibly even 9.0, so maybe that's the reason. I have created the disks/partitions to be the same as in the Z1 pool, i.e. swap and data partitions.
And yes, the 2 partitions on the same disk aren't ideal, but they aren't a single point of failure - it's RAID-Z2, which permits up to two disk failures (in this case that would be two partitions/one single-disk failure). I see the problem with this configuration, which is why I intend to replace this disk in the pool ASAP. Once the Z1 pool has been migrated, I can immediately add the disks from it to the Z2 pool and remove the 3 TB partitioned disk.
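
I.e. something like this once the Z1 disks are free (device names are only examples from my test setup, and each replace has to finish resilvering before starting the next):

# swap the temporary partitions out for whole disks freed from the Z1 pool
zpool replace mypool ada0p3 ada1
zpool status mypool        # wait for the resilver to complete
zpool replace mypool ada0p2 ada2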
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
The gptids have been around since the 8.x days.

Bidule0hm issued a warning both to you and to others that might read the thread in the future. All too often we get users who want to do everything via the CLI because they did it with another OS using ZFS. And then, suddenly, it craps out. It's not until we pry more information like "zpool status" from them and see the device names that we find out that they had been using the CLI for day-to-day FreeNAS management, whereas they should have been using the GUI.

The GUI also adds seatbelts to prevent users from doing stupid stuff from the CLI.

While *you* might be able to create the degraded pool and come out alive, if the average FreeNAS user tried it, it would probably end terribly, with all data lost. We try to prevent that from happening.
 

stranger

Dabbler
Joined
Apr 11, 2014
Messages
31
The gptids have been around since the 8.x days.

Bidule0hm issued a warning both to you and to others that might read the thread in the future. All too often we get users who want to do everything via the CLI because they did it with another OS using ZFS. And then, suddenly, it craps out. It's not until we pry more information like "zpool status" from them and see the device names that we find out that they had been using the CLI for day-to-day FreeNAS management, whereas they should have been using the GUI.

The GUI also adds seatbelts to prevent users from doing stupid stuff from the CLI.

While *you* might be able to create the degraded pool and come out alive, if the average FreeNAS user tried it, it would probably end terribly, with all data lost. We try to prevent that from happening.

Well, I'm not the average user, but I wholeheartedly agree with your advice. I'd use the GUI to do this if I could and had the extra disks. I've clearly made a mistake in thinking that I could get by with Z1, so I know that I will have to take a nonstandard route to get to the right place.
The only things I ever do via the CLI are using warden to start and stop jails, but even then I tend to do that with the GUI's cron jobs. I'd actually prefer it if the GUI were accessible from the CLI - meaning the Python scripts that the GUI calls. That way any CLI operation would stay in sync with the GUI.

I'm currently testing what I have to do, so the advice so far has been helpful. I can see my Z2 pool in the GUI and it looks good. I'll have to recreate it taking gptids into account if I can. I know that I could use the GUI completely if I were to remove one disk from the Z1 pool and create the Z2 with 3 disks, but then I'd be migrating from a degraded pool, which makes me very nervous.

My comment about the warning was that it wasn't clear what the warning was and what the effects would be.

Thanks for the feedback, it is genuinely helpful.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I already explained how to do this without using degraded pools and other bad ideas like the partitions, by destroying and creating the right pools via the GUI, in my first post on this topic:

1) Create a new pool with your 3 TB disk in it, no RAID-Zx, no partitions, just the plain disk.
2) Replicate your snapshots from the RAID-Z1 pool to this pool.
3) Destroy the RAID-Z1 pool.
4) Create a new RAID-Z2 pool with your old 3x 2 TB disks + the new 2 TB disk.
5) Replicate your snapshots from the 3 TB pool to this new pool.
6) Destroy the 3 TB pool.

NB: burn in the 3 TB disk before doing this.
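
From the GUI you won't need any commands at all, but the CLI equivalent would look roughly like this (the pool names "tank" and "temp" and the device names are only placeholders; the GUI uses gptids rather than raw devices):

# 1) temporary single-disk pool on the 3 TB drive
zpool create temp ada0
# 2) replicate everything, snapshots included
zfs snapshot -r tank@move
zfs send -R tank@move | zfs receive -F temp
# 3) + 4) destroy the RAID-Z1 pool and build the RAID-Z2 from the 4x 2 TB disks
zpool destroy tank
zpool create tank raidz2 ada1 ada2 ada3 ada4
# 5) replicate back
zfs send -R temp@move | zfs receive -F tank
# 6) get rid of the temporary pool
zpool destroy temp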
 