Disk added as 'stripe' instead of replacing

Status
Not open for further replies.

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
...and there's nothing at all stopping a Linux distribution from supporting ZFS by default. Yes, licensing issues may keep it out of the kernel, but it doesn't need to be in the kernel to be part of Fred's Special Linux's standard installation.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
True, I know there is a live Linux ISO floating around that comes with ZFS already installed. It's mainly meant for recovery purposes, I think, though.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
It's very simple on Debian; it's literally what you see on the web page, three commands.

Add the ZFS repository to the package manager, update the package list, and then finally run "apt-get install debian-zfs".

This will download all the ZFS code and compile it against every kernel installed on your system (all automatically) as a DKMS kernel module, since it cannot be shipped built into the kernel.

On Ubuntu, they maintain PPAs, so it's also super easy.
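For reference, the steps looked roughly like this at the time (a sketch based on the zfsonlinux.org instructions of that era; the exact .deb URL, PPA name, and package names are from memory and may have changed since):

    # Debian wheezy, as root: add the zfsonlinux repository, then install via DKMS
    # (exact .deb URL/version may differ)
    wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_2~wheezy_all.deb
    dpkg -i zfsonlinux_2~wheezy_all.deb
    apt-get update
    apt-get install debian-zfs

    # Ubuntu: add the ZFS on Linux PPA, then install the DKMS packages
    add-apt-repository ppa:zfs-native/stable
    apt-get update
    apt-get install ubuntu-zfs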


The only tricky part is root zpools, if you really want one of those. But fortunately Linux also has BTRFS, which works great for a root pool and is also really simple :)
If you're going to run Debian, it is probably easiest to just run Debian/kFreeBSD. That will not only solve the root pool problem, but also give you the Debian userland. Basically, the best of both worlds out of the box, without any tinkering.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What I want is a guide for installing Linux Mint 17 to ZFS so you can boot from it. Nobody has done a guide yet. :(
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
I still don't get why you can't remove vdevs from a pool. ZFS already knows where all the stripes are, so it would be trivial to pull all of them off a vdev and distribute them among the rest, provided there is enough remaining space. It already has algorithms for balancing striped data amongst unequal vdevs; just reverse the process and move a few stripes around.
If that is not enough, there should be a way to restrict a dataset to the desired vdevs and transfer all the data to that dataset, making removal of the now-empty vdev possible.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I still don't get why you can't remove vdevs from a pool. ZFS already knows where all the stripes are, so it would be trivial to pull all of them off a vdev and distribute them among the rest, provided there is enough remaining space. It already has algorithms for balancing striped data amongst unequal vdevs; just reverse the process and move a few stripes around.
If that is not enough, there should be a way to restrict a dataset to the desired vdevs and transfer all the data to that dataset, making removal of the now-empty vdev possible.

The devil is in the details. Theoretically, every operation imaginable involving disks is possible with ZFS; however, some operations were simply never written (in any implementation of ZFS). Since ZFS was always meant for sysadmins in large, money-is-no-object corporate settings, little importance was given to convenience features.

The original decision must've been something like this:

Option A: Removing a vdev by copying everything it contains over to the remaining vdevs and doing the required housekeeping takes X amount of time.

Option B: Destroying the pool, creating a new one and restoring everything from backup takes Y amount of time.

X is greater than Y (in the general, worst-case scenario) and involves a risk of some weird bug corrupting data. Resources saved by not implementing the vdev removal function can be transferred toward stabilizing the backup/restore features, further improving option B.

Bottom line: Option A is not to be implemented.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
To be fair, they have been playing with the concept of vdev removal recently. It's something enterprise customers have been asking them (Delphix) about.

It works by creating a "virtual" vdev (haha, yes, not the greatest name).

Essentially, a virtual vdev is created in the free space of an existing vdev that isn't being removed, and all the blocks are moved into that vdev and mapped through the virtual vdev.

The downside is a small performance hit from ZFS having to follow this extra virtual mapping. But whenever ZFS reads one of these blocks through the virtual vdev, it can then rewrite the block pointer to the block's actual location on the real vdev it was moved to. So eventually, once you have read all your data, the virtual vdev pointers will all be gone and the performance hit from the removal will disappear with them.
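As a rough sketch of how that workflow could look at the command line (hypothetical pool and vdev names; at the time this existed only as a prototype, so the syntax is an assumption):

    # Ask ZFS to evacuate a top-level vdev; its blocks are copied onto the
    # remaining vdevs and reads are redirected through the virtual mapping
    zpool remove tank mirror-1

    # Over time, as the remapped blocks are read, their pointers are updated
    # and the virtual-mapping overhead goes away
    zpool status tank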

The main ZFS developer talks about it near the end of this video if you are interested in hearing more about the concept:
https://www.youtube.com/watch?v=G2vIdPmsnTI
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Back to the discussion of a warning before doing this: FreeNAS already has one. See this thread and Bug 5868. I don't know how long the warning has been there, but it's been there at least since 9.2.1.3.
 

William Grzybowski

Wizard
iXsystems
Joined
May 27, 2011
Messages
1,754
It makes no sense that replacing a drive would cause this.

I believe this is a zpool parse error. Can someone with the problem paste the output of the command "zpool status"?
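For anyone comparing their own output: a disk that was accidentally added instead of used as a replacement shows up as its own top-level vdev, at the same indentation level as the original RAIDZ or mirror vdev, roughly like this (hypothetical pool and device names):

    NAME          STATE     READ WRITE CKSUM
    tank          ONLINE       0     0     0
      raidz2-0    ONLINE       0     0     0
        ada0p2    ONLINE       0     0     0
        ada1p2    ONLINE       0     0     0
        ada2p2    ONLINE       0     0     0
        ada3p2    ONLINE       0     0     0
      ada4p2      ONLINE       0     0     0    <- new disk striped in as its own vdev

A disk that is genuinely being replaced would instead appear nested under the original vdev inside a temporary "replacing" entry.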
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I believe the OP has already fixed the problem. It was operator error, and we have seen this a few times before.
 

batm0n

Cadet
Joined
Jun 28, 2014
Messages
7
I use a ZIL, and the performance increase is huge; recovery is even better too.
 