Backup to ESATA

Status
Not open for further replies.

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I am aware of the differences between disabled, enabled, and active. Thanks for linking them because I was trying to figure out if you knew the differences and I was looking for those bullets myself.

So I think we can all agree that disabled is backwards compatible. But this is where maybe there's something you know that I don't (or vice versa)...

If a feature is 'enabled', then the pool can only be mounted read-only on an OS that doesn't support that feature flag. Is this incorrect?

If a feature is 'active', then the pool will not be mountable at all without the OS supporting that feature flag. (I think we both agree this is the case, but feel free to tell me I'm wrong.)

Personally, I don't consider mounting a zpool read-only to be particularly useful. Most people don't want to access their 25TB of data just to "look and not touch". There are rare circumstances where you might want to do that, but I wouldn't call it the norm.

Looking at http://blog.delphix.com/csiden/files/2012/01/ZFS_Feature_Flags.pdf. It's not the best reference because some things have changed (for example, v1000 is not the feature-flag version; v5000 is).

Trying to find the previous thread on this topic, because I wrote a fairly detailed explanation of all of this there and it has a link I'm looking for.

Just for the record, I didn't think you were lying. I've seen quite a few people say things that later turned out to be in error. I don't assume people lie as a matter of course. I'm sure such people are out there, but anyone who has worked in this industry eventually learns that lying gets you nowhere fast: there are too many logs everywhere to deliberately lie about things regularly and get away with it. In my case, with all of the available information, I thought you were in error. Still trying to make heads or tails of this... and probably gonna create a VM just because I can.

So let me ask these questions:

1. Do you boot from ZFS on Debian?
2. Why are those 3 features completed but not included by default? Seems silly to not have them included if they are done.
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
If a feature is 'enabled', then the pool can only be mounted read-only on an OS that doesn't support that feature flag. Is this incorrect?

No. In my experience, and in all the literature I have read thus far, the pool does not become incompatible in any capacity whatsoever if the missing features are merely "enabled" and not "active". Where read-only comes into play is that each feature flag that makes disk-format changes can also specify whether or not it supports read-only backward compatibility. For those that do support read-only backward compatibility (like async_destroy), you can still import and read pools that have those features "active". Again, active means that some newer software has already written some "async_destroy" metadata to the pool, but it's stored in a location that older software reading your pool simply doesn't know about and can safely ignore. However, the older software cannot write to the pool, because writing could wreck/destroy the existing "async_destroy" metadata.
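The rules above can be condensed into a small decision table. Here's a toy model of my understanding — this is an illustration, not actual ZFS code, and the set of read-only-compatible features is just the async_destroy example from above:

```python
# Toy model of ZFS feature-flag import compatibility as described above.
# Illustration only; not actual ZFS code.

DISABLED, ENABLED, ACTIVE = "disabled", "enabled", "active"

# Features whose on-disk changes are read-only backward compatible.
# async_destroy is the example discussed above; membership is illustrative.
READONLY_COMPATIBLE = {"async_destroy"}

def can_import(pool_features, os_supported, readonly=False):
    """Decide whether an OS can import a pool.

    pool_features: dict mapping feature name -> state on the pool
    os_supported:  set of feature names this OS implements
    readonly:      whether the import is attempted read-only
    """
    for name, state in pool_features.items():
        if name in os_supported:
            continue  # OS knows this feature: no problem either way
        if state in (DISABLED, ENABLED):
            continue  # nothing written to disk yet: fully compatible
        # state == ACTIVE: the on-disk format has actually changed
        if readonly and name in READONLY_COMPATIBLE:
            continue  # old code can safely ignore the new metadata
        return False
    return True

old_os = set()  # an OS that knows none of the new feature flags

# 'enabled' but never used: still importable read-write
print(can_import({"async_destroy": ENABLED}, old_os))                # True
# 'active' and read-only compatible: importable, but only read-only...
print(can_import({"async_destroy": ACTIVE}, old_os, readonly=True))  # True
# ...not read-write
print(can_import({"async_destroy": ACTIVE}, old_os))                 # False
```

So "enabled" alone never costs you compatibility; only "active" does, and even then read-only import may survive if the feature allows it.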

So let me ask these questions:

1. Do you boot from ZFS on Debian?
2. Why are those 3 features completed but not included by default? Seems silly to not have them included if they are done.

I do not boot from ZFS on my Debian system, unfortunately, so I cannot provide any advice in that realm. Perhaps I will in the future, once the ZFS ioctl interface is stable. Illumos and BSD have the big advantage of having the code built into the kernel, so when you do a kernel update or other large system update, you have no reason to worry about ZFS loading after the update or after a reboot. In Linux's current state this is not the case, as I'm sure you are aware. There can also still be issues with version mismatches between the Linux ZFS kernel module and the ZFS userland tools during system and ZFS updates. However, over in the Linux repository some devs are working on what is called the ZFS ioctl interface, which is essentially a basic, stable, unchanging interface for interacting with ZFS. It will be a crucial step toward making the Linux ZFS port a lot easier to manage through system upgrades and updates (we won't have to rely on the userland tool version matching anymore, and things like that).

Once that is done, it will be significantly easier to maintain ZFS on root, IMO, and I may give it a go then. Given the current state of ZFS on root in Linux, I took the opportunity to familiarize myself with BTRFS, which is what I am currently using on my Linux root. It's pretty great, and you can do nifty things like roll back entire system changes/patches/updates in seconds. BTRFS snapshots are actually writable, which is pretty awesome and powerful too, though it can sometimes lead to confusion if you aren't a little careful. You can easily make read-only snapshots too :)

As for why the 3 features are not really available in Linux: there are roughly three versions of the Linux kernel ZFS a user can use. The main one is the "stable" release version. This is the version with packages available in package managers across the majority of popular Linux distros; the current version is 0.6.3, released on June 12, 2014. The second "version" is the newer snapshot packages. These are also very easy for the average user to install, because a single command from your distro's package manager does it: just point the package manager at the repository for the newer snapshot packages and run the same install command you would run to get "stable". These snapshot packages are what I was using when I tested the FreeNAS 9.3 pool on my Debian. I am not exactly sure how old the snapshot I installed was; it could lag by days, weeks, or months, depending on how lazy the maintainer for your distro is :p.

The last and most up-to-date version would be "head" or "master" from the GitHub repository. Any user can clone the repository on their system and then run the build scripts to build and install the latest ZFS kernel module code for their Linux distro. This one is for more advanced users, no doubt, but the build instructions in my experience are pretty straightforward; following them is really just like reading a recipe (although I'm a Software Engineer by profession, so maybe I am biased here). So essentially, large_blocks and filesystem_limits have already been pulled from Illumos (where they were originally developed) into the Linux codebase. When that happens, they go through a suite of "ztest" runs, and the code is more or less the same "stable" code that was already signed off on for Illumos. The tests are really for sanity on the Linux platform more than anything else, just making sure the features interact with the "different" Linux system as expected. The core of each feature is likely to need no code changes between the platforms, so you can have some amount of confidence that it is "stable".

So the reason I don't see these features in the Debian snapshot builds is that the package maintainer is "lazy" and hasn't built new packages pulling in those changes yet. Although I wouldn't necessarily call him lazy, as he is doing it for free in his spare time. And for the major releases, the people in charge like to do some additional testing to make sure nothing small is hidden in the release before they "tag" the repo with the next version number. They seem to do this "when things are done", so it can be 6 months or more between official releases, going by recent history.

As for the third flag (multi_vdev_crash_dump): I cannot find a pull request for it, so I believe it's not ready for Linux yet. I do not know why it hasn't been pulled. Perhaps not enough people have wanted or needed it; in all fairness, it is a pretty specific feature that very few people would really use, so its priority has likely been low. It has, though, been promised for the next public release (0.6.4) of ZoL, so it's not too far out.

ZoL is still rather young compared to BSD and Illumos, and the project is still working through some larger project-management overhead. As time goes on, you can tell the Linux repository and pull process have been getting faster and more streamlined, and I don't think it will be long before features are pulled and public releases are built even more quickly. Only about a year ago, features added to Illumos took months before someone pulled them into Linux; recently it's been only days or weeks, so things are looking up and continuing to improve at a fast rate.


You just have to put it into perspective. Even if the same number of developers work on each platform, the Linux ZFS developers have had to spend a large portion of their time working through Linux-specific kinks, leaving less time for keeping up with upstream feature pulls. But once all of these kinks are finally worked through, they can focus all their time on upstream pulls, as the BSD ZFS developers presently get to do since they worked out the BSD kinks years ago. I have every bit of faith that Linux will make its way there. You can easily see how active OpenZFS development is these days and that the filesystem is not going away anytime soon.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Interesting. I was really hoping you had experience with booting from ZFS. I'm trying to do this in a VM and despite about 20 independent attempts I've failed every time. No clue what I'm doing wrong. :(

As for the rest, thanks for the info.

I'm still not convinced about this read-only thing. There's a caveat to read-only and feature flags, but I must be confused about it. There have been 2 or 3 times that I've had to do the read-only thing, but it might not apply to the 3 flags in question (and that makes me curious!). Maybe if I get motivated I'll test it out and figure out what that caveat is. /shrug
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I used a similar guide for Ubuntu, since I use Linux Mint (https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem). It was maintained by the same person, and he was on #zfsonlinux too. He said it should work, and I assured him it has never worked for me. He gave some advice, but none of the things he mentioned were a problem for me. The problem was grub not working with ZFS, and he didn't know what was wrong. Never really got around to doing a one-on-one since I've been too busy to play much since. :P
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sadly, only the most general of measures.

I know. But it's still nice to at least have a number. Of course, the number may not mean a whole lot... I plan to read more on it this weekend.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just tried to do ZFS as root on Ubuntu again, and it failed. GRR. So frustrating to have no clue what is broken with that fU&*$&#*()& guide.
 