Mounting zpools from Solaris

In discussions years ago I was warned that mounting a foreign zpool was risky (and in particular that writing to it was riskier still). That sounded like good, conservative advice, so I never tried it.

Until now.

Having successfully rescued my data, without destroying the old pool, I thought I might as well play around a bit, see if I can learn anything. If I do destroy the pool it won't hurt me any, and it would add to human knowledge :smile:. And if I don't destroy the pool--well, I probably won't try enough to really justify any strong conclusions. But it should at least be fun.

From the best of my records (short of hooking up the boot drives to something again, I guess) the old Solaris server was running OpenSolaris b134.

The FreeNAS box here is running FreeNAS-9.10-STABLE-201605021851 (35c85f7)

The old pool consists of 3 mirror vdevs (of different sizes) plus a hot spare.

Here's the import procedure I settled on after some playing around:

Code:
[ddb@fsfs ~]$ sudo zpool import -o altroot=/mnt/old -o readonly=on 123924226373954006 oldzp1
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.
Unsupported share protocol: 1.


That import command uses the relatively uncommon syntax for renaming the pool on import (I named my new pool zp1 as well): the GUID, followed by "oldzp1" at the end. I don't really need the altroot override, but I had it set in my head to put the old pool there. And of course for this level of early, risky testing we definitely want readonly!
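
If you don't already know the GUID, running zpool import with no arguments scans the attached disks and lists the importable pools along with their ids. Something like this should find it (the <guid> placeholder stands in for whatever id it prints):

Code:
# Scan for importable pools; the listing shows each pool's name and id (the GUID)
sudo zpool import
# Then import by GUID under a temporary name, read-only, rooted under /mnt/old
sudo zpool import -o altroot=/mnt/old -o readonly=on <guid> oldzp1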

I think the "Unsupported share protocol" errors relate to the filesystems being flagged for in-kernel CIFS sharing on Solaris. They don't appear to have interfered with the import.
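
If anyone wants to poke at that theory, the Solaris-side sharing flags live in dataset properties, which can be inspected without writing anything. sharesmb and sharenfs are the usual property names, though I haven't checked exactly what b134 stored:

Code:
# Read-only query of the sharing-related properties on every dataset in the pool
zfs get -r sharesmb,sharenfs oldzp1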

Pool status after import:

Code:
[ddb@fsfs ~]$ zpool status -v oldzp1
  pool: oldzp1
state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0 in 6h46m with 0 errors on Tue Apr 19 17:53:44 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        oldzp1                                          ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/42c90c33-6b64-fe4c-c27f-e8eaa66d05a0  ONLINE       0     0     0
            gptid/f14d4128-3e65-64c3-c110-bf5e1f64aaa9  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/f5b56188-ddf4-8742-9c6a-ac8a67a5b0da  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/a6e5c5c3-cc39-b8e0-dc41-c2a9271b5d00  ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-2                                      ONLINE       0     0     0
            gptid/9d576fdf-1b3d-ffc4-c32d-e4703a764103  ONLINE       0     0     0
            gptid/2465953f-8ecb-4143-b36d-bda7c2f79960  ONLINE       0     0     0  block size: 512B configured, 4096B native
        spares
          da3p1                                         AVAIL

errors: No known data errors
[ddb@fsfs ~]$


And pool properties:

Code:
[ddb@fsfs ~]$ zpool get all oldzp1
NAME    PROPERTY                       VALUE                          SOURCE
oldzp1  size                           3.63T                          -
oldzp1  capacity                       71%                            -
oldzp1  altroot                        /mnt/old                       local
oldzp1  health                         ONLINE                         -
oldzp1  guid                           123924226373954006             local
oldzp1  version                        22                             local
oldzp1  bootfs                         -                              default
oldzp1  delegation                     on                             default
oldzp1  autoreplace                    off                            default
oldzp1  cachefile                      none                           local
oldzp1  failmode                       wait                           default
oldzp1  listsnapshots                  off                            default
oldzp1  autoexpand                     on                             local
oldzp1  dedupditto                     0                              default
oldzp1  dedupratio                     1.00x                          -
oldzp1  free                           1.05T                          -
oldzp1  allocated                      2.59T                          -
oldzp1  readonly                       on                             -
oldzp1  comment                        -                              default
oldzp1  expandsize                     4.01G                          -
oldzp1  freeing                        0                              local
oldzp1  fragmentation                  0%                             -
oldzp1  leaked                         0                              local
oldzp1  feature@async_destroy          disabled                       local
oldzp1  feature@empty_bpobj            disabled                       local
oldzp1  feature@lz4_compress           disabled                       local
oldzp1  feature@multi_vdev_crash_dump  disabled                       local
oldzp1  feature@spacemap_histogram     disabled                       local
oldzp1  feature@enabled_txg            disabled                       local
oldzp1  feature@hole_birth             disabled                       local
oldzp1  feature@extensible_dataset     disabled                       local
oldzp1  feature@embedded_data          disabled                       local
oldzp1  feature@bookmarks              disabled                       local
oldzp1  feature@filesystem_limits      disabled                       local
oldzp1  feature@large_blocks           disabled                       local


This seems to work; I can access files, etc. (I had to create users to match the old permissions and such, mind you.)

However, the manually imported pool isn't visible in the GUI, and the GUI tools aren't powerful enough to import the old pool (there's no way to deal with the fact that the name duplicates an existing pool). I haven't found a way to actually rename it (on the command line you can specify an alternate name for import, as I did, but it does not persist).
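
For the record, the non-persistence looks roughly like this (pool names as above; this is a sketch from memory, not a transcript):

Code:
sudo zpool export oldzp1   # release the manually imported pool
sudo zpool import          # re-scan: the pool is listed under its original
                           # on-disk name (zp1), which collides with my new pool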
 
Yeah, doesn't surprise me that the GUI doesn't know about the pool. However, that cuts off most of my ways of testing the pool, other than local access from a terminal window.
 

m0nkey_

You could try upgrading the pool to the latest version (zpool upgrade <pool>), then manually export it from the CLI and re-import via the GUI. That said, this is an old Solaris pool, so it would be advisable to back up the data first.
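
In rough outline, something like this (untested against a Solaris pool on my end, and zpool upgrade is one-way, so only on data you can afford to lose):

Code:
# Note: the pool has to be imported writable for the upgrade to take,
# so the read-only test import would need redoing without readonly=on.
zpool upgrade oldzp1    # bring the v22 pool up to the current on-disk format (irreversible)
zpool export oldzp1     # release it from the CLI
# then re-import through the GUI's volume import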
 
That said, this is an old Solaris pool, so it would be advisable to back up the data first.
The data is in multiple safe places; I didn't start playing with cross-mounting until I was ready to wipe the disks.
 
then manually export it from the CLI and re-import via the GUI
However, that won't solve the naming problem as far as I can see; the GUI seems to expect pool names to be unique.
 

m0nkey_

Here are a few steps to rename a pool:

Code:
zpool export oldpoolname
zpool import oldpoolname newpoolname
zpool export newpoolname


You should now be able to import from the GUI.
 
Here are a few steps to rename a pool:

Code:
zpool export oldpoolname
zpool import oldpoolname newpoolname
zpool export newpoolname


You should now be able to import from the GUI.

Nope, the new name does not persist. If I do a "zpool import" it reports the pool as available for importing under the old name, not the new one, and similarly if I try to start an import in the GUI.
 

Ericloewe

Nope, the new name does not persist. If I do a "zpool import" it reports the pool as available for importing under the old name, not the new one, and similarly if I try to start an import in the GUI.
What? That should not happen. Honestly, I'm rather baffled by that development. Perhaps a bug report is in order, at least to figure out what's going on.
 
I was able to rename a native FreeNAS pool in the way Ericloewe describes. So I apparently do understand what was meant, not that I was in serious doubt.

I don't have the controller in place to actually mount the Solaris pool at the moment. Testing this carefully and documenting what happens is on my to-do list for when we're done figuring out what's wrong with that other :( server and I bring the controller back here; if I get different results then, I'll file a bug. (I'd expect it to get low priority, but documenting what happens is worthwhile.)
 

Ericloewe

Maybe Solaris pools have some sort of flag indicating they shouldn't be permanently renamed? It's really quite baffling.
 
However, renaming a FreeNAS pool does not change its default mount point; it still mounts where the old name did. I know I can override that temporarily with altroot on the import, but that's not permanent.

Ah, the actual mountpoint is set at the ZFS filesystem level; setting a mountpoint that matches the new name on the top-level filesystem does the trick.
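
Concretely, something along these lines on the top-level dataset (names follow the earlier sketch; FreeNAS keeps pools under /mnt):

Code:
# After the rename, point the pool's top-level dataset at a path matching the new name
zfs set mountpoint=/mnt/newpoolname newpoolname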
 