Cannot receive ZFS stream from Solaris 11.2

Status
Not open for further replies.

Blai Bonet

Cadet
Joined
Apr 19, 2015
Messages
8
Hi all,

I just set up a FreeNAS server on which I'd like to back up ZFS file systems from my Sun box running Solaris 11.2 (the latest version). The idea is to do a "zfs send" on the Solaris box and to receive the stream on the FreeNAS box with "zfs recv". However, when I do this I get the following error on the FreeNAS side:

cannot receive: stream has unsupported feature, feature flags = 24

There is not much information on the internet about how to solve this problem. Apparently, the reason is that Oracle's (Sun's) ZFS puts features into the stream that other ZFS implementations do not support.

Do you have any idea on how to solve this?

One of the main reasons for setting up the FreeNAS box was to be able to back up my Solaris box. It'd be a pity if this cannot be done after all the money and time that I spent setting up FreeNAS.

Thanks!

Blai
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
simple answer: it is not possible.


You might create a new pool at an old pool version (predating feature flags), but this is not smart.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Do you have any idea on how to solve this?
Option 1: as @zambanini suggested, recreate the pool on the Solaris box with feature flags disabled.
Option 2: run Solaris 11.2 on the backup box.
Option 3: find out if any other ZFS flavor supports the feature flags you need and run that, e.g. ZFS on Linux, FreeBSD, illumos, etc.
 

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
Hi all,

cannot receive: stream has unsupported feature, feature flags = 24

Blai


Solaris doesn't support "feature flags". They were created when the open-source version forked and added new features.

If you create the pool at zpool version 28 and the file system at ZFS version 5 on the Solaris side, you should end up with something compatible with both Solaris and FreeNAS.

Make sure you are not using any functionality beyond zpool v28 on the Solaris side. This includes the "new share" command syntax.
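
For example, a minimal sketch (pool, disk, and dataset names here are just placeholders):

# pin the pool at zpool v28 and its root file system at zfs v5
zpool create -o version=28 -O version=5 tank c0t0d0
# 'version' is not an inherited property, so pin new file systems explicitly too
zfs create -o version=5 tank/backup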

PS. Why not use rsync?
 

Blai Bonet

Cadet
Joined
Apr 19, 2015
Messages
8
Thanks all for the answers. I should be able to re-create my zpools at version 28 and the ZFS file systems at version 5.

However, I did an experiment to see if that would be indeed a solution, but the results are not good.

I did the following:

1. Created a zpool at version 28 on Solaris with the command "zpool create -o version=28 tank <filename>", where <filename> is a 100 MB file used as the backing device for the zpool (the file resides in a zpool/zfs dataset at versions 35/6, respectively). I created the file with "dd if=/dev/zero of=<filename> bs=1048576 count=100".

2. Created a zfs dataset tank/backup28 at version 5 with the command "zfs create -o version=5 tank/backup28".

3. Copied one file into tank/backup28

4. Created a snapshot of tank/backup28 with "zfs snapshot tank/backup28@now"

5. Sent tank/backup28@now to the file "zfs-test-stream.bin" with "zfs send tank/backup28@now > zfs-test-stream.bin".

Commands 1-5 were all executed on Solaris 11.2; the full sequence is consolidated below.
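
For reference, here is the whole Solaris-side sequence in one place (<filename> stays a placeholder, and <somefile> stands for the arbitrary file copied in step 3):

dd if=/dev/zero of=<filename> bs=1048576 count=100   # 100 MB backing file
zpool create -o version=28 tank <filename>           # step 1: v28 pool
zfs create -o version=5 tank/backup28                # step 2: v5 file system
cp <somefile> /tank/backup28/                        # step 3: copy one file in
zfs snapshot tank/backup28@now                       # step 4: snapshot it
zfs send tank/backup28@now > zfs-test-stream.bin     # step 5: serialize the stream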

On the FreeNAS side, I executed:

1. "zfs recv tank/backup28@now < zfs-test-stream.bin" where tank/backup28 is a zpool/zfs dataset in the FreeNAS server.

2. I got the same message as before: "cannot receive: stream has unsupported feature, feature flags = 24"

Am I doing something wrong? Please help!

Attached you'll find the *gzipped* zfs stream that I generated in Solaris 11.2.

Thanks!

Blai
 

Attachments

  • zfs-test-stream.gz
    430.7 KB · Views: 308

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
Thanks all for the answers. I would be able to re-create my zpools in version 28 and recreate the zfs filesystems at version 5.

Blai

First, about rsync: have you tried it?

Next, very odd indeed. Can you just throw in a disk and try it instead of using files (shouldn't matter, but who knows)?

I recently moved off ZFS on Linux and back to Solaris 11.2 (moving from Ubuntu back to Solaris, but I have done this with FreeNAS as well). I figured this flip back to Solaris might happen, so I intentionally formatted at versions 28 and 5, knowing the compatibility would be there.

Anyhow... z01 was obviously from ZFS on Linux; you can see this via zpool history (notice the Linux-style /dev/sda references and how I created a v28 pool and a v5 file system):

root@DL160-G6:/export/home/david# zpool history z01|more
History for 'z01':
2014-09-20.16:39:29 zpool create -o version=28 -O version=5 z01 raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdj /dev/sdk /dev/sdl /dev/sdm
2014-09-20.16:40:00 zfs create -o version=5 z01/fs

zpool status shows the "older version" stuff:


  pool: z01
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
        pool will no longer be accessible on older software versions.
  scan: scrub repaired 0 in 10h15m with 0 errors on Sat Mar 14 20:16:10 2015
config: [device listing trimmed]

errors: No known data errors

  pool: z02
 state: ONLINE
  scan: scrub repaired 776K in 16h14m with 0 errors on Fri Mar 20 12:11:44 2015
config: [device listing trimmed]

errors: No known data errors

This "zfsonlinux" pool has been running under solaris 11.2 for around 5 months now without any issues.

I have done this in the other direction as well (created a pool under solaris and imported on zfsonlinux using the same method).
 

Blai Bonet

Cadet
Joined
Apr 19, 2015
Messages
8
David,

Yes, rsync works, but I would prefer to do it with zfs since it would allow me to store several snapshots and not just the latest one.
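
For reference, the workflow I'm after is the standard incremental send/receive (names here are only illustrative), and it is exactly what this incompatibility breaks:

zfs snapshot tank/data@day1
zfs send tank/data@day1 | ssh freenas zfs recv backup/data
# later: ship only the delta between consecutive snapshots
zfs snapshot tank/data@day2
zfs send -i tank/data@day1 tank/data@day2 | ssh freenas zfs recv backup/data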

About the incompatibility between Solaris 11.2 and the other ZFS implementations: yesterday I came across a thread on a zfsonlinux discussion forum about this specific problem. It seems that Oracle released a patch to ZFS that renders the updated streams incompatible with non-patched ZFS systems (independently of the version number). ZFS on Linux seems to have a solution for this incompatibility, but I have not seen such a discussion on the FreeNAS side (actually, I wonder which ZFS implementation FreeNAS uses). You can see the discussion here:

http://zfs-discuss.zfsonlinux.narki...up-solaris-10-update-11-zfs-send-incompatible

Thus, I bet that if you generate a stream on your Solaris 11.2 box, you won't be able to receive it on a FreeNAS box, but it will probably work with ZFS on Linux.

For this reason, I am thinking of installing a different ZFS implementation on the FreeNAS box (perhaps inside a jail), but I'm not sure how to do this or whether it would work. Another option is to replace FreeNAS with something else. Any recommendation?

Finally, I'm using FreeNAS because my preferred choice, Solaris 11.2, will not properly install on the motherboard, an ASRock C2550D4I:

http://www.asrockrack.com/general/productdetail.asp?Model=C2550D4I#Specifications

I had problems with the disks connected to the Marvell SE9230 controller (before and after a firmware upgrade). The disks were visible but not usable: repeated device names, bad labels, and I was unable to change the labels of those disks. I have 8 disks: 2 connected to the native SATA3 controller on the motherboard at 6 Gb/s, 2 connected to the Marvell SE9172 SATA3 controller (6 Gb/s), and 4 connected to the Marvell SE9230 SATA3 controller (6 Gb/s). The SE9230 is a hardware RAID controller, but I wasn't able to tell Solaris to use it in JBOD mode.

Thus, I was forced to move away from Solaris, and FreeNAS seemed like a good choice. But now I'm frustrated with the ZFS incompatibilities.

Another option for me is to replace the motherboard with something else that:

1) is Mini-ITX form factor (so it would fit my case)
2) has 8 SATA3 ports
3) (preferred) accepts the same memory as the ASRock board
4) works well with Solaris 11.2

Again, any recommendation?

Best,

Blai
 

Paul Morris

Dabbler
Joined
Sep 3, 2014
Messages
14
I was having the same issue. I needed to move some Solaris 11 ZFS pools off an expiring SAN. I initially tried the send/receive route, but as we all know it doesn't work. I also went the rsync route, but moving 8 TB of data seemed a bit daunting for rsync: I kept getting timeouts, symlink problems, and incomplete transfers. So the workaround I came up with is quite simple and seems to work very well. If you see any downsides to this, please let us know.

On the FreeNAS I created a block device (zvol) and shared it out using iSCSI. On the Solaris side I attached the block device and created a zpool on it; from there it was all the normal Solaris ZFS commands to do the send/receive. Here is a quick outline of what I did.

Set up iSCSI from FreeNAS to Solaris

FREENAS

You must create an iSCSI target.

The first thing is to create a zvol under Storage -> Volumes.

Under Sharing, create an iSCSI Block Target using the previously created zvol. If you haven't already created the portal and initiators, you must first do that. I didn't do any security checking because I needed to see if this would work; you can set that up later after you experiment a bit.

Add an extent that references the zvol's device path.

Associate the target with the extent.

SOLARIS

Verify the iSCSI initiator is online:

#svcs network/iscsi/initiator

Be sure iSNS is enabled:

#iscsiadm modify discovery --iSNS enable

Make sure you have iSCSI devices:

#devfsadm -i iscsi

Be sure your initiator node is configured:

#iscsiadm list initiator-node

Initiator node name: iqn.1986-03.com.sun:01:0010e03a8720.521fead5
Initiator node alias: orhs11-b07-01.test
        Login Parameters (Default/Configured):
                Header Digest: NONE/-
                Data Digest: NONE/-
                Max Connections: 65535/-
        Authentication Type: NONE
        RADIUS Server: NONE
        RADIUS Access: disabled
        Tunable Parameters (Default/Configured):
                Session Login Response Time: 60/-
                Maximum Connection Retry Time: 180/-
                Login Retry Time Interval: 60/-
        Configured Sessions: 1

Add the FreeNAS as an iSNS server:

#iscsiadm add isns-server 192.168.1.199

Add the discovery IP address for the FreeNAS:

#iscsiadm add discovery-address 192.168.1.199:3260

See what the FreeNAS is advertising as an iSCSI device:

#iscsiadm list discovery-address -v

Discovery Address: 192.168.1.199:3260
        Target name: iqn.2005-10.org.freenas.ctl:scratch
        Target address: 192.168.1.199:3260, 257

Add the target as a static device:

#iscsiadm add static-config iqn.2005-10.org.freenas.ctl:scratch,192.168.1.199

At this point you should have a disk device which you can see with the format command. It will have a specification similar to the following:

c0t6589CFC000000DD1B3935CF051614FBBd0 <FreeBSD-iSCSI Disk-0123-250.00GB>
/scsi_vhci/ssd@g6589cfc000000dd1b3935cf051614fbb


Copy the c0t6589CFC000000DD1B3935CF051614FBBd0 part of the specification to use as the device ID.

Using the ZFS commands, create a new pool:

#zpool create mypool c0t6589CFC000000DD1B3935CF051614FBBd0

You now have a Solaris-style ZFS pool hosted on the FreeNAS.
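
From there, the actual migration is ordinary Solaris-to-Solaris send/receive into that pool; something like this, where the old pool, dataset, and snapshot names are just examples:

zfs snapshot -r oldpool/data@migrate
zfs send -R oldpool/data@migrate | zfs recv -d mypool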

----------------------------
Like I said, there may be a downside to doing this, but I'm not sure what it would be. The FreeNAS pool is configured as a RAID-Z2, so the underlying structure is fairly well protected. However, on the Solaris side the pool is sitting on what appears to be a single device. You could set up a mirror with another device on a second FreeNAS, but I'm not sure that would be entirely necessary unless you have a huge need for redundancy and failover...

Anyhow that is how I solved the problem, hope it helps you...
 

_rob_

Cadet
Joined
May 1, 2017
Messages
1
I think I may have found a workaround...

OpenZFS on OSX has a patch to ignore spill blocks: https://github.com/openzfsonosx/zfs/commit/521ea8da0d611c4175ac0a37366527ef087e8297

Haven't done much verification so far, but if you first zfs recv the pool on the Mac and then zfs send it to FreeNAS, it appears to work - at least the pool and all snapshots are received without errors.
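
Roughly, the relay looks like this (hostnames, pool names, and the snapshot are placeholders; the Mac has to run the patched OpenZFS on OS X build):

# Solaris -> Mac: the patched receiver ignores the unsupported spill blocks
zfs send -R tank/data@snap | ssh mac zfs recv -d macpool
# Mac -> FreeNAS: re-send as a normal open-source ZFS stream
zfs send -R macpool/data@snap | ssh freenas zfs recv -d backup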

Of course, it would be much better if ZFS on FreeBSD could also adopt this patch, since this seems like a pretty common problem for people migrating from Solaris.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Fair chance of succeeding? That's not very encouraging.
 