Importing NexentaStor Drives - GPT rejected

KernelPanic

Dabbler
Joined
Apr 21, 2016
Messages
16
Trying to import a whole heap of Nexenta drives on the original hardware - drives that have been working in Nexenta machines for years. I've done this many times before and never had an issue... However, it's always been SATA drives; this time it's all SAS, some on redundant backplanes, some not.
This time: lots of issues...
Dec 29 04:22:37 freenas GEOM: multipath/disk1: corrupt or invalid GPT detected.
Dec 29 04:22:37 freenas GEOM: multipath/disk1: GPT rejected -- may not be recoverable.
Dec 29 04:22:37 freenas GEOM_MULTIPATH: da52 added to disk1
Dec 29 04:22:37 freenas GEOM_MULTIPATH: disk2 created
Dec 29 04:22:37 freenas GEOM_MULTIPATH: da22 added to disk2
Dec 29 04:22:37 freenas GEOM_MULTIPATH: da22 is now active path in disk2
Dec 29 04:22:37 freenas GEOM: multipath/disk2: corrupt or invalid GPT detected.
Dec 29 04:22:37 freenas GEOM: multipath/disk2: GPT rejected -- may not be recoverable.

Most of the multipath drives refuse to show any GPT. Some of the multipath drives do work, however:
Dec 29 04:22:44 freenas GEOM_MULTIPATH: disk19 created
Dec 29 04:22:44 freenas GEOM_MULTIPATH: da39 added to disk19
Dec 29 04:22:44 freenas GEOM_MULTIPATH: da39 is now active path in disk19
Dec 29 04:22:44 freenas GEOM: multipath/disk19: the secondary GPT header is not in the last LBA.
Dec 29 04:22:44 freenas GEOM_MULTIPATH: da70 added to disk19
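For what it's worth, the "secondary GPT header is not in the last LBA" message looks consistent with an off-by-one in provider size: gmultipath keeps its on-disk label in the disk's last sector, so the multipath provider it exposes is one sector smaller than the raw disk, while the GPT backup header was written against the full disk. A toy sketch of the arithmetic (the sector count is illustrative, not taken from my disks):

```python
SECTOR = 512

def secondary_gpt_lba(total_sectors):
    # GPT places the secondary (backup) header in the last LBA
    return total_sectors - 1

# Illustrative 2 TB disk
raw_sectors = 3_907_029_168
# gmultipath stores its on-disk label in the last sector, so the
# multipath provider it exposes is one sector smaller than the raw disk
provider_sectors = raw_sectors - 1

# Where the on-disk secondary GPT header actually sits (written against
# the raw disk size by the original system)
on_disk_secondary = secondary_gpt_lba(raw_sectors)
# Where GEOM expects it when tasting the multipath provider
expected_secondary = secondary_gpt_lba(provider_sectors)

print(on_disk_secondary, expected_secondary)  # off by one sector
```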

camcontrol devlist shows all the drives as expected.
glabel status shows all of the partitions it can see, as does gpart status - except that the multipath drives that did load show as "corrupt".
For example:
multipath/disk19p1 CORRUPT multipath/disk19
multipath/disk19p9 CORRUPT multipath/disk19

Is there any way I can debug what is going on with multipathing and the GPT detection? I'd love to be able to import these drives directly into FreeNAS.
gpart recover doesn't work, because the GEOM object simply doesn't exist.

Any hints or clues on where to look? The drives all work fine on Nexenta - and there doesn't appear to be any difference in the drives, just which SCSI bus they are connected to.
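In case it helps anyone debugging the same thing: here's a small Python sketch that scans for the "EFI PART" signature near the start and end of a device to see where the GPT headers actually sit. It's demonstrated on a synthetic in-memory image; for a real check you'd open the raw /dev/daX or /dev/multipath/diskN device read-only instead (the sizes here are illustrative):

```python
import io

SECTOR = 512
GPT_SIG = b"EFI PART"

def find_gpt_headers(f, total_sectors, tail_scan=16):
    """Return the LBAs (LBA 1 and the last tail_scan sectors) whose
    first 8 bytes carry the GPT header signature."""
    hits = []
    for lba in [1] + list(range(total_sectors - tail_scan, total_sectors)):
        f.seek(lba * SECTOR)
        if f.read(8) == GPT_SIG:
            hits.append(lba)
    return hits

# Demo on a synthetic 64-sector "disk": primary header at LBA 1, and the
# secondary at LBA 62 instead of the last LBA 63 (as if one trailing
# sector were consumed by a multipath label)
img = bytearray(64 * SECTOR)
img[1 * SECTOR:1 * SECTOR + 8] = GPT_SIG
img[62 * SECTOR:62 * SECTOR + 8] = GPT_SIG
print(find_gpt_headers(io.BytesIO(bytes(img)), 64))  # [1, 62]
```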
 

KernelPanic
Obviously trying to import a pool in such a configuration results in this:
   pool: Tier2
     id: 5397245177085225795
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        Tier2                                           UNAVAIL  insufficient replicas
          raidz1-0                                      DEGRADED
            gptid/58536a2d-630d-3662-dada-c53baf2fd104  ONLINE
            gptid/ebbaef45-899f-53ee-b844-d00b2daf1d0c  ONLINE
            10840574378355186738                        UNAVAIL  cannot open
            gptid/70156fee-e352-f565-f7fc-b8a8a421e609  ONLINE
          raidz1-1                                      ONLINE
            gptid/5563b019-16b0-906b-c77d-d7b5a326b5f3  ONLINE
            gptid/0d3c5c49-f0a7-82cc-b111-f317cedb6c7e  ONLINE
            gptid/c4de066c-0ad6-24ef-f783-a71d0cb702a7  ONLINE
            gptid/a79882c1-db84-164c-99c9-8d86352f7c13  ONLINE
          raidz1-2                                      UNAVAIL  insufficient replicas
            18000300587577096170                        UNAVAIL  cannot open
            8970661415634425076                         UNAVAIL  cannot open
            15165289728017997711                        UNAVAIL  cannot open
            10801109137424835777                        UNAVAIL  cannot open
          raidz1-4                                      UNAVAIL  insufficient replicas
            16877934833990567316                        UNAVAIL  cannot open
            12113081285125575393                        UNAVAIL  cannot open
            12900287710383155655                        UNAVAIL  cannot open
            18163093259318156291                        UNAVAIL  cannot open
        logs
          gptid/007e349b-0957-256d-f048-8b418bc84bfb    ONLINE
 

KernelPanic
Nope. What I have found, though:

All of the rejected drives are multipathed at the SAS layer. All of the accepted drives are not.
Except: all of the 2TB drives that are multipathed are recognised.

It's a partition issue. GPT stores its backup header in the last LBA of the disk, which is where GEOM_MULTIPATH stores its multipath label. FreeNAS appears not to see the primary partition data, so it relies on the backup.
The simple solution for me seems to be to pull the secondary SAS cable to my JBOD; then I'll be able to import all of the disks.
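To double-check the theory, you can read the primary header's recorded AlternateLBA (offset 32 in the header, per the UEFI spec) and compare it with the actual last LBA of whichever provider you're looking at; on a multipath provider it should point one sector past the end. A minimal sketch on a synthetic header (the disk geometry is made up for illustration):

```python
import struct

SECTOR = 512

def gpt_header_lbas(header_bytes):
    """Extract (CurrentLBA, AlternateLBA) from a raw GPT header sector.
    Offsets per the UEFI spec: CurrentLBA at 24, AlternateLBA at 32."""
    assert header_bytes[:8] == b"EFI PART"
    return struct.unpack_from("<QQ", header_bytes, 24)

# Synthetic primary header for a 64-sector disk: it records the backup
# copy at LBA 63, the raw disk's true last sector
hdr = bytearray(SECTOR)
hdr[:8] = b"EFI PART"
struct.pack_into("<QQ", hdr, 24, 1, 63)
print(gpt_header_lbas(bytes(hdr)))  # (1, 63)

# If the multipath provider is only 63 sectors long, its last LBA is 62,
# so the recorded AlternateLBA (63) lies one sector past the provider
provider_last_lba = 63 - 1
```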
 