zpool status no longer showing gptid for one disk

djjaeger82

Dabbler
Joined
Sep 12, 2019
Messages
16
Hi everyone,
First of all, thanks for this community and product. I've only been using FreeNAS for about a year, but it's been a fun experience using it and reading through this community for help along the way. I have an ESXi/FreeNAS all-in-one box (with proper PCIe SATA controller passthrough) and two different pools. The pool I'm having trouble with (SEA_4TB_RAIDZ2) is made up of 8x STDR4000100 shucked 2.5" SMR drives in RAID-Z2. It's used only for cold storage (Plex media of my DVD/Blu-ray collection).

Everything was working great until last night, when I powered down the box and migrated the 8 drives to a new 8-bay 2.5" backplane/hot-swap double-height 5.25" drive enclosure. When I booted up, I saw the pool was degraded, and instead of da1-da8 (8 drives) FreeNAS only reported da1-da7. I immediately powered down, did some troubleshooting, checked cables, swapped drive positions in the enclosure, and determined one of the 8 backplanes was DOA. I reverted to my previous setup and powered back on (no writing was done to the dataset during the degraded time). The pool immediately reported as healthy, but I ran a scrub just to make sure everything was 100% OK; no problems were found.

The only thing I've noticed that's different now is that when I run zpool status on that pool, 7 of the 8 drives report their gptids, but the one that was on the bad backplane only reports as da6p1. I've read that using this type of device label instead of a gptid can be dangerous if the drive order or controller changes in the future.
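If it matters, my (possibly incomplete) understanding is that FreeBSD only creates the /dev/gptid/* nodes when the kern.geom.label.gptid.enable sysctl is turned on, so I figure a quick check like this would at least rule out the labels being disabled system-wide:

root@freenas:~ # sysctl kern.geom.label.gptid.enable

(Since the other seven drives do show their gptids, I'd expect that to come back as 1, so I don't think it's a system-wide setting issue.)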

1.) Is this really a problem? Is my data at risk at all?
2.) How can I fix it so it matches the others? (I'm a bit OCD about these things...).

From what I've read, it sounds as though my only option is to remove this drive and re-add it to the pool (resilvering the data), but I'm not 100% sure. I'd like to avoid that if there's a simpler fix. I've got a replacement enclosure/backplane coming tomorrow, so I was hoping to sort this out before I try moving the drives into the enclosure again.
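The less drastic suggestion I keep running into is to export the pool and re-import it while pointing ZFS at the gptid device directory, so it re-resolves da6p1 back to its label. I haven't tried it, and I assume that on FreeNAS the export/import should really be done through the GUI (detach/export without destroying the data, then import again) so the middleware database stays in sync, but my understanding is that from the command line it would look roughly like this:

root@freenas:~ # zpool export SEA_4TB_RAIDZ2
root@freenas:~ # zpool import -d /dev/gptid SEA_4TB_RAIDZ2

Can anyone confirm whether that's safe here, or whether it really is a pull-the-disk-and-resilver situation?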

Here's the zpool status output:
root@freenas:~ # zpool status SEA_4TB_RAIDZ2
pool: SEA_4TB_RAIDZ2
state: ONLINE
scan: scrub repaired 0 in 0 days 06:37:43 with 0 errors on Thu Sep 12 06:14:51 2019
config:

NAME                                            STATE     READ WRITE CKSUM
SEA_4TB_RAIDZ2                                  ONLINE       0     0     0
  raidz2-0                                      ONLINE       0     0     0
    gptid/352fb20b-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0
    gptid/39b42713-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0
    gptid/3e29381f-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0
    gptid/42af51e9-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0
    gptid/47412fd7-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0
    da6p1                                       ONLINE       0     0     0
    gptid/51e7ff53-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0
    gptid/56af2c24-c126-11e9-a4bb-000c29000147  ONLINE       0     0     0

errors: No known data errors

And here's the gpart list output for da6 (note that the partition still appears to have its proper gptid, showing up as the rawuuid below):

Geom name: da6
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da6p1
Mediasize: 4000786944000 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e1
efimedia: HD(1,GPT,4bcbfc62-c126-11e9-a4bb-000c29000147,0x80,0x1d1c0be08)
rawuuid: 4bcbfc62-c126-11e9-a4bb-000c29000147
rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 4000786944000
offset: 65536
type: freebsd-zfs
index: 1
end: 7814037127
start: 128
Consumers:
1. Name: da6
Mediasize: 4000787030016 (3.6T)
Sectorsize: 512
Stripesize: 4096
Stripeoffset: 0
Mode: r1w1e2
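
Based on that rawuuid, I'd expect a /dev/gptid/4bcbfc62-... node for da6p1 to still exist. My (possibly wrong) understanding is that GEOM withers the gptid label while the partition is held open under its da6p1 name, which would explain why zpool status shows the device name instead. These are the checks I was planning to run to confirm whether the label is currently exposed:

root@freenas:~ # glabel status | grep da6
root@freenas:~ # ls /dev/gptid | grep 4bcbfc62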


I should also note that in the new GUI of FreeNAS, if I look at the pool status, the da6 drive stands out differently as well:

Pool Status
SCRUB
Status: FINISHED
Errors: 0
Date: Wed Sep 11 2019 23:37:08 GMT-0400 (Eastern Daylight Time)
Name            Read  Write  Checksum  Status
SEA_4TB_RAIDZ2     0      0         0  ONLINE
RAIDZ2             0      0         0  ONLINE
da1p1              0      0         0  ONLINE
da2p1              0      0         0  ONLINE
da3p1              0      0         0  ONLINE
da4p1              0      0         0  ONLINE
da5p1              0      0         0  ONLINE
/dev/da6p1         0      0         0  ONLINE
da7p1              0      0         0  ONLINE
da8p1              0      0         0  ONLINE


Thanks in advance,
Dan
 

dlavigne

Guest
Were you able to resolve this? If not, which version of FreeNAS?
 

djjaeger82

Dabbler
Joined
Sep 12, 2019
Messages
16
Unfortunately I have not resolved it. I'm on 11.2-U5 but just saw the U6 update available this morning. It mentions some general ZFS/FreeBSD bug fixes; any hope that would resolve my issue? Or any other ideas? :-/
 

dlavigne

Guest
It doesn't hurt to update. Let us know if the issue persists after the update.
 