gptid or ada0 after replacing a drive

Status
Not open for further replies.

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I have started replacing the drives in my server with the new WD Red drives, and I find myself questioning how the new drives are labeled. I seem to recall from way back that drives needed to be listed by their gptid rather than their device name. After replacing 3 of my 4 hard drives, this is what my system looks like...

Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
                             ufs/FreeNASs3     N/A  da1s3
                             ufs/FreeNASs4     N/A  da1s4
gptid/40310ae1-0d40-11e1-9d47-50e549b78964     N/A  ada1p1
                    ufsid/4f4d8c40f9df124b     N/A  da1s1a
                            ufs/FreeNASs1a     N/A  da1s1a
                            ufs/FreeNASs2a     N/A  da1s2a


[root@freenas] ~# zpool status -v
  pool: FLASH
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        FLASH       ONLINE       0     0     0
          da0p1     ONLINE       0     0     0

errors: No known data errors

  pool: farm
 state: ONLINE
 scrub: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        farm                                            ONLINE       0     0     0
          raidz1                                        ONLINE       0     0     0
            ada0p1                                      ONLINE       0     0     0
            gptid/40310ae1-0d40-11e1-9d47-50e549b78964  ONLINE       0     0     0
            ada2p1                                      ONLINE       0     0     0
            ada3p1                                      ONLINE       0     0     0

errors: No known data errors


So my question is... Does the pool need the gptid tags or am I fine the way I'm running? I still have one drive left to replace at which point I will not see the gptid anymore.

I am currently running FreeNAS 8.0.4 but preparing to move to 8.3.0-RC1 and upgrading to ZFS V28.

Thanks,
Mark
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
So my question is... Does the pool need the gptid tags or am I fine the way I'm running? I still have one drive left to replace at which point I will not see the gptid anymore.
It's fine this way. It may switch to GPTID after installing 8.3.0-RC1, but maybe only if you export/import. Worst case ZFS will just taste the disks again and figure out which is which.
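For what it's worth, the export/import cycle mentioned above is only a couple of commands. Here is a dry-run sketch that just prints the sequence rather than running it (exporting takes the pool offline, so run these by hand with shares stopped; `farm` is the pool name from the posts above):

```shell
#!/bin/sh
# Sketch: print the export/import sequence instead of executing it.
POOL=farm                          # pool name from zpool status above
relabel_cmds() {
    echo "zpool export $POOL"      # releases the disks
    echo "zpool import $POOL"      # ZFS tastes the disks again; vdevs
                                   # may come back as gptid/... names
}
relabel_cmds
```

After the import, `zpool status` should show whichever labels `glabel status` currently reports for the member partitions.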

I am currently running FreeNAS 8.0.4 but preparing to move to 8.3.0-RC1 and upgrading to ZFS V28.
I thought 8.0.4 used the partition name by default? I never ran it, so I may very well be wrong.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
As long as it's not going to be a problem. I do have the option of deleting my pool and rebuilding from scratch; however, I'd rather not if there is no problem.

As for what 8.0.4 uses, I don't know. I never got into that aspect of the software, but I do recall there being a possible issue with getting rid of the gptid.

Thanks.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
I feel the need to wake this thread back up because I have the same problem, though I do NOT have the option of deleting my pool, as I don't have another place to offload 10TB of data.
I was and am running Samsung HD204UI drives, which all have pending sector counts and one of which had the click of death, so I am in the process of rotating out all 11 (that are left) with replacement drives sent to me by Seagate.
Here is my current status:

Code:
zpool status
  pool: storage
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Nov  7 02:32:55 2012
        104G scanned out of 13.6T at 40.7M/s, 96h38m to go
        34.6G resilvered, 0.75% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        storage                                         ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/1eaf24ee-286b-11e2-b71d-003048348d66  ONLINE       0     0     0  (resilvering)
            gptid/d6d096ce-2875-11e2-8c57-003048348d66  ONLINE       0     0     0  (resilvering)
            gptid/48fb2fa7-6f5c-11e1-90b1-003048348d66  ONLINE       0     0     0
            gptid/49857218-6f5c-11e1-90b1-003048348d66  ONLINE       0     0     0
            gptid/49f37d65-6f5c-11e1-90b1-003048348d66  ONLINE       0     0     0
            gptid/7037831f-1829-11e2-8c8f-003048348d66  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            ada0                                        ONLINE       0     0     0  (resilvering)
            ada1                                        ONLINE       0     0     0  (resilvering)
            gptid/f3a5ec46-1829-11e2-8c8f-003048348d66  ONLINE       0     0     0
            gptid/6b19c631-6f5c-11e1-90b1-003048348d66  ONLINE       0     0     0
            gptid/6b724a0b-6f5c-11e1-90b1-003048348d66  ONLINE       0     0     0
            gptid/6bcbb689-6f5c-11e1-90b1-003048348d66  ONLINE       0     0     0

errors: No known data errors

glabel status
                                      Name  Status  Components
gptid/f3a5ec46-1829-11e2-8c8f-003048348d66     N/A  ada2p2
gptid/6b19c631-6f5c-11e1-90b1-003048348d66     N/A  ada3p2
gptid/6b724a0b-6f5c-11e1-90b1-003048348d66     N/A  ada4p2
gptid/6bcbb689-6f5c-11e1-90b1-003048348d66     N/A  ada5p2
                             ufs/FreeNASs3     N/A  ada6s3
                             ufs/FreeNASs4     N/A  ada6s4
gptid/1eaf24ee-286b-11e2-b71d-003048348d66     N/A  ada7p2
gptid/d6d096ce-2875-11e2-8c57-003048348d66     N/A  ada8p2
gptid/48fb2fa7-6f5c-11e1-90b1-003048348d66     N/A  ada9p2
gptid/49857218-6f5c-11e1-90b1-003048348d66     N/A  ada10p2
gptid/49f37d65-6f5c-11e1-90b1-003048348d66     N/A  ada11p2
gptid/7037831f-1829-11e2-8c8f-003048348d66     N/A  ada12p2
                            ufs/FreeNASs1a     N/A  ada6s1a
                    ufsid/5075d1f77192f9ac     N/A  ada6s2a
                            ufs/FreeNASs2a     N/A  ada6s2a
 

warri

Guru
Joined
Jun 6, 2011
Messages
1,193
I replaced one of my drives back in version 8.0.x (probably 8.0.1) via the shell, since the GUI functionality did not work back then.
Now my drive is also listed as adaX, and the swap partition on this drive is missing.

Apart from that, I have not experienced any problems with this setup.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
If you can forgo the swap space on the drives because you have enough RAM, do it. It makes replacing a drive so much easier.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Agree with joeschmuck. The swap space is one of those "nice things to have". I don't go out of my way to avoid it; I won't miss 2GB of storage out of 2TB+. But if I have a drive that doesn't have it, I'll live. I always make sure I have at least 16GB of RAM in any server anyway.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Well, I had to change out the last two drives with st2000sl003 drives after the HD204UIs got so slow to respond that my web GUI would time out, so I had to start the replacement process from the command line. However, I contacted Seagate and let them know that the st2000sl003 is not an acceptable replacement, that I purchased the HD204UIs specifically for compatibility and reliability, and they have agreed to take back the st2000sl003s and send me replacement HD204UIs.
I am replacing drives because of the firmware bug in the early 204s. It has caused me problems, and they have agreed to replace the drives because of the pending errors that won't go away.
But yeah, anyhow, I just wanted to be sure it wouldn't cause any issues. I don't care about my swap, as I have enough RAM for what my server is used for, though I'm sure another 4GB wouldn't hurt, lol.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
It makes replacing a drive so much easier.
Unless you later try to use the GUI! It makes assumptions.

As long as you are comfortable doing the replacements from the CLI, it is easier.

In case you do want swap from the CLI, e.g. for disk ada0 with 2G of swap:
Code:
gpart create -s gpt ada0                     # new GPT partition table
gpart add -i 1 -t freebsd-swap -s 2G ada0    # 2G swap (becomes ada0p1)
gpart add -i 2 -t freebsd-zfs ada0           # rest of disk for ZFS (ada0p2)
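After partitioning, the new p2 partition still has to be attached to the pool. Here is a sketch of the follow-up, printed rather than executed (pool and device names are examples from this thread; the actual gptid has to come from your own `glabel status` output):

```shell
#!/bin/sh
# Sketch: print the commands that attach the freshly partitioned
# disk to the pool by its gptid label (names are examples).
POOL=farm
NEWPART=ada0p2        # the freebsd-zfs partition created above
OLDDEV=ada0p1         # the vdev being replaced, per zpool status
attach_cmds() {
    echo "glabel status | grep $NEWPART"   # look up the gptid/... label
    echo "zpool replace $POOL $OLDDEV /dev/gptid/<uuid-from-glabel>"
    echo "zpool status $POOL"              # watch the resilver progress
}
attach_cmds
```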
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I used the GUI to replace all 4 of my drives, one at a time. I did not have the 2GB swap partition on the originals, and the replacements were automatically created without the swap partition as well. I cannot complain one bit about how smoothly things went. The older drives have since been re-purposed, and I eventually added a 5th drive to my pool, but I had to remake it all from scratch. It kind of sucked having to copy my data back to the NAS, but it wasn't much that I didn't already have saved off on DVD media.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
I used the GUI to replace all 4 of my drives, one at a time. I did not have the 2GB swap partition on the originals, and the replacements were automatically created without the swap partition as well.
Your new drives were larger, though; not that you would have had problems in your case.

You can run into issues when you are replacing a drive of the same size and you used the whole drive to begin with, with no partitions; then the partitioned replacement is a few sectors too small. 8.3 may have the ZFS code that deals with slightly smaller drives anyway, in which case it's even less of a problem.
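A quick way to check for that edge case before committing to a replacement is to compare the raw capacities and the partition layout. Here is a sketch that prints the commands (device names are examples):

```shell
#!/bin/sh
# Sketch: print the size-comparison commands (devices are examples).
size_check() {
    echo "diskinfo -v ada0"    # raw mediasize of the existing disk
    echo "diskinfo -v ada1"    # raw mediasize of the replacement
    echo "gpart show ada0"     # partition layout in sectors
}
size_check
```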
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Your new drives were larger, though; not that you would have had problems in your case.

You can run into issues when you are replacing a drive of the same size and you used the whole drive to begin with, with no partitions; then the partitioned replacement is a few sectors too small. 8.3 may have the ZFS code that deals with slightly smaller drives anyway, in which case it's even less of a problem.
My drives were the same size, 2TB each, the same as the originals. I don't know how ZFS deals with that at all. When I replaced the drives, here is the process I went through...

1) Turned off the NAS
2) Removed one drive and installed the new drive (same size in my case)
3) Turned on the NAS
4) Under View Disks, the old drive was listed as not connected and showed an error, of course, along with the new drive, which was not part of the pool yet.
5) I selected the new drive to replace the old drive, and resilvering started up.
6) Once resilvering completed (several hours later), I removed the disconnected drive from the pool, which restored my pool to its complete, error-free self.
7) I repeated the above steps for each drive until all 4 drives were replaced.
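The GUI steps above map to a short CLI sequence for anyone doing it from the shell. Here is a sketch that prints the commands rather than running them (pool and device names are examples; `zpool replace` normally drops the old vdev on its own once the resilver finishes):

```shell
#!/bin/sh
# Sketch: print the CLI equivalent of the GUI replacement steps.
POOL=farm
OLD=ada1p1             # vdev that shows errors after the swap (example)
NEW=ada1               # the freshly installed disk (example)
replace_steps() {
    echo "zpool status $POOL"              # step 4: confirm the failed vdev
    echo "zpool replace $POOL $OLD $NEW"   # step 5: start the resilver
    echo "zpool status $POOL"              # step 6: wait for completion;
                                           # the old vdev is detached
                                           # automatically when it finishes
}
replace_steps
```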

I will not be so bold as to state that this will work for everyone, but it did work for me, and I expect it to work whenever I have a hard drive failure. The reason I replaced my drives is that one drive I bought at the same time, which was in another computer, started showing problems, and when these Red drives came out, well, I wanted to try them, and this was a good time to swap out my drives. Also, this was under 8.3.0-RC1 and ZFS V15 (I didn't upgrade at the time).
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
My drives were the same size, 2TB each, same as the originals.
That's what I get for going from memory.

I will not be so bold as to state that this will work for everyone but it did work for me and I expect it to work whenever I have a failure of a hard drive.
GUI replacement has worked well since 8.2 at least.

The edge case I was referring to is when you are using drives that have no partitions, i.e. an old pool or a manually created one. The GUI always partitions, which makes the new drives smaller than the existing ones: a bit more than 2GB smaller if you leave the swap settings at the default.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The edge case I was referring to is when you are using drives that have no partitions, i.e. an old pool or a manually created one. The GUI always partitions, which makes the new drives smaller than the existing ones: a bit more than 2GB smaller if you leave the swap settings at the default.
I have changed the defaults in the GUI to a zero-sized swap partition. I do that right up front when I reinstall and do not restore my config file. I also don't care for the "Strongly Discouraged" warning; I think the rationale behind the warning should be stated. I have not used the 2GB swap partition in roughly 2 years, basically since I upgraded from 4GB of RAM to 8GB, at which point I reformatted my drives without the swap partition. I myself am comfortable using the NAS in this configuration in my home. If I could upgrade to 16GB of RAM, I would, but that would only be to help out the ZFS memory hog, not for system stability, as it truly is stable on my computer; no complaints from me right now.

But hey, we got a bit off topic here.
 

xic044

Dabbler
Joined
Oct 24, 2013
Messages
16
Regarding the gptids, I am in the same situation as joeschmuck: 2 of my replaced disks did not receive gptids. My zpool seems healthy and working fine, although I noticed less capacity showing: 6 x 500GB in RAID-Z2 originally gave me 1.8TB, but after the replacement it shows 1.1TB.

Is there any reason why these drives did not get gptids? Are they really being used by the pool?
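A few commands narrow down both questions: whether the replaced disks are really pool members, and where the capacity went. Here is a sketch that prints them (the pool name is a placeholder; substitute your own):

```shell
#!/bin/sh
# Sketch: print the diagnostic commands (pool name is a placeholder).
POOL=mypool
diag_cmds() {
    echo "zpool status -v $POOL"   # are all six disks listed as vdevs?
    echo "zpool list $POOL"        # raw pool size vs. what the GUI reports
    echo "gpart show"              # do the replaced disks have partitions?
    echo "glabel status"           # which partitions carry gptid labels
}
diag_cmds
```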
 