Mirror vdev with different-sized freebsd-zfs partitions after disk replacement


pvoigt

Dabbler
Joined
Jan 8, 2015
Messages
12
I have just replaced one failed disk of my zpool, which consists of a single mirror vdev. I managed the whole replacement through the webGUI, including the password handling, as my zpool is GELI encrypted. The resilver finished without errors and zpool status shows that the zpool is OK. Although I feel comfortable on the command line and know the commands to replace a disk in a mirror vdev, I used the webGUI because I remember that FreeNAS sometimes does not like it if you configure certain things outside the webGUI.
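For reference, the manual route would have looked roughly like the sketch below. It is only an outline under my own assumptions (pool name "tank", placeholder gptid and key path, not values from my system); the webGUI additionally records the swap and GELI setup in its own configuration database, which is exactly why I preferred it.

Code:
# Rough manual equivalent (sketch only; "tank", the gptid and the key path are placeholders).
gpart create -s gpt ada1
gpart add -a 4k -b 128 -s 2g -t freebsd-swap ada1
gpart add -a 4k -t freebsd-zfs ada1
geli init -B none -P -K /data/geli/<pool_key>.key ada1p2    # key file name elided
geli attach -p -k /data/geli/<pool_key>.key ada1p2
zpool replace tank <old-gptid>.eli /dev/ada1p2.eli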

The replaced disk is ada1 and it has exactly the same geometry as ada2. When I looked at the partition details of both disks, I was surprised:
Code:
# gpart show ada1 ada2
=>        40  5860533088  ada1  GPT  (2.7T)
          40          88        - free -  (44K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338688     2  freebsd-zfs  (2.7T)
  5860533120           8        - free -  (4.0K)

=>        34  5860533101  ada2  GPT  (2.7T)
          34          94        - free -  (47K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  5856338696     2  freebsd-zfs  (2.7T)
  5860533128           7        - free -  (3.5K)


The replaced disk ada1 has a slightly smaller freebsd-zfs partition, although the start sectors of both partitions are the same.
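A quick check of the numbers from the gpart output above shows how small the difference actually is:

Code:
# difference between the two freebsd-zfs partition sizes
echo $(( 5856338696 - 5856338688 ))          # -> 8 sectors
echo $(( (5856338696 - 5856338688) * 512 ))  # -> 4096 bytes (4 KiB)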

Although the zpool obviously has no issues, I have the following questions:

1.) Do I have to be concerned about the size mismatch of the freebsd-zfs partitions with respect to data security or ZFS performance?
2.) Why do the partition layouts differ at all if the disks are of the same brand and have identical geometry?
3.) Would it have been better to replace the disk from the command line? What would I have to consider in that case to keep FreeNAS happy?

Thanks in advance for any comments on this.
 

pvoigt

Unfortunately, not here in the forum. I asked the question in the #freenas IRC channel and my issue was at least partly answered there. The overall different partitioning of ada1 is because FreeNAS has changed its partitioning scheme since my zpool was created, which was with FreeNAS 9.3 or even earlier. In particular, the first usable sector has been moved to sector 40. The difference in the size of the freebsd-zfs partitions is still unexplained, but all feedback on IRC confirmed that the zpool is OK, which matches zpool status not complaining.
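My own reading of the sector 40 part (not something that was confirmed on IRC): the GPT occupies the protective MBR, the GPT header and 32 partition-table sectors, so the first usable sector is 34, and the newer layout simply rounds that up to the next 4 KiB boundary:

Code:
# protective MBR (1) + GPT header (1) + 32 partition-table sectors = 34,
# rounded up to the next 8-sector (4 KiB) boundary
echo $(( (34 + 7) / 8 * 8 ))   # -> 40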
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Wait, which of those is the new disk?
 

Ericloewe

What's the output of diskinfo -v ada1 ada2?
 

pvoigt

Both disks have exactly the same geometry:
Code:
# diskinfo -v ada1 ada2
ada1
        512              # sectorsize
        3000592982016    # mediasize in bytes (2.7T)
        5860533168       # mediasize in sectors
        0                # stripesize
        0                # stripeoffset
        5814021          # Cylinders according to firmware.
        16               # Heads according to firmware.
        63               # Sectors according to firmware.
        ST3000NM0033-9ZM178   # Disk descr.
        Z1Y1KFF9         # Disk ident.
        Not_Zoned        # Zone Mode

ada2
        512              # sectorsize
        3000592982016    # mediasize in bytes (2.7T)
        5860533168       # mediasize in sectors
        0                # stripesize
        0                # stripeoffset
        5814021          # Cylinders according to firmware.
        16               # Heads according to firmware.
        63               # Sectors according to firmware.
        ST3000NM0033-9ZM178   # Disk descr.
        Z1Y3KH79         # Disk ident.
        Not_Zoned        # Zone Mode
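As a quick sanity check, the sector count is consistent with the reported media size:

Code:
# mediasize in bytes divided by the sector size gives the sector count
echo $(( 3000592982016 / 512 ))   # -> 5860533168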
 

Ericloewe

How weird. I know the EFI partition size was changed for boot pools to better comply with standards, but I haven't heard of anything changing for data disks.
 

pvoigt

Yes, "weird" was exactly my first thought when checking the partition layout after replacement of the failed disk.

It would be nice if this could be clarified, because it leaves me slightly uncertain about FreeNAS's reliability. On the other hand, I am a long-time FreeNAS user with a lot of trust in FreeNAS.

To be honest, I do not worry about ZFS data security, reliability and performance. As long as zpool status and smartctl do not complain I feel fine.
I have searched for this topic, e.g. for mismatched freebsd-zfs partition sizes in a mirror vdev, but could only find a vague statement that the zpool size is determined by the smaller disk.
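That statement can at least be verified on a running system, for example like this (the pool name "tank" is only a placeholder):

Code:
# per-vdev sizes; a mirror vdev is limited by its smaller member
zpool list -v tank
# pool size plus any extra space that is not yet usable, if present
zpool get size,expandsize tank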

I have replaced several disks in ZFS mirror vdevs on vanilla FreeBSD machines, where I have full control over the partition layout. When using the webGUI, experienced users are excluded from these background tasks. I would like to know some more details about how the FreeNAS webGUI invokes gpart; from my understanding, a possible slight error in the partitioning would have to be looked for at that point.
 

Ericloewe

It's not a reliability concern at all. You're somewhat lucky that the difference is small enough for the disk to be compatible with the number of metaslabs the other disks have, otherwise it wouldn't have worked. But it did, so it's not a real concern for now, just a curiosity.

Absolute worst-case, a larger disk might be needed as a result of this, but that wasn't the case here.
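If you want to see the relevant numbers on your own pool, something along these lines should work (the pool name is just an example; on FreeNAS you typically have to point zdb at the system cachefile):

Code:
# Show ashift, asize and metaslab parameters of the top-level vdevs
zdb -U /data/zfs/zpool.cache -C tank | grep -E 'ashift|asize|metaslab'
# The metaslab count is roughly asize / 2^metaslab_shift; the replacement only
# works if the slightly smaller partition still yields the same count.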
 

pvoigt

Thanks for the feedback, but the question remains why the webGUI partitioner (with gpart under the hood) created the size mismatch of the freebsd-zfs partitions. The disks have identical geometry, or at least they appear to. Maybe a newer gpart in combination with newer ZFS calculates a different freebsd-zfs partition size. My zpool was created with FreeNAS 9.3 and I replaced the failed disk under FreeNAS 11.1-U5.

My theory is: if I replace ada2 (the old, i.e. remaining, disk of the mirror) with itself, it will simply get partitioned like ada1. I cannot test this right now due to lack of time. On the other hand, I expect ada2 to die sooner or later, because it has now been running for almost 5 years in 24/7 mode. The replaced disk failed after a bit more than 5 years of 24/7 operation. So I will soon be able to check my theory.

If I have time before the disk fails, I will try to verify my theory with my QEMU guest system.

BTW: Could you please give me some more details about these "metaslabs"? I have not heard of them before and am not sure what you mean.
 

Ericloewe


pvoigt

Thanks for the link, but the article is hard to read and understand. I am afraid I will have to search for another article to get a better understanding of metaslabs.

I just verified with my FreeNAS QEMU VM that the size mismatch of the freebsd-zfs partitions after replacing a disk is reproducible. I replaced vtbd1:
Code:
# gpart show vtbd1 vtbd2
=>        40  167772080  vtbd1  GPT  (80G)
          40         88         - free -  (44K)
         128    4194304      1  freebsd-swap  (2.0G)
     4194432  163577680      2  freebsd-zfs  (78G)
   167772112          8         - free -  (4.0K)

=>        34  167772093  vtbd2  GPT  (80G)
          34         94         - free -  (47K)
         128    4194304      1  freebsd-swap  (2.0G)
     4194432  163577688      2  freebsd-zfs  (78G)
   167772120          7         - free -  (3.5K)


I replaced the disk under FreeNAS 11.1-U5 and the zpool was originally created under FreeNAS 9.3. The freebsd-zfs partition of the replaced disk is again smaller, by exactly eight sectors (4 KiB).
 

pvoigt

And I could verify my theory: after also replacing the old (i.e. the second) disk vtbd2, the partitioning matches exactly again, now following the new scheme:
Code:
# gpart show vtbd1 vtbd2
=>        40  167772080  vtbd1  GPT  (80G)
          40         88         - free -  (44K)
         128    4194304      1  freebsd-swap  (2.0G)
     4194432  163577680      2  freebsd-zfs  (78G)
   167772112          8         - free -  (4.0K)

=>        40  167772080  vtbd2  GPT  (80G)
          40         88         - free -  (44K)
         128    4194304      1  freebsd-swap  (2.0G)
     4194432  163577680      2  freebsd-zfs  (78G)
   167772112          8         - free -  (4.0K)
 