Drive replacement not possible - partitioning differs from when the vdev was created

veikko

Cadet
Joined
Dec 9, 2021
Messages
3
Hi all!
I noticed a failed disk the other day and took a replacement drive from the shelf. The replacement would not go through and failed with the error "disk is too small". I have 14 identical disks that have not yet been added to the pool. I checked them all with geom disk list.

Code:
Geom name: da22
Providers:
1. Name: da22
   Mediasize: 16000900661248 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: SEAGATE ST16000NM002G
   lunid: 5000c500caeb5ab7
   ident: ZL29ZABR0000C1122R4K
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255

Geom name: da24
Providers:
1. Name: da24
   Mediasize: 16000900661248 (15T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   descr: SEAGATE ST16000NM002G
   lunid: 5000c500ca710163
   ident: ZL26F7MN0000C04402UY
   rotationrate: 7200
   fwsectors: 63
   fwheads: 255


It is a fairly new system and has been running TrueNAS CORE 12 the whole time: initially installed in August with version 12.0-U5, currently running 12.0-U7.

While investigating the error I found out that formatting a blank drive creates a different partition layout than the initial disks have. It seems that the newer TrueNAS version (apparently) formats drives with a larger first partition, so there is not enough space left for the data partition to be as large as before. In the following snippet da22 is the failed drive and da24 is the replacement drive.

Code:
=>           34  31251759037  da22  GPT  (15T)
             34         2014        - free -  (1.0M)
           2048  31251738624     1  zfs-97df2acf8286b812  (15T)
    31251740672        16384     9  (null)  (8.0M)
    31251757056         2015        - free -  (1.0M)

=>           40  31251759024  da24  GPT  (15T)
             40           88        - free -  (44K)
            128      4194304     1  (null)  (2.0G)
        4194432  31247564632     2  (null)  (15T)


The first is the failed disk: partition 1 is 15T, partition 9 is 8M, and there is 1M free at the start.
The replacement drive has partition 1 at 2G and, after that, partition 2 at 15T, but approximately 2G smaller than on the failed disk. Hence the "disk is too small" warning.
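Roughly, from the sector counts in the gpart output above (512-byte sectors), the numbers line up with a 2G partition having been carved out first:

Code:
da22 p1 (zfs):  31251738624 sectors * 512 B = 16000890175488 B  (~14.55 TiB)
da24 p2 (zfs):  31247564632 sectors * 512 B = 15998753091584 B
difference   :      4173992 sectors * 512 B =     2137083904 B  (~2.0 GiB)
da24 p1      :      4194304 sectors * 512 B =     2147483648 B  (exactly 2 GiB)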

I also created a hot spare from one of the disks, and it got the same kind of partition scheme: a 2G first partition that takes up too much space for the data partition to be big enough.

I wonder if anyone knows what's happening here?

I have been running Linux boxes with Ubuntu and ZFS on Linux for years and never had problems like this.

Supermicro X10
Dual Xeon CPU E5-2643 v4 @ 3.40GHz
64G ECC RAM
Mellanox ConnectX-3 40G networking
Supermicro 36-bay chassis
2x 120G SSD mirrored ZFS boot pool
1x 512G NVMe cache
1x 512G NVMe log device
2x 12-disk RAIDZ2 vdevs of 16T Seagate Exos SAS3 media drives
12x 16T Seagate Exos SAS3 media drives waiting for pool expansion (after this is sorted out)
2x 16T Seagate Exos SAS3 media drives cold spares
TrueNAS CORE 12.0-U7

I highly appreciate any input or comments.
Cheers!
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Did you change the swap space size after initial creation of your pool? Looks like you did.

You can always replace a disk on the command line. Need help with that?
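For reference, a rough sketch of what a manual replacement can look like when the new disk gets a single large ZFS partition like your original members (the device and pool names are taken from this thread; the gptid and member GUID are placeholders, and this is not necessarily exactly what the middleware does):

Code:
# wipe whatever partition table is on the replacement disk
gpart destroy -F da24
# recreate a single 1 MiB-aligned ZFS partition spanning the whole disk
gpart create -s gpt da24
gpart add -t freebsd-zfs -a 1m da24
# look up the rawuuid of the new partition
gpart list da24 | grep rawuuid
# replace the failed member, referencing it by the GUID shown in zpool status
zpool replace arkisto5 <old-member-guid> gptid/<new-rawuuid>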
 

veikko

Cadet
Joined
Dec 9, 2021
Messages
3
Did you change the swap space size after initial creation of your pool? Looks like you did.

You can always replace a disk on the command line. Need help with that?
With "zpool replace -f" on the command line I get the same result.
I went through the zpool history, and there is no modification of the swap size. Anyhow, shouldn't that apply only to boot pools?
Here are the creation commands for the initial pool:
Code:
2021-05-20.12:47:31 zpool add arkisto5 raidz2 wwn-0x5000c500cae8ecd7 wwn-0x5000c500caeb120f wwn-0x5000c500caeb154b wwn-0x5000c500caeb19ff wwn-0x5000c500caeb5ab7 wwn-0x5000c500caeb5d1b wwn-0x5000c500caeb737f wwn-0x5000c500caeb792f wwn-0x5000c500ca9cc0fb wwn-0x5000c500cac628ab wwn-0x5000c500cafca7cb wwn-0x5000c500cafcb0b3
2021-05-20.12:52:03 zpool add arkisto5 log nvme-eui.e8238fa6bf530001001b444a46ec9f21
2021-05-20.12:52:50 zpool add arkisto5 cache nvme-eui.0024cf014c00bc33
2021-05-20.12:56:50 zfs set compression=zstd arkisto5
2021-06-04.13:29:19 zfs set compression=off arkisto5
2021-06-07.16:47:43 zpool export arkisto5
2021-06-07.17:21:24  zpool import 7962944901382090432  arkisto5
2021-06-07.17:21:25  zfs set aclinherit=passthrough arkisto5
2021-06-07.17:21:27  zfs inherit -r arkisto5


After that, there are only snapshots and destroys.

There is a recordsize change after creation. I changed it back to the default 128K and tried to make a new hot spare with those settings, but the partition sizes stay the same.

How would I change the swap size for a pool?
 

veikko

Cadet
Joined
Dec 9, 2021
Messages
3
Sorry, the creation commands were incomplete; here they are again:
Code:
zpool create -O encryption=off arkisto5 raidz2 wwn-0x5000c500cafbc6f7 wwn-0x5000c500cafbc823 wwn-0x5000c500cafbd28b wwn-0x5000c500cafbd707 wwn-0x5000c500cafbdb5b wwn-0x5000c500cafbeaeb wwn-0x5000c500cafc859b wwn-0x5000c500cafc9e57 wwn-0x5000c500cafcdeaf wwn-0x5000c500cafcf943 wwn-0x5000c500cafd12bf wwn-0x5000c500cafd13d7
2021-04-29.00:19:57 zpool set autoreplace=on arkisto5
2021-04-29.00:20:03 zpool set autotrim=on arkisto5
2021-04-29.00:20:16 zpool set autoexpand=on arkisto5
2021-04-29.00:21:04 zfs create arkisto5/Arkisto5
2021-04-29.00:21:20 zfs set recordsize=1M arkisto5/Arkisto5
2021-04-29.00:22:28 zfs set compression=zle arkisto5/Arkisto5
2021-04-29.00:30:20 zfs set recordsize=1M arkisto5/Arkisto5
2021-04-30.05:36:50 zfs receive -F arkisto5/Arkisto5
2021-04-30.09:12:17 zfs destroy arkisto5/Arkisto5@%
2021-04-30.09:15:06 zfs rename arkisto5/Arkisto5 arkisto5/Arkisto4
2021-05-09.00:24:14 zpool scrub arkisto5
2021-05-20.12:47:31 zpool add arkisto5 raidz2 wwn-0x5000c500cae8ecd7 wwn-0x5000c500caeb120f wwn-0x5000c500caeb154b wwn-0x5000c500caeb19ff wwn-0x5000c500caeb5ab7 wwn-0x5000c500caeb5d1b wwn-0x5000c500caeb737f wwn-0x5000c500caeb792f wwn-0x5000c500ca9cc0fb wwn-0x5000c500cac628ab wwn-0x5000c500cafca7cb wwn-0x5000c500cafcb0b3
2021-05-20.12:52:03 zpool add arkisto5 log nvme-eui.e8238fa6bf530001001b444a46ec9f21
2021-05-20.12:52:50 zpool add arkisto5 cache nvme-eui.0024cf014c00bc33
2021-05-20.12:56:50 zfs set compression=zstd arkisto5
2021-06-04.13:29:19 zfs set compression=off arkisto5
2021-06-07.16:47:43 zpool export arkisto5
2021-06-07.17:21:24  zpool import 7962944901382090432  arkisto5
2021-06-07.17:21:25  zfs set aclinherit=passthrough arkisto5
2021-06-07.17:21:27  zfs inherit -r arkisto5
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
The swap size is in the UI at System > Advanced, upper right.

It is the size of the swap partition created on any new disk. If that is larger than before, less of the rest of the disk can be used for ZFS.

From your first post it looks like you created the pool without any swap partitions on the disk at all - which is not recommended, but should not concern us just now. The new disk shows a swap partition of 2 G which is the default.
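To illustrate (a sketch of roughly what the formatting boils down to, not the exact commands the middleware runs), with a 2 GiB swap setting a freshly formatted data disk ends up looking like your da24:

Code:
gpart create -s gpt da24
gpart add -t freebsd-swap -a 4k -s 2g -i 1 da24   # the 2.0G partition 1 in your gpart output
gpart add -t freebsd-zfs -a 4k -i 2 da24          # partition 2 gets whatever is left over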

Set the size to 0 in the UI and then try the replacement through the UI again. You will get a warning about that size, but let's get that disk replaced, first.

Afterwards the output of swapinfo would be interesting. And a zpool status, too.
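Something along these lines, once the replacement has gone through (pool name taken from your history output):

Code:
swapinfo -h
zpool status -v arkisto5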
 