SOLVED Unable to create new vdev

groenator

Dabbler
Joined
Sep 21, 2021
Messages
44
Hi,

I have created my ZFS pool with 4x8TB HDDs using raidz1. Yesterday, I added four new hard disks to my NAS to create a new vdev. The new HDDs are 4x6TB. I clicked to add a new vdev, selected all four HDDs, and then got the following error:

Code:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 382, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 418, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1092, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1182, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool.py", line 906, in do_update
    enc_disks = await self.middleware.call('pool.format_disks', job, disks, {'enc_keypath': enc_keypath})
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1305, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1262, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 56, in format_disks
    await asyncio_map(format_disk, disks.items(), limit=16)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 16, in asyncio_map
    return await asyncio.gather(*futures)
  File "/usr/lib/python3/dist-packages/middlewared/utils/asyncio_.py", line 13, in func
    return await real_func(arg)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/format_disks.py", line 32, in format_disk
    devname = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1305, in call return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1273, in _call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1177, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
  File "/usr/lib/python3.9/concurrent/futures/thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk_/disk_info_linux.py", line 98, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sdg


Can someone advise me on how to create the new vdev? I ran a full wipe on all the hard disks; I thought the error might be because they had old data on them and needed to be wiped.

Regards,
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Are you sure you indeed wiped all the disks? The error indicates a ZFS partition couldn't be created on one of your new disks, sdg.
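
For reference, the GUID in the error (6a898cc3-1dd2-11b2-99a6-080020736631) is the GPT partition type for "Solaris /usr & Apple ZFS", i.e. the ZFS data partition the middleware expects to find after formatting. A quick way to check what is actually on the disk (a sketch, assuming shell access on the NAS and the gptfdisk tools; adjust the device name as needed):

Code:
# Show each partition's GPT type GUID
lsblk -o NAME,SIZE,PARTTYPE /dev/sdg

# Inspect the second (data) partition in detail
sgdisk -i 2 /dev/sdg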
 

groenator

Dabbler
Joined
Sep 21, 2021
Messages
44
Yes, I am very sure. I wiped all the disks with full zeros; the job finished just a few hours ago.

Should I wipe them again? At least the one that is complaining about the partition.

I checked the partitions on all the disks, and ZFS was able to create the partitions even though it is erroring.

Code:
Disk /dev/sdg: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B31B850A-A82E-4718-BDBC-362B8E767784

Device       Start         End     Sectors  Size Type
/dev/sdg1      128     4194304     4194177    2G Linux swap
/dev/sdg2  4194432 11721045134 11716850703  5.5T Solaris /usr & Apple ZFS
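
For completeness, any leftover signatures from a previous use of the disk can be listed without erasing anything (a sketch; the device name is an example):

Code:
# List existing filesystem/RAID signatures on the disk (read-only)
wipefs /dev/sdg

# Check for old Linux software-RAID (md) metadata
mdadm --examine /dev/sdg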
 

groenator

Dabbler
Joined
Sep 21, 2021
Messages
44
Here is the partition layout for the rest of the disks.

Code:
Disk /dev/sdf: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5E7111FC-5D24-48AE-8427-90823B45587C

Device       Start         End     Sectors  Size Type
/dev/sdf1      128     4194304     4194177    2G Linux swap
/dev/sdf2  4194432 11721045134 11716850703  5.5T Solaris /usr & Apple ZFS

Disk /dev/sdh: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 4533B7EB-86F8-46FD-9724-8F9CBACF0445

Device       Start         End     Sectors  Size Type
/dev/sdh1      128     4194304     4194177    2G Linux swap
/dev/sdh2  4194432 11721045134 11716850703  5.5T Solaris /usr & Apple ZFS


Disk /dev/sdi: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CCAC71FB-83FE-4287-AF1F-395EBC1C8300

Device       Start         End     Sectors  Size Type
/dev/sdi1      128     4194304     4194177    2G Linux swap
/dev/sdi2  4194432 11721045134 11716850703  5.5T Solaris /usr & Apple ZFS



 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
sdg looks OK, but it won't hurt to wipe it again.
 

groenator

Dabbler
Joined
Sep 21, 2021
Messages
44
Looks like the wipe option is missing now.

Screenshot_20210929_183306.png
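
If the GUI wipe option is unavailable, the disk can also be cleared from a shell session. A minimal sketch, assuming the target is /dev/sdg (double-check the device name before running anything destructive):

Code:
# Remove all known filesystem/RAID signatures
wipefs -a /dev/sdg

# Destroy the GPT and MBR structures
sgdisk --zap-all /dev/sdg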
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
How are you connecting the 4 new drives? I recently had a disk in my pool that would throw scrub errors on read. This ended up being a bad slot in my backplane.
 

groenator

Dabbler
Joined
Sep 21, 2021
Messages
44
Hi,

I managed to fix the issue. The hard disks still had their old RAID 5 configuration on them; therefore, they were always busy in TrueNAS.

I forgot to mention that these hard disks came from my old QNAP. I used mdadm to stop the old arrays so the devices were no longer busy, wiped all the partitions, and then created the new vdev.
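
A minimal sketch of that sequence, assuming the leftover array shows up as /dev/md127 and the member disks are /dev/sdf through /dev/sdi (both names are examples; check /proc/mdstat for the real ones, and note the md members may be partitions such as /dev/sdf3 rather than whole disks):

Code:
# See which legacy md arrays were auto-assembled from the old QNAP disks
cat /proc/mdstat

# Stop the array so its member disks are no longer busy
mdadm --stop /dev/md127

# Remove the old RAID superblocks from each member
mdadm --zero-superblock /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Wipe the remaining partition tables and signatures
wipefs -a /dev/sdf /dev/sdg /dev/sdh /dev/sdi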

Everything is in order now. I can see the new vdev, and my pool was expanded as well.

Thanks for your help.
 