OK, so pool status in the UI shows that my single-disk pool lives on nvme0n1. Let's copy its partition table to nvme1n1:
Code:
truenas# sgdisk /dev/nvme0n1 -R /dev/nvme1n1
The operation has completed successfully.
truenas# sgdisk -G /dev/nvme1n1
The operation has completed successfully.
The first command copies the partition table; the second generates new random UUIDs for the copied partitions so they don't collide with the originals. Just as in CORE, we will use these UUIDs to manage the ZFS vdev components.
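In generic form the two steps look like this (a sketch only; the device names are the ones from this example, and echo keeps it a dry run so nothing is written until you remove it):

```shell
# Device names from this example -- substitute your own, and be very sure
# which disk is the populated source and which is the empty destination.
src=/dev/nvme0n1
dst=/dev/nvme1n1

# echo makes this a dry run; remove it to actually modify $dst.
echo sgdisk "$src" -R "$dst"   # replicate src's partition table onto dst
echo sgdisk -G "$dst"          # give dst's partitions new random GUIDs
```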
Code:
truenas# zpool status zfs
  pool: zfs
 state: ONLINE
config:

        NAME                                    STATE     READ WRITE CKSUM
        zfs                                     ONLINE       0     0     0
          0bda215e-b9de-4c8c-884d-ed95beb81790  ONLINE       0     0     0

errors: No known data errors
This shows the UUID of the partition that is already in the pool (on nvme0n1).
Now let's find the UUID of the corresponding partition on the second disk:
Code:
# Find the partition number
truenas# sgdisk -p /dev/nvme1n1
Disk /dev/nvme1n1: 488397168 sectors, 232.9 GiB
Model: Samsung SSD 970 EVO Plus 250GB
[...]
Number  Start (sector)    End (sector)  Size       Code  Name
   1             128         4194304   2.0 GiB     8200
   2         4194432       488397134   230.9 GiB   BF01
# Now print detail for partition 2
truenas# sgdisk -i 2 /dev/nvme1n1
Partition GUID code: 6A898CC3-1DD2-11B2-99A6-080020736631 (Solaris /usr & Mac ZFS)
Partition unique GUID: 3EBE4772-FC42-474F-9605-8731D6ED3FF9
[...]
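If you want to script this lookup, the unique GUID can be pulled out of the sgdisk -i output and folded to lower case (the /dev/disk/by-partuuid/ names are lower-case, while sgdisk prints the GUID in upper case). A small sketch, run here against the output pasted above:

```shell
# sgdisk -i output from above, pasted verbatim so the sketch is self-contained;
# in real use you would pipe `sgdisk -i 2 /dev/nvme1n1` in directly.
sgdisk_out='Partition GUID code: 6A898CC3-1DD2-11B2-99A6-080020736631 (Solaris /usr & Mac ZFS)
Partition unique GUID: 3EBE4772-FC42-474F-9605-8731D6ED3FF9'

# Grab the field after "Partition unique GUID: " and lower-case it.
guid=$(printf '%s\n' "$sgdisk_out" \
    | awk -F': ' '/Partition unique GUID/ {print tolower($2)}')
echo "/dev/disk/by-partuuid/$guid"
# → /dev/disk/by-partuuid/3ebe4772-fc42-474f-9605-8731d6ed3ff9
```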
Now attach as usual. Note that it is zpool attach (not zpool add) that turns the existing single-disk vdev into a mirror, and that the /dev/disk/by-partuuid/ name uses the GUID in lower case:
Code:
truenas# zpool attach zfs 0bda215e-b9de-4c8c-884d-ed95beb81790 /dev/disk/by-partuuid/3ebe4772-fc42-474f-9605-8731d6ed3ff9
truenas# zpool status zfs
  pool: zfs
 state: ONLINE
  scan: resilvered 64.2M in 00:00:00 with 0 errors on Sun Aug 8 07:25:24 2021
config:

        NAME                                      STATE     READ WRITE CKSUM
        zfs                                       ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            0bda215e-b9de-4c8c-884d-ed95beb81790  ONLINE       0     0     0
            3ebe4772-fc42-474f-9605-8731d6ed3ff9  ONLINE       0     0     0

errors: No known data errors
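The general shape of the attach step, parameterized (pool name and GUIDs are this example's; echo again keeps it a dry run):

```shell
pool=zfs
existing=0bda215e-b9de-4c8c-884d-ed95beb81790                    # device already in the pool
new=/dev/disk/by-partuuid/3ebe4772-fc42-474f-9605-8731d6ed3ff9   # partition to mirror onto it

# zpool attach <pool> <existing-device> <new-device> resilvers onto the new
# device and converts the single-disk vdev into a two-way mirror.
echo zpool attach "$pool" "$existing" "$new"
```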
If SCALE behaves like CORE here, the swap partition on the new disk (partition 1, type 8200) will be detected and activated automatically on the next reboot.
HTH,
Patrick