Make DDT partition bigger

Sprint

Explorer
Joined
Mar 30, 2019
Messages
72
So I know most people's attitude to deduplication is "don't bother". I know the risks, I have the horsepower, and I wanted to try it, so I am... I'm using it for iSCSI storage to my Proxmox servers (I have two nodes).

The pool in question is 6x 1TB 860 Evo SSDs in two RaidZ1 vdevs (I considered mirrors, but a single vdev was able to saturate my 10Gb links, so I decided I wanted the extra capacity).

I'm also running 2x 280GB 900p Optane drives, but I didn't want to dedicate these entirely to DDTs, as I also wanted a SLOG for each pool. Now again, I know people are going to say "you shouldn't use the same drive for more than one purpose", but there are more than enough IOPS and throughput on these Optane drives; they can handle it. Until recently I had a single Optane doing 3 SLOG partitions and 200GB of L2ARC, and it worked superbly! (Plus I've not run out of PCIe lanes.)

So, I partitioned up each Optane drive identically with 2x 15GB partitions and 1x 30GB partition.
The plan was to assign 15GB (mirrored) to my main spinning-rust pool and another 15GB (mirrored) to my SSD array (that's all working great).
The two 30GB partitions were for my DDT.

Here are the commands I used to create the partitions, for reference:
# Optane drive 0 partitions
gpart create -s gpt nvd0
# Create SLOGs
gpart add -t freebsd-zfs -s 15G nvd0
gpart add -t freebsd-zfs -s 15G nvd0
# Create DDT partition
gpart add -t freebsd-zfs -s 30G nvd0

# Optane drive 1 partitions
gpart create -s gpt nvd1
# Create SLOGs
gpart add -t freebsd-zfs -s 15G nvd1
gpart add -t freebsd-zfs -s 15G nvd1
# Create DDT partition
gpart add -t freebsd-zfs -s 30G nvd1
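
A quick gpart show on each drive double-checks the layout before anything gets added to a pool (output omitted here):

# Confirm the two 15G partitions and the 30G partition exist on each Optane drive
gpart show nvd0
gpart show nvd1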

...and the commands I used to add the partitions to the pools:

zpool add Primary_Array log nvd0p1 nvd1p1
zpool add SSD_Array log nvd0p2 nvd1p2
zpool add SSD_Array special mirror nvd0p3 nvd1p3

(L2ARC is now on its own NVMe)

Anyway, it's working superbly, speeds are great, and I'm seeing a 1.3x dedup ratio with only a handful of VMs loaded into it. But looking at the capacity, I think I should have made the DDT partitions bigger than 30GB: I'm already using ~9.8GB and the pool is only 7% full (see below, under the "special" vdev of "SSD_Array").

Code:
----------------------------------------------  -----  -----  -----  -----  -----  -----
                                                   capacity     operations    bandwidth
pool                                             alloc   free   read  write   read  write
----------------------------------------------  -----  -----  -----  -----  -----  -----
Primary_Array                                    91.1T  25.2T      0      0      0      0
  raidz2                                         57.6T   596G      0      0      0      0
    gptid/ae89d119-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/af47c6f6-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/b0210bb3-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/b0092041-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/b05786d4-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/b0324733-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/b0c01156-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
    gptid/b0e656cc-5e38-11eb-a3ef-000c291d8b0c       -      -      0      0      0      0
  raidz2                                         33.5T  24.6T      0      0      0      0
    gptid/0819d83e-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/08566e8c-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/087bdb7d-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/08c528f0-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/09097e7a-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/096a562f-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/0978ab4b-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
    gptid/09ccbd73-5165-11ec-84c2-000c29f20725       -      -      0      0      0      0
logs                                                 -      -      -      -      -      -
  nvd0p1                                           40K  14.5G      0      0      0      0
  nvd1p1                                          128K  14.5G      0      0      0      0
cache                                                -      -      -      -      -      -
  nvd2p1                                          200G  92.3M      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----
SSD_Array                                         228G  5.21T      0      2      0   160K
  raidz1                                          109G  2.60T      0      0      0      0
    gptid/653f3d1e-5c72-11ec-b267-ac1f6b781c6e       -      -      0      0      0      0
    gptid/65594328-5c72-11ec-b267-ac1f6b781c6e       -      -      0      0      0      0
    gptid/65865955-5c72-11ec-b267-ac1f6b781c6e       -      -      0      0      0      0
  raidz1                                          109G  2.60T      0      0      0      0
    gptid/637fef3b-5c72-11ec-b267-ac1f6b781c6e       -      -      0      0      0      0
    gptid/65485f99-5c72-11ec-b267-ac1f6b781c6e       -      -      0      0      0      0
    gptid/65b01095-5c72-11ec-b267-ac1f6b781c6e       -      -      0      0      0      0
special                                              -      -      -      -      -      -
  mirror                                         9.79G  19.7G      0      0      0      0
    nvd0p3                                           -      -      0      0      0      0
    nvd1p3                                           -      -      0      0      0      0
logs                                                 -      -      -      -      -      -
  nvd0p2                                         1.02M  14.5G      0      1      0  97.0K
  nvd1p2                                         1.02M  14.5G      0      0      0  63.4K
----------------------------------------------  -----  -----  -----  -----  -----  -----
boot-pool                                        1.19G  94.3G      0      0      0      0
  mirror                                         1.19G  94.3G      0      0      0      0
    ada0p2                                           -      -      0      0      0      0
    ada1p2                                           -      -      0      0      0      0
----------------------------------------------  -----  -----  -----  -----  -----  -----



So my question itself is simple: can I extend the partitions in situ (as they are followed by unallocated space), and will they register the extra space once both are done (a bit like when I swapped all the drives in a pool out for bigger drives)...

or

do I remove one partition from the pool at a time, delete it, recreate it, re-add it, allow it to resilver, then repeat....

or

Do I need to back all the data up, destroy the pool, delete the partitions, and rebuild the pool before restoring the VMs?

Didn't want to break it, or do the 3rd option, if there was a smarter way to do it.
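
For clarity, option 1 as I picture it is just growing partition 3 in place on each drive, something like this (not run yet, so treat it as a sketch; 60G is only an example target size):

# Grow the DDT/special partition into the unallocated space that follows it
gpart resize -i 3 -s 60G nvd0
gpart resize -i 3 -s 60G nvd1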

Thanks in advance

2x Xeon E5-2630 v4
256GB DDR4
16x 8TB (8x WD Reds in a vdev, 8x WD Golds in another vdev)
6x 1TB Samsung 860 Evo SSDs in twin 3-drive RaidZ1 vdevs
Intel X520 10Gb NIC
PCIe x16 ASRock 4x M.2 slot card (board bifurcated 4x4x4x4)
200GB Crucial M.2 L2ARC
2x 280GB Optane 900p connecting via M.2
3x 9207-8i HBAs
2x 120GB boot SSDs
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Unfortunately, your use of Z1, I believe, precludes the removal of the mirrored special or log. If everything were mirrored (or single drives) then some vdevs can be removed (it does take a while copying data; I can confirm that this does work on mirrors).
So:
"So my question itself is simple, can i extend the partitions (as they are followed by unallocated space) with them in situ, and they'll register the extra space once both are done (bit like when i swapped all the drives in a pool out for bigger drives)..."
No idea, but if you try, make sure you have a very good backup, preferably two. I can see a toasted pool in your future.

"do I remove one partition from the pool at a time, delete it, recreate it, re add it, allow it to resilver, then repeat...."
Same answer, I'm afraid. You would be resilvering a small(er) partition to a large(er) partition. Would that even work?

"Do i need to delete back all the data up, destroy the pool, delete the partitions, and rebuild the pool before restoring the VMs?"
This will work - but it is a ballache.

I actually like your first plan (followed by the 3rd plan when it all goes wrong). Power everything off, take the disks to another system, carefully extend the partitions, then put them back, power on again and see what happens. But make sure your backups are up to date for option 3 (which I suspect is where you will end up). It's worth a try though, but "Here Be Dragons". If the system does work after the partition extension, look very carefully to see if the extended partitions are recognised.
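
If it does come back up, a few quick checks along these lines should show whether the bigger partitions are actually being seen (EXPANDSZ is the column to watch):

# Did the resized partitions survive the trip to the other system and back?
gpart show nvd0
gpart show nvd1

# Is the pool healthy, and does ZFS report expandable space on the special mirror?
zpool status SSD_Array
zpool list -v SSD_Array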
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
"do I remove one partition from the pool at a time, delete it, recreate it, re add it, allow it to resilver, then repeat...."
Same answer, I'm afraid. You would be resilvering a small(er) partition to a large(er) partition. Would that even work?
Why wouldn't it work? That's how vdevs are expanded: by replacing older drives with larger ones and resilvering.
@Sprint may further take this as an opportunity to re-add the partitions by gptid, in case the device names are ever reshuffled.
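
A rough sketch of how that could look for one leg of the special mirror, going by gptid (the uuid is a placeholder for whatever glabel reports, and 60G is only an example size):

# Detach one leg of the special mirror (it keeps running, unmirrored, on the other leg)
zpool detach SSD_Array nvd0p3

# Delete the old 30G partition and recreate it at the larger size
gpart delete -i 3 nvd0
gpart add -t freebsd-zfs -s 60G nvd0

# Look up the new partition's gptid, then attach it back to the surviving leg
glabel status | grep nvd0p3
zpool attach SSD_Array nvd1p3 gptid/<rawuuid-of-new-nvd0p3>

# Wait for the resilver to finish before repeating the same steps for nvd1p3
zpool status SSD_Array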
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
That's a very good point. I do, however, think that having a very good backup available is important if (when) it all goes wrong.
 

QonoS

Explorer
Joined
Apr 1, 2021
Messages
87
If your zpool has "autoexpand=on" set, then the pool will grow automatically when the underlying block devices have grown in size.
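
For example, to turn it on and confirm it for the pool in the first post:

Code:
# Enable automatic expansion (it is off by default), then verify the property
zpool set autoexpand=on SSD_Array
zpool get autoexpand SSD_Array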

You can also do it manually with:

zpool online [-e] pool device
    Brings the specified physical device online. This command is not
    applicable to spares.

    -e      Expand the device to use all available space. If the device
            is part of a mirror or raidz then all devices must be
            expanded before the new space will become available to the
            pool.

You can check it with (https://www.freebsd.org/cgi/man.cgi?zpool(8)):

Code:
     Example 15: Displaying expanded space on a device
           The following command displays the detailed information for the
           pool data. This pool is comprised of a single raidz vdev where one
           of its devices increased its capacity by 10GB. In this example,
           the pool will not be able to utilize this extra capacity until all
           the devices under the raidz vdev have been expanded.

                 # zpool list -v data
                 NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
                 data     23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
                   raidz1  23.9G  14.6G  9.30G        -    48%
                     sda       -      -      -        -      -
                     sdb       -      -      -      10G      -
                     sdc       -      -      -        -      -
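
Applied to the pool above (device names taken from your output; the new size is up to you), the manual route would look roughly like this:

Code:
# Grow both special/DDT partitions into the free space behind them (example size)
gpart resize -i 3 -s 60G nvd0
gpart resize -i 3 -s 60G nvd1

# Expand each leg of the special mirror; the space becomes usable once both are done
zpool online -e SSD_Array nvd0p3
zpool online -e SSD_Array nvd1p3

# Confirm the new size
zpool list -v SSD_Array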
 