DrFranken
Cadet
- Joined
- Dec 9, 2019
- Messages
- 2
FREENAS 11.3-U6 in play.
Supermicro chassis and X9DR7/E-(J)LN4F MoBo with LSI SAS2308_1 SAS Controller. Disks are attached to a SAS2X36 backplane.
A couple old E5-2609 v2 CPUs with 256GB of memory running the show.
Dual Intel X520 10GbE network interfaces for user access, chiefly NFS, some CIFS; a 1GbE interface for management.
I have the following zpool:
NAME                                            STATE   READ WRITE CKSUM
POOL7.2K1000s                                   ONLINE     0     0     0
  raidz1-0                                      ONLINE     0     0     0
    gptid/aeb77bdd-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/b067f30e-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/b1f29aff-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/b372d998-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/b5225971-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
  raidz1-1                                      ONLINE     0     0     0
    gptid/b6cd84ff-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/b854906d-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/b9e18dda-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/bb8bf699-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/bd30e3dd-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
  raidz1-2                                      ONLINE     0     0     0
    gptid/bed1d4c3-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/c0e5c059-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/c286519d-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/c45e557a-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/c5e1128e-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
  raidz1-3                                      ONLINE     0     0     0
    gptid/c7ad8918-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/c93f2eb0-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/cb127a2b-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/ccd01ecc-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/ce8098c4-e7e1-11e9-ba24-002590c571fc  ONLINE     0     0     0
  raidz1-4                                      ONLINE     0     0     0
    gptid/e542fbb1-ede4-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/e72e5d84-ede4-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/e8cd1e37-ede4-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/ec480b7c-ede4-11e9-ba24-002590c571fc  ONLINE     0     0     0
    gptid/efd07446-ede4-11e9-ba24-002590c571fc  ONLINE     0     0     0
spares
  gptid/f85d640d-ede4-11e9-ba24-002590c571fc    AVAIL
  gptid/fc215fd4-ede4-11e9-ba24-002590c571fc    AVAIL
All of these drives are identical Seagate 1TB SAS drives, model ST1000NM0023.
I would like to increase the pool's capacity by replacing some of the 1TB drives with 3TB drives. Depending on what I read, this is either supported or not. *sigh*
CLEARLY I will gain no capacity unless ALL drives in a vdev are replaced; I am certain of this and it is logical.
When I prompt the 'extend' option in the GUI I get this message: "Extending the pool adds new vdevs in a stripe with the existing vdevs. It is important to only use new vdevs of the same size and type as those already in the pool. This operation cannot be reversed. " (Emphasis mine.)
However, the presentation "ZFS Storage Design and other FreeNAS information" from Cyberjock states:
ZFS allows for a zpool to expand in only two ways.
Option 1: Replace all of the hard disks in a VDev with larger hard drives (aka autoexpand)
Option 2: Add additional VDevs.
If I read that correctly, it says 'all of the hard disks in a VDev', NOT in a zpool.
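If the vdev-level reading is right, I assume the relevant knob is the pool's autoexpand property, which (as I understand it) controls whether the pool grows on its own once every disk in a vdev has been replaced with a larger one. From the shell I imagine the check would look something like this (pool name taken from the status output above):

```shell
# Check whether the pool will automatically expand after all members
# of a vdev have been swapped for larger disks.
zpool get autoexpand POOL7.2K1000s

# If it reports "off", I believe it can be enabled with:
zpool set autoexpand=on POOL7.2K1000s
```

I have not run this against my pool yet, so corrections welcome if FreeNAS expects this to be toggled through the GUI instead.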
The documentation only discusses replacing one disk at a time by powering down, swapping the drive, powering up, and then resilvering. It would seem that I could instead swap one of my 1TB hot spares for a 3TB unit and cycle replacements through that slot, rather than powering down for each drive.
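For the avoidance of doubt, here is the one-disk-at-a-time sequence I have in mind, sketched as CLI commands. The gptid of the first raidz1-0 member comes from my status output above; gptid/NEWDISK is a placeholder for the freshly partitioned 3TB drive, not a real identifier:

```shell
# Replace one member of raidz1-0 with the new 3TB disk.
# (Presumably this is normally done through the FreeNAS GUI's
# disk Replace action, which wraps the same operation.)
zpool replace POOL7.2K1000s gptid/aeb77bdd-e7e1-11e9-ba24-002590c571fc gptid/NEWDISK

# Wait for the resilver to finish before touching the next disk.
zpool status POOL7.2K1000s
```

Repeat for each of the five disks in the vdev, one resilver at a time. Is that the intended procedure, or am I off base?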
So that is the conundrum: do I need to replace all drives in the ZPOOL, or just all drives in a single VDEV, to gain additional capacity?