SOLVED New larger drives in ZFS pool but size did not increase


gilgha

Dabbler
Joined
Aug 24, 2016
Messages
15
Hello everyone,

I have a FreeNAS home server box that has been running perfectly for more than a year now. At the time of building this box, I filled it up with the drives I had lying around, which were:

- 2x 1 TB drives
- 2x 500 GB drives

I created a single ZFS pool with those four drives which resulted in the following capacity:

[root@freenas] ~# zpool list zfs-volume
NAME         SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zfs-volume  1.80T  1.20T  616G         -   31%  66%  1.00x  ONLINE  /mnt


However, I am currently in the process of building a new server and I decided to buy 2 new 2 TB drives for the FreeNAS system and salvage the 1 TB ones for the new server.

So, I replaced both 1 TB drives with new 2 TB drives (one at a time) using the 'zpool' command line utility. The process worked fine and I was able to do this without even interrupting the system. My ZFS pool is now back to a healthy and redundant state:

[root@freenas] ~# zpool status zfs-volume
  pool: zfs-volume
 state: ONLINE
  scan: resilvered 72K in 0h0m with 0 errors on Sat May 13 10:16:32 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfs-volume                                      ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/42eaa8df-3e21-11e6-b4a2-9cb654066b8b  ONLINE       0     0     0
            gptid/440b1dbe-3e21-11e6-b4a2-9cb654066b8b  ONLINE       0     0     0
            ada0                                        ONLINE       0     0     0
            ada1                                        ONLINE       0     0     0

errors: No known data errors
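
For reference, each one-at-a-time swap was done roughly along these lines (the old-device names below are placeholders, not my exact commands):

Code:
# Sketch of each swap (placeholder names); standard zpool offline/replace cycle
zpool offline zfs-volume gptid/<old-1TB-disk>        # take the old disk out of service
# ...physically swap in the new 2 TB disk...
zpool replace zfs-volume gptid/<old-1TB-disk> ada0   # resilver onto the new disk
zpool status zfs-volume                              # wait for the resilver to finish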


However, the total ZFS pool size did not increase.

I don't understand why I don't get a larger pool size since the 'autoexpand' property of the pool was properly set to 'on':

[root@freenas] ~# zpool get autoexpand
NAME          PROPERTY    VALUE  SOURCE
freenas-boot  autoexpand  off    default
zfs-volume    autoexpand  on     local


I also tried to manually expand the pool with the following commands but the result is still the same:

[root@freenas] ~# zpool online -e zfs-volume gptid/42eaa8df-3e21-11e6-b4a2-9cb654066b8b
[root@freenas] ~# zpool online -e zfs-volume gptid/440b1dbe-3e21-11e6-b4a2-9cb654066b8b
[root@freenas] ~# zpool online -e zfs-volume ada0
[root@freenas] ~# zpool online -e zfs-volume ada1


What is wrong with my configuration? Shouldn't it be possible to expand the pool size to use the additional space provided by the new 2 TB drives without completely destroying and rebuilding the pool? I really don't want to copy all the data back and forth, since I don't have any easy way to do so.

Any help would be highly appreciated! :smile:

Thanks!
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
The size of your pool didn't increase because you replaced the largest drives in your pool. If you had replaced your 500GB drives, your pool size would have increased.
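
As a rough rule of thumb (ignoring overhead and TB-vs-TiB rounding), a RAIDZ1 vdev gives you about (number of disks - 1) times the size of the smallest disk:

Code:
#!/bin/sh
# Back-of-the-envelope RAIDZ1 usable space: (disks - 1) * smallest disk.
# Both the old 1T/1T/0.5T/0.5T layout and the new 2T/2T/0.5T/0.5T layout
# are capped by the 0.5T disks:
echo "(4 - 1) * 0.5" | bc    # -> 1.5 TB usable either way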
 

gilgha

Dabbler
Joined
Aug 24, 2016
Messages
15
The size of your pool didn't increase because you replaced the largest drives in your pool. If you had replaced your 500GB drives, your pool size would have increased.

This is exactly what I was afraid of... But I don't understand: if the pool's total storage is defined by the smallest drive, how did I get a total of 1.80 TB when I was using the following configuration?
  • 1 TB
  • 1 TB
  • 500 GB
  • 500 GB

The solution might be to split the pool into two separate pools:
  • RAIDZ - zfs-volume - 2x 2 TB
  • RAIDZ - zfs-small - 2x 500 GB
But how can I do this without losing any data from the current pool?
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
Something doesn't add up. zpool status shows that you have raidz1 with two drives; however, you stated that you had all four drives in the pool. If that's the case, then before you replaced the drives you would have only had ~716GiB of usable space. And even after replacing the 1TB drives, your space would not have increased, because the 500GB drives were not replaced.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Total size of the pool with parity is 1.8 TiB, not 1.8 TB.

There's a handy pool size calculator available in the resources section at the top of the page that will show you what I mean.
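
To spell out the arithmetic: zpool list reports the raw size of the pool, parity included, in binary units, so four 500GB disks come out to roughly 1.8 TiB:

Code:
#!/bin/sh
# Four 500 GB (decimal) disks expressed in TiB, parity included:
echo "scale=2; 4 * 500 * 10^9 / 2^40" | bc
# -> ~1.81 TiB, close to the 1.80T zpool list shows
#    (a little is lost to swap partitions and labels)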

But how can I do this without losing any data from the current pool?

You don't. The only way to expand your pool without destroying it and starting over is to replace your 500GB drives.
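
If you do go that route, the pool should grow on its own once the last 500GB disk has been replaced, since autoexpand is already on for your pool; you can check with something like:

Code:
# Quick check after replacing the small disks (pool name as in this thread):
zpool get expandsize zfs-volume   # space waiting to be expanded into, if any
zpool list zfs-volume             # SIZE jumps once the last small disk is replaced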
 

gilgha

Dabbler
Joined
Aug 24, 2016
Messages
15
Alright, I understand... I was effectively only using 4x 500 GB from the beginning, even with my two 1 TB drives! I now need to rethink how I distribute the drives :-( ...

Thanks for your explanation.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
To add to this, for future use, don't use the CLI to replace disks. Use the GUI instead. It's all in the manual.
 

gilgha

Dabbler
Joined
Aug 24, 2016
Messages
15
To add to this, for future use, don't use the CLI to replace disks. Use the GUI instead. It's all in the manual.

What's wrong with configuring the pool with the CLI? I feel more confident doing so because I can actually see exactly what the system is doing with the pool and drives.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
What's wrong with configuring the pool with the CLI?
Two big things: (1) it doesn't partition the replacement disks correctly (FreeNAS puts a small swap partition on the disks by default), and (2) it identifies the disks by adaN/daN rather than by gptid. It also often ends up with the GUI being out of sync with what's actually going on in the system.
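
For the curious, this is roughly what the GUI sets up on a replacement disk under the hood (a sketch only; sizes and device names are illustrative, and the GUI does all of this for you):

Code:
gpart create -s gpt ada0               # fresh GPT partition table on the new disk
gpart add -t freebsd-swap -s 2G ada0   # small swap partition first
gpart add -t freebsd-zfs ada0          # rest of the disk for ZFS
glabel status | grep ada0p2            # find the gptid/... label of the ZFS partition
# the pool then references that gptid, not the bare adaX device name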
 

gilgha

Dabbler
Joined
Aug 24, 2016
Messages
15
Two big things ...

Alright, you convinced me :-D ! I will re-replace the two new 2 TB drives (each with itself) from the web GUI to rebuild the correct partitioning. Thanks for letting me know.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
FYI, if you reconfigured your pool as striped mirrors, you'd get more available storage out of your existing 2x2TB and 2x500GB drives. It would be nominally 2.5TB before accounting for overhead, whereas what you have right now is nominally 1.5TB.
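
Laid out as striped mirrors it would look something like this (pool and device names are placeholders; in FreeNAS you'd build it from the GUI, and it does mean recreating the pool):

Code:
# Sketch of a striped-mirror layout with the existing disks:
#   mirror of 2x 2TB   -> ~2 TB usable
#   mirror of 2x 500GB -> ~0.5 TB usable
#   striped together   -> ~2.5 TB nominal, before overhead
zpool create tank mirror ada0 ada1 mirror ada2 ada3
zpool list tank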
 

zoomzoom

Guru
Joined
Sep 6, 2015
Messages
677
Something doesn't add up. zpool status shows that you have raidz1 with two drives; however, you stated that you had all four drives in the pool.
There are four; it's just that two are specified by adaX and two are specified by gptid.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Use mirrors, and stop using the CLI. You think you know exactly what it's doing, but as you can see, you don't really have a clue what is happening. Disks should be partitioned correctly and get gptids.

 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
There are four; it's just that two are specified by adaX and two are specified by gptid.
Ah. Thanks. I was looking at it on my phone and assumed the gptid was referring to a corresponding ada.
 

wernisman

Cadet
Joined
Sep 7, 2014
Messages
3
I have used the GUI to resilver my drives, and I am in the situation where the ZFS pool has not expanded. I have moved from 1TB drives to 3TB drives; however, I am not seeing any increased storage.

I used an external drive for the resilvering, as I have an HP N36L and therefore only 4 internal drive bays. The steps I followed were:
1. Attached the external drive.
2. Resilvered onto the external drive.
3. Powered off the system and replaced the offlined drive with the one from the external dock.
4. Powered on the system and checked everything was running.
5. Rinse and repeat for all 4 drives.

However, I am not seeing the extra space in the system now. Is there a way from the GUI to expand the ZFS pool, considering I have replaced all the drives?

Thanks!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Well, let's see what's going on with the pool. Show the output of zpool status, zpool list, and zpool get autoexpand poolname (replacing "poolname" with the name of your pool).
 

wernisman

Cadet
Joined
Sep 7, 2014
Messages
3
@danb35 thanks for the response; the outputs are as follows:

Code:
[root@freenas ~]# zpool status
  pool: ZFS_Root
state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: resilvered 745G in 0 days 13:04:02 with 0 errors on Tue Dec 25 21:27:22 2018
config:

        NAME                                            STATE     READ WRITE CKSUM
        ZFS_Root                                        ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/2a4298c7-05d5-11e9-8ca7-3cd92b06d324  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/e5f313f4-0652-11e9-a8b5-3cd92b06d324  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/9da1f5f9-06ff-11e9-9477-3cd92b06d324  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gptid/1ec1ddb9-07c2-11e9-9718-3cd92b06d324  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors
  pool: freenas-boot
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da0p2     ONLINE       0     0     0

errors: No known data errors

[root@freenas ~]# zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ZFS_Root      3.63T  2.91T   736G        -     7.28T    23%    80%  1.00x  ONLINE  /mnt
freenas-boot  14.5G   763M  13.8G        -         -      -     5%  1.00x  ONLINE  -

[root@freenas ~]# zpool get autoexpand ZFS_Root
NAME      PROPERTY    VALUE   SOURCE
ZFS_Root  autoexpand  off     default


So I can see that autoexpand is off; I can only assume this is the FreeNAS default, as I can't find a setting for it.
I ran zpool set autoexpand=on ZFS_Root, however nothing changed. I could run zpool online -e ZFS_Root <gptid>, but that seems like the wrong thing to do?
 

wernisman

Cadet
Joined
Sep 7, 2014
Messages
3
Interestingly enough, after I set autoexpand to on, I could see:

Code:
[root@freenas ~]# zpool get expandsize ZFS_Root
NAME      PROPERTY    VALUE     SOURCE
ZFS_Root  expandsize  7.28T     -



So after a bit of reading on how online works (and I'm still reading), I ran the online command for all drives: zpool online -e <pool> <gptid>
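
One compact way to run it for every data disk in the pool (the awk filter is just my shorthand for picking out the gptid entries; adjust the pool name for your system):

Code:
#!/bin/sh
# Expand every gptid-labelled member of the pool (pool name ZFS_Root as in this thread):
for dev in $(zpool status ZFS_Root | awk '/gptid\// {print $1}'); do
    zpool online -e ZFS_Root "$dev"
done
zpool list ZFS_Root    # SIZE should now reflect the larger disks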

and now when I check my expandsize and run zpool list, I see:
Code:
[root@freenas ~]# zpool get expandsize ZFS_Root
NAME      PROPERTY    VALUE     SOURCE
ZFS_Root  expandsize  -         -
[root@freenas ~]# zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ZFS_Root      10.9T  2.91T  8.00T        -         -     7%    26%  1.00x  ONLINE  /mnt
freenas-boot  14.5G   763M  13.8G        -         -      -     5%  1.00x  ONLINE  -


Thanks for everyone's help; it was much appreciated. It seems there is no way to do this from the GUI; you need to run it from the CLI.

NOTE: for anyone reading what I did to resolve this: ZFS_Root is my pool name; please change it to the name of your own pool.
 