Cannot replace disk via GUI, only CLI.

Status
Not open for further replies.

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
This might be a GEOM-related issue. Basically I had a zpool with three 12-disk raidz3s (all 4TB disks) and I went to add another 12-disk raidz3 to it (this one with 6TB disks): zpool add storage01 da51 da52 da53, etc.

It added them without issue, although instead of gptid labels I just get daXX device names.
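For anyone hitting the same thing, the full command presumably looked something like this (the raidz3 keyword is implied by the resulting vdev; device names here are just illustrative):

zpool add storage01 raidz3 da51 da52 da53 da54 da55 da56 da57 da58 da59 da60 da61 da62
zpool status storage01   # the new vdev shows its members as plain daXX devices rather than gptid/... labels
glabel status            # lists gptid labels; whole-disk vdev members won't have one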

Anyway, I went to replace a failed 6TB disk from the GUI and it failed with: "Apr 8 15:32:18 fs05 manage.py: [middleware.exceptions:38] [MiddlewareError: Disk replacement failed: "cannot replace da54 with gptid/1c18ceeb-de3f-11e4-b100-d4ae528f1d5b: device is too small, "]

I SSH'd in, did a zpool replace /dev/da54 /dev/da55, and off it went (see the screenshot below).
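For the record, zpool replace wants the pool name first, so the full form is roughly:

zpool replace storage01 da54 da55   # pool, then the failed device, then its replacement
zpool status -v storage01           # watch the resilver progress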

[screenshot attached: upload_2015-4-8_15-45-52.png]
 

Attachments

  • upload_2015-4-8_15-46-42.png (53.7 KB)

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
It's not working via the GUI, since you created it in the CLI, using whole disks.

FreeNAS is an appliance and it expects you to create the vdevs using the GUI, according to its rules. For example, it would create a 2GB swap partition on each disk and build the vdev on gptid-labeled partitions. With that swap partition on each disk, should you need to replace a drive with one that's a few bytes smaller, there is enough slack to adjust and make it work.
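For comparison, this is roughly the per-disk layout the GUI builds, sketched by hand (simplified; the gptid value is just a placeholder):

gpart create -s gpt da54                  # GPT partition table on the raw disk
gpart add -t freebsd-swap -s 2g da54      # the 2GB swap partition that provides the slack
gpart add -t freebsd-zfs da54             # the rest of the disk becomes the ZFS data partition
glabel status | grep da54                 # find the gptid/... label of the new data partition
zpool replace storage01 da54 gptid/xxxx   # the GUI replaces by gptid, not by daXX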
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
It's not working via the GUI, since you created it in the CLI, using whole disks.

FreeNAS is an appliance and it expects you to create the vdevs using the GUI, according to its rules. For example, it would create a 2GB swap partition on each disk and build the vdev on gptid-labeled partitions. With that swap partition on each disk, should you need to replace a drive with one that's a few bytes smaller, there is enough slack to adjust and make it work.

Yeah, after doing more research I came to the same conclusion, actually. But now that I have done this, will it cause any issues, and can I fix it?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
It will probably cause problems and you can fix it by rebuilding your pool.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Basically, at your next opportunity, you need to blow away that zpool and recreate it. Until you do, there is a possibility that you might do a reboot, upgrade, etc. (something very minor) and suddenly your vdev is trashed and the pool is gone. :(
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
Basically, at your next opportunity, you need to blow away that zpool and recreate it. Until you do, there is a possibility that you might do a reboot, upgrade, etc. (something very minor) and suddenly your vdev is trashed and the pool is gone. :(

Lol, whoops! Well, the good news is all the data is replicated off-site.

The bad news is that file server has literally 138TB of data on it. Basically, as soon as I made a share I was forced to put it into production.

If it dies I'll re-create it and reverse the replication. Thanks for the input!
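In case it helps anyone later, reversing the replication would look roughly like this (dataset and snapshot names are hypothetical; fs05 is the file server from the log above):

# On the off-site box, once storage01 has been rebuilt through the GUI on fs05:
zfs snapshot -r backups/storage01@restore                                 # snapshot the replicated copy
zfs send -R backups/storage01@restore | ssh fs05 zfs recv -F storage01    # push everything back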
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Lol, whoops! Well, the good news is all the data is replicated off-site.

The bad news is that file server has literally 138TB of data on it. Basically, as soon as I made a share I was forced to put it into production.

If it dies I'll re-create it and reverse the replication. Thanks for the input!
If you are managing that much data you really need to read the FreeNAS manual a couple times.
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
If you are managing that much data you really need to read the FreeNAS manual a couple times.

In a perfect world, yes, but if I had to read the entire manual for every product I used here, all I would do is read manuals. Honestly, I did all the testing and reading I could until it was taken out of my hands and they started dumping data on it. In the end it's not live production data; it's backups, and they are further replicated off-site.

I manage petabytes of storage; this is actually a rather small and non-critical slice (to start). Once I figure out all the kinks and caveats I'll dump more important stuff on it.
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
If so, by my calculations you're at about 85% utilization, which is a ZFS no-no. Did I miss something?

Nope, 79%, which is 41TB available. The whole reason for expanding the pool was to keep some headroom.
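For anyone double-checking, the numbers can be read straight off the pool (pool name from the earlier post):

zpool list -o name,size,allocated,free,capacity storage01   # raw pool space, parity included
zfs list -o name,used,avail storage01                       # usable space after parity overhead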
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I had a zpool with three 12-disk raidz3s (all 4TB disks) and I went to add another 12-disk raidz3 to it (this one with 6TB disks).
I make that usable capacity roughly 162TB, of which 138TB is about 85%. What did I miss?
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147
I make that usable capacity roughly 162TB, of which 138TB is about 85%. What did I miss?

196 TB raw

raidz3-0 @ 12x4TB 36TB
raidz3-1 @ 12x4TB 36TB
raidz3-2 @ 12x4TB 36TB
raidz3-3 @ 12x4TB 36TB
raidz3-4 @ 12x6TB 54TB

[screenshot attached: upload_2015-4-9_19-45-10.png]
 

fullspeed

Contributor
Joined
Mar 6, 2015
Messages
147

congrats
 