1TB to 1TB replacement "Device too small"

Status
Not open for further replies.

oOFishbowlOo

Cadet
Joined
Apr 22, 2015
Messages
5
Hi,

This is annoying me no end, and I'm pretty sure it's something simple.

Details are:
  • OS Build: FreeNAS-9.3-STABLE-201504152200 64bit
  • CPU: AMD Athlon II Neo N36L / 1.3 GHz
  • RAM: 8GB Kingston (unsure what model)
  • MB: No idea, something proprietary
  • Array: 4x 1TB WD Caviar Green, 1x 2TB WD Red (RAIDZ2)
Had a WD Caviar Green 1TB fail on me:

Code:
hippo# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
    corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
    entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 284K in 3h30m with 1 errors on Tue Apr 21 22:38:35 2015
config:

    NAME                                            STATE     READ WRITE CKSUM
    tank                                            DEGRADED     0     0     1
     raidz1-0                                      DEGRADED     0     0     2
       ada0                                        ONLINE       0     0     0
       3179613386919869858                         OFFLINE      1     0    54  was /dev/ada1
       ada2                                        ONLINE       0     0     0
       ada3                                        ONLINE       0     0     0
       gptid/f574d7c0-e77d-11e4-8fbc-3c4a9277b6d7  ONLINE       0     0     0


So that was unfortunate. The failed drive's smartctl output looks like this:

Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green (AF, SATA 6Gb/s)
Device Model:     WDC WD10EZRX-00A8LB0
Serial Number:    WD-WCC1U3332544
LU WWN Device Id: 5 0014ee 2b3395687
Firmware Version: 01.01A01
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Wed Apr 22 10:52:28 2015 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)    Offline data collection activity
                    was completed without error.
                    Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)    The previous self-test routine completed
                    without error or no self-test has ever 
                    been run.
Total time to complete Offline 
data collection:         (12540) seconds.
Offline data collection
capabilities:             (0x7b) SMART execute Offline immediate.
                    Auto Offline data collection on/off support.
                    Suspend Offline collection upon new
                    command.
                    Offline surface scan supported.
                    Self-test supported.
                    Conveyance Self-test supported.
                    Selective Self-test supported.
SMART capabilities:            (0x0003)    Saves SMART data before entering
                    power-saving mode.
                    Supports SMART auto save timer.
Error logging capability:        (0x01)    Error logging supported.
                    General Purpose Logging supported.
Short self-test routine 
recommended polling time:     (   2) minutes.
Extended self-test routine
recommended polling time:     ( 144) minutes.
Conveyance self-test routine
recommended polling time:     (   5) minutes.
SCT capabilities:           (0x30b5)    SCT Status supported.
                    SCT Feature Control supported.
                    SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   198   197   051    Pre-fail  Always       -       56741
  3 Spin_Up_Time            0x0027   135   132   021    Pre-fail  Always       -       4241
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       35
  5 Reallocated_Sector_Ct   0x0033   186   186   140    Pre-fail  Always       -       597
  7 Seek_Error_Rate         0x002e   200   139   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   082   082   000    Old_age   Always       -       13807
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       35
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       51
193 Load_Cycle_Count        0x0032   106   106   000    Old_age   Always       -       283902
194 Temperature_Celsius     0x0022   116   109   000    Old_age   Always       -       27
196 Reallocated_Event_Count 0x0032   075   075   000    Old_age   Always       -       125
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       62
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       69
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       58

SMART Error Log Version: 1
No Errors Logged


I ordered a replacement disk (WD Red 1TB); however, when I try to run the replacement procedure:

  • zpool offline tank ada1
  • *power down chassis. replace disk. power on chassis*
  • In the GUI, go to Storage -> Volumes -> Volume Status and hit "Replace" on the disk formerly known as ada1
I get the "Device is too small" error. I have checked the capacity of the new drive, and it is identical to the old Green:

Code:
User Capacity:    1,000,204,886,016 bytes [1.00 TB]


So, reading through the forums, I'm thinking this is due to the size of the swap partition. I can't remember which version of FreeNAS I created this pool on (can someone tell me?), but it would have been around 8.x or maybe 9.0, so perhaps I've got a 1GB swap on the old drive and FreeNAS is trying to create a 2GB swap on the new one. I also had to do some initial partition jiggery-pokery to get the 2TB disk to work with the rest of the 1TB drives, but that should only have affected the 2TB disk. When I look at gpart show I don't see the 1TB drives at all, which I take to mean either they have no partition map, or they were initialized with the GUI?
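If the swap theory is right, the arithmetic would look roughly like this (a sketch using the capacity from the smartctl output above; the 2GB default swap size is an assumption about the current GUI setting):

```shell
# Sketch: why a same-capacity disk can look "too small" to ZFS.
# If the old Green was given to the pool as a whole disk, its vdev spans
# (nearly) the full 1TB. A GUI replacement carves out a swap partition
# first, so the new data partition comes up short of the old vdev.
disk_bytes=1000204886016                  # 1.00 TB, from smartctl on both drives
swap_bytes=$((2 * 1024 * 1024 * 1024))    # assumed 2GB default swap in the GUI
data_bytes=$((disk_bytes - swap_bytes))
echo "whole-disk vdev:          $disk_bytes bytes"
echo "data partition with swap: $data_bytes bytes"
# 998057402368 < 1000204886016, hence "Device is too small"
```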

So I guess my 3 questions are:

  1. Any suggestions about what might be the issue? (I'll get the smartctl output for the new drive tonight if that'll help)
  2. Is there any way to work out what version of FreeNAS the pool was created under?
  3. Is there any way to look at the current partition map for the failed device if it's not showing up in gpart?
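For questions 2 and 3, a few commands may help (a sketch; run as root on the FreeNAS box, device and pool names as used in this thread — note the pool version only narrows down the era, not an exact release):

```shell
# Sketch: probing the pool and the failed disk from the shell.
POOL=tank
zpool get version "$POOL"          # an old on-disk version hints at an 8.x-era pool
zpool history "$POOL" | head -n 5  # the first entry records when/how the pool was created
zdb -l /dev/ada1                   # dump the ZFS labels directly, even with no partition table
gpart show ada1 || echo "no partition table on ada1 (whole-disk member)"
```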
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
  • Your array is not RAIDZ2, it's RAIDZ1.
  • Nothing in the documentation tells you to offline the failed disk via the command line.
  • ada0, 2, and 3 have no swap partition (indeed, no partition table) at all--either you disabled this completely in the GUI before you created the pool (not recommended), or you created the pool from the CLI (not recommended). Since you removed the failing disk by 'zpool offline tank ada1', it appears this is the case for ada1 as well.
  • If you're very lucky, you might be able to get this drive replaced by disabling swap entirely (set it to 0 in the GUI).
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
At a minimum, ada0, ada2, and ada3 were added to the pool as whole disks without any swap partition. More than likely from the CLI (the jiggery-pokery stuff?), since FreeNAS uses gptids to identify the drives when creating vdevs from the webGUI. It even did so back in the 8.x days.

My guess is that the new drive isn't exactly the same size as the old one, and the old one is one of the drives without a swap partition.

At this point, I'd back up your data, blow away the pool, and start over, doing things correctly from the GUI.
 

oOFishbowlOo

Cadet
Joined
Apr 22, 2015
Messages
5
  • Your array is not RAIDZ2, it's RAIDZ1.
  • Nothing in the documentation tells you to offline the failed disk via the command line.
  • ada0, 2, and 3 have no swap partition (indeed, no partition table) at all--either you disabled this completely in the GUI before you created the pool (not recommended), or you created the pool from the CLI (not recommended). Since you removed the failing disk by 'zpool offline tank ada1', it appears this is the case for ada1 as well.
  • If you're very lucky, you might be able to get this drive replaced by disabling swap entirely (set it to 0 in the GUI).

  • You're right, I typo'd
  • In fairness, this is the second time I've offlined this drive; the first time I did do it with the GUI
  • That's probably right, because I had to use the CLI to get the 2TB drive to work, but I'm now stuck with it until I can afford to replace all the disks with 2TBs.
  • I don't know how to do this with the GUI. I think I know how to do it with the CLI, but I'm getting the distinct impression that people around here don't like other people messing around with the CLI...
...
At this point, I'd backup your data, blow away the pool and start over, doing things correctly from the GUI.

That's what I'd like to do, but I don't have a backup of some of my non-critical stuff (although losing it wouldn't be a complete write-off). My plan is to get this drive replaced so I'm not running degraded, then build a new pool in a new server the correct way with same-sized drives, and migrate the data across...
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You can disable swap creation in the web GUI by going to System -> Advanced and setting the swap size to 0. Even with that, though, FreeNAS is still going to create a partition table, which is going to use up some space on the disk*. Try it, but I doubt it will work. Most likely you will need to do the replacement via the CLI (zpool replace tank /dev/ada1 should do it, I think). In the worst case, you may need to get a larger disk to use instead.

The reason that using the CLI for pool management is so strongly discouraged is because of situations exactly like this--it's easy to get yourself into a situation that isn't recoverable through the GUI. In this case, it looks like you created the pool from the CLI in the beginning, either not knowing or not caring about the "non-standard" stuff that FreeNAS does when creating a pool (like creating partition tables and swap partitions). The result is that you've (probably) broken the ability of the GUI to replace disks in the pool.

*for the sake of testing, I just built a pool in a VM with the swap size set to 0. Here's the output of gpart show:
Code:
[root@freenas] ~# gpart show
=>      34  16777149  ada0  GPT  (8.0G)
        34      1024     1  bios-boot  (512k)
      1058         6        - free -  (3.0k)
      1064  16776112     2  freebsd-zfs  (8G)
  16777176         7        - free -  (3.5k)

=>      34  62914493  ada1  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

=>      34  62914493  ada2  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

=>      34  62914493  ada3  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

=>      34  62914493  ada4  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)
 

oOFishbowlOo

Cadet
Joined
Apr 22, 2015
Messages
5
You can disable swap creation in the web GUI by going to System -> Advanced and setting the swap size to 0. Even with that, though, FreeNAS is still going to create a partition table, which is going to use up some space on the disk*. Try it, but I doubt it will work. Most likely you will need to do the replacement via the CLI (zpool replace tank /dev/ada1 should do it, I think). In the worst case, you may need to get a larger disk to use instead.

The reason that using the CLI for pool management is so strongly discouraged is because of situations exactly like this--it's easy to get yourself into a situation that isn't recoverable through the GUI. In this case, it looks like you created the pool from the CLI in the beginning, either not knowing or not caring about the "non-standard" stuff that FreeNAS does when creating a pool (like creating partition tables and swap partitions). The result is that you've (probably) broken the ability of the GUI to replace disks in the pool.

*for the sake of testing, I just built a pool in a VM with the swap size set to 0. Here's the output of gpart show:
Code:
[root@freenas] ~# gpart show
=>      34  16777149  ada0  GPT  (8.0G)
        34      1024     1  bios-boot  (512k)
      1058         6        - free -  (3.0k)
      1064  16776112     2  freebsd-zfs  (8G)
  16777176         7        - free -  (3.5k)

=>      34  62914493  ada1  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

=>      34  62914493  ada2  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

=>      34  62914493  ada3  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

=>      34  62914493  ada4  GPT  (30G)
        34        94        - free -  (47k)
      128  62914392     1  freebsd-zfs  (30G)
  62914520         7        - free -  (3.5k)

That's great, thank you :) I think I was vaguely aware, but as my data isn't that important in the long run, I probably didn't really care at the time (plus I was young and reckless...)

I'll give it a go via the CLI without swap, but otherwise I'll have to either buy a bigger drive or trash the pool and recreate it.

Out of interest, if I'm going to do it via the CLI, I think I'll have to trash the partition table on the new drive, since the GUI will have attempted to create one, right?
In that case, I think I would need to do:

  • gpart delete -i 1 ada1
  • gpart destroy ada1
And then try to add it flat from there? Is there anything else I need to do? (like initialize the disk in any way?)
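For reference, a wipe along those lines might look like this (a sketch; the partition indices are assumptions, so check `gpart show ada1` first, and this destroys anything on the disk):

```shell
# Sketch: removing GUI-created partitions from the new ada1.
# DESTRUCTIVE: double-check the device name before running.
DEV=ada1
gpart show "$DEV"          # confirm which indices actually exist
gpart delete -i 2 "$DEV"   # assumed data partition
gpart delete -i 1 "$DEV"   # assumed swap partition
gpart destroy "$DEV"       # drop the GPT itself ("gpart destroy -F" forces it)
```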
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I'm not familiar enough with gpart to say whether those commands would destroy the partition table, but if you're doing the replacement from the command line I don't think that would be necessary anyway. If I understand the command syntax correctly, you should be able to just do 'zpool replace tank ada1' and be on your way. If that doesn't work, 'zpool replace tank 3179613386919869858 ada1' should do it.
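Spelled out as a sketch (pool name, device name, and guid as they appear in the zpool status output earlier in this thread):

```shell
# Sketch of the two forms suggested above; try the simple one first.
POOL=tank
zpool replace "$POOL" ada1      # lets ZFS match the new disk to the offlined member
# If that errors, address the old member by its guid explicitly:
# zpool replace tank 3179613386919869858 ada1
zpool status "$POOL"            # then watch the resilver progress
```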
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's great, thank you :) I think I was vaguely aware, but as my data isn't that important in the long run, I probably didn't really care at the time (plus I was young and reckless...)

I'll give it a go via the CLI without swap, but otherwise I'll have to either buy a bigger drive or trash the pool and recreate it.
I would trash the pool and recreate it.
 

oOFishbowlOo

Cadet
Joined
Apr 22, 2015
Messages
5
I would trash the pool and recreate it.

I plan to later this year, but I'd prefer that to tie in with upgrading to a RAIDZ3 pool, which I don't currently have the capacity for. If I can get away with lightly "hacking" the replacement drive into the pool and get out of a degraded state for a couple of months, albeit in a slightly flaky setup, then I'd prefer that to days or weeks of moving data around to recreate the pool as-is, only to do the same again later in the year when I move to a more substantial setup.

Worst case, I'll trash the pool trying to add this drive via the CLI anyway, in which case I'll have no choice but to recreate it ;)
 

oOFishbowlOo

Cadet
Joined
Apr 22, 2015
Messages
5
I'm not familiar enough with gpart to say whether those commands would destroy the partition table, but if you're doing the replacement from the command line I don't think that would be necessary anyway. If I understand the command syntax correctly, you should be able to just do 'zpool replace tank ada1' and be on your way. If that doesn't work, 'zpool replace tank 3179613386919869858 ada1' should do it.

The command worked perfectly, thank you. The resilver is running now :)
 