New disk automatically included in volume?!


audix (Dabbler, joined Jun 11, 2011, 36 messages)

FreeNAS 8.0.1-BETA2-amd64
I am new to FreeNAS. :)

A newly installed disk somehow got automatically added to an existing volume. How can this happen, and how can I remove it? (I want to create a new volume just for this disk.)

I had 6 disks in a raidz2. One went bad. (http://forums.freenas.org/showthrea...ear-after-scrub.-Need-help-understanding-logs.)
Clicked "Replace" in the GUI for the disk (not sure if this was correct :confused:).
Removed the disk.

Added another disk. Created a volume on it and exported it over NFS. Worked fine.

Added another disk. When I tried to create a new volume, there were no disks/devices to choose from. Choosing "View Disks" for my original volume showed that the new disk had been included in that volume.

(The two added disks are both smaller than those in the original volume.)

zpool status for the pool shows the disk as UNAVAIL and says at the end "was /dev/gptid/<the broken disk's uid>".

I have tried to remove it with "zpool detach <id>" and "zpool remove <id>", without success.
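
Reading the zpool man page, this part at least seems to be expected: detach only applies to mirror vdevs, and remove only takes hot spares, cache and log devices, so on a raidz2 both commands fail and the only supported way to swap out a dead member is zpool replace. A minimal sketch, with "tank", the gptid and ada6 as placeholders:

  zpool status tank                      # note the UNAVAIL member's gptid
  zpool detach tank gptid/<uid>          # fails: detach works only on mirror/replacing vdevs
  zpool remove tank gptid/<uid>          # fails: remove works only on spares, cache and log devices
  zpool replace tank gptid/<uid> ada6    # the supported path: resilver onto a new disk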

The device names of some of the disks have changed. I thought this was not supposed to happen in 8.0.1?

How can this happen? I am sure I did something wrong... :rolleyes:



BTW, where can I find the correct procedure for replacing a broken disk? I have read the documentation and searched the forums.
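
In case it helps someone later, the bare ZFS procedure appears to boil down to the steps below (a sketch only; "tank", the gptid and ada6 are placeholders, and the GUI's Replace button is presumably meant to wrap these same steps):

  zpool offline tank gptid/<broken-uid>        # take the failing member offline, if it is not already UNAVAIL
  # physically swap the disk, then point ZFS at the new one:
  zpool replace tank gptid/<broken-uid> ada6
  zpool status tank                            # watch the resilver progress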
 

Tekkie (Patron, joined May 31, 2011, 353 messages)

I am in exactly the same position! I had a drive fail a week or so ago as well, and a second one shortly after that. :(

Yesterday I got my first replacement disk and stuck it into the slot left by the first dead drive.

I go to Storage -> Volume -> Show Disks -> ada5 (Replace) -> pop-up: replace with ada5/***XX (in place) -> OK.
The system does something for a while (at least the UI is locked), but afterwards the drive is still reported as unavailable and the array as degraded. I rebooted the box etc.; nothing changes, the new drive still appears to be unavailable.

After some more digging around, the really weird thing is that FreeNAS sees the new drive as /dev/NONE, which of course is wrong. The new drive is a WD20EARS, just like the old one, but it is simply not recognised by the system; it shows up as UNKNOWN, which also causes gpart to create 512-byte sectors on it rather than 4 KB ones.

At boot time the new drive is recognised properly:
Jun 26 09:33:21 Shrek kernel: ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
Jun 26 09:33:21 Shrek kernel: ada3: <WDC WD20EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
Jun 26 09:33:21 Shrek kernel: ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
Jun 26 09:33:21 Shrek kernel: ada3: Command Queueing enabled
Jun 26 09:33:21 Shrek kernel: ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
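
A few read-only commands can confirm what the OS and gpart actually make of the drive (ada3 as in the log above; purely a diagnostic sketch):

  camcontrol devlist        # does the WD20EARS show up with its real model string?
  diskinfo -v /dev/ada3     # reported sector size (the WD20EARS advertises 512-byte sectors even though it is 4K internally)
  gpart show ada3           # how the disk actually got partitioned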
 

neubert (Dabbler, joined Jun 24, 2011, 26 messages)

Your observation is similar to mine (see the thread "Stresstesting RAIDZ failes (FreeNAS 8.0.1-BETA2-amd64)"). Being unable to replace a broken drive seems like a severe problem, though I am unsure whether, in my case, the problem is in FreeNAS or in front of the keyboard. Should we file a defect in SourceForge's bug tracker?

Boris
 

audix (Dabbler, joined Jun 11, 2011, 36 messages)

As a test I removed the last added disk and re-inserted the old "broken" disk. Now it all looks OK again (the bad disk seems to work fine just after a reboot, but starts to show errors after a while, or after a scrub). I will add the last disk again and see what happens.

It seems that you must not add a new disk while the pool is degraded and a disk has been removed. Hmm...

The "broken" disk got a new uuid. Is there anywhere I can read up on how freenas/freebsd works with devices and disks in detail? I am googling and reading a bit more each day but so far have not found the really good stuff... :)
 

Tekkie (Patron, joined May 31, 2011, 353 messages)

Do not replace a drive when the pool is degraded?

It seems that you must not add a new disk while the pool is degraded and a disk has been removed. Hmm...
The above sounds like the antithesis of a RAID setup. :D Replacing a drive when the pool is OK is not something I plan on doing; replacing a drive when the pool is degraded, however, is exactly what I am most likely to do. ;)

Anyway, I do concur with your observation: replacing drives in FreeNAS appears to be broken, to say the least. :confused:
 

audix (Dabbler, joined Jun 11, 2011, 36 messages)

Tekkie, just to be clear: what I meant was adding a new disk that is not going to be part of the RAID set. Adding a disk in place of the broken one must of course be possible. :)

It worked as it should when I added one disk; the other somehow got mistaken for a replacement...

Anyway, adding disks should of course work at any time. Maybe I did something wrong, maybe it is a bug...
Hmm, what exactly happens when you press "Replace" in the GUI?
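
I have not read the middleware code, so the following is only a guess at what the button runs under the hood (all names are placeholders, and stock installs apparently also add a swap partition first):

  gpart create -s gpt ada5                              # fresh GPT on the new disk
  gpart add -t freebsd-zfs ada5                         # one big ZFS partition
  zpool replace tank gptid/<old-uid> gptid/<new-uid>    # resilver onto the new partition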
 

Tekkie (Patron, joined May 31, 2011, 353 messages)

When I press Replace, the system hums away for a few seconds and then comes back as if nothing happened. In the log I can see that the new drive is identified as /dev/NONE, which breaks the whole operation.
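
If someone wants to catch it in the act, two places should show what the GUI actually did (log path as on a stock install; "tank" is a placeholder pool name):

  tail -f /var/log/messages    # watch in one session while clicking Replace in another
  zpool history tank           # ZFS records every zpool command ever run against the pool

If Replace never even issues a zpool command, that would point squarely at the middleware.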
 