Significant issues considering RC2 status

avmar

Cadet
Joined
Sep 21, 2011
Messages
3
Greetings,
I am posting what should be an easy-to-replicate test showing that the current RC2 status of the software still has some serious shortcomings.


Test environment:

VMware ESXi 5 virtual machine
FreeNAS-8r8022-amd64 (latest nightly build)

[Create the ZFS RAID volume]
Create a 4-disk RAID-Z2 volume: (disks da1, da2, da3, da4)
Create 2 spare disks for the volume: (disks da5, da6)

[probably not necessary for the test]
Create a ZFS Volume (zvol) for future iSCSI device extent use
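
For reference, a roughly equivalent setup from the shell would look something like the following (a sketch only; the pool name "tank", the zvol name, and its size are made up, and the FreeNAS GUI actually builds the pool on GPT partitions such as da1p2 rather than on raw disks):

zpool create tank raidz2 da1 da2 da3 da4 spare da5 da6   # RAID-Z2 pool with two hot spares
zfs create -V 10G tank/iscsivol                          # zvol for a future iSCSI device extent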

Using the Firefox browser:
Examine the volume, view the disks, check the zpool status, and observe the healthy green Alert button.

All is well.
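
At this point, running "zpool status" from the shell would report something along these lines (illustrative output only; the pool name and partition names are assumptions):

  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            da1p2   ONLINE       0     0     0
            da2p2   ONLINE       0     0     0
            da3p2   ONLINE       0     0     0
            da4p2   ONLINE       0     0     0
        spares
          da5p2     AVAIL
          da6p2     AVAIL

errors: No known data errors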


Now, from the ESXi console:
Using the Edit Settings menu for the FreeNAS VM, remove two of the disks that are in use by the volume (since it is RAID-Z2, the pool will survive intact).

Observe that the FreeNAS console immediately reports the two deleted drives as lost (e.g. "da1: ... lost device", "da2: ... lost device").

... so far so good, as the raid continues to stay intact and functional.
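
From the shell, "zpool status" would now show the pool as degraded, something like this excerpt (again illustrative; depending on how the removal is detected, the device state may read REMOVED or UNAVAIL):

 state: DEGRADED
        ...
            da1p2   REMOVED      0     0     0
            da2p2   REMOVED      0     0     0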



Now, using the Firefox browser again (log out and log back in, just to be sure nothing needs refreshing):
Observe that the Alert button is still green and convinced that all is well. (It will never change until FreeNAS is rebooted, which does not seem a reasonable requirement to me. I would expect the raid to continue to run and be available for rebuilding in the background, especially since I have predefined a couple of spares for exactly this purpose; see the sketch below.)
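
What I would expect to happen, automatically or at least on request from the GUI, is for the spares to be swapped in. From the shell that would be something like (pool and device names assumed, as above):

zpool replace tank da1p2 da5p2   # resilver the first spare in place of one missing disk
zpool replace tank da2p2 da6p2   # and the second spare in place of the other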

Note that "View Disks" shows that the two drives that were deleted now show as having no name (ie. the da1, and da2 names are blank).

Also note that selecting "zpool status" gives the all too familiar "Sorry, an error has occured". (It does not seem to me that software of this complexity, post-RC2, should be giving out such useless error messages, especially considering how critical the nature of this software is.)

So finally, since we know the raid is suffering from the two removed drives (even though the Alert status is still green), we should be able, with two spares available, to replace the dead drives.

From "View Disks", select one the missing drives (whose name will be blank), and click on "Replace".... and you will get ... "Sorry, an error has occured"



FreeNAS appears to be an amazing piece of software, and maybe the previous version was more robust, but as a potential new user I consider high-quality data availability, failure detection, and recovery to be essential requirements, ideally with minimal impact on users (i.e. not having to reboot the FreeNAS server, with all the implications of that). :(

I am assuming these shortcomings of browser-based raid management do not exist at the command line (I haven't gotten that far in my testing), but I understand the goal of the project is to make all management possible from the browser.

I hope these kinds of issues can be addressed prior to the official release of the software.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
@avmar, I completely agree and think it is too early for an official release when these types of issues are still present. I hope our friendly developers don't take offense, and that these important details are fixed before an official release.
 

delphij

Guest
The lost-device issue cannot be solved in the 8.0.1 time frame, but we aim to solve it in 8.1. We are working on a project called zfsd on the FreeBSD side, and I'll port a few patches to FreeNAS to get the functionality. The underlying problem is more complicated than it looks.

As a workaround for now, it's advised to do a "graceful" hot-pull instead of a brute-force one, i.e. the user tells the system that the disk is about to be removed (see ticket #886). This is not ideal, but it avoids having to take the whole system offline to pick up new disks, which would be unacceptable. Basically, the steps are:

zpool offline <pool> <device>p2   # take the disk's ZFS partition offline in the pool
swapoff <device>p1                # stop using the disk's swap partition
(pull off the disk)
(insert a new disk)
gpart create -s gpt <device>                          # put a fresh GPT partition table on the new disk
gpart add -b 128 -s 4194304 -t freebsd-swap <device>  # 2 GB swap partition (4194304 x 512-byte sectors)
gpart add -t freebsd-zfs <device>                     # ZFS partition in the remaining space
swapon <device>p1                 # re-enable swap on the new disk
zpool replace <pool> <device>p2   # resilver the pool onto the new partition
zpool detach <pool> <device>p2/old   # drop the old vdev once resilvering completes
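
As a concrete example, assuming a pool named tank and a failed da1 whose replacement appears at the same device name (all names here are for illustration only):

zpool offline tank da1p2
swapoff /dev/da1p1
(swap the physical disk here)
gpart create -s gpt da1
gpart add -b 128 -s 4194304 -t freebsd-swap da1
gpart add -t freebsd-zfs da1
swapon /dev/da1p1
zpool replace tank da1p2
zpool detach tank da1p2/old

Because the new disk shows up under the same device name, "zpool replace" can be given a single device argument; the old vdev then lingers as da1p2/old until it is detached.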
 