Freenas 8 Issues - Space Error

Status
Not open for further replies.

hmark

Cadet
Joined
Aug 31, 2011
Messages
4
I have been using an older version of FreeNAS for years with no downtime or issues. After a recent purchase of new drives I upgraded to 8, and I'm sorry I did. I can't find a way to view individual disk status, the status of the RAID, etc. (I'm using ZFS with three 2 TB drives in RAIDZ). On reboot of FreeNAS I get the "error getting --- space" error on the volume and have to destroy and recreate it (after completely wiping the disks, which takes a long time). I have set up the volumes and wiped the disks 5 times and still have the same problem on reboot.
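
Until the GUI reporting improves, disk and pool status can usually be checked from the console or over SSH. A rough sketch, assuming a pool named tank and a FreeBSD-style device name like ada0 (both names are placeholders for your own setup):

```shell
# overall pool health, including the state of each disk in the RAIDZ vdev
zpool status tank

# SMART health details for one physical disk
# (the device name /dev/ada0 is an assumption; check `camcontrol devlist`)
smartctl -a /dev/ada0
```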

I'm going back to the old version unless someone can point out something I'm doing wrong. I set everything up exactly according to the documentation and videos. :(
At least the old version was dependable.

thanks,
h mark
 

t3h0th3r

Dabbler
Joined
Jul 29, 2011
Messages
14
Hello, I had the same issue when migrating from 0.7.2 to 8.0 and from 8.0 to 8.0.1.
There seems to be a bug when importing pools: for some reason FreeNAS "forgets" that it's supposed to mount ZFS pools under /mnt and tries to mount them on /, which isn't writable. To fix this you need to SSH into the box and set the correct mountpoint manually (I don't remember whether / needs to be writable for this, so I included the commands for it just to make sure).

Code:
# remount the root filesystem read-write so the change can be recorded
mount -uw /
# point the pool's mountpoint at /mnt instead of /
zfs set mountpoint=/mnt yourpool
# export and re-import the pool so the new mountpoint takes effect
zpool export yourpool
zpool import yourpool
# return the root filesystem to read-only
mount -ur /


Now your pool should show up correctly in the WebGUI, and the change should persist across reboots. To make sure, reboot the box and you should be all set.
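
Before rebooting, you can sanity-check the result from the same SSH session. A quick sketch, again assuming your pool is named yourpool:

```shell
# print only the mountpoint property; it should now read /mnt
zfs get -H -o value mountpoint yourpool

# confirm the pool is healthy and actually mounted
zpool status yourpool
zfs mount | grep yourpool
```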

Hope that helped.
 

hmark

Cadet
Joined
Aug 31, 2011
Messages
4
Thanks for the info. After trying Openfiler and MS Home Server 2011 as well as FreeNAS 7, I'm back to 8 for another go at it.

thanks,
hmark
 

hmark

Cadet
Joined
Aug 31, 2011
Messages
4
I rebuilt FreeNAS with the latest release and I'm still having the same problem. I did check the /mnt directory: the mount is there initially, but then disappears after a few hours even without a reboot (a reboot just makes it happen faster).

Running the commands you provided fails on the second one with the error 'cannot open raid620: dataset does not exist' (raid620 being the name of the pool). But like I said, it is there after pool creation. Once the pool is gone, the only way I can recover is to wipe the disks and do a complete reinstall, and then the issue repeats. After the latest rebuild I created the ZFS RAIDZ volume with three 320 GB drives and ran the commands right after pool creation, before any reboot, but the pool still disappears from /mnt after a few hours.

Code:
mount -uw /
zfs set mountpoint=/mnt raid620
zpool export raid620
zpool import raid620
mount -ur /

Another quick note: once the pool disappears, the disks can no longer be seen in the admin console to recreate the pool.
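
For anyone chasing the same symptom, a rough diagnostic sketch from the shell, assuming the pool is named raid620 (device names will differ on your hardware):

```shell
# does the kernel still see the physical disks at all?
camcontrol devlist

# does ZFS still know about the pool?
zpool status raid620
zfs list

# with no pool name, this scans disks and lists pools available for import
zpool import
```

If `camcontrol devlist` still shows the disks but `zpool import` finds nothing, the pool metadata itself is gone rather than just the mountpoint.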

any help is appreciated
tx,
hmark
 