FreeNAS 8.2.0-RELEASE

Status
Not open for further replies.

jpaetzel

Guest
The FreeNAS development team is pleased to announce the immediate
availability of FreeNAS 8.2.0-RELEASE.

FreeNAS 8.2.0-RELEASE is the first release on a new branch of code
that incorporates tighter integration between the ZFS command
line and the FreeNAS GUI. This release also adds the ability to
run arbitrary services in a FreeBSD jail and interact with them
through the FreeNAS GUI. The jail allows a wide range of third-party
software to be run on top of FreeNAS, using the PBI format from
PC-BSD, FreeBSD packages or ports, or official FreeNAS plugins.

Additional features include:

Support for iSCSI target reload.
GUI support for SAS and FC multipath hardware.
Webshell accessible from the FreeNAS web interface.
ZFS scrubs are configurable from the GUI.
A newer web toolkit is used in the GUI, enabling use of mobile browsers.
An autotuning script tunes ZFS for the hardware it's running on.

Getting FreeNAS:
================

For 64 bit capable hardware
===========================

The FreeNAS images are at:
https://sourceforge.net/projects/freenas/files/FreeNAS-8.2.0/RELEASE/x64/
The plugins and plugin jail are available at:
https://sourceforge.net/projects/freenas/files/FreeNAS-8.2.0/RELEASE/x64/plugins

For 32 bit hardware
===================

The FreeNAS images are at:
https://sourceforge.net/projects/freenas/files/FreeNAS-8.2.0/RELEASE/x86/

The plugins and plugin jail are available at:
https://sourceforge.net/projects/freenas/files/FreeNAS-8.2.0/RELEASE/x86/plugins

Documentation:
==============

http://doc.freenas.org has been updated with the finished 8.2.0 documentation.
A PDF/HTML version will be available Tuesday, July 24th.

Release Notes:
==============

http://sourceforge.net/projects/freenas/files/FreeNAS-8.2.0/RELEASE/README/download
 

wirehub

Cadet
Joined
Mar 21, 2012
Messages
7
This version has an issue losing the pool when rebooted. Detaching and importing works OK, but after rebooting the ZFS pool's hard disks are lost again.
 

Durkatlon

Patron
Joined
Aug 19, 2011
Messages
414
This version has an issue losing the pool when rebooted. Detaching and importing works OK, but after rebooting the ZFS pool's hard disks are lost again.
I just had the same issue, upgrading from 8.2.0-RC1 to 8.2.0-RELEASE. When the system was done rebooting, my ZFS pool was gone. I couldn't install RC1 again from the GUI upgrade because there was no place to put it since my pool was gone.

I reinstalled RC1 from the ISO and restored my old config, and all was good again, but of course now I'm not able to go to the RELEASE build. Is the way around this to install RELEASE from the ISO?
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Got the same thing when upgrading from 8.2 RC1 x64 to 8.2 x64: lost the pool. zpool import showed the pool was available, though.
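If zpool import lists the pool as available, it can usually be brought back by name from the webshell. A minimal sketch, assuming a pool named tank (substitute your own pool name):

```shell
# List pools that are visible on disk but not currently imported
zpool import

# Import the pool by name ("tank" is a placeholder)
zpool import tank

# Confirm it is online and healthy
zpool status tank
```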
 

jpaetzel

Guest
Ok, that is an issue that hits going from RELEASE to RELEASE-p1.

Open the webshell and do:

# cd /var/tmp/firmware
# cat firmware.img | sh /root/update && touch /data/need-update && reboot
 

ctowle

Dabbler
Joined
Apr 18, 2012
Messages
14
Ok, that is an issue that hits going from RELEASE to RELEASE-p1.

Open the webshell and do:

# cd /var/tmp/firmware
# cat firmware.img | sh /root/update && touch /data/need-update && reboot

Can I just copy and paste that?
 

jpaetzel

Guest
Anyone upgraded from 8.0.4 x64 to 8.2.0 x64 p1? I did not upgrade 8.0.4 to any patch version, so I wonder if it is safe to move straight from my version to 8.2.0 p1, or whether I should upgrade first to 8.0.4 p3 and then to 8.2.0 p1. Thanks guys.

It's safe to directly upgrade.
 

jpaetzel

Guest
And 8.2.0-RELEASE has a fairly nasty bug that crept in at the last minute....I removed it from sourceforge and uploaded an 8.2.0-RELEASE-p1, but it turns out that has a very small issue as well....the GUI upgrade from 8.2.0-RELEASE to 8.2.0-RELEASE-p1 fails because of a version check.

So, if you haven't upgraded to 8.2.0-RELEASE you're fine, just do a normal upgrade to 8.2.0-RELEASE-p1.

If you have upgraded to 8.2.0-RELEASE then you've probably noticed a very annoying bug: it doesn't import ZFS pools at boot time.

What you'll want to do is detach your ZFS volume from the GUI. There are two checkboxes: one to delete shares, one to mark the disks as new. Make sure both are unchecked.

After the volume is detached, run the auto-importer and reimport your pool. From there go to System -> Settings -> Advanced and do a firmware upgrade to 8.2.0-RELEASE-p1. This will fail due to the version check; open the webshell and type in the following commands:

# cd /var/tmp/firmware
# cat firmware.img | sh /root/update && touch /data/need-update && reboot

When the system comes back up, confirm it is running FreeNAS 8.2.0-RELEASE-p1.
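A quick way to double-check from the webshell, assuming /etc/version is where this build records its version string (it usually is on FreeNAS 8.x):

```shell
# Print the running FreeNAS version string
cat /etc/version

# Verify the ZFS pool was imported at boot
zpool status
```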
 

trevize

Cadet
Joined
Jul 6, 2012
Messages
5
Cool beans. Thanks jpaetzel. Worked great for me. Running 8.2.0-p1 now and ZFS volumes imported fine.
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,041
It's safe to directly upgrade.
Thank you, I upgraded successfully. There are a few things I noticed in the new version.

Issue 1
This issue is related to the RSA private key; it looks like there are wrong permissions on /.rnd:
Code:
Generating a 1024 bit RSA private key
...........................................++++++
.........................++++++
unable to write 'random state'
writing new private key to '/etc/ssl/freenas/CA/private/cakey.key'

Usually, you get "unable to write 'random state'" when /.rnd is not owned by the proper user. Not sure if this should be considered a bug; please let me know.
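If it is a permissions problem, it should be visible on the seed file itself. A sketch of checking and fixing it, assuming the seed lives at /.rnd as the error suggests:

```shell
# See who owns the OpenSSL random seed file
ls -l /.rnd

# If it is not owned by the user running openssl (root here),
# hand it back and tighten the mode
chown root:wheel /.rnd
chmod 600 /.rnd
```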

Issue 2
Not really an issue, but I noticed that the "ataidle: the device does not support advanced power management" message that used to appear is now gone. My drives still show no APM support, even though they actually have it:
Code:
# ataidle ada1
Model:                  WDC WD20EARS-00MVWB0                    
Serial:                 WD-WCAZA6765493
Firmware Rev:           51.0AB51
ATA revision:           ATA-8
LBA 48:                 yes
Geometry:               16383 cyls, 16 heads, 63 spt
Capacity:               1863GB
SMART Supported:        yes
SMART Enabled:          yes
Write Cache Supported:  yes
Write Cache Enabled:    yes
APM Supported:          no
AAM Supported:          no

# diskinfo -v /dev/ada1
/dev/ada1
        512             # sectorsize
        2000398934016   # mediasize in bytes (1.8T)
        3907029168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        3876021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        WD-WCAZA6765493 # Disk ident.

# gpart show ada1
=>        34  3907029101  ada1  GPT  (1.8T)
          34          94        - free -  (47K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  3902834703     2  freebsd-zfs  (1.8T)

# zdb | grep ashift
                ashift=12

I wonder if there is a recent fix for those WD20EARS disks to properly report their 4K sectors. I had to force the sector size to 4096 bytes when I originally created the ZFS RAID-Z2 array.
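For what it's worth, ashift=12 already means the pool is using 2^12 = 4096-byte blocks, so the forced alignment took. The usual FreeBSD-era workaround for drives that mis-report 512-byte sectors was the gnop overlay trick; a sketch for creating a new pool, with device names and the pool name as placeholders (this destroys data on those devices):

```shell
# Create a 4096-byte-sector overlay on one member partition
gnop create -S 4096 /dev/ada1p2

# Build the pool on the overlay; ZFS picks up ashift=12 from it
zpool create tank raidz2 /dev/ada1p2.nop /dev/ada2p2 /dev/ada3p2 /dev/ada4p2

# Drop the overlay and reimport; the ashift is baked in permanently
zpool export tank
gnop destroy /dev/ada1p2.nop
zpool import tank

# Verify
zdb | grep ashift
```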

Issue 3
This issue is related to the network interfaces. I have dual NICs, each connected to different networks (192.168.1.1 and 192.168.2.1). Before, I used to get two URLs for the web interface:
https://192.168.1.6/ and
https://192.168.2.6/

Now I can see only one link (https://192.168.1.6/) and I can no longer access https://192.168.2.6/. Is this the normal behavior, or is it an issue with the Nginx config file? At boot time, the console displays the proper info for each network:
Code:
# ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC>
        ether 00:25:90:38:2e:1e
        inet 192.168.1.6 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=219b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,TSO4,WOL_MAGIC>
        ether 00:25:90:38:2e:1f
        inet 192.168.2.6 netmask 0xffffff00 broadcast 192.168.2.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=3<RXCSUM,TXCSUM>
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3 
        inet6 ::1 prefixlen 128 
        inet 127.0.0.1 netmask 0xff000000 
        nd6 options=3<PERFORMNUD,ACCEPT_RTADV>

For kicks, I deleted the em0 interface through the GUI and left only em1. When I rebooted, only https://192.168.2.6/ was available. Then I deleted the em1 interface as well, and after reboot I was presented with the https://192.168.1.6/ link.
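One thing worth checking is whether nginx is actually bound to both addresses or only one. A sketch from the webshell; the config path is a guess for FreeNAS 8.x, so locate it first if it is not there:

```shell
# Show the addresses nginx is listening on
sockstat -4 -l | grep nginx

# Inspect the listen directives in the generated config
# (path assumed; try `find / -name nginx.conf` if it differs)
grep -n listen /usr/local/etc/nginx/nginx.conf
```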

My current network info, in GUI:

freenas-network.png

freenas-interfaces.png


Issue 4
This issue is related to weird timeouts on certain slots:
Code:
Jul 20 23:41:04 pluto kernel: ahcich0: Timeout on slot 17 port 0
Jul 20 23:41:04 pluto kernel: ahcich0: is 00000000 cs 00020000 ss 00020000 rs 00020000 tfd 40 serr 00000000
Jul 21 02:21:13 pluto kernel: ahcich0: Timeout on slot 6 port 0
Jul 21 02:21:13 pluto kernel: ahcich0: is 00000000 cs 000003c0 ss 000003c0 rs 000003c0 tfd 40 serr 00000000

I did not see these timeouts before the upgrade.

Issue 5
This issue is related to a manual scrub. After getting the slot timeouts above, I decided to perform a manual scrub. When I clicked the Go button, it displayed the Please Wait... message for a while and then presented me with this screen:

freenas-scrub.png


Rebooting the system would print this message:
Code:
30 second watchdog timeout expired. Shutdown terminated.
Jul 21 02:54:33 pluto init: /bin/sh on /etc/rc.shutdown terminated abnormally, going to single user mode
Jul 21 02:54:53 init: some processes would not die; ps axl advised
Jul 21 02:54:33 pluto syslogd exiting on signal 15

I had to forcefully shut down the box. Upon reboot, I was presented with a bunch of ZFS-related disk errors, all marked as SALVAGED (thank God, as I was sweating bullets when I saw them). The default scheduled scrub is present, with stock settings. I started the manual scrub again; this time the GUI command executed properly and the scrub is actually being performed as we speak.
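If the GUI scrub button misbehaves again, the same operation can be driven from the webshell; a sketch, with tank as a placeholder pool name:

```shell
# Start a scrub by hand
zpool scrub tank

# Check progress; rerun until it reports the scrub completed
zpool status tank

# If needed, a running scrub can be cancelled
zpool scrub -s tank
```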

Thank you for the great work, looking forward to some advice for the above listed issues. :)
 

iposner

Explorer
Joined
Jul 16, 2011
Messages
55
Upgrade from 8.0.4

How long should it take? It's just that I've been sitting here looking at the GUI upgrade screen for 15 minutes now...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It should take about 1-2 minutes.
 