Upgrade from 9.2.1.7 to 9.3 stable fails

Status
Not open for further replies.

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Nope, that "zfs list" doesn't look correct to me. It shows you have 5.17TB of used storage however you look at the list of sub-directories and it doesn't add up, like you are missing a huge folder somewhere. If you have access to any data on this system, I'd back it up, although just over 5TB of data is a lot to backup. Yes, see what you get when you boot from FreeNAS 9.3, hopefully it will show where that 5TB of data reside. If you get the same results, I think you will be destroying your pool however someone might chime in with a solution but I'm a bit skeptical.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Crikey. I don't think those are sub-dirs, though. I had assumed they were other filesystems mounted in from somewhere.

I've never used the "zfs list" command before, so I'm not sure what I'm seeing. When I set this up, I put all the space into a single pool "Vol1" - only recently have I come to appreciate this as short-sighted. All the data (5.17Tb) sits in the Vol1 pool, I'm not sure what the others are, but I assume they're system created.

I do have a full backup, so I can destroy and re-create. This would be preferable if I've made some terrible design decision :)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
In Vol1 you should have a folder/sub-directory in which you store all your data. For instance, my pool is called "farm" whereas yours is called "Vol1". I have a sub-directory called "backups" which I created via the GUI, and in zfs list it will show:
Code:
[root@freenas] ~# zfs list
NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
farm                                                   4.83T  2.19T  3.01G  /mnt/farm
farm/.system                                           42.1M  2.19T  3.29M  legacy
farm/.system/configs-5321dce6b8f542d387c8dd7e6bdde0ed   288K  2.19T   288K  legacy
farm/.system/cores                                     1.44M  2.19T  1.44M  legacy
farm/.system/rrd-5321dce6b8f542d387c8dd7e6bdde0ed      34.7M  2.19T  34.7M  legacy
farm/.system/samba4                                    1.05M  2.19T  1.05M  legacy
farm/.system/syslog-5321dce6b8f542d387c8dd7e6bdde0ed   1.34M  2.19T  1.34M  legacy
farm/Removable_Disk                                     305G  2.19T   305G  /mnt/farm/Removable_Disk
farm/backups                                           4.00T  2.19T  4.00T  /mnt/farm/backups
farm/data                                               137G  2.19T   137G  /mnt/farm/data
farm/ftp                                               10.7G  2.19T  10.7G  /mnt/farm/ftp
farm/iTunes                                            36.9G  2.19T  36.9G  /mnt/farm/iTunes
farm/jails                                             6.20G  2.19T   583K  /mnt/farm/jails
farm/jails/.warden-template-VirtualBox-4.3.12           896M  2.19T   895M  /mnt/farm/jails/.warden-template-VirtualBox-4.3.12
farm/jails/.warden-template-pluginjail--x64             597M  2.19T   597M  /mnt/farm/jails/.warden-template-pluginjail--x64
farm/jails/.warden-template-standard--x64              3.10G  2.19T  3.01G  /mnt/farm/jails/.warden-template-standard--x64
farm/jails/plexmediaserver_1                           1.63G  2.19T  2.21G  /mnt/farm/jails/plexmediaserver_1
farm/madyson                                           53.5G  2.19T  53.5G  /mnt/farm/madyson
farm/movies                                             212G  2.19T   212G  /mnt/farm/movies
farm/music                                             42.4G  2.19T  42.4G  /mnt/farm/music
farm/photos                                            40.3G  2.19T  40.3G  /mnt/farm/photos
freenas-boot                                           1.29G  5.92G    31K  none
freenas-boot/ROOT                                      1.06G  5.92G    31K  none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506232120       100K  5.92G  1022M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506292332      1.06G  5.92G  1022M  /
freenas-boot/grub                                       223M  5.92G  11.2M  legacy

Note how large the backups folder is, as that is where most of my data is stored. Lines 4 through 9 are system created, and lines 24 through 28 are the new boot device format for 9.3, since the boot device itself is now ZFS.

What I don't see in yours are all these self-created folders, so maybe you are storing the data in the root folder of "Vol1". I would recommend you create another folder and move your data into it so it's a little bit more organized, but I don't think it's a requirement, just personal preference.
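If you wanted to do that from the command line rather than the GUI, a rough sketch would look like the lines below (the dataset name "data" and the folder name are just examples; Storage -> Create Dataset in the GUI is the normal way to do it in FreeNAS). Keep in mind that moving files between datasets is a copy-then-delete, so it needs enough free space while the move runs.
Code:
zfs create Vol1/data                          # new dataset, will mount at /mnt/Vol1/data
mv /mnt/Vol1/somefolder /mnt/Vol1/data/       # move existing files into it (copies across datasets)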

Since you do have a backup of your data, it might be best to just save time and start from scratch using 9.3.1. Make sure that in 9.2.1.9 you destroy your pool before proceeding if you go down this path.

Good luck.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
From the little bit of research I've done, "zfs list" shows the ZFS datasets, not the directory structure within them.
This post helped me understand the most.

When I first set up this server, I didn't see the point in having multiple datasets, so I just created one to encompass all of the drive capacity on the server. This is a home server, so not much security is needed. I have a single snapshot task that runs every hour, with each snapshot destroyed six hours later. I only have one NAS, so I can't replicate anywhere else directly (although I have backed up with robocopy and/or rsync).

I do have an extensive dir structure within Vol1, but it is not shown by "zfs list".

Or am I getting confused by the nomenclature? Directories vs datasets....

Is there some error in my config which is preventing the upgrade procedure from completing?

Thanks again!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I think you are understanding it well. I have a dataset established to separate my own data from the FreeNAS files, call it personal preference. There is nothing wrong with what you are doing as far as I know.

So, the output you have is for FreeNAS 9.2.1.9, we have yet to see FreeNAS 9.3.1, right? If 9.3.1 looks the same then I think all you need to do is configure your basic settings and shares and you are done.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
So, the output you have is for FreeNAS 9.2.1.9, we have yet to see FreeNAS 9.3.1, right? If 9.3.1 looks the same then I think all you need to do is configure your basic settings and shares and you are done.

Thanks Joe, I haven't forgotten or lost interest, but work got a bit manic today and I didn't get a chance to look at it. Hopefully I will tomorrow!
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Oops, real life got in the way a bit there - sorry... :)

So, the output you have is for FreeNAS 9.2.1.9, we have yet to see FreeNAS 9.3.1, right? If 9.3.1 looks the same then I think all you need to do is configure your basic settings and shares and you are done.

So I booted back into 9.3.1 and did "zfs list":
Code:
[root@bitbucket] ~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                        521M  6.70G    31K  none
freenas-boot/ROOT                   514M  6.70G    25K  none
freenas-boot/ROOT/Initial-Install     1K  6.70G   510M  legacy
freenas-boot/ROOT/default           514M  6.70G   511M  legacy
freenas-boot/grub                  6.79M  6.70G  6.79M  legacy


Not too encouraging - it hasn't imported the volume.
After I import the volume from the web GUI I get:
Code:
[root@bitbucket] /mnt/Vol1# zfs list
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
Vol1                                                  5.15T      0  5.15T  /mnt/Vol1
Vol1/.system                                          53.4M      0   209K  legacy
Vol1/.system/cores                                    24.0M      0  24.0M  legacy
Vol1/.system/rrd-277b94c280184a779514e6fd3f0cca44      209K      0   209K  legacy
Vol1/.system/rrd-587e094a2524450e802b560a401fce7b      209K      0   209K  legacy
Vol1/.system/samba4                                   8.41M      0  8.41M  legacy
Vol1/.system/syslog-277b94c280184a779514e6fd3f0cca44  2.34M      0  2.34M  legacy
Vol1/.system/syslog-587e094a2524450e802b560a401fce7b  18.0M      0  18.0M  legacy
freenas-boot                                           521M  6.70G    31K  none
freenas-boot/ROOT                                      514M  6.70G    25K  none
freenas-boot/ROOT/Initial-Install                        1K  6.70G   510M  legacy
freenas-boot/ROOT/default                              514M  6.70G   511M  legacy
freenas-boot/grub                                     6.79M  6.70G  6.79M  legacy


So the volume has imported and mounted. I can browse data from the mounted volume through an SSH command line.
But I get the following error in the web GUI:
Code:
Request Method:    POST
Request URL:    http://bitbucket.grey-area/storage/auto-import/
Software Version:    FreeNAS-9.3-STABLE-201509022158
Exception Type:    IOError
Exception Value:  
[Errno 28] No space left on device: '/var/db/system/nfs-stablerestart'
Exception Location:    /usr/local/lib/python2.7/shutil.py in copyfile, line 83
Server time:    Fri, 9 Oct 2015 14:30:28 +1000


I think if I reboot now with the 9.3.1 boot media in, I'll need to re-import the volume again.
What I think is happening is that config info is failing to write somewhere, hence that line about "No space left on device".

Any ideas? Recreating the volume and restoring 5Tb is gonna take days....

Thanks again!
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I'm certain it has to do with the fact that ZFS list is showing you the pool is in a "legacy" format. Do some research on this before you proceed.

Go ahead and give me the output of "zpool upgrade -v" while running both FreeNAS versions. I'm more interested in 9.3.1, but the 9.2.x output may help, so why not post it. I'm thinking your pool may need to be upgraded, but there is risk in that, so read below.

So this is new ground for me, I've never seen the "legacy" message before so I will warn you to ensure you at least have a backup of the data you cannot live without before proceeding...

1) Ensure you back up all the data that you need to retain.
2) If you are brave, enter "zpool upgrade -a" and this will upgrade your pool to the current version, but you WILL NOT BE ABLE TO USE 9.2.x after this upgrade (see the sketch below for a way to check the pool's status first). Cross your fingers.
3) If this doesn't work, destroy the pool and recreate it, then restore your data.
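Before doing step 2, it's worth a quick sanity check of what the pool actually reports; a rough sketch (substitute your pool name for Vol1):
Code:
zpool status Vol1       # the status text says whether the pool is using an older on-disk format
zpool upgrade           # with no arguments, lists pools that don't have all supported features enabled
zpool upgrade -v        # lists the features/versions this ZFS release supports
zpool upgrade Vol1      # upgrade just this pool - same warning applies, there is no going back to 9.2.x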
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Hmmm, when I upgraded to 9.2, I was left with a message in zpool status indicating that some ZFS features would not be supported until the pool was upgraded. I did that pool upgrade after the upgrade from 9.2.1.7 to 9.2.1.9. Since then, my zpool status (under 9.2.1.9) shows no upgrades are needed.

And the output of the zfs list command for both our pools seems the same. The pool name shows a mountpoint of /mnt/[PoolName] for both of us, but all the stuff under .system has a mountpoint of "legacy". You just have a bunch of additional datasets under your pool, with mountpoints within the pool.

I have to say, I'm reluctant to upgrade the pool - there doesn't appear to be an issue importing the pool into 9.3.1, only with getting that import written to the config so it happens automatically at boot. As you say, once the pool is upgraded, there'll be no rollback to 9.2.1.9.

I'll check my backups and see if I can find anything else that looks relevant.

Cheers for the assistance!
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Just noticed another difference. On the zfs list performed under 9.3.1, available shows as 0, but on the zfs list under 9.2.1.9 it shows a few gigs available.
I am running pretty close to the limit - does 9.3 enforce some sort of free capacity limit?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Unless I'm missing something obvious, the roots of your problem seem pretty clear:
  1. Your pool is 100% full.
  2. ZFS is a copy-on-write filesystem, i.e. even deleting something requires free space.
  3. The .system dataset, which is required for FreeNAS to function, is on your pool, and therefore any attempt by FreeNAS to modify it will fail.
Unless you can free up a significant chunk of space on the pool (think along the lines of echo > [somelargefile]), you will be unable to make progress.
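To illustrate that last suggestion: truncating a big file in place releases its blocks without first needing the extra allocation that a normal delete can require on a completely full copy-on-write pool. A rough sketch, with a placeholder file path, and assuming no snapshot still references the file (if one does, the blocks just move to the snapshot and nothing is freed):
Code:
zpool list Vol1                              # how full the pool is at the pool level
zfs list -o space Vol1                       # where the space went: data, snapshots, children
echo > /mnt/Vol1/path/to/somelargefile       # truncate the file in place to get a little room back
rm /mnt/Vol1/path/to/somelargefile           # then remove the now-empty file normally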
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I think 9.3 is just more conservative about how much it allows for metadata etc, and reports a smaller free space. This may well be the cause of your problem. How about backing up and removing a few hundred gigabytes from the pool and retrying the update?

Edit: if you can still import and manipulate the pool in 9.2, then this should not need any special measures, just deletion as you normally do it. The only problem is if, because of snapshots, deleting a file does not actually free up much space.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Thanks to Robert and Roger for the replies. If I boot in 9.2.1.9, I (currently) see about 30Gb of free capacity, so I have room to delete stuff. Then I'll need to clear down the snapshots and try the upgrade again. I'll aim to have about 500Gb (~10%) of free capacity.

This could well be the problem. I'll try that next.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
30 GB out of 5 TB is less than 1% free space. This is very very bad even with earlier versions of FreeNAS. Bringing the pool to 10% free should allow it to import just fine, but you really want the pool to be no more than 80% full. It's time for either some serious cleaning, or some larger hard drives (or more drives, but I don't think you have room for those in an N40L). Note that RAIDZ1 with larger drives doesn't give you great data security.

As @rogerh noted, if you have snapshots running, deleting files won't actually free up any space on your pool--you'd need to delete the associated snapshots too.
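A rough sketch of checking for and clearing snapshots from the shell (the snapshot name below is just an example; in FreeNAS they're normally managed under Storage -> Snapshots in the GUI):
Code:
zfs list -t snapshot -o name,used -s used    # list snapshots, biggest space consumers last
zfs destroy Vol1@auto-20151009.1400-6h       # destroy one snapshot (example name)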
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
30 GB out of 5 TB is less than 1% free space.
ARG! When I looked at the remaining space I thought it said 18TB. Yup, I overlooked the remaining space in the 9.2.x output. I definitely saw it in the 9.3 output as being zero.

Houston, I think we found the problem.

Thanks guys for stepping in.
 

andyl

Explorer
Joined
Apr 20, 2012
Messages
76
Dang. I must admit, I saw the "keep 20% free" rule ages ago and thought "I can't afford that". I bought a 5Tb NAS with the full intention to use all 5Tb. But that's just my irrelevant whinging :)

Now it's become untenable. I'll need to delete a Tb or so (including snapshots) to drop it under 80% full and then attempt the upgrade again.

Will I be able to limit the used capacity to 4Tb (80%) in the future by using datasets?
If so, will I need to remove the existing data and copy it back? I think I probably will.

Thanks to all (esp Joe) for all the assistance!

Out of interest, is there any doco explaining why the 80% limit is necessary? I'm not disputing it - I'd just like to know why...
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Will I be able to limit the used capacity to 4Tb (80%) in the future by using datasets?
If so, will I need to remove the existing data and copy it back? I think I probably will.

No need to destroy everything, just delete some data and/or add more drives (be careful, you can't add a single drive to the pool and keep redundancy; read Cyberjock's guide (link is in my signature) if you want more info) and/or replace the drives with bigger ones.

Out of interest, is there any doco explaining why the 80% limit is necessary? I'm not disputing it - I'd just like to know why...

It's because at 90% ZFS switches from speed optimization to space optimization, and as an aside, if you hit 100% you'll be in big trouble.

80% is a good value that leaves you time to add more drives before hitting 90% ;)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Will I be able to limit the used capacity to 4Tb (80%) in the future by using datasets?
On 9.3 and later you'll be warned when you reach 80%. Some people use space reservation to set aside a chunk in an empty dataset that they can then delete to free up space in an emergency, but it really isn't something that should ever be necessary if you're paying attention. It also won't help if you reach 100% full, for the reasons outlined earlier (copy on write requires free space even for deletion).
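For what it's worth, a minimal sketch of that reservation idea, assuming a pool named Vol1 (the dataset name and size are just examples):
Code:
zfs create Vol1/spacer                     # empty dataset whose only job is to hold back space
zfs set refreservation=100G Vol1/spacer    # that 100G is now unavailable to everything else in the pool
zfs set refreservation=none Vol1/spacer    # emergency: give the space back (or just destroy the dataset)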
 