11.3-RC1 Now Available!

appliance

Explorer
Joined
Nov 6, 2019
Messages
96
Yes, and it looks like they're accumulating and carrying over into new versions. Ironically, I am now in a state where every GELI unlock triggers a panic (@HolyK):
Solaris(panic): blkptr at 0xfffffe0024f2f780 has invalid TYPE 101
What I did before (in order to give up on replication) was delete the replication target datasets, which caused the previous panics.
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
653
Yes, and it looks like they're accumulating and carrying over into new versions. Ironically, I am now in a state where every GELI unlock triggers a panic (@HolyK):
Solaris(panic): blkptr at 0xfffffe0024f2f780 has invalid TYPE 101
What I did before (in order to give up on replication) was delete the replication target datasets, which caused the previous panics.
Not good. Could you please update the existing ticket (if it is the same issue) with the recent situation from your end? I quickly checked the four tickets mentioned in the other thread, but I did not find any info related to GELI unlock (I might have missed it?). Or please raise a new one for 11.3-RC1.
Oh, and thanks for the heads up!
 

appliance

Explorer
Joined
Nov 6, 2019
Messages
96
Not good. Could you please update the existing ticket (if it is the same issue) with the recent situation from your end? I quickly checked the four tickets mentioned in the other thread, but I did not find any info related to GELI unlock (I might have missed it?). Or please raise a new one for 11.3-RC1.
Oh, and thanks for the heads up!
I tried to repair it instead of copying the data off a read-only import (zpool import -o readonly=on POOL), but gave up after 10 hours. The only panic-free and fast repair is supposed to be the TXG game of luck, sysctl vfs.zfs.recover=1; sysctl vfs.zfs.spa_load_verify_data=0; zpool import -T $(zdb -e POOL | grep 'best uberblock' | awk '{print $11}') POOL, rather than zpool import -FX POOL and similar, but even the rollback to an old TXG panics. 11.3-master and TrueNAS 12 also panic. On top of that, the TCG Opal drive now fails to load every time, over the dozens of reboots I've been through. Every last zdb record and every msgbuf error was replication-related. I tend to think people just don't use replication, which is why this topic isn't so popular, and I'm disappointed about that; it makes the BTRFS jokes pretty obsolete.
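For anyone following along, the recovery sequence described above can be sketched as follows. POOL is a placeholder, and the two sysctls disable import safety checks, so treat this strictly as a last resort against a pool that is already backed up or written off:

```shell
# First preference: import read-only and copy the data off.
zpool import -o readonly=on POOL

# Last resort: relax the ZFS import safety checks...
sysctl vfs.zfs.recover=1
sysctl vfs.zfs.spa_load_verify_data=0

# ...find the TXG of the best uberblock the on-disk labels still agree on...
TXG=$(zdb -e POOL | grep 'best uberblock' | awk '{print $11}')

# ...and try importing the pool rewound to that transaction group.
zpool import -T "$TXG" POOL
```

This is the same TXG-rewind idea as zpool import -F/-X, just with an explicitly chosen transaction group instead of an automatic search.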
 

appliance

Explorer
Joined
Nov 6, 2019
Messages
96
I copied the panicking HDD pool and created a new one, changing RAIDZ to RAID10. Out of curiosity I still tried to run the replication task, and this time it panicked after the job started, as usual; after a reboot it panicked straight away upon GELI unlock.

So I deleted the HDD pool, created a new one, and entered an interactive password like always. Quickly realizing I wasn't sure about the password (the GUI does NOT ask you to type it twice, which is totally wrong), I cancelled the action in under a second, checked the password with the eye icon, and confirmed the action again. However, the previous action continued to run in the background in parallel, and I got two encryption keys to download.

So I deleted the pool, thinking it might be in a weird state, just to find that a completely unrelated pool had been deleted in the process ('UNUSED', '1 devices failed to decrypt'). I swear I didn't type that pool's name for deletion (that UI protection is good). Meanwhile, geli status says the SSD is encrypted (it was never encrypted). Oh, and it's a swap device (2 GB). So during some enumeration, the pools were shifted. You can't make this up.

In the end, the HDD pool is panicking, the SSD pool doesn't load at boot, and the innocent NVMe pool got wiped. So I'm left with nothing. Oh, actually, a CD-ROM. Read-only, so nothing can happen to it.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,543
Appliance, I looked through the Jira tickets in which you posted your panic. Alexander stated:
MCA errors, part of which you are showing are usually caused by hardware failures. Look what else it reported there and whether you have anything in BMC/BIOS logs.
At this point you've posted your issue on three separate previously-existing Jira tickets in addition to posting it here. Please feel free to create a new Jira ticket for your issue and work with the developer there. Repeatedly posting it in multiple tickets and unrelated forum posts just creates confusion.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Except none of those issues are related to 11.3-RC1. All of the systems in those tickets are 11.1 or 11.2.
Not really; there is something smelly going on with replication that introduces corruption in the target pool, and it's still there in 11.3. I upgraded not only the target (backup) system but both sides, recreated the pool, and resynced. Panics are still going strong... (My latest ticket covering this is NAS-103126.)
PS: my pools/disks are not encrypted.
 

Scareh

Contributor
Joined
Jul 31, 2012
Messages
182
So I've noticed in the release notes that the warden jail system is gone completely now.
How about:


I understand it's not the core business, but the solution is handed over literally on a silver platter (for sab) and still not implemented? Not to mention that not all warden plugins are available as iocage plugins.
I always understood that updates add functionality (and bugs :p) but don't remove it.
And sure, there'll be comments about creating your own jail and building it from ports, but why should I do that when it is offered as a plugin in warden?
 

ThreeDee

Guru
Joined
Jun 13, 2013
Messages
698
But why should I do that when it is offered as a plugin in warden?
...because they are offered as plugins in iocage?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,456
I always understood that updates add functionality (and bugs :p) but don't remove it.
  • VirtualBox
  • Old GUI
  • UFS support
Need I go on?
But why should I do that when it is offered as a plugin in warden?
Why should iX maintain two separate jail managers?
 

systemofapwne

Dabbler
Joined
Oct 6, 2019
Messages
16
I have been "forced" to switch to the 11.3 train because 11.2 does not incorporate this VirtIO fix. Without that fix, my VMs lock up due to blocked I/O.

I have to say, for an RC it is already looking great. But there are some caveats, of course.
For example, the completely rewritten replication engine needs more work. Upgrading from 11.2 failed when it tried to import my replication tasks, and the system would no longer initialize the middleware. So I installed 11.3 from scratch and tried to import my backup config, just to find out that the import failed too. So I manually reconfigured everything.

In terms of replication, local replication does not seem to work for me: freshly created snapshots on the main pool will not move over to my local backup pool; it simply stays at "No snapshots sent yet".
But I can replicate to my offsite system in legacy mode successfully. Yet I can no longer choose a time window for when it will run (legacy only!).

Speaking of time windows: there seems to be a new rule for automatic snapshot tasks which forbids setting "Begin" to a time later than "End". I generally set "Begin" to "07:00" and "End" to "02:45", so there are no auto-snapshots between 02:45 and 07:00. During that time I sync against my offsite system, and I do not want new snapshots being created while that sync runs. No big deal, but the rule is a bit annoying.
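A wrap-around window like that (Begin later than End, spanning midnight) is easy to express in code; here is a minimal sketch of the check such a rule would need to allow. The in_window helper is hypothetical (not FreeNAS code), with times given as HHMM integers:

```shell
# Hypothetical helper: is time "t" inside the window [begin, end), where
# begin > end means the window wraps past midnight? Times are HHMM integers.
in_window() {
  t=$1; begin=$2; end=$3
  if [ "$begin" -le "$end" ]; then
    # Normal same-day window, e.g. 0700..2345.
    [ "$t" -ge "$begin" ] && [ "$t" -lt "$end" ]
  else
    # Wrap-around window, e.g. 0700..0245 (runs overnight).
    [ "$t" -ge "$begin" ] || [ "$t" -lt "$end" ]
  fi
}

# With Begin=07:00 and End=02:45:
in_window 2300 700 245 && echo "23:00: snapshot allowed"
in_window 400 700 245 || echo "04:00: snapshot suppressed"
```

The GUI rule described above effectively forbids the else branch, even though the semantics are unambiguous.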

Anyway, I respect that this is not a final version, and for that stage it already runs great!
 

Junicast

Patron
Joined
Mar 6, 2015
Messages
206
When configured, VLAN network interfaces do not show their parent interface in RC1, neither in the overview nor when clicking Edit.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Not sure if it's been reported, but I've noticed that the UPS monitoring service fails to connect to my SNMP UPS when the system is rebooted/powered up, until the service is restarted.
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
I also tried to get replication working between 11.2-U7 and 11.3-RC1; I gave up. Not sure if it's just me missing a few things, but I couldn't get it working in the time I had to chip away at it.
 

systemofapwne

Dabbler
Joined
Oct 6, 2019
Messages
16
I also tried to get replication working between 11.2-U7 and 11.3-RC1; I gave up. Not sure if it's just me missing a few things, but I couldn't get it working in the time I had to chip away at it.
When streaming from 11.3-RC1 to 11.2, it worked for me this way:
  • Direction: Push
  • Transport: LEGACY (any receiving end below 11.3 needs legacy!)
  • SSH Connection: "Select your preconfigured SSH connection"
  • Source Dataset: /SourcePool/YourDataset
  • Destination: /DestinationPool/
Additional notes
Be aware: before 11.3, replication was bound to an auto-snapshot task. The naming convention for those pre-11.3 tasks was auto-%Y%m%d.%H%M-7d for a snapshot task with 7-day retention. In 11.3 this changed to a naming policy of auto-%Y-%m-%d_%H-%M, so adapt your auto-snapshot tasks accordingly if you want to keep the old naming convention.

Background: it seems that freshly created snapshots using the new naming convention do not inherit from previous snapshots using the old naming convention (either that, or I hit a really strange glitch). Syncing 11.3-RC1 auto-snapshots to an 11.2 system might therefore work without changing the naming convention, but it will cause FreeNAS to create a new snapshot referencing all data (ignoring all previous snapshots). You might then find all the data from your production system being copied over to your backup system again. (At least, that is what happened to me at first.)
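Both naming schemes above are plain strftime patterns, so you can preview the names they generate with GNU date (FreeBSD's date(1) uses -j -f instead of -d; the timestamp below is an arbitrary example):

```shell
# Old pre-11.3 auto-snapshot name (the -7d suffix encodes the 7-day retention):
date -u -d "2020-01-10 21:41 UTC" "+auto-%Y%m%d.%H%M-7d"
# -> auto-20200110.2141-7d

# New 11.3 naming policy, same timestamp:
date -u -d "2020-01-10 21:41 UTC" "+auto-%Y-%m-%d_%H-%M"
# -> auto-2020-01-10_21-41
```

Comparing the two outputs side by side makes it obvious why snapshots under the two policies don't match up by name.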
 

emarj

Dabbler
Joined
Feb 7, 2018
Messages
23
Just updated to 11.3-RC1. Is ZeroTier gone, or am I missing something?
I rely on it to replicate between two sites, so this is kind of disappointing.

EDIT:
Apparently it is: https://jira.ixsystems.com/browse/NAS-104015
I would have appreciated reading about it in the release notes, so as not to find myself without replication.
 

Junicast

Patron
Joined
Mar 6, 2015
Messages
206
When configured, VLAN network interfaces do not show their parent interface in RC1, neither in the overview nor when clicking Edit.
I found another bug, which is not only cosmetic.
In my setup I have a LAGG; on top of that LAGG I created two VLAN interfaces (5 and 13), and on those VLAN interfaces I created two bridges, one for each VLAN. When I created each bridge, I set an IPv4 address on the bridge interface.
Now I cannot change that IP address at all after applying the configuration. When I edit the interface, enter another IP, and hit Save, the old IP stays there, no matter what I do.

Edit:
Deleted, since after a reboot it worked again.
 

Junicast

Patron
Joined
Mar 6, 2015
Messages
206
Another thing I get every time: when I unlock my encrypted pool I get this error, even after a reboot:
Error Unlocking
[EFAULT] bhyve process is running, we won't allocate memory
Code:
Error Unlocking
[EFAULT] bhyve process is running, we won't allocate memory
Error: Traceback (most recent call last):

  File "/usr/local/lib/python3.7/site-packages/tastypie/resources.py", line 219, in wrapper
    response = callback(request, *args, **kwargs)

  File "./freenasUI/api/resources.py", line 953, in unlock
    form.done(obj)

  File "./freenasUI/storage/forms.py", line 2827, in done
    }, job=True)

  File "./freenasUI/storage/forms.py", line 2827, in done
    }, job=True)

  File "/usr/local/lib/python3.7/site-packages/middlewared/client/client.py", line 509, in call
    return jobobj.result()

  File "/usr/local/lib/python3.7/site-packages/middlewared/client/client.py", line 272, in result
    raise ClientException(job['error'], trace={'formatted': job['exception']})

middlewared.client.client.ClientException: [EFAULT] bhyve process is running, we won't allocate memory

The volume gets unlocked anyway.
 

Attachments

  • Bildschirmfoto vom 2020-01-10 21-41-35.png (949.2 KB)
Joined
Nov 18, 2019
Messages
3
Fresh-installed 11.3-RC1 yesterday on a Dell R540 with a PERC 730P. Couldn't get it to boot in UEFI mode, so I reinstalled in BIOS mode and it went flawlessly. Added a storage pool with 4 disks as 2x mirrors. Works just fine, as intended.
 

majerus

Contributor
Joined
Dec 21, 2012
Messages
126
Any idea when RC2 is going to drop? Also, is there any way to test RC2 and then switch to the stable branch when it comes out?
 
Joined
Nov 18, 2019
Messages
3
Hi,
After creating a pool and extending it (2 x mirror), the GUI shows the pool and its datasets listed in cascade (pool1/iocage), but there's no way to add a dataset directly to the pool. However, I am able to create a Samba share directly on the pool. Is this the correct way to deal with datasets? Page 175 of the 11.3-RC1 documentation shows the Storage page with an "Add" button for adding datasets, exactly the same as the videos I'm watching about creating datasets.
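If the GUI button really is missing, a dataset can also be created from the shell as a workaround; pool1 and mydata below are placeholder names, and the GUI should pick the new dataset up afterwards:

```shell
# Create a dataset directly under the pool (pool1/mydata is a placeholder).
zfs create pool1/mydata

# List the pool and its children to confirm the dataset exists and is mounted.
zfs list -r pool1
```

Datasets created this way behave the same as GUI-created ones, though GUI defaults (compression, etc.) are not applied automatically.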

Sorry if this seems awkward, but that's the way it is; plus, this is my first FreeNAS install.

Cheers,

Renato
 