FreeNAS 9.2.1.7 is now available


scotch_tape

Guest
This appears to be a 9.2.1.7 bug; it showed up after updating. On a CIFS share, when changing the security on a file in the Windows file security dialog, the root user now shows up as "Account Unknown"...

In fact, the GUID of the user "root" has changed, so in the security dialog it looks like a deleted account...

Can somebody look into this?
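For anyone who wants to compare before and after the update, here is a rough sketch (untested, and it assumes Samba's wbinfo is available on the FreeNAS box) that prints the SID currently mapped to root; if that SID no longer matches the one stored in a file's ACL, Windows will show the stale entry as "Account Unknown":

```python
import subprocess

# Ask Samba/winbind which SID it currently maps to the local "root" account.
# "wbinfo -n <name>" does a name-to-SID lookup; the exact output format
# varies a bit between Samba versions.
result = subprocess.run(["wbinfo", "-n", "root"],
                        capture_output=True, text=True, check=True)
print("SID currently mapped to root:", result.stdout.strip())
```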
 

doverosx

Dabbler
Joined
Jan 4, 2013
Messages
33
I chuckled at the feud that was building up on an update thread. It sounded like both of you were mostly arguing about version numbering rather than about the content of the codebase. The 9.2.1.7 codebase *can* be tagged as 9.3 if JK wants it to be. Anyhow, that's all git stuff, so now I must figure out why my TrueNAS/Proxmox setup is suffering I/O delay!
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
"To Upgrade Or Not To Upgrade" ... i am a bit scared of Bug #5821 since i have encrypted pool ...
 

panz

Guru
Joined
May 24, 2013
Messages
556
I did some tests with an encrypted pool (six 3 TB Reds in a RAIDZ2 config). If you follow the user guide's rules for replacing a drive in an encrypted pool step by step, it goes well.

I also tried destroying the metadata of a drive (booting the server from a FreeBSD CD) and rebooting FreeNAS: after a bit of work (mainly to discover which drive had the corrupted metadata), the process of restoring the drive was quite simple.
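For the "which drive is it" step, something like this rough sketch can help (purely illustrative: it assumes a GELI-encrypted pool whose members are the hypothetical partitions /dev/ada0p2 through /dev/ada5p2, and that geli dump fails on a provider whose metadata has been wiped):

```python
import subprocess

# Hypothetical list of pool member partitions; adjust for your layout.
members = [f"/dev/ada{i}p2" for i in range(6)]

for dev in members:
    # "geli dump" prints a provider's GELI metadata and exits non-zero when
    # the metadata is missing or corrupted.
    probe = subprocess.run(["geli", "dump", dev], capture_output=True, text=True)
    status = "metadata OK" if probe.returncode == 0 else "metadata missing/corrupted"
    print(f"{dev}: {status}")
```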
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Considering this is the first case I've seen in the 9.2.1 series where someone couldn't replace a disk, I tend to think the user made some kind of mistake. I haven't tried to reproduce the error, but I will in the coming weeks.
 

pjc

Contributor
Joined
Aug 26, 2014
Messages
187
In any case, starting with 9.3 there will be no more point releases. There will be updates you can apply via the update manager, but new ISOs probably won't happen except on major release boundaries. With any luck, 9.2.1.7 will be the last point release FreeNAS users ever see. :)
I'm a little confused. In the current (ISO-based) model, it's easy to roll back to the previous version if there's a problem with an upgrade. How will rollbacks work under the new model?

Also, how will we be able to update file servers that aren't connected to the Internet? Even the documentation notes that "In many cases, a FreeNAS® configuration will deliberately exclude default gateway information as a way to make it more difficult for a remote attacker to communicate with the server."

Will every new install have to download a bunch of additional patches as soon as it starts up?

You're thinking old-school. When a user installs OS X "Mavericks" (and the use of a name vs. an arbitrary number is intentional), they don't think about point releases. They install something with a name, and then when the update app says "Hi, we have updates for you" they just indicate their willingness to apply them (or not). ... We don't want users having to know or care which "point release" they're running anymore, just that they're up to date. That's the value of not having point releases. To anyone who's not a developer or skilled with git, point releases don't have any real meaning.
Perhaps I'm misunderstanding, but that seems a strange example to use. Apple does in fact release point releases.

Yes, they release interim updates to various components, but then they release a point release that rolls up interim changes into a single (presumably well-tested) package. That's why they're up to 10.9.4 and testing 10.9.5. I go to "About this Mac" and right under the "Mac OS X" title, it says "10.9.4". In the App Store it says the current version is 10.9.4.

And even git has the notion of "tags" so that it's clear when something is released.

I can definitely imagine some benefits to allowing individual components to be updated more easily (and perhaps without rebooting), but I'm having a hard time seeing how more frequent updates are compatible with the notion of a stable server that changes relatively infrequently. This seems like an invitation to a configuration-management nightmare.

I must be missing something, but what? What's the actual motivation for going to all this effort? The ability to verify the authenticity of packages?
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Well, those seem like pretty intelligent questions; does anyone know?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I'm a little confused. In the current (ISO-based) model, it's easy to roll back to the previous version if there's a problem with an upgrade. How will rollbacks work under the new model? [...] Also, how will we be able to update file servers that aren't connected to the Internet? [...] What's the actual motivation for going to all this effort? The ability to verify the authenticity of packages?

9.3 will use ZFS on the boot device, which solves the rollback issue via snapshots (perhaps with a basic boot-repair mode like Windows has - in this case allowing the boot filesystem to be rolled back to a known-good snapshot).
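To illustrate the snapshot idea only (9.3's own tooling wraps this in boot environments; the pool, dataset, and snapshot names below are made up):

```python
import subprocess

# Hypothetical names: "freenas-boot" and the snapshot are only for
# illustration. Listing snapshots and rolling one back is plain ZFS.
BOOT_DATASET = "freenas-boot/ROOT/default"
KNOWN_GOOD = BOOT_DATASET + "@pre-upgrade"

# Show which snapshots exist on the boot dataset.
subprocess.run(["zfs", "list", "-t", "snapshot", "-r", BOOT_DATASET], check=True)

# Roll the boot dataset back to the known-good snapshot (destructive for
# anything written after that snapshot, hence the explicit confirmation).
if input(f"Roll back to {KNOWN_GOOD}? [y/N] ").lower() == "y":
    subprocess.run(["zfs", "rollback", "-r", KNOWN_GOOD], check=True)
```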

As far as I know, the goal being pursued is to allow easy and painless application of security patches (feature changes still being handled during beta testing - at least the majority of them).

Last I checked, the plan was to simply use the versions of all the installed packages as a sort of version number. I do agree that something like service packs would be a nice addition to the scheme, particularly if a version is going to be supported for a while (please note I'm not saying this is going to happen; it's just a what-if).
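As a toy illustration of "the versions of all packages as a sort of version number" (this is not how the update system actually computes anything; it just hashes the output of FreeBSD's pkg):

```python
import hashlib
import subprocess

# List every installed package as "name-version" using FreeBSD's pkg(8).
pkgs = subprocess.run(["pkg", "query", "%n-%v"],
                      capture_output=True, text=True, check=True)
package_list = sorted(pkgs.stdout.split())

# Collapse the whole set into one short, human-comparable identifier.
fingerprint = hashlib.sha256("\n".join(package_list).encode()).hexdigest()[:12]
print(f"{len(package_list)} packages installed, system fingerprint {fingerprint}")
```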
 

pjc

Contributor
Joined
Aug 26, 2014
Messages
187
9.3 will use ZFS in the boot device, which solves the rollback issue via snapshots
That's a nice solution. Will it still boot off a USB stick like that, though? (I didn't get the world's fastest-writing USB stick, since I didn't expect a lot of writes to the boot device...)

As far as I know, the goal being pursued is to allow easy and painless application of security patches (feature patches being still taken care of during beta testing - at least the majority of them).
Ah, that makes sense. Is the plan for "updates" to be only security updates then? It makes sense to make those easy. And then configuration management is relatively easy, since only full releases have different features.

One thing people do care about is uptime and whether the pending updates warrant taking the machine or its services down. How will people be able to judge that (particularly with an isolated file server) without some sort of human-readable version number or timestamp?

I'm still curious how this will work both in terms of an initial install and then subsequent updates if the file server doesn't have access to the Internet.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
I sure hope it stays on USB devices; my system has no SATA ports left, and I'm not pulling a disk just for the OS.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
No reason why it shouldn't work with USB drives.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Except we have been told to never use ZFS over USB.

No personal experience, just going by what other people said.

We're talking about wildly different things.

The biggest problem with USB hard drives is the USB-SATA bridge chips - which are thankfully not present in (the vast majority of) USB flash drives. It's not a problem with ZFS; it's a limitation of USB HDDs.
 

Robert Smith

Patron
Joined
May 4, 2014
Messages
270
We're talking about wildly different things.

The biggest problem with USB hard drives is the USB-SATA bridge chips - which are thankfully not present in (the vast majority of) USB flash drives. It's not a problem with ZFS; it's a limitation of USB HDDs.

Good to know. I may be tempted to build a USB-flash-drive-only NAS, for kicks and giggles, or as a kind of super-safe flash drive.

Or add a pool of USB flash drives to an existing NAS via something like a multi-port hub.


Thank you.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Sorry, but I'm going to lock this thread. We're no longer talking about the topic at hand (9.2.1.7).

Thanks to all that participated.
 