TrueNAS 12.0-BETA1 Release Announcement

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,448
FreeNAS (and now TrueNAS) Fans! I'm pleased to announce the availability of BETA1 of the upcoming TrueNAS 12.0 CORE / Enterprise release. It includes many notable new features and improvements, including:
  • Major ZFS upgrade to the upcoming OpenZFS 2.0
  • Support for ZFS Async Copy on Write
  • Improved hardware support for AMD Ryzen CPUs and a variety of network cards
  • Performance Improvements across many areas of the software stack, including CPU, Samba, ZFS and more
  • Native ZFS dataset encryption
  • Support for upcoming TrueCommand Cloud Connections
  • Support for metadata-only vdevs (AKA Fusion Pools)
  • API Keys for scripted control of TrueNAS
  • ZFS User Quota Support
In addition to this new release, I'm pleased to announce that we've taken the opportunity to refresh and renovate our documentation for the products. Our new documentation hub makes it easier than ever to author and translate content, and will allow both iX and our community to be more responsive in providing up-to-date information on using TrueNAS CORE & Enterprise.

Existing 11.3 users can update by changing to the new 12.0-BETA train via the update UI. New users, hit up the download link below to grab an ISO image.

On behalf of the entire iXsystems crew, welcome to TrueNAS CORE, and as always please let us know if you run into issues along your journey.

Docs Site:
https://www.truenas.com/docs/

Release Notes:
https://www.truenas.com/docs/hub/intro/release-notes/tn-12_0-beta1/

Download:
https://www.truenas.com/download-truenas-core/
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
Is there a "what's new" page describing the new features in a bit more detail, yet? If not, could there be?

Most of the pages I can find just list them by label, but a bare label like "Async Copy On Write" isn't very informative if you don't know what it's intended to fix or do better. Ditto for the rest of OpenZFS 2.0 compared to 11.3.

For such a major update, is there any chance of an informative "what's new under the hood" guide for those migrating from 11.3, that actually takes the announcement bullet points and explains them, and how they can be leveraged by existing users?

Doing that would be very helpful, especially for the 12.0 BETA but also for future beta/RC releases. Mere bullets like "Async COW" or "OpenZFS 2.0" don't actually explain much to those not already in the know.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
It occurs to me that with the introduction of the special VDEV, I will most likely want all of my spinning pools to have metadata on SSD, which will now be possible, so, great...

But I immediately wonder if I will again find myself feeling quietly guilty for "breaking the rules": using the CLI to construct partitions on at least 2, or maybe as many as 4 (given that the pool's data is at risk if its metadata is lost), SSDs in order to present partial disks to a pool as a mirrored special VDEV. That would let me use the same SSDs for more than one pool's metadata, since I imagine the actual space requirement will come nowhere near even a reasonably small SSD of 256 or 500 GB.
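
To be concrete, the "rule-breaking" CLI work I have in mind is something like the following sketch (hypothetical device names ada1/ada2 and pool names poolA/poolB; partition sizes would depend on the pools):

# Partition two SSDs so each can serve more than one pool
gpart create -s gpt ada1
gpart create -s gpt ada2
gpart add -t freebsd-zfs -s 120G ada1   # ada1p1: metadata for poolA
gpart add -t freebsd-zfs -s 120G ada1   # ada1p2: metadata for poolB
gpart add -t freebsd-zfs -s 120G ada2   # ada2p1: metadata for poolA
gpart add -t freebsd-zfs -s 120G ada2   # ada2p2: metadata for poolB

# Give each pool a special vdev mirrored across both SSDs
zpool add poolA special mirror ada1p1 ada2p1
zpool add poolB special mirror ada1p2 ada2p2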

That makes me want to raise the question... do folks see value in being able to have TrueNAS offer that as an option in the GUI?

I imagine it would go like this...

Go into Storage | Pools, then find a "Special metadata" "pool" at the top or in a three-dots menu to the side.

From there, be able either to select from the list of disks already allocated for this, or to allocate new disks. There may be some logic to allowing multiple groups of disks in larger systems.

Then with a group, the ability to allocate partitions of a specified size up to the mirrored capacity of the group.

From there, back in Storage, you can now extend any of the pools by adding one of the partition groups as a special metadata VDEV. (That might need some fancy trickery from the coders to make it make good sense: in one sense the partitions should all be presented like individual disks, but you really want to offer the group of partitions as a unit to be added as the metadata vdev.)

Any opinions?
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
It occurs to me that with the introduction of the special VDEV, I will most likely want all of my spinning pools to have metadata on SSD, which will now be possible, so, great...

But I immediately wonder if I will again find myself feeling quietly guilty for "breaking the rules": using the CLI to construct partitions on at least 2, or maybe as many as 4 (given that the pool's data is at risk if its metadata is lost), SSDs in order to present partial disks to a pool as a mirrored special VDEV. That would let me use the same SSDs for more than one pool's metadata, since I imagine the actual space requirement will come nowhere near even a reasonably small SSD of 256 or 500 GB.

That makes me want to raise the question... do folks see value in being able to have TrueNAS offer that as an option in the GUI?
I'm already doing this on 12-BETA1: twin 480GB Optanes as a special mirror vdev for metadata + dedup, but partitioned to grab 50GB of each for SLOG as well. So the end result is like this:

MIRROR
HDD1 - 8 TB
HDD2 - 8 TB
MIRROR
HDD3 - 10 TB
HDD4 - 10 TB
SPECIAL METADATA MIRROR
OPTANE1p1 - 400GB of 480 GB
OPTANE2p1 - 400GB of 480 GB
LOG MIRROR
OPTANE1p2 - 50GB of 480 GB
OPTANE2p2 - 50GB of 480 GB
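
For anyone wanting to try the same thing, the rough shape of the CLI steps is below (a hypothetical sketch, not my exact command history; "tank" and the nvd0/nvd1 device names are placeholders):

# Carve each 480GB Optane into a large metadata/DDT partition and a small SLOG partition
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 400G nvd0   # nvd0p1: special (metadata + DDT)
gpart add -t freebsd-zfs -s 50G nvd0    # nvd0p2: SLOG
gpart create -s gpt nvd1
gpart add -t freebsd-zfs -s 400G nvd1
gpart add -t freebsd-zfs -s 50G nvd1

# Add both roles to the pool, each mirrored across the two Optanes
zpool add tank special mirror nvd0p1 nvd1p1
zpool add tank log mirror nvd0p2 nvd1p2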

Works beautifully. The performance is night and day compared to 11.3, which was literally unable to cope with 10GB+ file transfers with dedup: the 4K read demand of the DDT from HDD, plus the write-amplification IO, stalled the IO pipeline in a bad way. Under 12 with a bare default config, I'm getting a solid, consistent 270-370 MB/sec every which way on 1.2 TB file copies: random data, large files, small files, mixed read/write, ....

The breaking issue was the 4K IO, which HDDs can't keep up with, and mixed IO, at which Optane dominates all other SSDs (both top enthusiast SSDs like the Samsung Pro, and top datacentre devices like the Intel 750/P3700).

The ZFS IO demand, and the Optane's speed at mixed IO, are such that I found putting metadata and SLOG on the same device still gives a much faster SLOG and dedup than anything that's not an Optane mirror :)

Tl;dr - there are definitely cases where you'd want to partition a disk and use the partitions in ZFS for good reasons. One example is SSDs that have more capacity than needed and plenty of free IO capability. Another is systems that have run low on device connections/slots, where a budget enthusiast can economise by buying 2 good SSDs and partitioning them, rather than having to find a way to connect 4 SSDs and use whole devices. Especially as some SSD uses don't need much space, so a whole device is overkill.

I imagine it would go like this...

Go into Storage | Pools, then find a "Special metadata" "pool" at the top or in a three-dots menu to the side.

From there, be able either to select from the list of disks already allocated for this, or to allocate new disks. There may be some logic to allowing multiple groups of disks in larger systems.

Then with a group, the ability to allocate partitions of a specified size up to the mirrored capacity of the group.

From there, back in Storage, you can now extend any of the pools by adding one of the partition groups as a special metadata VDEV. (That might need some fancy trickery from the coders to make it make good sense: in one sense the partitions should all be presented like individual disks, but you really want to offer the group of partitions as a unit to be added as the metadata vdev.)

Any opinions?
I'd do it this way.

  1. An option under "Disks" to handle a very basic "create partition" + "delete partition" within unused space on a disk. No need for resize or move; that's beyond scope. (You can always remove the disk from the pool, delete, and create new partitions of the new size. No move/resize needed!)
  2. When creating/modifying a pool, an option for any disk that is partitioned: "show disk as partitions." The pool create/modify GUI then remains identical (which is important for dev time!), except that for any disk with that box checked, the list of devices you can add to a vdev shows its individual partitions rather than the entire disk.

So the process for your use case would be like this:

  1. You have disks HDD1-4, with which to create 2 mirrored pools. You also have SSD1 and SSD2 (both, say, 250GB) and want to make a metadata mirror for both pools.
  2. Go into "storage -> DISKS". Select SSD1. Wipe it. Create a 125GB partition in the empty space. Create a 2nd 125 GB partition in the remaining empty space. Do the same with SSD2.
  3. Go into "storage -> POOLS". Locate SSD1 and SSD2. The UI recognises they are partitioned and offers a checkbox for each, "Show as partitions". Check the checkbox.
  4. The devices list now lists HDD1, HDD2, HDD3, HDD4, SSD1p1, SSD1p2, SSD2p1, SSD2p2. You can create POOL1 using HDD1, HDD2, SSD1p1, and SSD2p1.
  5. Now create a 2nd pool. SSD1 and SSD2 are partly used, so the whole disks can't be offered; the list of available devices will be HDD3, HDD4, SSD1p2, and SSD2p2. Use those to create POOL2.
Done. (A CLI sketch of the same end state is below, for comparison.)
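
Until a GUI like that exists, the CLI equivalent of steps 2-5 looks roughly like this (a hypothetical sketch; the ada0-ada5 device names are placeholders for the four HDDs and two SSDs):

# Step 2: split each 250GB SSD into two halves
gpart create -s gpt ada4
gpart add -t freebsd-zfs -s 125G ada4
gpart add -t freebsd-zfs -s 125G ada4
gpart create -s gpt ada5
gpart add -t freebsd-zfs -s 125G ada5
gpart add -t freebsd-zfs -s 125G ada5

# Steps 4-5: create both pools, each with a special vdev mirrored
# across one partition from each SSD
zpool create POOL1 mirror ada0 ada1 special mirror ada4p1 ada5p1
zpool create POOL2 mirror ada2 ada3 special mirror ada4p2 ada5p2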

Another use is 2 different roles on the same pool, such as a mirrored metadata vdev + SLOG, which was my use case.

UI capabilities required:
- create/delete partition in unallocated space
- option to show partitioned disks as a disk, or as individual partitions, in the device lists for pool manipulation.

Beyond that, almost nothing is needed, so it's low on dev work to get this benefit. With metadata devices and SSDs, it's well worth it. But if accepted, it may have to be a 12.1 feature, as 12 is feature complete.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
I'm already doing this on 12-BETA1: twin 480GB Optanes as a special mirror vdev for metadata + dedup, but partitioned to grab 50GB of each for SLOG as well. So the end result is like this:

MIRROR
HDD1 - 8 TB
HDD2 - 8 TB
MIRROR
HDD3 - 10 TB
HDD4 - 10 TB
SPECIAL METADATA MIRROR
OPTANE1p1 - 400GB of 480 GB
OPTANE2p1 - 400GB of 480 GB
LOG MIRROR
OPTANE1p2 - 50GB of 480 GB
OPTANE2p2 - 50GB of 480 GB

Works beautifully. The performance is night and day compared to 11.3, which was literally unable to cope with 10GB+ file transfers with dedup: the 4K read demand of the DDT from HDD, plus the write-amplification IO, stalled the IO pipeline in a bad way. Under 12 with a bare default config, I'm getting a solid, consistent 270-370 MB/sec every which way on 1.2 TB file copies: random data, large files, small files, mixed read/write, ....



UI capabilities required:
- create/delete partition in unallocated space
- option to show partitioned disks as a disk, or as individual partitions, in the device lists for pool manipulation.

Beyond that, almost nothing is needed, so it's low on dev work to get this benefit. With metadata devices and SSDs, it's well worth it. But if accepted, it may have to be a 12.1 feature, as 12 is feature complete.

Thanks for the excellent write-up. Could you make this a "suggestion" in the bug tracker?

Optane makes it perform much better than small flash devices... which is why it wasn't done previously. The question would be whether doing this on bad flash devices would cause more problems for people.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Is there a "what's new" page describing the new features in a bit more detail, yet? If not, could there be?

Most of the pages I can find just list them by label, but a bare label like "Async Copy On Write" isn't very informative if you don't know what it's intended to fix or do better. Ditto for the rest of OpenZFS 2.0 compared to 11.3.

For such a major update, is there any chance of an informative "what's new under the hood" guide for those migrating from 11.3, that actually takes the announcement bullet points and explains them, and how they can be leveraged by existing users?

Doing that would be very helpful, especially for the 12.0 BETA but also for future beta/RC releases. Mere bullets like "Async COW" or "OpenZFS 2.0" don't actually explain much to those not already in the know.

It's not an unreasonable request, but it's not something we currently do systematically. Ideally, these community forums serve as a way for people to ask questions and get answers. The new TrueNAS 12.0 documentation site also provides scope for more detailed documentation and user contributions. By the time TrueNAS 12.0 reaches RELEASE, the documentation should be more complete. Should we add a paragraph on each feature to the release notes, or perhaps allow users to do the same?

"Async COW" is a very shorthand description for a way of improving sequential writes which are unaligned with the ZFS record or block size. It reduces the number of disk I/Os needed to perform the Writes. It should improve write throughput and allow the use of larger record/block sizes with less performance penalty. Specific workloads will see acceleration. Others may not notice the difference.

For now, let's just use this forum to resolve any other questions. Would the paragraph above be sufficient in the release notes, or would we need a full technical description (in which case each new feature may need a document)?
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
It's not an unreasonable request, but it's not something we currently do systematically. Ideally, these community forums serve as a way for people to ask questions and get answers. The new TrueNAS 12.0 documentation site also provides scope for more detailed documentation and user contributions. By the time TrueNAS 12.0 reaches RELEASE, the documentation should be more complete. Should we add a paragraph on each feature to the release notes, or perhaps allow users to do the same?

"Async COW" is a very shorthand description for a way of improving sequential writes which are unaligned with the ZFS record or block size. It reduces the number of disk I/Os needed to perform the Writes. It should improve write throughput and allow the use of larger record/block sizes with less performance penalty. Specific workloads will see acceleration. Others may not notice the difference.

For now, let's just use this forum to resolve any other questions. Would the paragraph above be sufficient in the release notes, or would we need a full technical description (in which case each new feature may need a document)?
Seriously, that's more than ample. Yes please!
  • Gives me a sense I understand the change
  • Lets me get a quick eyeball idea of which changes might be interesting or relevant to me. I can then quickly see if there are any that I want to look up, or need to know about or ask about on the forum.
  • What might have changed that I need to take account of?
  • If I'm not technical per se, it gives me enough to get the gist and dismiss what's irrelevant or explore what's interesting, without an hour's googling.
  • I'm a TrueNAS/iXsystems enthusiast. I want to know what you've done with the platform's new release, or at least have some idea! Let me share the feels!
The paragraph you wrote is plenty to roughly understand the changes and triage them for things that matter to me. So it's good enough. Any items that I decide, from a quick skim, might matter to me (because I can use them, or need to be aware of them) I can then focus on and ask about in the forums or on Google.

As an example, this is how it went for me with Async COW:

  • Initially: I like good performance. What's this item about? I don't understand it; it's just a phrase, so whatever it might be, I need to look it up. But it's hard to find details of cutting-edge changes to ZFS, because it's still newish, and because it's not clear what it means on BSD versus, say, Linux. So... much googling... because it might matter. Also because I like ZFS/TrueNAS and I want to learn!
  • With your information: okay, it's a "behind the scenes" efficiency improvement to the write pipeline. I (one) might notice an improvement or not. So okay, I don't need to do anything, but thanks for the enhancement, iXsystems. It's nice to have an idea what's going on "under the hood".
    Finished.
That's literally it. It's enough to triage the changes for their impact on me, and to appreciate them. But for someone else, Async COW (or some other change!) might seem important, if they knew what it was.

In fact this would be enough:
  • Async Copy on Write (COW): a ZFS IO pipeline enhancement that improves sequential writes which are unaligned with the ZFS record or block size. It reduces the number of disk I/Os needed to perform the writes. It should improve write throughput and allow the use of larger record/block sizes with less performance penalty. Some workloads will benefit; others will not see much difference.
For changes that could break stuff, add:
"Possible side effects: none (transparent)" or whatever

The same applies to other seemingly small changes. One can never know which change would matter a lot to some user who otherwise wouldn't know about it, or wouldn't discover it had changed or been newly added.

iXsystems do a lot of work. Tell us what you've done! And that means all of us, not just those who understand the in-tech phrases that only BSD/ZFS devs and sysadmins on GitHub and JIRA understand :) It doesn't need more depth than your very brief note, but it tells us everything we need!
 

Stilez

Guru
Joined
Apr 8, 2016
Messages
529
It's not an unreasonable request, but it's not something we currently do systematically. Ideally, these community forums serve as a way for people to ask questions and get answers. The new TrueNAS 12.0 documentation site also provides scope for more detailed documentation and user contributions. By the time TrueNAS 12.0 reaches RELEASE, the documentation should be more complete. Should we add a paragraph on each feature to the release notes, or perhaps allow users to do the same?

"Async COW" is a very shorthand description for a way of improving sequential writes which are unaligned with the ZFS record or block size. It reduces the number of disk I/Os needed to perform the Writes. It should improve write throughput and allow the use of larger record/block sizes with less performance penalty. Specific workloads will see acceleration. Others may not notice the difference.

For now, let's just use this forum to resolve any other questions. Would the paragraph above be sufficient in the release notes, or would we need a full technical description (in which case each new feature may need a document)?
Just coming back to this older post, where we discussed a bit about explaining changes with new releases.

I just saw the blog post for 12-BETA2 and ** thank you thank you thank you **!

It's perfect!!
I loved it!!!

I loved being able to understand what's changed, not just a list of buzzwords. I loved being able to see the work that's gone on and a hint of the "engine room" of ZFS and TrueNAS.

Thank you, and the iX staff, for this small but really big piece of communicating!!

Could you please feed that thanks back to anyone else it's relevant to?
 