TrueNAS SCALE Release Plan

While TrueNAS 12.0 (CORE and Enterprise editions) continues its release march, we’re also busy getting the first version of TrueNAS SCALE into the hands of many tech-savvy users. TrueNAS SCALE 20.10-ALPHA is planned for October and will be codenamed “Angelfish”.

As our initial community post on SCALE indicated, TrueNAS SCALE is defined by its acronym:

TrueNAS starts from the TrueNAS 12.0 base which includes OpenZFS, all the storage services, the middleware to coordinate these, and the web UI to present a user-oriented view of the system. This base has been tested by hundreds of thousands of users over the last few years.

The good news is that nearly all of this base has been preserved with relatively small software changes. For Enterprise users, it has also been possible to port over the High Availability (HA) software, enclosure management, and other Enterprise features. This means that SCALE will be able to run on TrueNAS M-Series and X-Series systems in the future and take advantage of their hardware redundancy.

Being similar to TrueNAS 12.0 is awesome because it means a familiar UX, which minimizes the training necessary to get up to speed on TrueNAS SCALE. But it’s what you can do with TrueNAS SCALE that’s most exciting. The new capabilities being added define the new opportunities for SCALE:

  • KVM Virtualization: Mature Hypervisor with good reliability, Guest OS support, and enterprise features.
  • Kubernetes: Applications can be single (docker) containers or pods of containers.
  • Scale-out ZFS: SCALE will enable datasets to be defined as ZFS datasets or cluster datasets which span multiple nodes and ZFS pools. Cluster datasets will have a variety of redundancy properties and still support ZFS snapshots.

Unlike other Hyperconverged Infrastructure solutions, TrueNAS SCALE can be deployed as a single node, an HA system, or a cluster of multiple nodes. Start with a single-node system and, in the future, you will be able to scale out.

Given the amount of existing and new software, we have a release plan that lets the community confidently test and deploy SCALE as it becomes available. The high-level plan follows this process.


Release numbering will be based on Year and Month. The first numbered release will come out in October and will be called TrueNAS SCALE 20.10 (Angelfish). The codenames will be alphabetically sequential and will be associated with aquatic animals that have scales or swim in schools (clusters).

The focus is on characterizing “feature groups” as either PREVIEW, ALPHA, BETA, RC, or RELEASE quality. Users should read the release notes to confirm support for their particular use case. Angelfish is nearly feature-complete in the NIGHTLY releases. It should be noted that KVM has seen little testing by this community but is widely used elsewhere. Kubernetes will also be based on stable, released code, but its WebUI and middleware are expected to be PREVIEW quality.

Clustered datasets require some additional TrueCommand features (expected in November) to provide an easy-to-use WebUI. In the meantime, the CLI and APIs can be tested and this feature group is classified as PREVIEW status.
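As one way to exercise those APIs before the WebUI lands, here is a minimal sketch that lists datasets through the TrueNAS v2.0 REST API. The hostname and API key are placeholders, and the exact endpoint path should be confirmed against the API documentation for your build:

```python
import json
import urllib.request

API_BASE = "http://truenas.local/api/v2.0"  # placeholder hostname


def build_dataset_request(api_base: str, api_key: str) -> urllib.request.Request:
    """Prepare an authenticated GET for the dataset-listing endpoint."""
    return urllib.request.Request(
        f"{api_base}/pool/dataset",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
    )


def list_datasets(api_base: str, api_key: str):
    """Fetch and decode the dataset list from a live system."""
    with urllib.request.urlopen(build_dataset_request(api_base, api_key)) as resp:
        return json.loads(resp.read())
```

The request is built separately from the call that sends it, so the URL and headers can be inspected (or unit-tested) without a live system.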

We appreciate the community feedback and bug reports, which help us get all of these features to RELEASE quality faster.

Is TrueNAS SCALE for Users or Developers?

Right now, TrueNAS SCALE is for developers and bug hunters and can be downloaded here. For Linux developers, there are many opportunities to contribute to the Open Source TrueNAS SCALE project. We have built a well-coordinated, well-managed environment for developing the best Open Hyperconverged Infrastructure. For more information, see this Community post.

The TrueNAS SCALE Angelfish releases in Q4 will be good for tech-savvy enthusiasts and testers. We’ll let you know when TrueNAS SCALE 20.10 is ready.

In 2021, TrueNAS SCALE is expected to get to full RELEASE quality for a clustered system.

If you have any additional questions or need advice on a new project, please email us. We are standing by to help.


  1. Flee

    Will GPU pass-through be a (gui) feature of the KVM Virtualisation in TrueNAS scale?

    • Joon Lee

      Yes, it will be a feature!
      It is also a feature in TrueNAS CORE 12.0.

  2. Kev

    Will a suspend (S3) state and wakeup timer be possible or planned for TrueNAS SCALE?

    • Michael Dexter

      Is your use case for your NAS to sleep at night? Many motherboards have a “daily schedule” feature, so you should be able to shut down and have the BIOS wake the system on a schedule, or use Wake on LAN (WOL). If sleep is the only solution, please share your use case on this ticket:
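For readers weighing the WOL route, a minimal sketch of sending a standard magic packet from another machine on the LAN (the MAC address below is a placeholder; Python is assumed):

```python
import socket


def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16


def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))


# Example (placeholder MAC): wake("aa:bb:cc:dd:ee:ff")
```

The NIC and BIOS must have WOL enabled for the target system to respond; the packet itself is the same 102-byte format regardless of the sending tool.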

  3. Travis Watson

    Will PCIe GPU pass-through to a plugin aka container such as plex be available?

    • Michael Dexter

      Yes, Intel / Nvidia passthrough to containers is working well and the GUI elements are being finalized.

  4. Nate Moore

    I’m confused whether I should go with Scale or Core (aside from Scale not being fully rolled out yet). Do you have a comparison table that shows the features of each? Thanks.

    • Michael Dexter

      TrueNAS CORE is proven for production use, while the key upcoming features of TrueNAS SCALE are Linux containers and scale-out (rather than scale-up) storage. The main page provides a comparison of the key features.

  5. Luke Fearn

    Am I able to move from TrueNAS Core to Scale easily and import existing pools & configuration? Thanks

    • Joon Lee

      No, they are separate products!

  6. MP

    Will there be a high-availability or failover feature for KVM guests?

    • Michael Dexter

      This is a popular request and while this is not planned for the initial release, you could submit a feature request.

  7. Sam M.

    What is the overall design/intention for HA iSCSI targets in TrueNAS SCALE? I ask because (1) it seems that the clustering-related features (especially the GUI) are getting few mentions and seemingly low priority; and (2) things I’ve read *implied* that SCALE’s clustering has file sharing like SMB and NFS in mind, as opposed to block sharing like iSCSI. For example, one post *speculated* that Gluster compares cluster nodes at a file level as opposed to a block level. Since iSCSI tends to look like one massive file instead of a ton of small files, as SMB/NFS shares can appear to storage servers, the fear is that when a node falls out of sync, the entire iSCSI file has to be replicated instead of just the part that has changed. I emphasize “speculation” because I have no idea how this is really supposed to work.

    For context (TL;DR): Our current FreeNAS server hosts several iSCSI targets served to a number of ESXi hosts and Windows VMs. We’d love to do the same thing, but with an HA TrueNAS server instead.

    • Kris Moore

      iSCSI HA is on the table but may not be included in the initial release. Right now, the focus is on multi-channel SMB and, of course, native glusterfs client access (the most optimal method). The official Gluster docs even recommend this method for iSCSI, if you are curious.

      Right now, the majority of the API work to support Gluster is already merged into the TrueNAS SCALE nightly images. Most of the GUI work is taking place inside TrueCommand and is shaping up nicely. Cluster creation is already supported, and we’re fleshing out other aspects of it. I’ll be doing a forum post on the status, with some screenshots, in the coming weeks.

