TrueNAS - Fully Supported Unified Storage

Not every business has the time or desire to build its own storage solution. To serve those who need a supported, high-availability solution, we’ve taken the open source technology of FreeNAS even further. The result is the TrueNAS Unified Storage Appliance: unified storage backed by rock-solid hardware and professional support.


Part of the FreeNAS Community

We have a confession to make... we are infatuated with open source. For our entire history, we’ve been dedicated to using open source software - and giving it back even better. When we needed storage software, we turned to FreeNAS, and today, we lead FreeNAS development and continue to release it under the permissive BSD License.


Open Source Strong

FreeNAS has the largest install base of any storage product on the planet. With over 5.6 million downloads and users like the United Nations, the U.S. Department of Homeland Security, and Reuters, FreeNAS is exposed to every imaginable environment and use case. The feedback, contributions, and suggestions from this incredible community benefit everyone, allowing FreeNAS to improve faster than any closed development process would allow.


Storage that’s Always There

iXsystems has been building hardware for open source since 1996, and we’ve leveraged every bit of that experience to create TrueNAS. Every appliance goes through a rigorous testing and burn-in process to reduce failures in the field. Active-Passive failover means that even when something does go wrong, your data is there. And iXsystems support offers optional Advanced Part Replacement and 24/7 Next Day On-Site Support, for mission-critical deployments where every hour counts.


One Appliance from One Team. Designed and Built in Silicon Valley

TrueNAS takes the ease of use and powerful features of FreeNAS and combines them with Professional Support, Rock-Solid Hardware, and enterprise features like High Availability. TrueNAS is a complete appliance, with hardware and software working in concert. This is only possible because the hardware is designed hand in hand with the software, leaving nothing in doubt. Everything from the initial concept to the last checkbox in quality control is conducted with a unified will and a single point of responsibility. You’ll never be told “that’s a software problem, call someone else” here - we’re one team, and we always put our clients first.


Problems Solved

TrueNAS Unified Storage appliances solve challenges in virtualization, backups, and more. With performance, availability, and redundancy, TrueNAS gives businesses and organizations around the world the power and peace of mind they need.


White Glove Post-Purchase Support

We’re really proud of TrueNAS, but we’re even more proud of the overall experience people have with our company. A big part of your experience with a company is how it handles problems should they arise. With that in mind, we build and support TrueNAS ‘under one roof’.

That means if you need help with TrueNAS you will speak with a team of dedicated support engineers located at iXsystems headquarters in San Jose, CA. The support team has direct access to the people who design and build TrueNAS, whom they can quickly call on if the situation warrants. Every issue is handled by the person best suited to resolve it as quickly as possible, not by the next available representative in a call center. We’re committed to the best possible experience for all our clients, and we sustain that commitment through the entire lifetime of each appliance.

Rock-solid Features

  • Unified Appliance: Share data over file (NFS, CIFS, and AFP) or block (iSCSI) protocols - whatever your deployment calls for.
  • Unlimited Storage Capacity: No capacity-based license fees, and no software limitations on how much data a single filesystem can hold.
  • Data Deduplication and Native Compression: Conserve primary storage with deduplication and compression.
  • Hybrid Storage Pools: Accelerate read and write performance automatically with RAM and SSD.
  • Snapshots and Replication: Set up policies for data retention and remote replication. Snapshots do not create additional copies of the same data, conserving space.
  • Simplified Disk Management: Manage disks and JBODs using either command line or web user interface.
  • Virtualization Ready: TrueNAS is Citrix Ready Verified and works great as backing storage for VMware.

TrueNAS Lines & Models

TrueNAS® File Share

An excellent backup target or low-demand, high-capacity file server for a small or medium-sized business. Consolidate management of backups and files with this Unified Storage Appliance.

TrueNAS® Archiver Pro

The TrueNAS Archiver Pro is optimized for long-term, efficient storage of critical backups. Deduplication and compression make the best possible use of the massive storage capacity of this appliance.

TrueNAS® Pro-HA

Two TrueNAS nodes in a single chassis ensure that unexpected downtime never brings an entire office to a screeching halt. The TrueNAS Pro-HA amply serves a medium-sized office with reliable storage services.

TrueNAS® Enterprise-HA

Power and reliability in one manageable package. Back critical infrastructure and applications with this high-availability, scalable appliance, and never worry about getting paged at 2AM again.

TrueNAS® Ultimate-HA

The TrueNAS Ultimate-HA stands unbowed before workloads that make lesser appliances tremble. Use it with confidence to back the most critical, high-performance workloads.

TrueNAS® Pro

Flexible storage capabilities for a growing small business or an office of under 100 people. Up to 220TB of capacity, enough for years of storage with long-term retention. The TrueNAS Pro also makes an excellent backup target.

TrueNAS® Enterprise

High-performance storage with ample room for expansion, suitable for a large, heterogeneous office or as backing storage for demanding applications.

TrueNAS® Ultimate

The TrueNAS Ultimate offers paramount power and performance for the highest-demand applications. A single TrueNAS Ultimate can provide multiple services and mind-boggling capacity without sacrificing performance.


TrueNAS Case Studies & Datasheets

  • TrueNAS Clickbank Case Study (updated 19 June 2013)
  • TrueNAS Creative Integrations Case Study (updated 07 June 2013)
  • TrueNAS UCSD Case Study (updated 16 May 2013)
  • TrueNAS TechSoft Case Study (updated 26 July 2013)
  • TrueNAS B2+ Case Study (updated 13 March 2013)
  • TrueNAS SFL Data Case Study (updated 28 February 2013)
  • TrueNAS with Fusion-io (updated 29 March 2012)
  • TrueNAS Datasheet (updated 31 August 2012)
  • TrueNAS Ashland Food Case Study (updated 28 February 2013)


Frequently Asked Questions

What is TrueNAS?

TrueNAS is a unified storage appliance developed in-house by iXsystems engineers. It builds on the foundation laid by our open source FreeNAS project, extended with advanced features targeted specifically at enterprise storage applications.

What is Unified Storage?

Unified Storage describes a platform that combines both file and block based access into a single system. iXsystems makes an innovative ZFS-based unified storage solution with hybrid storage pools. Hybrid storage pool technology enables the seamless combination of system memory, flash memory, and enterprise disk drives.

What is ZFS?

Originally known as the Zettabyte File System, ZFS is a 128-bit file system developed by Sun Microsystems beginning in the early 2000s. ZFS was designed with a focus on ensuring data integrity and improving reliability while addressing the capacity needs of tomorrow.

What protocols does TrueNAS support?

TrueNAS supports the following protocols:

  • File based access over NFS, CIFS/SMB, and AFP
  • Block based access over iSCSI

Can TrueNAS be used as a SAN?

Yes, TrueNAS can be utilized as a SAN by providing block storage via iSCSI.

What are the capacity limits of ZFS?

All limits of ZFS capacity are effectively hypothetical, as they exceed the limitations imposed by physical hardware. The technical limit on file system capacity is 256 zettabytes. For comparison, the entire Internet was estimated to be approximately 0.5 zettabytes in 2011.

Directories can contain 281.4 trillion (2^48) files and subdirectories, and an individual file can be as large as 16 exabytes.
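
As a quick sanity check on those figures, here is a small Python calculation; the only assumption is that the quoted units are binary (power-of-two) zettabytes and exabytes:

    # Quick arithmetic check of the ZFS capacity figures quoted above,
    # assuming binary (power-of-two) units.

    entries_per_dir = 2 ** 48          # maximum entries per directory
    max_file_bytes = 2 ** 64           # maximum size of a single file
    max_fs_bytes   = 256 * 2 ** 70     # 256 zettabytes, maximum file system size

    print(f"{entries_per_dir / 1e12:.2f} trillion entries per directory")   # ~281.47 trillion
    print(f"{max_file_bytes / 2 ** 60:.0f} exabytes per file")              # 16 exabytes
    print(f"{max_fs_bytes / 2 ** 70:.0f} zettabytes per file system")       # 256 zettabytes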

What are ZFS snapshots?

With snapshots, ZFS preserves a filesystem’s state at a specific moment in time. To do this, it generates checksum information for all referenced blocks of data and maintains pointers to this data, even as the data on the filesystem is updated. ZFS uses a copy-on-write methodology to preserve the state of both the current and previous copies of data. This feature allows for the recovery of files that have been updated or deleted, and also allows reverting the entire filesystem to a previous state.

What is copy-on-write?

With ZFS snapshots and deduplication, multiple pointers can reference the same block of data. When the data is accessed, this block is served to meet all requests. If an attempt is made to change this data, a new copy of the block is created instead. This new block may then be changed, and the pointers for the new version of the file are updated to reference it. The previous pointer continues to reference the existing data, keeping the filesystem in a consistent state in case of power loss or other contingency. Because this process only creates the new copy of the data when a write request is made, it is referred to as copy-on-write.
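
To make the idea concrete, here is a minimal Python sketch of copy-on-write with snapshots. The class, method names, and hash-based block store are purely illustrative and are not ZFS’s actual on-disk structures:

    # Minimal copy-on-write sketch: blocks are never overwritten in place, and
    # a snapshot is just a frozen copy of the file's block-pointer list.

    import hashlib

    class CowFile:
        def __init__(self):
            self.blocks = {}       # block id -> data (the "pool"); never modified in place
            self.pointers = []     # live block ids, in file order
            self.snapshots = {}    # snapshot name -> frozen copy of the pointer list

        def _store(self, data):
            block_id = hashlib.sha256(data).hexdigest()
            self.blocks[block_id] = data           # every write lands in a fresh block
            return block_id

        def append(self, data):
            self.pointers.append(self._store(data))

        def overwrite(self, index, data):
            # Copy-on-write: allocate a new block and repoint; the old block stays
            # in the pool for as long as any snapshot still references it.
            self.pointers[index] = self._store(data)

        def snapshot(self, name):
            self.snapshots[name] = list(self.pointers)   # copies pointers, not data

        def read(self, snapshot=None):
            ptrs = self.snapshots[snapshot] if snapshot else self.pointers
            return b"".join(self.blocks[p] for p in ptrs)

    f = CowFile()
    f.append(b"hello ")
    f.append(b"world")
    f.snapshot("before-edit")
    f.overwrite(1, b"ZFS")
    print(f.read())                # b'hello ZFS'
    print(f.read("before-edit"))   # b'hello world' -- the overwritten version is recoverable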

What is data deduplication?

Data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data. The technique is used to improve storage utilization and can also be applied to network data transfers to reduce the number of bytes that must be sent. During the deduplication process, unique chunks of data, or byte patterns, are identified and stored during an analysis pass. As the analysis continues, other chunks are compared to the stored copies, and whenever a match occurs, the redundant chunk is replaced with a small reference that points to the stored chunk.

What is in-line deduplication?

With in-line deduplication, hash calculations are performed on the target device as the data enters it in real time. If the device determines that a block is already stored on the system, it does not store the new block; it simply references the existing one. The benefit of in-line deduplication over post-process deduplication is that it requires less storage, as data is never written twice. On the negative side, it is frequently argued that because hash calculations and lookups take time, data ingestion can be slower, reducing the backup throughput of the device. However, certain vendors with in-line deduplication have demonstrated equipment with performance similar to their post-process deduplication counterparts.
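
The following is a minimal Python sketch of in-line, hash-based deduplication. The fixed 4 KiB chunk size and SHA-256 hash are arbitrary choices for illustration, not the checksums or block sizes a real deduplicating target would use:

    # Minimal in-line deduplication sketch: each incoming chunk is hashed as it
    # arrives; a chunk whose hash is already known is stored only as a reference.

    import hashlib

    CHUNK_SIZE = 4096   # arbitrary fixed chunk size for illustration

    store = {}          # hash -> chunk bytes (each unique pattern stored once)
    refs = []           # the logical data stream, as a list of hashes

    def ingest(data):
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:      # new pattern: store the chunk itself
                store[digest] = chunk
            refs.append(digest)          # duplicates cost only a reference

    ingest(b"A" * 8192)    # two identical 4 KiB chunks
    ingest(b"A" * 4096)    # a third identical chunk
    print(len(refs), "logical chunks,", len(store), "stored")   # 3 logical chunks, 1 stored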

What is a ZFS virtual device (VDev)?

A ZFS virtual device (VDev) may be:

  • A single disk
  • Two or more disks that are mirrored
  • A group of disks organized using RAID-Z
  • A special device such as an intent log, read cache, or hot spare
  • Essentially, any device or group of devices that acts as a singular entity within the pool

Striped

Striped VDevs are equivalent to RAID-0. While ZFS does provide checksumming to prevent silent data corruption, there is neither parity nor a mirror to rebuild your data from in the event of a physical disk failure. This configuration is not recommended due to the catastrophic loss of data you would experience if you lost even a single drive from a striped array.

Mirrored

This is akin to RAID-1. If you mirror a pair of VDevs (each VDev is usually a single hard drive), it is just like RAID-1, except you get the added bonus of automatic checksumming. This prevents silent data corruption that is usually undetectable by most hardware RAID cards. Another bonus of mirrored VDevs in ZFS is that you can mirror more than two drives together. If we wanted to mirror all 20 drives in a ZFS system, we could. We would waste an inordinate amount of space, but we could sustain 19 drive failures with no loss of data.

Striped + Mirrored

This is very similar to RAID-10. You create a bunch of mirrored pairs, and then stripe data across those mirrors. Again, you get the added bonus of checksumming to prevent silent data corruption. This is the best performing RAID level for small random reads.

RAID-Z

RAID-Z is very popular among many users because it gives you the best trade-off of hardware failure protection versus usable storage. It is very similar to RAID-5, but without the write-hole penalty that RAID-5 encounters. The drawback is that, for random reads of small chunks of data, a RAID-Z vdev is limited to roughly the speed of one drive, since each block and its parity are spread across all drives in the vdev. RAID-Z is very popular for storage archives where the data is written once and accessed infrequently.

RAID-Z2

RAID-Z2 is like RAID-6. You get double parity, so each vdev can tolerate two simultaneous disk failures. The performance is very similar to RAID-Z.
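
The trade-offs above come down to simple arithmetic. The Python sketch below compares usable capacity and fault tolerance for the layouts described, assuming twelve 4 TB drives as an arbitrary example (not a specific TrueNAS configuration) and ignoring ZFS metadata, padding, and reserved space:

    # Rough usable-capacity and fault-tolerance comparison for the vdev
    # layouts described above. Illustrative only: ignores ZFS metadata,
    # padding, and reserved space.

    DRIVES = 12
    DRIVE_TB = 4

    layouts = {
        # name: (data drives per vdev, redundancy drives per vdev, failures tolerated per vdev)
        "striped":          (1, 0, 0),   # single-disk vdevs, no redundancy
        "mirrored (2-way)": (1, 1, 1),   # two-disk mirror vdevs
        "RAID-Z":           (5, 1, 1),   # 6-drive RAID-Z vdevs (5 data + 1 parity)
        "RAID-Z2":          (4, 2, 2),   # 6-drive RAID-Z2 vdevs (4 data + 2 parity)
    }

    for name, (data, redundancy, tolerated) in layouts.items():
        vdevs = DRIVES // (data + redundancy)
        usable = vdevs * data * DRIVE_TB
        print(f"{name:18} usable ~{usable:3d} TB, "
              f"tolerates {tolerated} failure(s) per vdev across {vdevs} vdev(s)")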

What are the components of a ZFS Hybrid Storage Pool?

Adjustable Replacement Cache (ARC): The ARC (sometimes known as the “Adaptive” Replacement Cache) lives in DRAM. It is the first destination for all data written to a ZFS pool, and it is the fastest (i.e., lowest-latency) source for data read from a ZFS pool. When data is requested, ZFS looks first to the ARC; if the data is there, it can be retrieved extremely quickly (typically in nanoseconds) and handed back to the application. The contents of the ARC are balanced between the most recently used (MRU) and most frequently used (MFU) data.

Level-Two ARC (L2ARC): The L2ARC lives in flash and is, in concept, an extension of the ARC. Without an L2ARC, data that could not fit in the ARC would have to be retrieved from HDDs when requested. That is where drive speed makes a difference, but the difference between “fast” (e.g., 15,000-RPM) and “slow” (e.g., 7,200-RPM) drives is a matter of a few milliseconds versus several milliseconds of latency; both are dramatically slower than ARC accesses, which are measured in nanoseconds. The L2ARC, in flash, fits nicely between the two in terms of both price and performance: hundreds of gigabytes of flash is cheaper than the same capacity of DRAM (though still more expensive today than HDDs), and flash I/O latencies are typically measured in microseconds, slower than DRAM but still far faster than even “high-performance” HDDs. The L2ARC is populated with data first placed in the ARC as it becomes apparent that the data might get squeezed out of the ARC. Not every piece of data that existed in the ARC will make it to the L2ARC (data that does not is retrieved from HDDs instead, if requested); the algorithms that manage L2ARC population are automatic, intelligent, and tuned by iXsystems where appropriate.
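
As an illustration of how the tiers interact on the read path, here is a simplified Python sketch. The cache sizes, plain LRU eviction, and class name are placeholders; the real ARC balances MRU and MFU lists with far more sophisticated logic:

    # Simplified read path through a hybrid pool: ARC (DRAM) -> L2ARC (flash)
    # -> HDD. Eviction here is plain LRU, not the real ARC algorithm.

    from collections import OrderedDict

    class HybridReadPath:
        def __init__(self, arc_blocks=4, l2arc_blocks=16):
            self.arc = OrderedDict()      # DRAM cache (nanosecond-class access)
            self.l2arc = OrderedDict()    # flash cache (microsecond-class access)
            self.arc_blocks = arc_blocks
            self.l2arc_blocks = l2arc_blocks

        def read(self, block_id):
            if block_id in self.arc:                       # ARC hit
                self.arc.move_to_end(block_id)
                return "ARC (DRAM)"
            if block_id in self.l2arc:                     # L2ARC hit
                self._promote(block_id, self.l2arc.pop(block_id))
                return "L2ARC (flash)"
            self._promote(block_id, f"data-{block_id}")    # miss: read from HDD
            return "HDD"

        def _promote(self, block_id, data):
            self.arc[block_id] = data
            if len(self.arc) > self.arc_blocks:            # blocks squeezed out of the
                old_id, old_data = self.arc.popitem(last=False)   # ARC spill to the L2ARC
                self.l2arc[old_id] = old_data
                if len(self.l2arc) > self.l2arc_blocks:
                    self.l2arc.popitem(last=False)         # dropped entirely: next read hits HDD

    path = HybridReadPath()
    for b in range(6):
        path.read(b)          # cold reads come from HDD and warm the caches
    print(path.read(5))       # 'ARC (DRAM)'   -- recently read, still in DRAM
    print(path.read(0))       # 'L2ARC (flash)' -- squeezed out of ARC, still in flash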

ZFS Intent Log (ZIL) and Separate Intent Log (SLOG): The ZIL is used to handle “synchronous” writes: write operations that are required by protocol (e.g., NFS, SMB/CIFS) to be stored in a non-volatile location on the storage device before they can be acknowledged to the application. Application threads typically stop and wait for synchronous write operations to complete, so reducing the latency of synchronous writes has a direct impact on application performance. ZFS can do this by placing the ZIL on a separate intent log (SLOG), typically a flash device. All writes (whether synchronous or asynchronous) are written into the ARC in DRAM, and synchronous writes are also written to the ZIL before being acknowledged. Under normal conditions, ZFS regularly bundles up all of the recent writes in the ARC and flushes them to the spinning drives, at which point the data in the ZIL is no longer relevant (because it now exists on its long-term, non-volatile destination) and can be replaced by new writes. The ZIL is only read from when synchronous writes in the ARC were unable to be written to spinning disk, such as after a power failure or controller failover; at that point ZFS reads the ZIL and places that data onto the spinning drives as intended. One might compare this concept to the non-volatile RAM (NVRAM) used by other storage vendors, but where NVRAM relies on batteries that can wear out and have other issues, write-optimized SLC flash devices do not need batteries. And while NVRAM scalability is limited to available slots, adding SLOGs is as easy as adding HDDs (there is a major price difference, too). Like the L2ARC, the ZIL/SLOG is managed automatically and intelligently by ZFS: writes that need it are accelerated without any additional effort by the administrator.
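
The following Python sketch illustrates that write path in miniature. The class and method names are invented for illustration, and the lists stand in for DRAM, the log device, and the pool; it is a conceptual model, not ZFS’s transaction-group machinery:

    # Simplified write path with an intent log: all writes land in an in-memory
    # buffer (the ARC); synchronous writes are also appended to a non-volatile
    # log (ZIL on a SLOG) before being acknowledged. A periodic flush pushes the
    # buffered writes to the main pool and clears the log; after a crash, the
    # log is replayed so no acknowledged synchronous write is lost.

    class IntentLogWriter:
        def __init__(self):
            self.memory_buffer = []    # recent writes held in DRAM (the ARC)
            self.intent_log = []       # non-volatile log device (SLOG holding the ZIL)
            self.pool = []             # long-term home on the spinning drives

        def write(self, record, synchronous=False):
            self.memory_buffer.append(record)
            if synchronous:
                self.intent_log.append(record)   # must be non-volatile before we acknowledge
            return "acknowledged"

        def flush(self):
            # Buffered writes reach the pool, so the log entries are no longer needed.
            self.pool.extend(self.memory_buffer)
            self.memory_buffer.clear()
            self.intent_log.clear()

        def recover_after_crash(self):
            # DRAM contents are gone; replay the intent log onto the pool.
            self.memory_buffer.clear()
            self.pool.extend(self.intent_log)
            self.intent_log.clear()

    zfs = IntentLogWriter()
    zfs.write("async metadata update")
    zfs.write("NFS commit", synchronous=True)
    zfs.recover_after_crash()
    print(zfs.pool)   # ['NFS commit'] -- the acknowledged synchronous write survived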

Hard Disk Drives (HDD): With the ARC, L2ARC, and ZIL/SLOG providing the bulk of the performance from a ZFS Hybrid Storage Pool, spinning drives are relegated to the job they do well—providing lower-performance, higher-density, low-cost storage capacity. Until the day that flash competes with HDDs on a dollar-per-gigabyte front, the right balance of DRAM and flash for performance, and HDDs for capacity, results in a total cost of ownership (TCO) that is less—both initially and over the long-term—than solving both requirements using all flash or all HDDs. Note: While it is no longer the primary purpose of HDDs in ZFS to provide performance, RAID layout and drive speed still can impact overall performance—sometimes significantly.

A New Storage Parameter: Working Set Size

For legacy storage systems, sizing means determining necessary capacity, IOPS, and throughput and then doing some simple math to determine the number of spindles that could provide those numbers (with some thought given to parity overhead, controller limitations, etc.). The “Working Set Size” (WSS) can be described as the subset of total data that is actively worked upon (e.g., 500GB of this quarter’s sales data out of a total database of 20TB). Knowing the WSS makes it possible to size ARC, L2ARC, and even HDDs more accurately, but few applications today have an awareness of WSS.
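
As a worked example using the figures above (500GB of active data out of a 20TB database), the short Python calculation below sizes caches against the WSS. The rule of thumb applied here, holding the full WSS in DRAM plus flash, and the 128GB ARC figure are illustrative assumptions, not an iXsystems sizing formula:

    # Worked sizing example against Working Set Size (WSS). The "fit the WSS in
    # DRAM plus flash" rule and the ARC size are illustrative assumptions only.

    total_data_gb = 20 * 1024      # 20 TB database
    wss_gb = 500                   # actively used subset (the WSS)
    arc_gb = 128                   # DRAM available for the ARC on an example system

    l2arc_gb_needed = max(0, wss_gb - arc_gb)
    print(f"WSS is {wss_gb / total_data_gb:.1%} of total data")              # ~2.4%
    print(f"~{l2arc_gb_needed} GB of L2ARC keeps the rest of the WSS in flash")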