Testing for FreeNAS 12 Deployment and Special VDEV

riggieri

Dabbler
Joined
Aug 24, 2018
Messages
42
Hello FreeNAS Community

I am prepping to start testing a FreeNAS 12 deployment at my film production studio. Here is our current setup.

One Single CPU Ivy Bridge Xeon Server
128GB RAM
48 Bay 4U Chassis for Volume1
45 Bay 4U Chassis for Volume 2-4
FreeNAS 11.3-U4.1

4 Pools

Volume1: 7x6 8TB RAIDZ2 - Used for long-term archival storage; also backed up to LTO. Shared via SMB.
Volume2: 14x2 4TB mirrors - Used for current editorial production storage. Shared via AFP.
Volume3: 1x2 1TB SSD mirror - Used for Final Cut Pro X (FCPx) libraries. Shared via NFS.
Volume4: 4x3 6TB RAIDZ1 - Additional pool for editorial production. Shared via AFP.

Goal: Move all shares to SMB and remove the need for a separate NFS SSD pool for FCPx libraries.
Secondary goal: Set up a second physical server, grow the SSD pool, and add physical server redundancy in case of equipment failure.

Currently, Volumes 2 and 3 provide us plenty of sequential speed; we can fully saturate both 10G connections from the server to clients when the demand is there. But we would like to start keeping the FCPx library file in the same folder structure as its project. Currently, all the media lives on either Volume 2 or 4, while the library lives on Volume 3.

Final Cut Pro X library technical background: An FCPx library is stored as a file package container holding many small database-type files. In smaller projects, those files stay under 1 MB; in larger projects, such as feature films and documentaries, some of them reach about 50 MB. While you are working in an FCPx library, the application is constantly writing to these files: every mouse click and action triggers multiple write requests, because the program is continuously saving your progress. IOPS therefore need to be high to support multiple editors.
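For context, here is a rough way to bucket the files inside a library bundle by size (the bundle path is just an example):
Code:
# Count files in an FCPx library bundle by size bucket
find /mnt/Volume3/MyFilm.fcpbundle -type f -size -1M | wc -l             # under 1 MB
find /mnt/Volume3/MyFilm.fcpbundle -type f -size +1M -size -50M | wc -l  # 1 MB to 50 MB
find /mnt/Volume3/MyFilm.fcpbundle -type f -size +50M | wc -l            # over 50 MB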

I picked up a 2U 16-bay 2.5" Supermicro server to do some testing. I was originally thinking about using the new special VDEV class to force small blocks onto an SSD 3-way mirror, which would let me keep FCPx libraries in the same file directories while effectively storing them on SSDs. But after looking at the FCPx file directory layout, it seems that the larger files will not get pushed to the SSD.

Questions:
Is there a way with the new special vdev to force files with certain extensions onto it? I don't think there is.
Will I see any speedup using the special vdev class? Perhaps some, but it doesn't look like it will provide the kind of caching I want. Can I push files under 50 MB there? Most of the other media files are over 1 GB, and I am OK with a larger SSD pool.
Would increasing the ARC be just as effective?
Would adding a L2ARC be effective?

Is there any other way I can use SSDs to speed up FCPx library reading and writing without having to store the libraries on a separate pool?

Follow-up ARC question: If FCPx is writing to a file that is loaded in the ARC, does the write go to the file in the ARC, or is it held in RAM for ~5 s and flushed with a standard transaction group? I am guessing it goes through the transaction group, so it doesn't seem like the ARC will speed up FCPx library writes.
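For reference, the 5 s figure is the default transaction group interval, visible as a sysctl on FreeBSD:
Code:
# ZFS txg sync interval in seconds; defaults to 5
sysctl vfs.zfs.txg.timeout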

Maybe I am also overthinking this and a 14x2 mirror will provide enough IOPS, but I can't sacrifice sequential throughput either.

Thanks for your time
 

jasonsansone

Explorer
Joined
Jul 18, 2019
Messages
79
"Is there a way with the new special Vdev to force files with certain extensions to pool? I don't think there is.
Will I see any speed up using Special Vdev Class? Perhaps some, but it doesn't look like it will provide me the type of cacheing I want. Can I push files under 50MB there? Most of the other media files are over 1G. I am ok with a larger SSD pool."

The special vdev stores all metadata by default. You can also send small blocks to it up to a configurable cutoff; the cutoff is based on block size, not file size. You could set the pool's main dataset to a 1MB record size but give a nested dataset a record size equal to your small-block cutoff. I did this on my setup to force the iocage dataset onto the special vdev while all the other media (large H.265 files) stays on spinning rust.
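A minimal sketch of that layout, with placeholder pool and dataset names:
Code:
# All metadata, plus data blocks at or below the cutoff, go to the special vdev
zfs set special_small_blocks=128K tank

# Main dataset: large 1M records stay on the spinning disks
zfs set recordsize=1M tank/media

# Nested dataset: its records never exceed the cutoff, so all of its
# data blocks land on the special vdev SSDs
zfs create -o recordsize=128K tank/media/iocage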

Would increasing the ARC be just as effective?
The prevailing advice is to always max out RAM prior to adding L2ARC.
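If you want to gauge how the current ARC is doing before buying anything, the hit/miss counters are a reasonable first check (the summary tool is arc_summary on 12; some 11.x builds ship it as arc_summary.py):
Code:
# Raw ARC hit/miss counters
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# Human-readable summary, including ARC size and hit ratio
arc_summary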

Would adding a L2ARC be effective?
By default, L2ARC does not cache streaming (prefetched) reads. For film production, I don't think L2ARC will benefit you very much, but I am not the expert. I am sure someone wiser will come along to either chime in or correct me.

Is there any other way I can use SSD to speed up FCPx library reading and writing, without having to store them on separate pools?
Examine the size of your records and consider the approach detailed above. If almost all of your other media uses a record size above 128 KB, you could force the FCPx dataset to a record size equal to your special_small_blocks cutoff. That will let you have one pool with data intentionally segregated between flash and magnetic storage.
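To examine the record sizes actually in use, zdb can print a block-size histogram for an existing pool (read-only, but it can run for a long time on a big pool; on FreeNAS you have to point it at the cache file):
Code:
# Block statistics, including a block-size histogram, for the pool
zdb -U /data/zfs/zpool.cache -Lbbbs Volume2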
 

Storage_King

Cadet
Joined
Sep 30, 2020
Messages
1
riggieri said:
[original post quoted in full; snipped]

Hey riggieri,

There is an easier and more reliable way to deploy TrueNAS Core for your editorial workflows. PM me to discuss.
 

riggieri

Dabbler
Joined
Aug 24, 2018
Messages
42
Hey @jasonsansone, thanks for the reply.

I understand that none of the caching abilities of TrueNAS are going to help speed up a sequential workflow, i.e., streaming large video files.

Again, we are looking to gain two things.

The ability to keep Final Cut Pro X libraries, which are the project files, in the same folder structure and shares as the media files, while still keeping them on SSD.

Having a separate dataset for libraries with a small record size seems to me to achieve the same thing as keeping them on a separate SSD pool. I don't see the benefit; it actually seems like a negative, since you then have a special vdev attached to a large pool, which could increase the chance of failure.

All of our media is kept on datasets with a 1M record size. The project files, i.e. the FCPx libraries, are kept on a dataset with a 128K record size.
 

jasonsansone

Explorer
Joined
Jul 18, 2019
Messages
79
I understand that none of the caching abilities of TrueNAS are going to help speed up a sequential workflow, i.e., streaming large video files.

It "can" under certain conditions. You would need to enable the tunable:
Code:
vfs.zfs.l2arc_noprefetch

However, streaming workloads are usually not benefited by L2ARC, thus the default settings.
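On FreeBSD it's a plain sysctl; in the FreeNAS UI you would add it under System > Tunables. A minimal example:
Code:
# 1 (default) keeps prefetched/streaming reads out of L2ARC; 0 lets them in
sysctl vfs.zfs.l2arc_noprefetch=0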

Having a separate dataset for libraries with a small record size seems to me to achieve the same thing as keeping them on a separate SSD pool. I don't see the benefit; it actually seems like a negative, since you then have a special vdev attached to a large pool, which could increase the chance of failure.

The benefit would be a single SMB share with easier file management and organization. A nested dataset inside the main dataset looks like any other folder when the main dataset is shared out over SMB. You could keep project files and FCPx libraries on the same SMB share alongside the media, albeit in a deliberately designed folder hierarchy.
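Illustratively, with the placeholder names from above, the client only ever sees one share:
Code:
tank/projects            -> shared over SMB as "projects" (recordsize=1M, on HDD)
tank/projects/libraries  -> appears as the folder "libraries" inside that same
                            share (recordsize=128K, data on the special vdev SSDs)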
 