Hello FreeNAS Community
I am prepping to start testing a FreeNAS 12 deployment at my film production studio. Here is our current setup.
One Single CPU Ivy Bridge Xeon Server
128GB RAM
48 Bay 4U Chassis for Volume1
45 Bay 4U Chassis for Volume 2-4
FreeNAS 11.3-U4.1
4 Pools
Volume1 7x6 8TB RAIDZ2 - Used for long-term archival storage. Also backed up to LTO. Shared via SMB
Volume2 14x2 4TB Mirror - Used for current editorial production storage. Shared via AFP
Volume3 1x2 1TB SSD Mirror - Used for Final Cut Pro X (FCPx) libraries. Shared via NFS
Volume4 4x3 6TB RAIDZ1 - Additional pool for editorial production. Shared via AFP
Goal: Move all shares to SMB and remove the need for a dedicated NFS SSD pool for FCPx libraries.
Secondary goal: Set up a second physical server, grow the SSD pool, and add physical server redundancy in case of equipment failure.
Currently, Volumes 2 and 3 provide plenty of sequential speed; we can fully saturate both 10G connections from the server to the clients when demand is there. But we would like to start keeping the FCPx library file in the same folder structure as its project. Currently, all the media lives on either Volume 2 or 4, and the libraries live on Volume 3.
Final Cut Pro X library technical background: an FCPx library is stored as a file package container that holds many small database-type files. On smaller projects the files stay under 1 MB; on larger projects, such as feature films and documentaries, some files reach about 50 MB. While you are working in an FCPx library, the application is constantly writing to these files. Every mouse click and action in the program generates multiple write requests, as it continuously saves your progress. Therefore, IOPS needs to be high to support multiple editors.
I picked up a 2U 16-bay 2.5" Supermicro server to do some testing. I was originally thinking about using the new special vdev class to force small blocks onto a 3-way SSD mirror, so I could keep FCPx libraries in the same file directories while effectively keeping them on SSDs. But after looking at the FCPx directory layout, it seems the larger files will not get pushed to the SSDs.
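For reference, this is roughly the layout I was planning to test (pool name, device names, and the size thresholds below are placeholders/my assumptions, not a verified config):

```shell
# Add a 3-way SSD mirror as a special allocation class vdev
# (pool name "tank" and devices da0-da2 are placeholders).
zpool add tank special mirror da0 da1 da2

# Route blocks at or below 128K to the special vdev for this dataset;
# larger media blocks stay on the spinning disks.
zfs set recordsize=1M tank/projects
zfs set special_small_blocks=128K tank/projects

# As I understand it, if special_small_blocks is set equal to the
# recordsize, effectively every block of that dataset lands on the SSDs:
zfs set special_small_blocks=1M tank/projects/libraries
```

The catch is that allocation is decided by block size, not by file name or extension, which is what prompts my questions.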
Questions:
Is there a way with the new special vdev to force files with certain extensions onto it? I don't think there is.
Will I see any speed-up using the special vdev class? Perhaps some, but it doesn't look like it will provide the type of caching I want. Can I push files under 50 MB there? Most of the other media files are over 1 GB, and I am OK with a larger SSD pool.
Would increasing the ARC be just as effective?
Would adding an L2ARC be effective?
Is there any other way I can use SSD to speed up FCPx library reading and writing, without having to store them on separate pools?
Follow-up ARC question: if FCPx is writing to a file that is loaded in ARC, does the write go to the copy in ARC, or is the write held in RAM for ~5 seconds and committed with a standard transaction group? I am guessing it goes to the transaction group, so it does not seem like the ARC is going to speed up FCPx library writes.
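To sanity-check that mental model, I was planning to poke at the relevant FreeBSD sysctls (names as I understand them on FreeNAS 11.3; treat these as assumptions on my part):

```shell
# Transaction group flush interval, in seconds (default 5): dirty writes
# accumulate in RAM and are committed as a txg, independent of the ARC,
# which is a read cache.
sysctl vfs.zfs.txg.timeout

# ARC hit/miss counters, to see whether library reads are being served
# from cache at all:
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```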
Maybe I am overthinking this and a 14x2 mirror will provide enough IOPS, but I also can't sacrifice sequential throughput.
Thanks for your time