Building a 60 bay with FreeNAS

clueluzz

Dabbler
Joined
May 17, 2019
Messages
24
Hi Chris,

Thanks for that suggestion. I was considering something similar to that unit, but I'm opting to be economical first since the reseller quoted me almost 3x what I spent on my current setup. I'm using the following:
HGST 60-bay JBOD for the drives
Ryzen 7 2700X
32GB ECC RAM
120GB SSD for the FreeNAS OS
so far 6x 12TB
Areca ARC-1883LP SAS
Intel X540-T2 dual LAN

Since the objective is to get transfers as close to 1000 MB/s as possible for the video editors (mainly reads and sometimes writes), I'm still trying to figure out how to achieve this from the server side. Each client is using either a Thunderbolt 3 10GbE adapter or a PCIe card (depending on their computer). I read https://calomel.org/zfs_raid_speed_capacity.html and am still balancing performance vs. redundancy. I know the more clients access the server, the more performance degrades, but I would like at least one client hitting peak speed.

Is it faster to have:
2x vdevs of 6x 12TB, or
1x vdev of 10x 12TB?

I haven't set up additional SSDs for cache or ZIL yet.
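Just so I'm picturing it right, I think the two layouts would be built something like this from the shell (only a sketch; I'd actually do it through the GUI, and the pool name and da* device names are placeholders):

Code:
# Option A: two RAIDZ2 vdevs of six drives each; the pool stripes across both vdevs
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11

# Option B: a single ten-drive RAIDZ2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9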
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Is it faster to have:
2x vdevs of 6x 12TB, or
1x vdev of 10x 12TB?
I had a server in my home with 12 drives in a RAIDZ3 (single vdev) and found it to be slower than the 1Gb network it was attached to. I was so disappointed with the speed of the system that I built a replacement server, also with 12 disks, but with two RAIDZ2 vdevs (six drives each) in a single pool. This gave me more than double the performance. I moved all my data into the two-vdev pool and have continued with that pool design for more than three years now. I started with 12x 2TB drives and migrated one vdev to 4TB drives, expanding the capacity, then migrated the second vdev to 4TB drives for another capacity expansion. It allowed me to grow the capacity in stages rather than needing to replace all 12 drives at once to expand storage.
so far 6x 12TB
If you only have six drives now, you could establish the pool on those six drives, then expand the pool with a second vdev of six more drives later. That is what I did with one of my home systems back around 2011 or 12, I forget exactly when... It had five 1TB drives in RAIDz1 when I built it and I expanded the pool with a second vdev later.
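Adding that second vdev later is a one-line operation from the shell (the GUI's volume manager does the same thing under the hood). Just a sketch with placeholder pool and device names:

Code:
# add a second six-disk RAIDZ2 vdev to the existing pool;
# writes are then striped across both vdevs
zpool add tank raidz2 da6 da7 da8 da9 da10 da11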
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. The sweet spot on price per TB looks to be 8TB drives, so you might build with smaller drives now and buy the larger drives later, once the price has come down. Unless you need all that capacity at once.
If I were looking to build a system for performance, I would get a large number of smaller drives to get a high vdev count and the corresponding high performance. The drives can be replaced with larger-capacity drives, on a per-vdev basis, to grow capacity over time.
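For reference, the per-vdev replacement goes roughly like this (placeholder names; you replace and resilver one disk at a time, and the vdev grows once the last disk in it has been replaced):

Code:
# let the pool grow automatically once a whole vdev is on bigger disks
zpool set autoexpand=on tank
# replace one disk at a time, waiting for each resilver to finish
zpool replace tank da0 da12
zpool status tank    # watch the resilver progress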
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I know the more clients access the server, the more performance degrades, but I would like at least one client hitting peak speed.
More vdevs is the answer to this question. I have a perfect example from work, which I think I already mentioned: two servers that both have a full set of 60 drives. On one of those servers it is four vdevs of 15 drives, and on the other it is ten vdevs of six drives. The system with ten vdevs can easily saturate a 10Gb network, and my testing would tend to indicate it could probably hit 20Gb, where the system with only four vdevs is doing well if it manages 250 to 450 MB/s. To put that in perspective, 1Gb networking maxes out at around 112 MB/s in practice, so the four-vdev system is faster than 1Gb networking, but around half the speed that 10Gb networking should deliver, and the reason is the vdev count.
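If you want to see this on your own hardware, watch per-vdev activity while you run a big transfer; the difference between layouts shows up immediately (pool name is a placeholder):

Code:
# read/write bandwidth broken out per vdev, refreshed every 5 seconds
zpool iostat -v tank 5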
 

clueluzz

Dabbler
Joined
May 17, 2019
Messages
24
Chris, you rock! Thanks for all that info. I'll do a single pool with multiple vdevs, and populate it six drives at a time as necessary.

Now just have to wrap my head around datasets and how that translates to shares.
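From what I've read so far, I think the shape of it is: one dataset per share, set properties on the dataset, then point an SMB/AFP share at the dataset's mountpoint under Sharing in the GUI. Something like this, if I have it right (names are placeholders):

Code:
# one dataset per share, so each gets its own properties, quotas and snapshots
zfs create tank/video
zfs set compression=lz4 tank/video
# then in the GUI: Sharing -> SMB, pointing at /mnt/tank/video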
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
First, please use this post not as discouragement but as a roadmap of issues to work on. Then post here with your results and let us know of your success. I am smart but started out with little experience and could only push things so far. I am very curious about your results and what you find works. Now, here are my experiences.

My Background and equipment.
I used to be a professional film/TV editor and have used various professional shared-storage systems (Avid ISIS and the like), so I know how editing workstations should operate. I currently have at home a FreeNAS system with 50 drives: five zpools of 10 drives each in RAIDZ2, with 10 slots open. 10x 4TB (two zpools), 10x 8TB (two zpools), and 10x 12TB (one zpool), all full of QuickTime videos in Apple ProRes 422 HQ, HD and 4K. I have a 10Gb switch with 2x 10Gb ports and 6x 1Gb ports. I have four client systems hooked up to the server, but usually only have two at the very most going at once. The clients are 3x macOS and 1x Windows 7. The Macs still use AFP shares (lots of color tags in there, so I just leave it that way for now) and the Windows box connects via SMB.

To test real-world system speeds, I use AJA System Test Lite and throw 64GB files at the server (more below on why dd from the shell doesn't reflect real-world usage); I get about 320MB/s writes and 300MB/s reads. Overall the system works great, but there is a big HOWEVER... latency and consistency. Meaning video files don't always immediately play back when you hit play (or stop when you hit stop). FreeNAS works, but it's not near the perfection an ISIS can dish out to 10 clients.

Oh, for my server hardware: Intel Xeon E3-1230 V2 @ 3.30GHz (4 cores / 8 threads), 32GB RAM, Supermicro X9SCM motherboard (I think), Myricom 10Gb SFP+ card, SSD boot drive. No ZIL/SLOG devices. The 10Gb switch is an 8-port Netgear. Clients: two Mac Pros (an Intel X520 10Gb card in one, 1Gb in the other) and a 2018 Mac Mini with 10GbE built in.


Experience.
1) I run RAIDZ2 because safety is much more important to me than pure speed, as this system is a video archive. Also note, 10 drives gives you maximum speed in RAIDZ2. With that said, I am not entirely convinced drive speed is what causes the latency and consistency issues.

2) I am pretty convinced the latency/consistency issues have to do with the client OS, how mounts and share points are handled on the client, and client network tuning (or the lack thereof). Remember, ISIS and similar storage systems have finely tuned load balancing, customized direct iSCSI mounts, and finely tuned network cards and drivers. I also theorize those systems are tuned and customized to deal with "streaming" big video files (gigabytes), where FreeNAS is not. You can turn on jumbo frames all you like, but editing off a FreeNAS... well, it works, but not without hiccups, until you solve it. Which brings me to my next point.

3) I never could get clarity on whether a ZIL/SLOG makes a big difference when working with video, meaning very large sequential files. Just from reading, I came to the conclusion that it does not, and that it apparently helps much more with lots of small synchronous writes. However, I encourage you to test; you might find out differently.

4) To test drive speed, I have run dd tests from the shell and I do get much better raw read and write speeds (scrubs report 1-2Gb/s), but in real-world AJA disk tests I did not see that same speed. Oh, make sure you use file sizes larger than your RAM, otherwise your results will be much faster than real-world use. It has to do with the way FreeNAS first writes to memory and then flushes to disk: 32GB of RAM is great, until you try to write a 48GB file. If you use the AJA disk test, click the little graph button at the bottom and you can see the inconsistency of the speed; you'll get great speeds, then it drops to near zero for a moment, and then comes back up again. *** Also note, these issues appear across all my computers, so it's not the different network cards or OSes. It could be that macOS or the AFP share is a little worse, as the Windows machine seems to have less latency, but it is still there. *** Note, the Windows machine, when it had a 10Gb card in it, also saw similar speeds of around 350MB/s. Maybe slightly faster than the macOS machines, but not blazingly different.
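For reference, this is the kind of dd test I mean, sized larger than RAM (paths are just examples; also note /dev/zero compresses to nothing, so turn compression off on the test dataset or the write number will be meaningless):

Code:
# sequential write: 64GiB of zeros in 1MiB blocks, bigger than my 32GB of RAM
dd if=/dev/zero of=/mnt/tank/test/bigfile bs=1m count=65536
# sequential read of the same file
dd if=/mnt/tank/test/bigfile of=/dev/null bs=1m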

Advice
1) Stripe vdevs. Make a six-drive RAIDZ2 vdev, then expand the pool with another (so the pool stripes across the vdevs), or even a third, and the pool should scream. You lose capacity this way (or need more drives), but you gain a ton of speed while keeping it safe. Play around with other configurations if you can't throw that many drives at it.
2) Experiment with ZIL and SLOG. I just never had the time or skill set.
3) As much RAM as you can afford. You can never have too much RAM in a ZFS rig. At least for writes.
4) For Mac clients, turn off the automatic icon previews in the Finder. That makes a world of difference in speed when opening a video/picture folder.

Last:
1) I hope you have the time and skill set to experiment and report back here. Or that someone in the community with a ton more video experience comes in and not only shows me what I am doing wrong, but gives me real-world hacks to implement.

2) Please do not take my tone as dire. The FreeNAS system works great. I have had it for 5 years now and haven't lost a single bit of data. It also serves video just well enough. I see from your original post that you are trying to make it a video server for multiple workstations in a real, live environment. If you are talking about a few Avids (or Adobe Premiere stations) hitting it all at once and maybe a few AE stations, well, it might struggle. If I were doing full-time work as an editor at a professional studio, I would not want to work on a system that performs like my current FreeNAS box. A few days, sure, but not full time. But perhaps I am the limiting factor. Use my experience to make it work better.
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
Chris, you rock! Thanks for all that info. I'll do a single pool with multiple vdevs, and populate it six drives at a time as necessary.

Now just have to wrap my head around datasets and how that translates to shares.

0) A vdev is a bunch of drives.

1) A pool can be one vdev. However, "expand" the pool with another vdev and now the pool is striped across vdevs. That's why you do not want any one vdev to be without parity: if one of the two vdevs (in this example) goes down in a pool made up of multiple vdevs, you lose the whole pool. (I think the new GUI does not call anything a vdev anymore, but only refers to zpools. Same thing. A zpool can be made up of other zpools.)

2) Datasets. All I know is that I use them. Apparently you can do lots of things with them: set quotas and... well, other things I do not use. I think replication and snapshots are fully realized through datasets.
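The handful of dataset commands I do actually use, just as examples (the dataset names are made up):

Code:
zfs set quota=2T tank/archive          # cap how much space this dataset can use
zfs snapshot tank/archive@2019-06-01   # point-in-time snapshot, nearly free to create
zfs list -t snapshot                   # list existing snapshots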

*Per Chris's good comment below, I changed "Zpool" to "pool" and crossed out my mistake about the GUI not referencing vdevs. The GUI does use the term vdev.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Same thing.
No. Your use of terminology is not right. Please review these guides:

Slideshow explaining VDev, zpool, ZIL and L2ARC
https://www.ixsystems.com/community...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

The performance problems you have with your system would have been solved by putting multiple vdevs in a single pool instead of all those separate pools. If you read my earlier posts in this thread, I explained that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Or that someone in the community with a ton more video experience comes in and not only shows me what I am doing wrong, but gives me real-world hacks to implement.
My experience is with data and ZFS. I understand how they work. Video files are just data.
 

Doug183

Dabbler
Joined
Sep 18, 2012
Messages
47
Video files are just data.
Yes and no. For example, 150GB of 1MB files versus one 150GB 4K video file: it's an issue of sequential reads/writes versus random reads/writes. Plus, for playback, the video file needs enough throughput, without dips below the required rate. Lastly, the software on the client needs low latency for stopping and starting the video. FreeNAS just doesn't automatically handle 150GB of tiny files the same way it handles that one big 150GB file.

That is why I encourage the OP to test away. Although getting those vdevs striped will certainly help, it's just not the whole story.
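Rough back-of-the-envelope numbers, going from memory of Apple's published ProRes target rates, so treat them as approximate:

Code:
ProRes 422 HQ, 1080p30:  ~220 Mb/s  ~  27 MB/s per stream
ProRes 422 HQ, UHD 30:   ~880 Mb/s  ~ 110 MB/s per stream
A 1000 MB/s target is therefore roughly 9 concurrent UHD streams
(or ~35 HD streams), before any scrubbing, seeking or protocol overhead.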
 
Last edited:

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
At $dayjob we shovel around both large blobs and thousands of tiny files. Throughput is important to us for the blobs, and raw IOPS are important for the tiny files. We separate them onto two different volumes.

We use the following layout for the blobs:

Code:
    NAME                                            STATE     READ WRITE CKSUM
    vol1                                            ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/8e0c0136-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/8f482319-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/d5ff65a7-1aad-11e8-8572-001b216ed0dc  ONLINE       0     0     0
        gptid/9111ebf8-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/87ccfea6-705b-11e8-b8b7-001b216ed0dc  ONLINE       0     0     0
        gptid/92d92247-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/93904f5e-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/94522a91-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/950463d1-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/077693b6-1421-11e8-8bc6-001b216ed0dc  ONLINE       0     0     0
      raidz2-1                                      ONLINE       0     0     0
        gptid/fec47b75-145c-11e8-8bc6-001b216ed0dc  ONLINE       0     0     0
        gptid/6ab9ac8b-1bf7-11e8-8678-001b216ed0dc  ONLINE       0     0     0
        gptid/adef81a1-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/aed9e248-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/afcc06e1-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/b0a80244-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/b18cb20f-0ade-11e8-b66d-001b216ed0dc  ONLINE       0     0     0
        gptid/e176006c-1a62-11e8-8572-001b216ed0dc  ONLINE       0     0     0
        gptid/2725bbec-0f2f-11e8-8bc6-001b216ed0dc  ONLINE       0     0     0
        gptid/80c3c08c-0f67-11e8-8bc6-001b216ed0dc  ONLINE       0     0     0
    logs
      gpt/slog0                                     ONLINE       0     0     0
    cache
      gpt/cache0                                    ONLINE       0     0     0



Depending on when the box was built, those are 3TB NAS-class drives (usually WD Reds). The slog and cache are an Intel NVMe card, usually around 200GB, with PLP.
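For anyone curious, log and cache devices like those get attached roughly like this (using the GPT labels from the status output above):

Code:
zpool add vol1 log gpt/slog0      # separate intent log (SLOG) for sync writes
zpool add vol1 cache gpt/cache0   # L2ARC read cache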

We only need about 3Gb/s of read and write performance, but the box has a 10G NIC and can easily sustain close to 7Gb/s when it's empty, and it stays north of 4Gb/s when the volume gets above 60% full.

The small files go on an all SSD raidz2 vdev. Nothing really special about that configuration. With 9 SSDs in a raidz2, we have plenty of iops for our environment.

My only suggestion is to have a bench-test environment and spend some time testing your usage scenarios. Do it with an empty filesystem, but also with a filesystem that's half to three-quarters full. Also make sure you test with your actual apps, not just load-testing tools like iperf, dd, etc.
 

clueluzz

Dabbler
Joined
May 17, 2019
Messages
24
Hi guys,

I've been running my newly built FreeNAS server for about a month. Overall, performance is solid. I'm able to have five editors comfortably working without any performance degradation.

Recently I had to copy a project folder from an external drive onto the FreeNAS. This 5GB folder (small and large files combined) took 1 min 40 sec. During the transfer, I noticed the estimated time jump from under 1 minute to 10 minutes; it took forever to copy one 60MB portion of the folder.

I then also had to copy this project folder to another server running OpenMediaVault with RAID 1 (an i7-3770K with 16GB RAM). It took only 30 seconds. The exact same folder. What gives?

Is there a way to get FreeNAS to copy as fast as OMV? Do I have to set up a cache, or are there other settings I need to change on the FreeNAS side?

Any suggestions appreciated
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Recently I had to copy a project folder from an external drive onto the FreeNAS. This 5GB folder (small and large files combined) took 1 min 40 sec. During the transfer, I noticed the estimated time jump from under 1 minute to 10 minutes; it took forever to copy one 60MB portion of the folder.
When you were copying to FreeNAS, what were you copying from?
I then also had to copy this project folder to another server running OpenMediaVault with RAID 1 (an i7-3770K with 16GB RAM). It took only 30 seconds. The exact same folder. What gives?
If this second copy was from the same source, the system you were copying from may have already had the data cached, so it could access it more quickly.
Any suggestions appreciated
That "Areca ARC-1883LP SAS" disk controller is likely part of the problem but having a single vdev is also a limiting factor on your transfer speed. More drives would be my recommendation, other than the controller card.
 

clueluzz

Dabbler
Joined
May 17, 2019
Messages
24
When you were copying to FreeNAS, what were you copying from?

If this second copy was from the same source, the system you were copying from may have already had the data cached, so it could access it more quickly.

That "Areca ARC-1883LP SAS" disk controller is likely part of the problem but having a single vdev is also a limiting factor on your transfer speed. More drives would be my recommendation, other than the controller card.

Copying from an external SSD into both the FreeNAS and the OMV box.

My Areca card has since died and I am now using an LSI 9300-8i HBA.
 

clueluzz

Dabbler
Joined
May 17, 2019
Messages
24
I think I figured out what's wrong. My FreeNAS pool is using the default 128K record size, and I think for my needs it would be better to be on 4K. Do I need to redo the entire pool? Is there a way to create the pool with 4K and case-insensitive as the defaults? The problem is that the GUI recommends no less than 32K.
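From what I can tell, recordsize is a per-dataset property that only affects files written after you change it, and case sensitivity can only be chosen when a dataset is created, so I'm guessing this is handled per dataset rather than by redoing the whole pool. Something like this (dataset names made up):

Code:
# recordsize is per dataset and only applies to newly written files
zfs set recordsize=32K tank/projects
# case sensitivity is fixed at dataset creation time
zfs create -o casesensitivity=insensitive tank/fcpx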

BTW, I neglected to mention that when I'm copying big files, I can saturate the 10G LAN. I just copied 300GB worth of media files in under 20 minutes. Right after that, I copied my FCPX libraries (basically folders with a gazillion little files), and it maxed out at 1.1MB/s.

EDIT: I figured out that using NFS is the fastest way to handle this (transferring files and working in FCPX). Now I just have to figure out restrictions for other users.
 
Last edited: