FreeNAS build for video editing is slow!

LUIGIVALTULINI

Dabbler
Joined
Feb 25, 2017
Messages
12
Hello everybody,
I need help, I'm new to FreeNAS.
I built a server for a client; it is used for video editing.
I used a Supermicro X9DRi with 128 GB of RAM, 192 TB across 24 disks, a 512 GB Samsung 970 EVO NVMe PCIe drive, and one Intel X520-T2 10 GbE NIC.
I created 2 pools in RAIDZ1. There are 5 Mac Pros running the latest Mojave, each connected through its own 10 GbE card to an Arista 10 GbE switch.
The server works fine; I get about 700 MB/s read and write, tested with Blackmagic Disk Speed Test.
Sharing is done with Samba.
I have a problem: when everyone starts editing footage in Adobe Premiere, performance drops drastically.
Is this problem due to too little RAM? Is 128 GB not enough?
What could I do to increase performance?
Any useful advice?
I also tried different pool layouts, from RAIDZ1 to mirrors spread across more pools, but the result is always the same.

thanks
Luigi
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If they are all accessing large files simultaneously, you are likely running into two problems. First, your pool may not have enough IO to satisfy that number of simultaneous requests. That comes down to the pool layout. All the disks would need to be in a single pool instead of multiple pools, and you would likely need to use mirror vdevs to get enough vdevs, because vdevs roughly equate to IOPS. If you use 24 disks in mirror vdevs, that gives you 12 vdevs times the number of IOPS that a single disk can handle.
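As an illustration, a 12-vdev mirror layout would look roughly like this from the shell (the pool name "tank" and device names da0-da23 are assumptions; on FreeNAS you would normally build this through the GUI):

  # 12 two-way mirror vdevs: roughly 12x the IOPS of one disk, half the raw capacity
  zpool create tank \
    mirror da0 da1   mirror da2 da3   mirror da4 da5 \
    mirror da6 da7   mirror da8 da9   mirror da10 da11 \
    mirror da12 da13 mirror da14 da15 mirror da16 da17 \
    mirror da18 da19 mirror da20 da21 mirror da22 da23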
With the type of workload you have (large files), you are probably wasting the NVMe drive, but you didn't say how you are using it.
The second IO limit you are probably hitting is the capacity of a single 10Gb NIC. With five clients, all accessing the system from 10Gb NICs, you could easily be fully saturating the single 10Gb NIC in the server. This would likely be a good place to use link aggregation on the server so it can have higher bandwidht, but that takes a switch that can also do link aggregation.
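For example, an LACP lagg on the FreeBSD side would look roughly like this (the interface names ix0/ix1 and the address are assumptions; on FreeNAS this is normally set up under Network > Link Aggregations, and the switch ports have to be configured for LACP as well):

  ifconfig lagg0 create
  ifconfig lagg0 up laggproto lacp laggport ix0 laggport ix1
  ifconfig lagg0 inet 192.168.1.10/24

Keep in mind that LACP hashes each client connection onto one physical link, so it raises aggregate throughput across several clients rather than the speed of any single transfer.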
 

LUIGIVALTULINI

Dabbler
Joined
Feb 25, 2017
Messages
12
Thanks Chris, so would you recommend putting in a 40 GbE card connected to the switch?
I have an Arista DCS-7050T-64 with 4 × 40 GbE ports.
The NVMe is used as the ARC cache (L2ARC).
I tried 12 mirror vdevs, but little or nothing changed.
 

Attachments

  • Screen Shot 2019-12-13 at 7.44.25 PM.png

Rand

Guru
Joined
Dec 30, 2013
Messages
906
It should be easy enough to see if your network is overused by looking at network stats.

Is performance better if only a single user is editing files? Try to find the limit by increasing the user count and observing how the stats change as you add more load.
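For instance, you can watch per-interface throughput live from a FreeNAS shell (the interface name ix0 is an assumption):

  systat -ifstat 1     # live in/out rates for all interfaces
  netstat -w 1 -I ix0  # per-second packet and byte counts for one NIC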
 

LUIGIVALTULINI

Dabbler
Joined
Feb 25, 2017
Messages
12
Thanks Rand,
a single user works much better; as soon as more users are on the network, performance decreases considerably.
I see peaks of 200-300 MB/s at the network card output.
I also swapped the network card from Intel to Chelsio, thinking I had a defective card, but nothing changed.
Maybe it is Samba that is struggling.
I just need to try putting in a 40 GbE card.
I also tried using smaller ProRes 1080 LT files, and that works better. Each single workstation requests roughly 30-40 MB/s, for a maximum of 250-300 MB/s total, so 10 GbE should suffice... but as soon as everyone is editing it starts to hiccup.

Thanks for your advice, I'll try to figure out where the bottleneck is.
Attachment: Screen Shot 2019-12-17 at 4.23.28 PM.png
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Have you observed pool utilization when under load?
gstat or zpool iostat -v?
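For example (the pool name tank is an assumption):

  gstat -p                # per-disk busy %, queue depth, latency (physical disks only)
  zpool iostat -v tank 5  # per-vdev bandwidth and IOPS, refreshed every 5 seconds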

I still would rather expect the array to be the bottleneck than networking.
Your ARC will only help if the file is actually cached (e.g. if multiple users read the same file).
 

LUIGIVALTULINI

Dabbler
Joined
Feb 25, 2017
Messages
12
Thanks Rand,
do you think the bottleneck is the pool?
On a second server I created a pool of 18 mirror vdevs, 36 disks in total.
I connected a Windows machine with a 10-disk RAID (24-drive chassis) directly to the FreeNAS over the 10 GbE port. For the first 40 seconds everything runs at 700 MB/s, then it drops drastically to 150-200 MB/s... I don't understand why. I tried raising the RAM from 64 GB to 128 GB, and the only thing that changed is that the time spent at 700 MB/s went from 40 seconds to 1 minute 20 seconds.
Right after it finished, I deleted the files and copied the same file again; this time I got about 1100 MB/s, saturating the network card practically the whole time.
So the file went into cache or something like that?
If I copy a new file, the same thing described above always happens.
This happens on both of the FreeNAS servers I have.
I don't know where else to turn. Maybe 5 editing workstations are too many for one FreeNAS server?
And what would be the right setup to work without problems? Do I need many more disks?

Right now three workstations are working, and gstat gives me this:
Attachment: Screen Shot 2019-12-17 at 10.44.53 PM.png
 

Rand

Guru
Joined
Dec 30, 2013
Messages
906
Well, the high speed that drops off is write caching in memory: ZFS buffers incoming writes in RAM and flushes them to disk in the background, so the burst lasts until that buffer fills. That's why it took longer to drop off after you added more memory.
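You can check how much RAM ZFS will buffer for writes from the shell (these are read-only checks, not tuning recommendations):

  sysctl vfs.zfs.dirty_data_max          # max bytes of dirty (unflushed) write data
  sysctl vfs.zfs.dirty_data_max_percent  # cap as a percentage of RAM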

Then, there clearly is a drive with issues in your pool (da5, in red). Can you replace it with a spare (or redo the pool without it) to see if it is bad?
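A minimal sketch of the swap, assuming the pool is named tank and the spare shows up as da24:

  zpool status tank            # confirm which vdev da5 belongs to
  smartctl -a /dev/da5         # check SMART data on the suspect drive
  zpool replace tank da5 da24  # resilver onto the spare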
 