Hi Biff,
Are you still in need of some help? I work in Post/VFX with similar image resolutions.
Yes, we'd very much like some help. We've been tweaking and testing some FN builds as time allows, and I'd like to think it's a viable option for the lower end of our usage spectrum. We're also building out another 800TB node next week.
Which leads us to another issue. We have a 36x 8TB SAS3 array connected via twin ports (18 disks per port) through an LSI 9300-8e SAS3 HBA (latest firmware). We get modestly decent performance from a pool of six 6-disk RAIDZ2 vdevs (6x6Z2). Our data is WRITE-heavy (as always) SEQUENTIAL DPX files (50MB per file), and we need to write 30 FPS (1500MB/sec) sustained for HOURS on end -- i.e., about 24-30TB of data PER 8-hour day!
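As a sanity check on those rates, here is the arithmetic spelled out (a sketch; the 1 MB = 10^6 bytes convention and the implied duty cycle are my assumptions, not figures from the post):

```python
# Back-of-envelope check of the stated throughput requirements.
# Assumes 1 MB = 10**6 bytes, the usual convention for frame-size figures.
FRAME_MB_4K = 50   # 4K DPX frame size, MB
FRAME_MB_2K = 12   # 2K DPX frame size, MB
FPS = 30

rate_4k = FRAME_MB_4K * FPS   # MB/s needed for 4K ingest
rate_2k = FRAME_MB_2K * FPS   # MB/s needed for 2K ingest
print(rate_4k, rate_2k)       # 1500 360

# A full 8-hour day at the 4K rate would be ~43.2 TB, so the stated
# 24-30 TB/day implies the scanners are not writing continuously
# for the entire shift (roughly a 55-70% duty cycle).
day_tb = rate_4k * 8 * 3600 / 1e6
print(round(day_tb, 1))       # 43.2
```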
The server / node it's connected to is a SM dual E5-2660 v3 (20 physical cores) with 256GB of DDR4 ECC. Boot media is a pair of 500GB 950 EVO SSDs. That should be overkill for this deployment. The disk shelf is a Quanta 60-disk JBOD (a 4602, I believe) with 18 disks on each of the two 12G controllers.
We're using Mellanox 40GbE CX5 QSFP+ cards that are direct-connected to a couple of film-scanner clients. The "interesting" thing is that we can (barely) achieve these throughput rates via iSCSI, but SMB and NFS are **considerably** slower -- 50% slower, like 800-900MB/sec. In some cases this is OK, especially when we're ingesting 2K DPX files (12MB per file) instead of 4K DPX (50MB per file). The problem is that, over time (after 7,000-15,000 frames), SMB and NFS performance continually degrades until it falls below 24 FPS ("real time" for film) -- but this degradation does NOT happen via iSCSI to the SAME pool over the SAME wire / NICs.
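To pin down where the degradation starts on each protocol, something like the following harness can write DPX-sized dummy frames and report per-batch throughput (a minimal sketch; the mount path, frame count, and batch size are placeholders you'd set for your environment):

```python
import os
import time

def write_frames(target_dir, n_frames=100, frame_mb=50, batch=25):
    """Write sequential DPX-sized dummy frames to target_dir and return
    the MB/s achieved for each batch, so a slowdown that only appears
    after thousands of frames shows up numerically. Point target_dir at
    an SMB, NFS, or iSCSI-backed mount to compare the three paths."""
    payload = b"\x00" * (frame_mb * 10**6)  # one dummy frame's worth of data
    rates = []
    t0 = time.monotonic()
    for i in range(1, n_frames + 1):
        with open(os.path.join(target_dir, f"frame{i:07d}.dpx"), "wb") as f:
            f.write(payload)
        if i % batch == 0:
            t1 = time.monotonic()
            rates.append(batch * frame_mb / (t1 - t0))  # MB/s for this batch
            t0 = t1
    return rates
```

For example, `write_frames("/mnt/scanner_smb", n_frames=20000)` (a hypothetical mount point) would cover the 7,000-15,000-frame window where the slowdown reportedly appears; a downward trend in the returned rates on SMB/NFS but not iSCSI would confirm the pattern described above.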
Can we tune NFS or, if necessary, SMB to NOT degrade? What is it about the file I/O and/or the SMB/CIFS layer that causes this very linear speed degradation over time?
Thanks,
BB