Matt Zell
Cadet
Joined: Jun 19, 2013
Messages: 4
Hi!
I am setting up a new shared storage solution at the business I work at. We are a full-service post production facility for television and film as well as VFX, so the server's main job is to serve and write media files. The majority of these files are image sequences (DPX and TIFF) with some .mov files mixed in (mostly ProRes and DNxHD). Because the clients are editing/coloring/VFX applications, they demand large bursts of data as well as throughput consistent enough to support realtime playback of the image sequences and .mov files.
Our facility is fully wired with 10G optical cable, at least to the machines that need the bandwidth. I will focus on my personal work machine, which I have been testing with, and the server itself. The majority of our facility runs OS X, with a few more Windows machines creeping their way in (due to the non-expandability of the new Mac Pro... separate topic).
Because of this I would like to highly optimize either AFP or NFS.
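For reference, an NFS mount from the OS X side looks roughly like the sketch below; the hostname, export path, and the rsize/wsize values are placeholders I have been experimenting with, not a claim about what is optimal:

```
# Hypothetical NFS mount from the OS X client; "nas" and /mnt/tank are
# placeholders for the real hostname and export, and the rsize/wsize
# values are just a starting point to test with.
sudo mkdir -p /Volumes/tank
sudo mount -t nfs -o tcp,rsize=65536,wsize=65536 nas:/mnt/tank /Volumes/tank
```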
Our shared storage is a 4U Supermicro chassis with 24 hot-swap bays and the following internals:
- Motherboard: Supermicro MBD-X9DR3-LN4F+-O
- CPUs: 2x Intel Xeon E5-2620 Sandy Bridge-EP (2.0GHz, 2.5GHz Turbo Boost, 15MB L3 cache, LGA 2011, 95W, six-core)
- HBAs: 3x LSI LSI00301 (9207-8i) PCI-Express 3.0 x8 low-profile SAS/SATA host controllers
- HDDs: 20x Seagate Constellation ES 2TB 7200RPM 6Gb/s SAS drives with 64MB cache
- RAM: 48GB for the test period; the production box will have 128GB on order
- Networking: 2x Myricom 10G-PCIE2-8b2-25 (Lanai Z8ES chipset) 2-port NICs
- I also have 2x Seagate 600 Pro 240GB 2.5" SATA III MLC enterprise SSDs on hand in case they are needed
The client machine I have been testing from:
- Mac Pro, 2.93GHz 2x 6-core Xeon
- 96GB RAM
- Myricom 10G-PCIE2-8b2-25 (Lanai Z8ES chipset) 2-port NIC
- OS X 10.8.3
The ZFS pool layouts I have tested (roughly the zpool commands sketched after this list):
- RAID 10-like: striped 2-way mirror vdevs
- RAID 50-like: striped 4-disk raidz vdevs
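In zpool terms the two layouts look roughly like the following (da0 through da19 are placeholders for the 20 Constellation drives, and these are alternatives, not both at once):

```
# "RAID 10"-like: 10 striped 2-way mirrors
zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15 \
  mirror da16 da17 mirror da18 da19

# "RAID 50"-like: 5 striped 4-disk raidz vdevs
zpool create tank \
  raidz da0 da1 da2 da3     raidz da4 da5 da6 da7     raidz da8 da9 da10 da11 \
  raidz da12 da13 da14 da15 raidz da16 da17 da18 da19
```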
Before I go any further, I will mention that I am only using one port of one 10G NIC on the server at the moment while I set things up, and that the traffic by and large goes through a Cisco enterprise fiber switch with a fabric extender. The engineer who set up our switch informed me that it is currently only enabled for a 1500 MTU rather than 9000, so I have set the NICs to 1500 on both sides. I did try directly connecting the two machines and setting a 9000 MTU on both, with minimal gains in my testing.
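For completeness, the jumbo-frame experiment was just setting the MTU by hand on both ends, roughly like this (mxge0 and en2 are placeholders for whatever the Myricom interfaces actually show up as):

```
# FreeBSD/FreeNAS side (the Myricom driver attaches as mxge on FreeBSD)
ifconfig mxge0 mtu 9000

# OS X side (en2 is a placeholder for the Myricom interface)
sudo ifconfig en2 mtu 9000
```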
I have gone through all of the proper tuning on both ends with the Myricom cards as found here: https://www.myricom.com/software/my...w-myri10ge-or-mx-10g-performance.html#freebsd
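The FreeBSD portion of that tuning boils down to larger socket and TCP buffer sysctls. The values below are only illustrative of the kind of thing it suggests, not the guide verbatim (on FreeNAS these get added as Sysctls in the GUI rather than by hand-editing sysctl.conf):

```
# Illustrative sysctl.conf-style tunables for 10GbE on FreeBSD; the exact
# values are examples to experiment with, not a recommendation.
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```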
My benchmark, for better or worse, is Blackmagic Disk Speed Test. A lot of engineers and their friends will instantly scoff when I tell them this is my benchmarking tool, but it is what our facility uses to compare different systems, and for a post facility it is a reasonable proxy for the latency and sustained bandwidth available for realtime playback.
My results are as follows:
- AFP: on average 220-250MB/s write and 600-670MB/s read
- NFS: on average 140-160MB/s write and 150-160MB/s read (using 24 threads; see the note after this list)
- CIFS: on average 150-180MB/s write and 200-210MB/s read
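For anyone wondering where that thread count lives: it is the nfsd server thread setting on the FreeNAS box. On plain FreeBSD it would look something like the rc.conf lines below; FreeNAS exposes the same knob as the number of servers in the NFS service settings:

```
# /etc/rc.conf on a stock FreeBSD box; nfsd's -n flag sets the number of
# server threads (FreeNAS sets this through the GUI instead).
nfs_server_enable="YES"
nfs_server_flags="-t -n 24"
```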
I have done a massive amount of research, read all of the manuals, and peeked at the evil tuning guide (haven't tried any of those tunables just yet), and I am really just coming up short.
I have read some of the moderators on these forums saying that they are able to get 1GB/s writes on their home systems, and I would be enormously happy just to get 450-500MB/s write. Heck, as it stands, the AFP read speed already blows me away on a single 10G connection! I just would like to know what I can do to squeeze out more write performance to this server. We are looking to aggregate those four 10G ports in the near future (though I am trying to get them to tackle rolling out a 9000 MTU first, and both need to be supported on the switch side).
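My rough understanding is that the aggregation on the FreeBSD side would be a lagg interface over the Myricom ports, along these lines (the interface names and addressing are placeholders, and the Cisco side would need a matching LACP port-channel):

```
# Hypothetical LACP aggregation of two Myricom ports (mxge0/mxge1 and the
# address are placeholders); the switch ports must be configured in a
# matching port-channel for this to work.
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp laggport mxge0 laggport mxge1 \
    192.168.10.2 netmask 255.255.255.0
```

From what I have read, a single client stream still rides one physical link with LACP, so this is more about aggregate capacity for multiple workstations than about pushing one Blackmagic test past 10G.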
I do apologize for the novel I have just written, but I have read countless threads where people did not provide enough information and only got to the root of the problem 15 posts later. So I decided to err on the side of too much detail and hope for the best.
Thanks in advance for your time and for any advice you are willing to provide!