Opinions on this setup? 40GbE SAN for 4k film scans


friolator

Explorer
Joined
Jun 9, 2016
Messages
80
We're rebuilding our FreeNAS system from the ground up in the next couple of weeks. Here's the current hardware configuration I'm thinking of:

* ASRock X99 Extreme 4 motherboard
* Xeon E5 2603 v3 6-core
* 32GB ECC RAM
* 64GB SSD System Drive
* Chelsio T580-SO-CR 40GbE NIC
* LSI 9305 24i
* LSI 9201-16e
* GT 710 (PCIe x1) GPU

The system will be in an enclosure with 20 hot-swap bays, which we already have. These will be connected to the LSI 9305. The second LSI card is for our existing FreeNAS system, which will be stripped and turned into an external enclosure with hot-swap bays. This will give us a total of 36 x 2TB drives. Future expansion would probably happen by replacing the drives with larger ones, rather than adding another enclosure.

I chose the ASRock motherboard because it's a solid performer. We use it in several of our high-end PCs in-house for color correction, restoration work, etc. It's been a very reliable board, and you can find them for under $200.

The big question at this point is the 40GbE setup. Looking through old posts here, the Chelsio seems to be the card to get, but if anyone has other suggestions, I'm open. We have an IBM 16-port 40GbE switch. There will be at least 4 machines connected to this network: two of them will require pretty good performance, and one probably won't need as much speed, so some of the lesser machines might be connected at 10G instead of 40. The Chelsio card is dual-port, so we could take advantage of that if we ever need more bandwidth, though I doubt we will. I suspect drive speed is going to be the bottleneck here, anyway.

The files we're working with are primarily 4k and lower-resolution image sequences - hundreds of thousands of files, one for each frame of film. I believe the 36 drives will give us the performance we need to read and write files from two different machines simultaneously, but that's not a common scenario. More often than not, only a single machine will be accessing the files at any time.

We'll do everything through iSCSI.

Any thoughts?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
GPU will be wasted.

I don't know the exact models of the LSI cards you are using, but it looks like they are pure HBAs, which is good. It also looks like the internal drives might be cabled individually, which will help with the aggregate disk bandwidth. Is that also the plan for the drives in the external enclosure?

And I would at least quadruple the RAM. But this is a WAG on my part.

This isn't a purely sequential workload, since you have lots of smaller files rather than one huge stream of data, so you should plan to use mirror vdevs (instead of RAIDZ). Also, since I'm guessing the impact of data corruption would be huge to your business, I'd suggest setting sync=enabled on the datasets and getting the best SLOG device you can afford.
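
Roughly speaking, random IOPS scale with the number of vdevs, not the number of disks. A quick sketch, assuming ~75 random IOPS per 7200 RPM drive and one disk's worth of random IOPS per vdev (both assumptions, not measurements):

```python
# Why mirrors beat RAIDZ for this kind of workload: each vdev delivers
# roughly one disk's worth of random IOPS (assumed rule of thumb).
DISKS = 36
IOPS_PER_DISK = 75              # assumed for a 7200 RPM drive

mirror_vdevs = DISKS // 2       # 18 two-way mirrors
raidz2_vdevs = DISKS // 6       # 6 six-disk RAIDZ2 vdevs

print(f"mirrors: {mirror_vdevs} vdevs ~ {mirror_vdevs * IOPS_PER_DISK} random IOPS")
print(f"RAIDZ2 : {raidz2_vdevs} vdevs ~ {raidz2_vdevs * IOPS_PER_DISK} random IOPS")
```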
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
Someone correct me if I'm wrong, but I believe X99 only supports unregistered ECC, even if used with a Xeon.

That hardware isn't going to push anywhere near 40GbE speeds with hundreds of thousands of files.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
ASRock X99 Extreme 4 motherboard
Xeon E5 2603 v3 6-core

You've selected a non-server motherboard and a TERRIBLE CPU for NAS workloads.

SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3 - http://www.newegg.com/Product/Product.aspx?Item=N82E16813182927

Intel Xeon E5-1620 v3 - http://www.newegg.com/Product/Produ...46588&cm_re=e5-1620_v3-_-19-117-512-_-Product

These are much better options for building a NAS. The E5-1650 v3 is a beast, but it comes in around $600; if your budget allows for it, go that route. Also, you don't need a video card, as FreeNAS runs headless for the most part.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
The GPU is something we already have kicking around, and I'd rather have it just in case we need quick access to the console, so that's why it's there.

The internal system drive would be hung off one of the motherboard's onboard SATA ports. The drives in the external enclosure would be on the second LSI card, which has external ports. Cables go from that to the back of the box we're currently using, and from there to the SATA backplane in the box. I'll need to source those pass-through connectors on the back of the enclosure, but I'm sure I can find them.

On the SLOG device, I forgot to list that, but I was thinking another SSD, or maybe mirrored SSDs. Do you have a specific suggestion for "best"? I would think a smallish SSD would be fine - 64GB or so, right? Is there a formula for calculating the optimum size? Since SSDs are still pretty expensive, I don't want to get something unnecessarily large.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
For a serious NAS (which your system clearly is), I would seriously rethink your motherboard selection. That board is a consumer-grade motherboard, with all the consumer-grade "features" that come along with it. If you want to stick with ASRock, they sell a number of different server motherboards. Or you could go with Supermicro, which is the most common here, I imagine.

Also, why are you speccing in a GPU? FreeNAS does not need, nor will it take advantage of, a dedicated GPU. Again, buying an appropriate motherboard will solve this for you.

You'll also need more memory to do what you're doing. The rule of thumb is 1GB of RAM for every 1TB of drive space. In your case, you'll have around 72TB of raw space, so I'd recommend at least 64GB of memory. Since you specifically require "pretty good performance", I'd add more than that, and for 40Gbps you'll probably want more still.
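
A quick sanity check of that rule of thumb against your build (it's a guideline, not a hard floor):

```python
# RAM sizing per the "1 GB RAM per 1 TB raw storage" rule of thumb.
drives, tb_each = 36, 2
raw_tb = drives * tb_each            # 72 TB raw
print(f"{raw_tb} TB raw -> ~{raw_tb} GB RAM; round up to the next DIMM config")
```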

Given your workload, I would also recommend a SLOG. A good SLOG won't be huge (I don't think a SLOG ever needs more than a few gigabytes), but you want it to be super fast. For data security, I would recommend a second one, mirrored.

Just to be clear: with iSCSI, only one client can access a given target at a time. In other words, an iSCSI target should be treated like a physical disk: attached to only one computer at a time. If you are trying to share data among your users, then you should use NFS.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Someone correct me if I'm wrong, but I believe X99 only supports unregistered ECC, even if used with a Xeon.

From the spec sheet for the mobo: "- Supports DDR4 ECC, un-buffered memory/RDIMM with Intel® Xeon® processors E5 series in the LGA 2011-3 Socket"


That hardware isn't going to push anywhere near 40GbE speeds with hundreds of thousands of files.

Doesn't need to - I realize this is overkill, but the switch cost about the same as a good high-end 10G unit, and the NICs aren't much more expensive, so why not?

For a single stream of 4k DPX sequences, we'd need to move about 1200MB/s to play in real time. But that's not strictly necessary because our film scanner runs at about half that rate, and film restoration is frame by frame. So really only the color correction system needs serious bandwidth. I'm pretty certain we'll get the performance we need with all those drives in there, though.
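
For what it's worth, here's the math behind that 1200MB/s figure (assuming full-aperture 4096x3112 DPX frames, 10-bit RGB packed to 4 bytes per pixel, 24fps playback - assumptions based on our typical scans):

```python
# Real-time 4K DPX bandwidth, assuming 4096x3112 frames, 10-bit RGB
# packed 3 samples per 32-bit word (4 bytes/pixel), 24 fps playback.
width, height, bytes_per_px, fps = 4096, 3112, 4, 24

frame_mb = width * height * bytes_per_px / 1e6   # ~51 MB per frame
print(f"{frame_mb:.0f} MB/frame x {fps} fps = {frame_mb * fps:.0f} MB/s")
```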
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
40Gbps x 10 seconds would be the SLOG size you need, so something over 64GB should be good. The best would be a ZeusRAM, but if that's too expensive, then look at the Intel P3700 NVMe.
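
In rough numbers (the 10 seconds is a rule of thumb, not a ZFS constant):

```python
# SLOG sizing: enough capacity to absorb ~10 s of writes at line rate.
link_gbps, seconds = 40, 10
slog_gb = link_gbps / 8 * seconds    # 40 Gb/s = 5 GB/s -> 50 GB
print(f"{link_gbps} GbE x {seconds} s ~= {slog_gb:.0f} GB minimum SLOG")
```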
And as others have mentioned, get a proper server motherboard.

What is your expected IOPS requirement? And what is your planned vdev configuration?
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Thanks for the suggestions. I like SuperMicro motherboards; we have some in machines that were built in the late '90s and are still in daily use. My initial stab at this was based on the motherboard we've been using in-house for our workstations - the ASRock boards have been super-reliable so far, with several custom machines built on them in daily use for a couple of years. But point taken. I'll get the SuperMicro board and upgrade the CPU.

On the SLOG, would it make sense to use a PCIe-based SSD, or just a pair of 2.5" SATA SSDs? I would imagine the PCIe-based models will be significantly faster, but of course, they're fairly expensive.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
On the SLOG, would it make sense to use a PCIe-based SSD, or just a pair of 2.5" SATA SSDs? I would imagine the PCIe-based models will be significantly faster, but of course, they're fairly expensive.
PCIe wins hands down. Low latency is the key (and write endurance, of course). A pair doesn't increase performance.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
ASRock X99 Extreme 4 motherboard
Xeon E5 2603 v3 6-core

You've selected a non-server motherboard and a TERRIBLE CPU for NAS workloads.

SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3 - http://www.newegg.com/Product/Product.aspx?Item=N82E16813182927

Intel Xeon E5-1620 v3 - http://www.newegg.com/Product/Produ...46588&cm_re=e5-1620_v3-_-19-117-512-_-Product

These are much better options for building a NAS. The E5-1650 v3 is a beast, but it comes in around $600; if your budget allows for it, go that route. Also, you don't need a video card, as FreeNAS runs headless for the most part.
You probably want this build with 4x16GB registered DIMMs. No GPU, mirrored vdevs, some kind of SLOG, and lots of testing, because 40GbE is uncharted waters.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
What is your expected IOPS requirement? And what is your planned vdev configuration?

I'm coming from the video world, where we think in terms of MB/s, so I'm not really sure how to calculate this. We're a small office - just a couple of us in here day to day - but because of the massive filesets we work with, we need to centralize storage; sneakernet is killing us. There won't be much concurrent use, as we tend to work on one project at a time, but there will be situations where one person is doing color correction while scans are running on another machine, both to/from the FreeNAS setup. Or something might be reading files off one iSCSI volume and writing the rendered files to another. Each attached workstation has different requirements: some might need quite a bit of bandwidth, while others work at a relative trickle. But usually only one system will be pushing it at any time, and the others will be lower bandwidth.

As for the vdev configuration, I'm still working that out. The film scanner might need 350MB/s, while the color correction system might need more like 800-1200MB/s, worst case. In my experience with direct-attached RAIDs, you can't really get to that level of speed without 8 drives, and more is better. But I'm not sure at what point it makes sense to stop adding drives to one vdev with ZFS. I suppose we'll need some testing once it's set up to know for sure.
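
As a back-of-the-envelope check - purely assumptions until we can test - if I figure ~150MB/s of streaming per drive and 18 two-way mirrors:

```python
# Rough pool throughput, assuming ~150 MB/s streaming per 7200 RPM
# drive and 18 two-way mirror vdevs (assumptions, pending testing).
vdevs, mb_per_disk = 18, 150

reads = vdevs * 2 * mb_per_disk   # reads can be serviced by both mirror sides
writes = vdevs * mb_per_disk      # writes must hit every member of each mirror
print(f"reads ~ {reads} MB/s, writes ~ {writes} MB/s vs. the 1200 MB/s target")
```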
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
PCIe wins hands down. Low latency is the key (and write endurance, of course). A pair doesn't increase performance.
Thanks. I was thinking a pair just for mirroring purposes, but I think it's going to cost too much.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Someone correct me if I'm wrong, but I believe X99 only supports unregistered ECC, even if used with a Xeon.
Nope, my X99 board runs fine with a Xeon E5-1650v3 and Registered DIMMs.

In any case, a proper server board would make much more sense, as outlined above.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Just ordered the following components:

SuperMicro X10SRL-F Motherboard
64GB (4x16) ECC Registered DDR4 RAM
Xeon E5 1620 v3
OCZ RD400A PCIe SSD
SuperMicro CPU Cooler
Chelsio T580-SO-CR 40GbE NIC
LSI 9305 24i SAS HBA

I know the OCZ isn't an Intel SSD, but we've had very good luck with their 2.5" SSDs so far (they're in 6 of our workstations, plus two laptops) with no failures, and this is within the budget. This will be the SLOG drive.

Should have everything here next week. In the meantime, we're going to start backing up everything on the FreeNAS system so the drives can be moved into the new box.

Thanks for the feedback. I'll post back here with any updates.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Just ordered the following components:

SuperMicro X10SRL-F Motherboard
64GB (4x16) ECC Registered DDR4 RAM
Xeon E5 1620 v3
OCZ RD400A PCIe SSD
SuperMicro CPU Cooler
Chelsio T580-SO-CR 40GbE NIC
LSI 9305 24i SAS HBA

I know the OCZ isn't an Intel SSD, but we've had very good luck with their 2.5" SSDs so far (they're in 6 of our workstations, plus two laptops) with no failures, and this is within the budget. This will be the SLOG drive.

Should have everything here next week. In the meantime, we're going to start backing up everything on the FreeNAS system so the drives can be moved into the new box.

Thanks for the feedback. I'll post back here with any updates.
OCZ SSDs any good? They're usually crap. Also, why a 24-lane HBA?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
OCZ SSDs any good? They're usually crap.
They're Toshiba now. Controllers are a mix of Silicon Motion and Marvell, so very run-of-the-mill.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
OCZ isn't a good choice for this application.
I don't see power-loss protection on the OCZ, which makes it completely unsuitable.
Assuming you got the 128GB variant, you will burn it out in no time, since it's only rated for 74TBW. (For comparison, the 400GB Intel P3700 is rated for 7.3PBW.) Every byte written to your array will also be written to the SLOG.
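
To put numbers on "in no time" - a rough sketch, assuming the scanner's ~350MB/s output for an 8-hour day, with every byte also hitting the SLOG (the duty cycle is an assumption):

```python
# SLOG endurance burn rate under sync writes, assuming ~350 MB/s of
# scanner output for 8 hours/day (assumed duty cycle from this thread).
write_mb_s, hours = 350, 8
tb_per_day = write_mb_s * 3600 * hours / 1e6     # ~10 TB written per day

for name, tbw in [("OCZ 128GB (74 TBW)", 74), ("Intel P3700 400GB (7300 TBW)", 7300)]:
    print(f"{name}: ~{tbw / tb_per_day:.0f} days of life")
```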
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
we've had very good luck with their 2.5" SSDs so far

One of the things that I see a lot here is that people try to take their experience from previous IT endeavors and apply it to building a FreeNAS machine. It's really, really important that you check this experience at the door.

ZFS is a crazy beast, which does things far outside the "normal" realm of what most server or client applications do. This is why it's really important to double-check all assumptions, or else you're going to have a bad time. The wear that ZFS puts on a SLOG device is incredible, especially if you are moving a lot of data. ZFS's need for memory boggles the mind, especially in comparison to what a Windows file server would need. The thermal load that ZFS generates when you scrub the drives can easily overwhelm simple cooling systems.

None of this is because ZFS is inefficient; far from it. It is because ZFS does everything it can to protect your data from corruption. Right now, there is no alternative on the market with the same level of redundancy and features as ZFS. But that does come with a performance cost, relative to what other solutions can provide.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Maybe I missed it, but you initially said you were planning on iSCSI shares. So unless you're planning on forcing sync writes for the zvol, a SLOG isn't going to do anything.
 