Opinions on this setup? 40GbE SAN for 4k film scans

Status
Not open for further replies.

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Right now, there is no alternative on the market with the same level of redundancy and features as ZFS.
I wouldn't say "no alternative on the market"; there are several alternatives with similar and/or additional features and performance. They aren't free to download and implement, but they're out there for purchase. Look at CASL, WAFL, and Intelliflash to name a few off the top of my head.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I wouldn't say "no alternative on the market"; there are several alternatives with similar and/or additional features and performance. They aren't free to download and implement, but they're out there for purchase. Look at CASL, WAFL, and Intelliflash to name a few off the top of my head.

Very true. By "market" I suppose I didn't really mean "market" :D When I said it, I was thinking of open-source options, even if that wasn't actually what I said.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
64GB (4x16) ECC Registered DDR4 RAM

*coughs* slot stuffers! *cough*

Seriously, the price differential between 16's and 32's is very modest. Two 32's would have left you more future options.

I know the OCZ isn't an Intel SSD, but we've had very good luck with their 2.5" SSDs (they're in 6 of our workstations, plus two laptops) with no failures so far, and this is within the budget. This will be the SLOG drive.

Why not just skip the SLOG? If you can't do it right, then don't halfarse it... don't do it at all and save the money plus potentially gain speed.
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
You could return those parts and get a TrueNAS. You're wanting up to a saturated 10G transfer, and that will be a tall order. I think it would definitely be worth it to give iX a call, tell them your needs and what you've already planned out for your FreeNAS setup, and see what they think. They're the people that make FreeNAS. Plus you could finance it and get exactly what you need. I have a feeling you're going to be disappointed in your network speeds.

You never did tell us what your drive setup was going to be. Is it going to be all striped mirror vdevs? Also, how much data do you currently have to put on the NAS? For iSCSI to have optimal performance you want to stay under 30% utilization, and not go over 50%. I think I was saying this earlier but got a call and ended up forgetting about this thread until now.

Also, are you just changing the hardware and using your current HDD array? I know you said something in the first post but I've forgotten already.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Why not just skip the SLOG? If you can't do it right, then don't halfarse it... don't do it at all and save the money plus potentially gain speed.

It was suggested by two or three folks early in the thread for both performance and data integrity reasons. Let me step back a bit and detail a little more about how I envision this working, from a user-facing perspective:

Our FreeNAS system will have a combination of iSCSI volumes, probably in the 6TB range each. I'm figuring maybe 5-6 of these eventually, but to start we'll likely just do a couple. In addition, we'll have some smaller shares accessible via SMB. These will live on a single volume as separate folders with varying permissions, and they'll be used mostly for small files: one will hold software installers, OS ISOs, etc.; one will be for long-term parking of company data files (things like Quickbooks backups); and one will be a couple of terabytes, for holding collections of smaller project files before they get backed up to LTO tape. All of these shares will be used very infrequently, so I expect they won't have an appreciable impact on the performance of the iSCSI volumes, which is where the performance matters most.
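Just to put that in concrete ZFS terms, here's roughly how I'm picturing it being carved up (pool and dataset names are placeholders I made up, nothing final):

Code:
# one dataset per SMB share so each folder can get its own permissions
zfs create tank/smb
zfs create tank/smb/installers
zfs create tank/smb/company-data
zfs create tank/smb/projects

# one ~6TB zvol per iSCSI extent, added as jobs come and go
zfs create -V 6T tank/job01
zfs create -V 6T tank/job02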

Things that live in some of the shared folders will be there for very long periods of time, though we do make occasional LTO backups of these. Things that live in the iSCSI volumes are there for a few weeks to a few months at most, before they're backed up and cleared for another job.

If the performance of the initial setup is good enough, the plan is to remove the internal RAIDs in some of the workstations, and repurpose those drives into the FreeNAS system. That's a total, right now, of about 24 additional drives (some 2TB, some 3TB). If the performance of the iSCSI volumes isn't quite good enough for use as a centralized storage system, it will still be significantly faster than anything we have now for moving files around, so we'd use it for parking jobs that are dragging out longer than expected, when we need space on local RAIDs.

Given all that, does the SLOG make sense or not?


Seriously, the price differential between 16's and 32's is very modest. Two 32's would have left you more future options.

There are 8 slots on the motherboard, so there's still room to expand if needed.

You never did tell us what your drive setup was going to be. Is it going to be all striped mirror vdevs? Also, how much data do you currently have to put on the NAS? For iSCSI to have optimal performance you want to stay under 30% utilization, and not go over 50%. I think I was saying this earlier but got a call and ended up forgetting about this thread until now.

Well, given the scenarios above, what would you suggest? Performance would trump data integrity for the iSCSI shares. Data integrity is more important with the SMB shares, where things like the project files will live.

The files being stored on the iSCSI volumes are typically reproducible in the event of a disaster (they would already live on an external hard drive, or could be rescanned from the original film if necessary). The project files on the SMB shares are separate from the data they link to (the stuff on the iSCSI volumes), but they're arguably more important than the actual image sequences, since they contain all the work we've done.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Also, why a 24-lane HBA?

The enclosure we're using has 20 ports. The card has 6 SAS connectors. We'd only use 5 of them, for the 20 internal SATA drives.

The second LSI card has external ports. Cables from this will go to a second enclosure containing an additional 16 drive bays.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It was suggested by two folks early in the thread for both performance and data integrity reasons.
Given all that, does the SLOG make sense or not?

Let this guy answer your question:

The files being stored on the iSCSI volumes are typically reproducible in the event of a disaster (they would already live on an external hard drive, or could be rescanned from the original film if necessary).

A SLOG does nothing for "data integrity" except in the extremely unusual event of a crash, power loss, or other adverse event. Simplified, it provides a guarantee that the last several seconds of data get written to the pool. Someone would probably "notice" if the fileserver did any of those things during a transfer, yes?

I think you really answered your own question there and didn't quite realize it. ;-)

So anyways, if you want iSCSI and big performance, throw lots of mirror vdevs with large drives and maintain GOBS of free space. As in, if you want 24TB of *used* space, build a 60TB pool out of 20 6TB drives in mirror. Note that one of the endearing things about mirrors is that you do not need to actually start with all the spare space, but when things start feeling slow you can add. Try to avoid the mistake of using smaller drives. They tend to hurt you. So if you want that 24TB of space, start out with 12 6TB drives in mirror (36TB), which is actually above the ~50% max I recommend for iSCSI, but I get the feeling from your description that this might pan out for you. If not, add more drives, call it a day, fun stuff.
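Roughly speaking, the starting layout and the later grow step would look something like this (pool and device names here are just placeholders, not your actual hardware):

Code:
# 12 x 6TB drives as six 2-way mirrors, ~36TB of raw pool space
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 \
    mirror da6 da7 mirror da8 da9 mirror da10 da11

# when it starts filling up or feeling slow, just keep adding mirror pairs
zpool add tank mirror da12 da13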
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
A SLOG does nothing for "data integrity" except in the extremely unusual event of a crash, power loss, or other adverse event. Simplified, it provides a guarantee that the last several seconds of data get written to the pool. Someone would probably "notice" if the fileserver did any of those things during a transfer, yes?

I think you really answered your own question there and didn't quite realize it. ;-)

Well, kind of. I guess what I meant to ask in my last post, where I laid out what we're doing, is whether the SLOG would be necessary for the SMB shares. I'm willing to take some risk with the iSCSI volumes since it's all data that can be reproduced in a worst case scenario. But some of the SMB shares will hold files that haven't yet gone to LTO tape, so I'm more concerned about not losing anything there.

If the SLOG isn't going to buy me anything, I'll just use that OCZ SSD as the system disk.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
SMB doesn't even use sync writes, so an SLOG is useless in that case.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Ok, then. That settles that!
Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Ok, then. That settles that!
I tried to tell you before you placed the order... sorry.
Maybe I missed it, but you initially said you were planning on iSCSI shares. So unless you're planning on forcing sync writes for the zvol, a SLOG isn't going to do anything.
If it hasn't shipped maybe you can cancel that part. :(
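If you do decide you want that behavior later, it's just a property flip on the zvol (dataset name here is made up):

Code:
# check the current setting; "standard" means only client-requested syncs
zfs get sync tank/job01

# force every write to be a sync write -- this is what would actually
# put a SLOG to work on an iSCSI zvol
zfs set sync=always tank/job01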
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
I tried to tell you before you placed the order... sorry. If it hasn't shipped maybe you can cancel that part. :(

Eh, it's on its way. It wasn't that expensive, so we'll just use it as the system drive.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
So anyways, if you want iSCSI and big performance, throw lots of mirror vdevs with large drives and maintain GOBS of free space. As in, if you want 24TB of *used* space, build a 60TB pool out of 20 6TB drives in mirror.

My experience with big drives is that they aren't as fast. Does ZFS do something that gets around that? I would love to throw big disks in this thing and just have a massive amount of space. But we've had bad luck with anything bigger than 3TB disks in traditional RAID arrays, usually getting worse performance than we'd get with 2 or 3TB drives by the same manufacturer with almost identical specs (speed, cache, etc), on the same controller cards.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
My experience with big drives is that they aren't as fast. Does ZFS do something that gets around that? I would love to throw big disks in this thing and just have a massive amount of space. But we've had bad luck with anything bigger than 3TB disks in traditional RAID arrays, usually getting worse performance than we'd get with 2 or 3TB drives by the same manufacturer with almost identical specs (speed, cache, etc), on the same controller cards.
Cyberjock put together a doc to explain the basics: https://forums.freenas.org/index.ph...ning-vdev-zpool-zil-and-l2arc-for-noobs.7775/

A primer on ZFS: https://forums.freenas.org/index.php?threads/zfs-primer.38927/

There's lots more, and not just on this forum, but that should get you started.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My experience with big drives is that they aren't as fast. Does ZFS do something that gets around that? I would love to throw big disks in this thing and just have a massive amount of space. But we've had bad luck with anything bigger than 3TB disks in traditional RAID arrays, usually getting worse performance than we'd get with 2 or 3TB drives by the same manufacturer with almost identical specs (speed, cache, etc), on the same controller cards.

You're thinking seek speeds, yeah? If you look at sequential I/O, though, I bet you don't really see that. Bet your drives all do 150MB/sec++ average. My 6TB Seagate desktop ST6000DX000's do 225MB/sec on the outer tracks (but only around 100 on the inner).

So here's the thing. The reason you want to maintain gobs of free space is that ZFS builds "transaction groups" to write to disk. If it can allocate a long contiguous stretch of disk, ZFS will take all the writes going on and tend to lay them down sequentially. That means the random writes, the ones you're used to only getting ~100-200 IOPS out of a HDD for, will happen an order of magnitude faster. Conversely, on a full-ish pool, even sequential writes don't actually land sequentially on the physical disk, so that slows things down.

Seek speeds do matter with ZFS, but free space can be a more powerful accelerator.

[Graph: delphix-small.png — write throughput vs. pool occupancy]


So take a look at this. At the right of the graph, with a pool at 90% occupancy, where nearly every write results in a seek, the disk is only able to write at ~400KB/sec. At 4K sector size, that's almost precisely 100 IOPS.

However, if you instead create a pool that is ten times the size of what you need, that device will write at 6000KB/sec, or about 1500 IOPS. A single disk, doing "random" writes, at 1500 IOPS. Not because it's seeking any faster, but because ZFS is laying the writes down sequentially. Keep in mind that this is for write activity. Read activity depends on lots of factors, including what percentage of the traffic might be served from ARC/L2ARC, and whether or not the data is ACTUALLY stored sequentially on disk.

ZFS performance is a complicated topic.
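If you want to keep an eye on where you sit on that curve, the pool will tell you (pool name is just an example):

Code:
# how full the pool is and how fragmented the free space is
zpool list -o name,size,allocated,free,fragmentation,capacity tank

# watch per-vdev throughput while a transfer is running
zpool iostat -v tank 5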
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Stuff begins arriving for the build today, so I took the old enclosure we're using down off the rack to strip it and get it ready for the new components. I realized when I opened the case that it currently has an Adaptec 52445 card in it - 5 internal and 1 external SAS port. We haven't purchased the LSI card yet, so I figured I'd check here to see what people think of this one. We used to have this card inside one of our film scanners, but it was kind of overkill so we swapped it for something with less capacity. It was a solid performer as a RAID card. I believe drives can be set up in pass-thru mode on this one, but it's been a while, so I'd need to experiment.

Searching on the forum shows that there are folks using it, but not much beyond that as far as experience with it goes. It'd be great to use this, both for the cost savings, and because it's a better port configuration than the LSI card I was considering, which would have one unused internal port. With this one, I could get a different card for our outboard JBOD box for future expansion, because we'd already have one of the external ports we need.

If the Adaptec is no good, an alternative might be to use a reverse SAS->SATA breakout cable to connect some of the enclosure's backplane to the motherboard SATA connections, then get an LSI card with fewer internal ports for the rest. But since we already have the Adaptec, I'd rather just use that if it's good.

Thanks!
 

maglin

Patron
Joined
Jun 20, 2015
Messages
299
Pretty sure you're thinking backwards here. You can go SAS -> SATA but not SATA -> SAS, which is what you're talking about when using SATA motherboard ports to hook up to a mini-SAS port on a backplane. If that worked, almost no one would even bother with an HBA.

I've not seen anything good about Adaptec RAID cards. If you can pass the drives through and view all the SMART data you might be OK. You also need to be aware of what driver FreeNAS uses. If it uses the MFI driver (I might have the wrong driver name), then it's probably not going to work out correctly.
 

friolator

Explorer
Joined
Jun 9, 2016
Messages
80
Pretty sure you're thinking backwards here. You can go SAS -> SATA but not SATA -> SAS, which is what you're talking about when using SATA motherboard ports to hook up to a mini-SAS port on a backplane. If that worked, almost no one would even bother with an HBA.

I'm talking about one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16816133033

While I've never used one myself, it goes from the motherboard (or SATA controller card) SATA ports to an SFF-8087 port on the enclosure's drive backplane. With 9 of the 10 ports on the motherboard going unused in this build, my thought was that I could use two of these cables and hang 8 drives off the motherboard SATA ports. That would mean I'd only need a 12-port card for the remaining drives inside the enclosure, which is cheaper than buying a 24-port card for the 20 internal drives.

I've not seen anything good about Adaptec RAID cards. If you can pass the drives through and view all the SMART data you might be OK. You also need to be aware of what driver FreeNAS uses. If it uses the MFI driver (I might have the wrong driver name), then it's probably not going to work out correctly.

The spec sheet for this as well as the manual lists FreeBSD as a supported OS, but not much more beyond that. I don't know which driver FreeNAS would use, nor how I could tell without first installing it. Anyone?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Searching on the forum shows that there are folks using it,

I seriously doubt that. The last several times I recall people asking about Adaptec RAID, and then insisting on trying it, no good has come of it. Adaptec stopped providing technical documentation and driver support for free-software OSes a long time ago, which means that if it works at all, it probably barely does.
 