A couple of FreeNAS Questions


wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
Hi All

New here. I have experience with a lot of systems and hardware, but I want to give FreeNAS a go for the first time.

I am looking for a storage solution that I plan to use on a local network to serve RED video content to a video editor over 10GbE. Here is the system I obtained:

Supermicro 2U System
1x X9DRi-LN4F Motherboard
- Integrated Dual Intel 1000BASE-T Ports
- Integrated Software Supported RAID
- Integrated IPMI 2.0 Management
2x Intel Xeon E5-2650 Octo Core 2.0GHz
128GB DDR3 (8X 16GB DIMM)
12x 3.5" Drive Caddies
1x Adaptec ASR-71605 Controller
1x X540-T2 Dual 10GBase-T
No Hard Drives
Innodisk 64GB SATADOM
Dual Power Supply
Rail Kit

Purchased 6x 6TB Western Digital Gold drives and plan on purchasing 7 more (6 for the system and 1 as a spare for immediate failure replacement).

I have settled on FreeNAS and have a couple of questions:

1) I am leaning towards RAIDZ2 with two vdevs of 6 drives each. Is this correct? Or is it better in some way to get all 12 drives first and do something else?

2) If I set up one vdev now and add the second later, the vpool extends over it, as I understand? So it will still appear as a single drive to share?

3) If I need more storage than this in the future and build a second system, can they link together over the network to extend the vpool to the new vdevs? Or, I am guessing, this is not recommended over the network and there is probably another way to do it?

4) I do plan on building another system with even more space for another solution (24x 6TB), so in this situation would 4x 6-drive vdevs in RAIDZ2 still be the recommended solution?

Thanks all in advance! I understand some of this may be answered in the docs, but at the same time I want to establish a basic understanding of the capabilities from you wonderful, knowledgeable folk before I invest too much time in documentation, as I am on a time budget and mostly solely responsible for this.

Wolfsta
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
First, what are your requirements? You mention "RED video content" - are you referring to the RED camera? As in, up to 8K resolution? That's going to *dramatically* influence your design.

As for your hardware, trash that controller and get one of the recommended LSI cards.

As for your questions...
1 - yes, two 6-drive RAIDZ2 vdevs in one pool would be a great configuration... assuming you aren't really trying to serve 8K content.
2 - yes, you can start with one vdev then add another (ideally identical) vdev later; see the sketch after this list.
3 - no, you can't create one pool spanned across multiple physical systems. There are some file systems that support this, but that's a whole different level of complexity. If you wanted to add drives, you could either add a JBOD chassis and connect it to a controller (you'll need an HBA with external drive support), or simply start with a bigger box. Supermicro makes a very nice 4U chassis - some are 24-bay and some, like mine, are 36-bay. You might just want to start with one of these if you intend to grow.
4 - yes, again, depending on your requirements.
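
For reference, 1 and 2 boil down to something like this at the command line. This is just a rough sketch: da0-da11 are placeholder device names, and on FreeNAS you'd do all of this through the GUI, which uses GPT labels rather than raw devices:

    # create the pool with the first 6-drive RAIDZ2 vdev
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5
    # later, stripe in a second identical vdev
    zpool add tank raidz2 da6 da7 da8 da9 da10 da11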
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I am leaning towards RAIDZ2 with two vdevs of 6 drives each. Is this correct? Or is it better in some way to get all 12 drives first and do something else?
I think striped RAIDZ2 makes sense for this application. You could probably also get away with a single 12-drive RAIDZ3 vdev, but I'm not sure I'd be comfortable with that wide a vdev.

If I set up one vdev now and add the second later, the vpool extends over it, as I understand? So it will still appear as a single drive to share?
Yes.

Be warned that your data will not be balanced across the vdevs, however. There is no automatic process that spreads existing data out. Over time, as you write and delete data, your vdevs will become more balanced.
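
If you're curious, you can watch how full each vdev is with zpool list -v (the pool name here is just an example):

    # per-vdev SIZE/ALLOC/FREE breakdown
    zpool list -v tank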

NB: there is no such thing as a vpool. Also, "zpool" is a command, not a thing. The correct term is just "pool".

If I need more storage than this in the future and build a second system, can they link together over the network to extend the vpool to the new vdevs? Or, I am guessing, this is not recommended over the network and there is probably another way to do it?
What I believe you are talking about is something like GlusterFS.

ZFS does not support pooling remote vdevs together (and even if you could hack it together, it wouldn't be a recommended configuration).

You can always use a SAS expander to add additional drive shelves to your server. However, be mindful of memory requirements as you add additional drives.

4) I do plan on building another system with even more space for another solution (24x 6TB), so in this situation would 4x 6-drive vdevs in RAIDZ2 still be the recommended solution?
Depending on what your storage/performance needs are, you might be better off doing striped mirrors.

If you gave me 24x drives, I would probably configure them in 3x vdevs, each with 8x drives in RAIDZ2.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
1x Adaptec ASR-71605 Controller
Bad plan. Assuming your Supermicro chassis comes with a SAS expander backplane, get an LSI/Broadcom/Avago 2008-/2308-/3008-based SAS HBA instead.
I am leaning towards RAIDZ2 with two vdevs of 6 drives each. Is this correct?
Sounds like a good plan.
If I set up one vdev now and add the second later, the vpool extends over it, as I understand? So it will still appear as a single drive to share?
Correct.
If I need more storage than this in the future and build a second system, can they link together over the network to extend the vpool to the new vdevs? Or, I am guessing, this is not recommended over the network and there is probably another way to do it?
Not over the network, but SAS can be cabled externally. A SAS drive shelf could do this.
I do plan on building another system with even more space for another solution (24x 6TB), so in this situation would 4x 6-drive vdevs in RAIDZ2 still be the recommended solution?
Four six-disk vdevs would work; so would three eight-disk vdevs. The latter would give you better space utilization, slightly less redundancy, and fewer IOPS.

I'd also say that, if all you're doing is serving files, a dual-socket board is probably overkill--though if you're buying used gear, it may be what you see most of.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
no, you can't create one pool spanned across multiple physical systems.
Just to be clear: You can't span a pool across multiple computers. You can, however, easily span a pool across multiple chassis.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Just to be clear: You can't span a pool across multiple computers. You can, however, easily span a pool across multiple chassis.
Yes, quite correct.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You can't span a pool across multiple computers.
Now that I think about it, I guess you could if you wanted to be ugly. Create a pool, create a zvol on it, export that via iSCSI. Repeat on as many computers as you like. Then, on yet another computer, create a pool across all those iSCSI extents. Sounds like something that would belong in the late and lamented "How to Fail" thread, but it should be possible.
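
A rough sketch of the ugliness, with invented names (filed firmly under "how to fail"):

    # on each storage box: carve a zvol out of the local pool...
    zfs create -V 8T tank/extent0
    # ...and share it as an iSCSI extent (via the FreeNAS GUI)
    # on the aggregation box, after logging in to every target:
    zpool create uglypool da20 da21 da22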
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Now that I think about it, I guess you could if you wanted to be ugly. Create a pool, create a zvol on it, export that via iSCSI. Repeat on as many computers as you like. Then, on yet another computer, create a pool across all those iSCSI extents. Sounds like something that would belong in the late and lamented "How to Fail" thread, but it should be possible.
I was thinking about something similar... use DFS or GFS on your "aggregation" system and make it all look like one massive share. But this definitely falls in the "just because you can, doesn't mean you should" category!
 

wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
All good info, thank you! I am going to check with https://unixsurplus.com/ to see if I can change a couple of things, like the RAID card and possibly the enclosure for a larger one, if I can be bothered swapping everything.

Yes, it is RED camera content, 6K specifically, not 8K. But I don't want to give you the impression that it's going to be rendering all day every day or anything. It's more storage of that data than anything. Maybe one day a week, a block of an hour's worth of footage will be worked on and rendered out to multiple sizes, 4K being the max. And a lot of small clips, 30 seconds to 7 minutes, 40 minutes max. As long as it can handle that, I'm happy. And only one video station working on it, not multiple.

We currently have a beefy i7 computer with 12Gb SAS HBAs and storage enclosures that are hooked directly to the computer, formatted with NTFS. This new setup will be for two reasons: 1) to back up some of the content on that system at a different location (as the tape backups have fallen behind), and 2) to be able to render at the new location.

Sidenote: I was thinking about putting a small dedicated Linux box at location A to act as an rsync server, pulling from the Windows box over the network and remotely backing up the content over the web to location B on the FreeNAS server. If this is possible, or if someone has a better solution, let me know.
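
Something like this is what I'm picturing (hostnames and paths are made up):

    # one-way push from the Linux box at location A to the FreeNAS box at location B
    rsync -av --partial /mnt/windows-share/red-footage/ backup@freenas-b:/mnt/tank/red-footage/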

Thanks again! Loving the community's quick responses! You guys rock!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
As long as you understand this is just an off-line datastore, you're golden. Trying to do on-line editing of 6K video, especially from multiple systems, is a totally different problem.

You can do your backups via ZFS replication. Ideally, build an identical box to your primary (or at least one with similar storage... you could go to bigger drives and fewer vdevs if you want) and use that as the target. Just make sure you have the bandwidth (and transfer, if your connection has data caps) to replicate whatever you want to back up every night. Some people aren't doing massive changes all the time and can replicate across a relatively slow link, after doing an initial local replication to get the bulk of the data. You're going to be dumping huge files in all the time, so you need sufficient bandwidth to deal with that and not fall behind.
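
Under the hood, replication is just snapshot-and-send. FreeNAS can schedule all of this for you, but manually it looks roughly like this (pool/dataset/host names are placeholders):

    # one-time full send to seed the backup box
    zfs snapshot tank/video@seed
    zfs send tank/video@seed | ssh backup-box zfs recv backup/video
    # then nightly incrementals, which only move the changes
    zfs snapshot tank/video@night1
    zfs send -i @seed tank/video@night1 | ssh backup-box zfs recv backup/video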
 

wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
I definitely want to be able to edit video files from it locally over 10GbE a couple of times a week. Like I said, though, 85% of the time it will sit idle and 15% of the time it may be used for editing. What are the issues with editing from it?

The backups I was talking about, I meant that I want to back up TO the FreeNAS box. So the other Windows box has some content on it that I want to be able to back up to this box remotely. It may just be better to do it manually each time as a one-way copy rather than a sync, as it's just one card of content every couple of weeks, ~400GB each.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
If you want to edit over 10gig you are going to want to use striped mirrors. Anything else will probably be too slow. The throughput of RAIDZx is great, but the IOPS of a vdev are equal to those of a single disk. So the more vdevs, the faster things are going to be.
 

wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
If you want to edit over 10gig you are going to want to use striped mirrors. Anything else will probably be too slow. The throughput of RAIDZx is great, but the IOPS of a vdev are equal to those of a single disk. So the more vdevs, the faster things are going to be.

What is "IOPS"?

We edit the same video over 10GbE at our other location, and on the Windows box it's just a RAID6 array. Seems to work OK. Is FreeNAS going to be somehow slower than that?
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
For a single system, assuming nothing else is happening on the pool, I'd give RAIDZ2 a try first. I think you'll be OK.

There are two measurements of storage throughput. Bandwidth is the easy one - how fast you can read or write data. But IOPS is the hard one - IOPS = I/O Operations Per Second. Your average 7200RPM SATA drive is good for 100 IOPS, and a single vdev has the IOPS equal to the slowest single drive in the vdev. So, if you have one RAIDZ2 vdev of 7200RPM SATA drives, you've got 100 IOPS. Let's say that was 6 drives... put those in striped mirrors, and you have 3 vdevs - or 300 IOPS. That's why you see striped mirrors recommended for busy pools... especially things like VM stores. This is also why SSDs are so amazing... yes, they have a higher bandwidth and that's helpful in some situations. But, the reason SSDs make systems feel so "snappy" is primarily due to their IOPS. The worst SSD ever made will handle orders of magnitude more IOPS than the most advanced 15K SAS spinning rust drive.
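
Back-of-the-envelope numbers for the layouts discussed in this thread, assuming ~100 IOPS per 7200RPM drive:

    12 drives as 2x 6-disk RAIDZ2  -> 2 vdevs x ~100 = ~200 IOPS
    12 drives as 6x 2-disk mirrors -> 6 vdevs x ~100 = ~600 IOPS
    24 drives as 3x 8-disk RAIDZ2  -> 3 vdevs x ~100 = ~300 IOPS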

If you have a single device doing the work, theoretically you should be looking at long, fairly linear reads and writes. When you start adding multiple systems pulling data simultaneously, where the drives are having to seek continuously, things slow down in a hurry. I would try to keep the free space on the pool high - the less free space you have, the more fragmentation will accumulate. Remember, ZFS is a copy-on-write file system... it behaves quite a bit differently than your RAID6 array running NTFS. ZFS demands more hardware and more capacity to provide the same level of performance, because it's providing unparalleled protection to your data.
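
You can keep an eye on both with zpool list, which reports CAP (percent full) and FRAG (free-space fragmentation) columns (pool name assumed):

    zpool list tank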

So... give it a try, see what happens.
 

wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
So I got 6x 6TB 7200RPM WD Gold drives and am getting a further 6 of them. My original plan was two vdevs of 6 drives each in RAIDZ2 (the RAID6 equivalent, right?). Is this still what you would recommend?

One further question: earlier someone mentioned I should swap the Adaptec ASR-71605 controller for an "LSI/Broadcom/Avago 2008-/2308-/3008-based SAS HBA" instead. How much difference will this make? Is it necessary?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
OK thanks, I already spoke to UnixSurplus and they are going to swap it out for me for one you recommended. :) They have been great. I assume most of you here have heard of them?
 

wolfsta

Cadet
Joined
Jan 30, 2018
Messages
8
Oh, this is the card they are getting me, just to be double sure before they ship:
AOC-S3008L-L8e HBA, 12Gb/s. It's Supermicro's card with the LSI 3008 chip.

That's the one I need, right?
 