Considering FreeNAS... A few questions (performance related)


gregh42

Dabbler
Joined
Dec 28, 2012
Messages
12
Background:
I do a lot of HD video editing and photo manipulation from a MacBook Pro. I am considering a Synology NAS or building a new kit from scratch. I figure either option will be in the same ballpark cost-wise, but I like the idea that I'd have more control and expandability with the FreeNAS solution. I am still in the research phase, so please forgive my ignorance. I've found these forums extremely helpful, and thanks to everyone who contributes!

I also have a mixed environment of devices, including 4 PCs, 2 MacBook Pros, and a host of iPads/iPhones/etc.

I have Cat 5e cabling and a Netgear gigabit switch.

My goal:
I would like to set up a FreeNas as follows:

1) Volume 1: "High Performance" striping intended for work-in-progress video editing (a scratchpad). Desired speed would be 100 MB/sec or higher.
2) Volume 2: "High Capacity / High Reliability / Decent Performance" using ZFS RAID 10 (striped mirrors). The end product (photos, videos, etc.) would be stored and shared here; 75-100 MB/sec would be fine.

The kit I'm looking at:
Intel Core i3-2105
ASUS Intel H77 Mini-ITX board
Corsair 16 GB DDR3-1600
Corsair Modular Power Supply (650 W)
Fractal Design Node 304 case
4× WD Red 2 TB drives
2× WD Red 1 TB drives

The Question:
Can I achieve the above with the right setup using FreeNAS, and is it a "good idea" to try to do so? I figure I'm going to spend roughly $1300 to build this out. I understand I will probably need to use link aggregation to get the performance I desire, and per Apple Support I can purchase a Thunderbolt-to-Ethernet adapter that should help me set this up on the client side. I'm not sure what else I need to do on the FreeNAS side, but from the research I've done so far, I need to avoid Realtek and go with Intel Ethernet adapters.

My plan B is to buy another SSD for my MBP if the high-performance option isn't a good idea or is too difficult to set up with link aggregation. So the priority is really the second volume, but I'd prefer to have it all in one solution if possible.

Thanks much for any guidance.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
A few comments:
- I see no reason you can't do this over plain gigabit without link aggregation. In practice gigabit tops out around 110-115 MB/s, which covers your 100 MB/s target.
- I'd suggest you look at just doing a 6x2TB RAID-Z2. Why? Because speed will be limited by the network anyway. However, if you insist on two "pools", make sure you actually create two pools at setup time, not two vdevs in one pool; ZFS stripes across every vdev in a pool, so you couldn't dedicate one set of disks to scratch (see the sketch below this list).
- It's a good-looking build and pretty close to what I'd do if building today.
- Get a UPS.
- Run a SMART long test on the drives before populating them.
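
A minimal sketch of the pools-vs-vdevs point from the shell, assuming placeholder device names (ada0-ada3 for the 2 TB drives, ada4-ada5 for the 1 TB pair). FreeNAS sets all of this up through the volume manager GUI, but the layout is the same:

Code:
# Two separate pools: a striped scratch pool and a striped-mirror pool.
zpool create scratch ada4 ada5
zpool create tank mirror ada0 ada1 mirror ada2 ada3

# NOT this: adding a second vdev to an existing pool stripes the new
# disks into that same pool, and a vdev cannot be removed later.
# zpool add scratch mirror ada0 ada1

# SMART long test before populating; repeat per disk, then check
# the results once the test finishes.
smartctl -t long /dev/ada0
smartctl -a /dev/ada0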
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
What are you using for video editing? If FCP, then you'll want to look at iSCSI requirements if you intend to store the projects on the server. iSCSI works well if you have enough power behind it. I'm unsure whether your configuration will do it - not questioning, I just don't know. Writes to the server take a lot of CPU, so I am concerned about potentially slow writes.

My system can put out 100 MB/s through AFP shares and ~85 MB/s over iSCSI, but I'm putting a lot more power behind it. I don't know where the lower threshold is that will still achieve what you want. If you don't need iSCSI, then I think your configuration will work.
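
One way to sanity-check the raw network path before blaming the pool or the protocol: iperf ships with FreeNAS, and can be installed on the Mac side via MacPorts or similar (the hostname below is a placeholder):

Code:
# On the FreeNAS box:
iperf -s
# On the Mac:
iperf -c freenas.local -t 30
# ~940 Mbit/s (~112 MB/s) is about the best you'll see on gigabit.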

Link aggregation won't improve the throughput to any individual client, but it will allow multiple clients to each potentially get that throughput.

BTW, the Synology issues will be similar with iSCSI, so that's not a quick fix to the problem.
 

gregh42

Dabbler
Joined
Dec 28, 2012
Messages
12
Thanks for the info. I am using FCP. I hadn't considered iSCSI, so thanks for the tip; I'll have to look into it more. I was just planning on using an AFP share, and I've read that there are some "tricks" required to make a network volume visible to FCP. Given the heavy CPU usage for writes, it sounds like I may want to consider something beyond the i3.
 

gregh42

Dabbler
Joined
Dec 28, 2012
Messages
12
A few comments:
- I see no reason you can't do this over plain gigabit without link aggregation. In practice gigabit tops out around 110-115 MB/s, which covers your 100 MB/s target.
- I'd suggest you look at just doing a 6x2TB RAID-Z2. Why? Because speed will be limited by the network anyway. However, if you insist on two "pools", make sure you actually create two pools at setup time, not two vdevs in one pool; ZFS stripes across every vdev in a pool, so you couldn't dedicate one set of disks to scratch.
- It's a good-looking build and pretty close to what I'd do if building today.
- Get a UPS.
- Run a SMART long test on the drives before populating them.

Thanks, good to hear I'm heading down the right path. I haven't quite figured out the details of the config yet, but that sounds like a pretty good suggestion and may be a better fit for what I'm trying to do. A good UPS should probably be a priority considering all the money I'm going to have tied up in this. :)
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
If you do decide to try iSCSI, know that it takes some configuration and may take some trial-and-error experimentation before you get the performance you want. I guess what I'm saying is: test the system out before you load it up with files. I found I had to start over several times before I got what I wanted.
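
For that kind of shakedown, a rough sequential test from the Mac against the mounted volume is enough to spot gross problems (the path is a placeholder, and note that writing zeros will overstate throughput if you enable ZFS compression):

Code:
# Write ~8 GB, read it back, then clean up.
dd if=/dev/zero of=/Volumes/tank/ddtest bs=1m count=8192
dd if=/Volumes/tank/ddtest of=/dev/null bs=1m
rm /Volumes/tank/ddtest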

Also, I found that I needed striped mirrors ("RAID 10") to get the best performance out of iSCSI. I couldn't get the throughput I wanted in my configuration using any version of RAIDZ.
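
For reference, that layout from the shell with placeholder device names; zpool iostat is a handy way to watch how the load spreads across the mirror pairs while you experiment:

Code:
# Striped mirrors ("RAID 10"): each mirror pair is a vdev, and
# writes stripe across the pairs.
zpool create tank mirror ada0 ada1 mirror ada2 ada3
# Report per-vdev throughput every 5 seconds while testing:
zpool iostat -v tank 5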
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Just an FYI: there's a support ticket somewhere discussing how badly iSCSI and ZFS can perform when put together. Getting amazing performance from a RAID-Z, a RAID-0, or even a single disk today is no indication of how it will perform in 3 months. There is no fix, since this comes down to how ZFS works versus how iSCSI works: ZFS is copy-on-write, so the blocks behind a busy iSCSI extent fragment over time. It's like running a gasoline engine on diesel fuel: it may work right now, but there's no telling how quickly the engine will break down.
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
Just an FYI: there's a support ticket somewhere discussing how badly iSCSI and ZFS can perform when put together. Getting amazing performance from a RAID-Z, a RAID-0, or even a single disk today is no indication of how it will perform in 3 months. There is no fix, since this comes down to how ZFS works versus how iSCSI works: ZFS is copy-on-write, so the blocks behind a busy iSCSI extent fragment over time. It's like running a gasoline engine on diesel fuel: it may work right now, but there's no telling how quickly the engine will break down.

Which ticket? 1531?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's it! I knew someone would know the number. I was thinking 1570, but I couldn't search easily; I'm doing 3 things at once.
 

gregh42

Dabbler
Joined
Dec 28, 2012
Messages
12
If you do decide to try iSCSI, know that it takes some configuration and may take some trial-and-error experimentation before you get the performance you want. I guess what I'm saying is: test the system out before you load it up with files. I found I had to start over several times before I got what I wanted.

Also, I found that I needed striped mirrors ("RAID 10") to get the best performance out of iSCSI. I couldn't get the throughput I wanted in my configuration using any version of RAIDZ.

Thanks. Given the info from noobsauce80 and your reply, it sounds like I'm probably better off using local disk (on my MBP) for my work-in-progress editing. It gets backed up to my Time Machine router as well, so I've still got some level of backup should things fail. Given the potential configuration issues, I think I'll play it safe and use the NAS for storage. I can also use the money I save to get some kind of Thunderbolt drive, which is also on my list. :)
 

bollar

Patron
Joined
Oct 28, 2012
Messages
411
Yep, nothing wrong with keeping the data local. Just know that there is a group of people who can benefit from iSCSI, and FCP users are high on that list, so don't be afraid to give it a try at a later point. Ticket 1531 thoroughly describes the potential problems and how to configure the system to mitigate them.
 

gregh42

Dabbler
Joined
Dec 28, 2012
Messages
12
Thanks again for the guidance. I took everyone's advice here and also reviewed the guide from noobsauce80 (very helpful, btw!) and tweaked my setup a bit. I decided I wanted a little more room for future expansion, so I went with an ATX board and case:

5× WD Red 2 TB (WD20EFRX)
SanDisk Cruzer Fit SDCZ33-008G-B35 8 GB USB 2.0 flash drive
Corsair 750 W modular PSU (probably a bit of overkill)
MSI LGA1155/Intel Z77 "Military Spec" board (I've always had good luck with MSI; it has 4 SATA II ports and 2 SATA III)
Intel Core i3-2105 CPU
16 GB Corsair DDR3-1600
Fractal Design Define R4 (love the case, hard to find in stock!)

I also picked up UPSes (an APC 750 VA for the NAS and an APC 350 for the switch/network gear).

Total cost of everything above is around $1300 so far, including shipping. I scored the UPSes on clearance at OfficeMax.

I am planning to run ZFS RAIDZ1, which should net me about 8 TB of usable space if I'm doing the math right (5 drives minus 1 for parity, times 2 TB each). Once I have a little more cash, I may purchase 3 more drives and switch to RAIDZ2 for the extra redundancy (realizing that I will have to rebuild the pool to do it).
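
For reference, the 5-disk layout from the shell with placeholder device names (FreeNAS builds it through the GUI, but the math is the same):

Code:
# 5-disk RAIDZ1: one disk's worth of parity, so usable space is
# roughly (5 - 1) x 2 TB = 8 TB before ZFS overhead.
zpool create tank raidz1 ada0 ada1 ada2 ada3 ada4
zpool list tank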

Something that really helped me was sitting down and writing out what the most critical stuff to keep is. For example, photos and final videos are going to be backed up "in 3's": my poor man's mirror of WD 1 TB drives attached to my current PC, Blu-ray discs, and CrashPlan for cloud backups (on a related note, it looks like CrashPlan doesn't support FreeNAS or FreeBSD out of the box, but there are some workarounds to get it backing up from the network). The rest of it (WIP video footage, MP3 collection, etc.) may only get 2 backups because it is less important.
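
One simple workaround, and this is just a sketch of one possible route rather than the only option: run the CrashPlan client on a machine it does support and point it at the mounted share. On the Mac that could look like the below (server, share, and user names are placeholders):

Code:
# Mount the AFP share, then add /Volumes/tank to the local
# CrashPlan client's backup selection.
mkdir -p /Volumes/tank
mount_afp "afp://user@freenas.local/tank" /Volumes/tank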

Looking forward to getting my rig; it should arrive by Friday. If you guys have any further advice, I'd love to hear it. I'm hoping to get 100 MB/s performance (read/write) over the network; if not, I'll be looking at ways to improve it. Hopefully this will get me what I need.

Thanks,
Greg:cool:
 