Will a 60GB SSD as an L2ARC be helpful in my situation?

Status
Not open for further replies.

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
We are configuring our new FreeNAS install and I've been reading the manual and forums to learn more about FreeNAS. We have a successful installation on a Rackable C1000 unit with a 16-drive SAS expander... so far, we have only 4 x 1TB drives configured in a RaidZ2 array plus one 60GB SSD installed in the internal drive slot of the C1000. We boot from a 4GB flash drive. The system specs are as follows:

2 x 2.33GHz Intel Xeon processors
16GB ECC RAM
4 x 1TB WD Re4 Enterprise HDDs
1 x 60GB SSD

We operate a printshop with 3 networked large-format printers and most of our network traffic is transferring ultra high resolution images from server to computer for Photoshop layouts, image manipulation, etc. Our image file sizes range from 10MB to 6GB (for single PSB image files). Obviously, transfer of layouts and images can be quite slow, so we are trying to find that happy medium of data redundancy and performance... Which brings me to my questions, which, as of now, consist of the following:

1. Is Raidz2 the best configuration for our workflow or should we be working off mirrored sets (i.e. 2 mirrored vdevs in one zpool)...?

2. Is there any performance advantage to adding a few additional drives? I was thinking of adding 2 more 1TB Re4 drives to create one zpool of around 4TB and expand the overall capacity in the future by adding new vdevs as needed... any thoughts/advice?

3. And to the main question from the post title: since we have that 60GB ssd just sitting there (not being used for anything right now), would setting that drive up as an L2ARC benefit our performance (on the basis of the workflow described above)? We basically only have 2 people accessing the network at a time right now and 95% of the network traffic is related to transferring these image files from computer to computer for manipulation/print preparation.

Thank you in advance for your advice and help!
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
What kind of network setup do you have? Single gigabit?

Assuming single gigabit:
1. RaidZ2 will saturate the link using CIFS, so mirrored drives won't improve anything, as the disks aren't your bottleneck.
2. You should already be able to saturate the gigabit link using CIFS and your 4-drive RaidZ2. More drives would give you more storage, though. If you went to 10Gb networking, more drives would improve speed (rough numbers below).
3. You would be better off using the SSD in another system. You will want more RAM for your ARC before adding a L2ARC.
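
Rough numbers behind points 1 and 2 (ballpark only; the per-disk figure is an assumption):

Code:
# Ballpark ceiling of a single gigabit link vs. a 4-disk RAIDZ2.
link_gbps = 1.0
wire_mb_s = link_gbps * 1000 / 8    # ~125 MB/s on the wire
practical_mb_s = wire_mb_s * 0.9    # ~112 MB/s after TCP/CIFS overhead (rough)

disk_mb_s = 100                     # assumed conservative sequential rate for a 7200rpm disk
data_disks = 4 - 2                  # RAIDZ2 on 4 disks: 2 disks' worth of parity
pool_mb_s = disk_mb_s * data_disks  # ~200 MB/s sequential from the pool

print(round(practical_mb_s), pool_mb_s)  # the network (~112 MB/s), not the pool, is the limit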

Also depending on the CPU model, you might want to pull the second CPU to save power as it is unlikely to be gaining you anything over 1 CPU. Unless you need the extra RAM slots that belong to CPU #2.

EDIT: What transfer speeds are you seeing over your network with the large files? Also what protocol are you using?
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
Thanks for the reply ser_rhaegar.

As I'm a bit new to networking generally, it's hard for me to say if it's single gigabit or not. I know that it's a gigabit connection, but there are actually a total of 4 ethernet ports on the unit, 2 of which are side by side. Would my bandwidth be increased by utilizing both of the primary ethernet ports? My exact setup is shown here. I doubt it's even possible for me to upgrade to 10Gb networking since I'm working with an older server and the existing networking is all onboard. Additionally, the PCI-e slots are all utilized, so I'm pretty sure I'm stuck there.

How would I know re: the CPU? I'm running 2 x 2.33GHz Intel Xeons, and all the RAM slots are currently utilized to reach 16GB.

Thanks again for your assistance!
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
You won't be able to pull out a CPU as your system has 8 slots (all used) and if you pull one, you lose 4 RAM slots.

You could use LACP and bond the two ports together. This wouldn't improve single connections but would allow you to have up to 2 separate connections maxing out 1Gbps each. However last I recall, FreeNAS had some issues with LACP (would have to search the forum), and you would need a switch that supports it. Most don't recommend it here.
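
To illustrate why a lagg doesn't help a single connection: each flow gets hashed onto one physical port, so a single CIFS session never exceeds one 1Gbps member. A toy sketch (the hash here is made up just to show the idea, not what any real switch uses):

Code:
# Toy model of LACP link selection: one flow always lands on the same member port.
def pick_port(src_mac: str, dst_mac: str, n_ports: int = 2) -> int:
    return hash((src_mac, dst_mac)) % n_ports  # real gear hashes L2/L3/L4 headers

# One workstation talking to the NAS: every packet of that flow uses the same port,
# so the transfer is capped at 1Gbps. A second client may land on the other port.
print(pick_port("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:99"))
print(pick_port("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:99"))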

What speeds are you seeing when transferring large files and what protocol are you using? (CIFS, AFP, NFS)
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
I will need to do some testing to determine file transfer speeds, but I have seen around 40 MB/s on AFP, which is the only wired connection I have tried. Wirelessly, I'm seeing around 11 MB/s on AFP and 4 MB/s on CIFS (to a Windows 7 box). We haven't set up NFS yet. I realize that wired connections are the only ones that count, though... so I'll collect the data and report back.
 

ser_rhaegar

Patron
Joined
Feb 2, 2014
Messages
358
I recommend using CIFS on the Apple machines as well, not just the Windows ones. I'd only use AFP for Time Machine backups.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you had to utilize all of your RAM slots to get to 16GB, and that's the max the board can do, you'd be better off getting a whole new system. L2ARC will probably make the system slower. :(
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
cyberjock: thanks for the tip... in looking up the details of the board, I do believe that 16GB is all it can do (unless it's one of those situations where it can do more, but they didn't advertise the fact in the manual) :( I had a different response typed out expressing hope for more RAM, but a simple search for Rackable (SGI) c1000 revealed a link to the Motherboard Manual, which rather closes that point I'm afraid. In any case, I'm still hoping to get the most I can out of this machine since I'm already so far down the road to customizing and learning about it.

In light of my setup, what specific recommendations would you have to optimize the setup, understanding it does have the RAM constraint mentioned? Perhaps Link Aggregation to utilize both gigabit channels plus mirrored drives? Any drive configs worth considering? I've read the recommendation to *not* go over 1TB storage per 1GB of RAM, so I suppose I should avoid topping 16TB of total storage. I've also read that I should only utilize around 70% of the usable drive capacity, so at max capacity, I would have a server with only around 10TB of usable storage in a Raidz2 configuration... is that right?
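
To check my own arithmetic (ballpark only, and I'm assuming one wide RaidZ2 vdev across all 16 bays just for the math):

Code:
# Rough usable-capacity math under the 1GB-RAM-per-1TB and 70%-full guidelines.
ram_gb = 16                   # current RAM
max_raw_tb = ram_gb * 1       # rule of thumb: ~1TB of pool per 1GB of RAM -> 16TB
drives, drive_tb = 16, 1      # 16-bay chassis, 1TB drives

raw_tb = min(drives * drive_tb, max_raw_tb)   # 16TB raw
usable_tb = (drives - 2) * drive_tb           # RaidZ2 parity: 14TB usable
recommended_tb = usable_tb * 0.7              # ~9.8TB at the 70%-full guideline
print(raw_tb, usable_tb, round(recommended_tb, 1))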

I might also mention, just briefly, that our previous "server" was a 3TB USB Hard Drive connected to an Airport Extreme that was (and is) both slow and unreliable. Our current implementation is already lightyears ahead of that piece of junk :)

ser_rhaegar: I'm really surprised to hear that your recommendation is CIFS for Apple too, since I recall reading somewhere that CIFS shares are slower than the other sharing options due to some CPU consideration... Is performance on CIFS equivalent to AFP across the board, or is that your recommendation just in light of my system setup?

Feel like I'm going back to school with all this reading, but it's great... I really appreciate your help (ser_rhaegar and cyberjock)... you guys really know your stuff! Can't wait to learn more to get the most out of FreeNAS.
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
Now I saw in another place that the max RAM is 32GB... don't know who to believe... if 32GB is true, I could max out the RAM from ebay for $120 or so...
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
Here are some of my findings:

1. I'm *fairly* confident the C1000 can handle 32GB of RAM... Here is the same machine as mine for sale on eBay with 32GB installed. Looks like I'll be spending the $120 to max out the RAM on the board.

--in which case, would I then be in a situation where the 60GB SSD would be useful as an L2ARC?

2. I'm seeing the following transfer speeds over a wired connection:
AFP: 58.7 MB/s (3 replicates with file sizes ranging from 700MB to 6GB... very consistent)
CIFS: 82.5 MB/s (2 replicates with the following file sizes: 1.14GB, 2.48GB... 86.8 MB/s for the former, 78.3 MB/s for the latter)

It does look like CIFS on Windows is producing VERY good transfer rates... perhaps this is what ser_rhaegar was referring to. The next step will be to disable AFP and attempt CIFS transfers on OS X.

One weird finding... download speeds were significantly slower than upload speeds... is that normal? Any reason why that might be?

Also, my first attempt at transferring that 6GB file on AFP simply timed out at some point... the network seemed to have gone down for a second and finally came back. Any thoughts on what that could have been? Does FreeNAS have a hard time with very large file sizes? Are there any known issues with files over a certain size?

So, in light of the above findings: any recommendations?

Thanks again!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
From my experience in the forum (as well as personal experimenting), you aren't going to see much benefit from L2ARCs and ZILs until you have 64GB of RAM. The reason is that a small ARC is already stressed, and adding either of those stresses it more. Performance drops despite you adding more hardware. That's why I said you'd be better off with a new system. I knew you wouldn't be able to get to 64GB (although some hardware claims a 16GB limit but can actually use 32GB).
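
To put a rough number on part of that cost: every block cached on the L2ARC device needs an index entry held in RAM (in the ARC). The per-record overhead varies by ZFS version, so treat the figure below as an assumption, not a spec:

Code:
# Rough ARC overhead for indexing a 60GB L2ARC, at different block sizes.
l2arc_bytes = 60 * 1024**3    # 60GB SSD
header_bytes = 200            # assumed per-record overhead; varies by ZFS version

for block_kib in (128, 16, 8):
    records = l2arc_bytes // (block_kib * 1024)
    overhead_gib = records * header_bytes / 1024**3
    print(f"{block_kib:>3} KiB blocks -> ~{overhead_gib:.2f} GiB of ARC used for L2ARC headers")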

When asking why one is faster or slower, it's very much case-by-case. You pretty much get what you get unless you want to start diagnosing the limiting factor on your own. :(
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
Just for the record... I was able to successfully upgrade the SGI (Rackable) C1000 to 32GB of RAM. I went with Samsung RAM from an eBay supplier... Total cost to upgrade: $125. Everything went super smoothly with the upgrade. Hope this is helpful in case there are other SGI C1000 users who are wondering whether their board can handle 32GB or not.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Got a link to your supplier? ;)
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
Cyberjock: Absolutely. The link to the listing is here:
http://www.ebay.com/itm/251549828039?_trksid=p2059210.m2749.l2649&ssPageName=STRK:MEBIDX:IT

I offered $120 and it was accepted... total came to $125.80 with shipping.

One note: the auction listing shows Hynix RAM and I actually received Samsung RAM, so I can't say for sure if Hynix will work. My guess, though, is that the board is pretty versatile... every change I've made so far has been without incident (I'm more accustomed to finicky Macs). Of course, the C1000 and its 667MHz RAM are very slow compared to what a lot of units are using nowadays, but at least the machine is maxed out and we've got almost 7TB of usable storage in a RaidZ2 configuration... it's totally changing our workflow already!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Oh, they're FBDIMMs. Enjoy that 10W-per-stick power draw. ;)

Just the power draw and heat generated makes that generation of hardware a poor choice for servers. :(
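
Rough math on that draw, assuming all 8 slots stay populated and an example electricity rate:

Code:
# Yearly cost of FBDIMM power alone (rate is an assumed example, not a quote).
sticks, watts_per_stick = 8, 10
kwh_per_year = sticks * watts_per_stick * 24 * 365 / 1000   # ~700 kWh/year
rate_usd_per_kwh = 0.12                                     # assumed rate
print(round(kwh_per_year), round(kwh_per_year * rate_usd_per_kwh))  # ~701 kWh, ~$84/year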
 

Fr Jonah

Dabbler
Joined
May 3, 2014
Messages
24
cyberjock: There's no question that this unit isn't the most economical from the power consumption (i.e. ongoing) cost POV... it was, however, ridiculously economical from the upfront costs POV... I was able to get the Dual Xeon 1U C1000 plus 2 SAS expanders, 10 x 1TB hard drives, M1015 card and the 32GB upgrade for $850 or so... subtract out the things I can sell on ebay and I end up with this whole setup for around $600, which comes out to less than I would have paid for just the 10 drives alone! Add to that the *massive* expandability via the SAS expanders and I think it turned out to be a great deal, at least in terms of upfront costs vs expandability/performance... Then again, maybe it's something like getting a puppy... it's not the upfront costs that will kill you... :)

My thought is to use this setup to gain familiarity with FreeNAS and, eventually, use this machine as a backup to a future server that I'll build with modern components in a couple of years (once this server has paid for itself)... In that scenario, it wouldn't even need to be live all that often, so the power consumption would, naturally, go down.
 