Is 32GB of ECC RAM sufficient for 32TB of storage using ZFS?

Status
Not open for further replies.

billhickok

Dabbler
Joined
Oct 8, 2014
Messages
36
I want to build a RAIDZ2 array using ZFS and ECC RAM. I will be using 8 x 4TB hard drives and an Intel E3 quad-core processor, and I plan on running FreeNAS. My only concern is that these processors and their supported motherboards only support up to 32GB of RAM. I've read that for such a setup running FreeNAS, it is highly recommended to allocate 4GB of RAM plus an additional 1GB for each 1TB of storage. That means a minimum of 36GB would be needed in my case.
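(As a quick sanity check on that rule of thumb, here is the arithmetic from the post written out as a tiny sketch; the 4GB base and 1GB-per-TB figures are simply the ones quoted above, not an official minimum.)

```python
# Rule-of-thumb RAM estimate quoted in the post: a base amount plus 1GB per TB of
# raw storage. The 4GB base and 1GB/TB figures are taken from the question above,
# not from an official FreeNAS requirement.
def recommended_ram_gb(raw_storage_tb, base_gb=4, gb_per_tb=1):
    return base_gb + gb_per_tb * raw_storage_tb

print(recommended_ram_gb(8 * 4))   # 8 x 4TB drives -> 36 (GB)
```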

My question is,

Can I get by with only 32GB of RAM for my 32TB of space? Besides storage, I only plan to use this NAS as a media server, using Plex to transcode/stream one or two 1080p rips at a time.

I've read of a case where someone used 6 x 4TB HDDs (24TB total) and nearly maxed out all 32GB of RAM in his setup, also running FreeNAS with ZFS RAIDZ2.

Any input is appreciated.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Sir,

32GB for a 32TB pool is ***PLENTY***. It's not even close. You're 100% fine for 32TB **AND THEN SOME MORE** with 32GB, in most cases.

Proceed.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There is the possibility that 32GB of RAM will be sufficient. There are lots of forces at work that may or may not help you.

If it's any consolation, I have 10x6TB drives in a RAIDZ2 with 32GB of RAM and I'm just fine in my house with it. ;)
 

billhickok

Dabbler
Joined
Oct 8, 2014
Messages
36
Thanks, your responses are reassuring.

If it's any consolation, I have 10x6TB drives in a RAIDZ2 with 32GB of RAM and I'm just fine in my house with it. ;)

Interesting! I was actually considering adding 4 more 4TB drives to the array for a total of 12 drives and 48TB, but I completely dismissed the idea because I figured I would need at least 64GB of RAM. Any recommendations against doing this with 32GB of RAM, or will I be fine?

For reference, I want to use an ASRock E3C224D4I-14S extended mini-ITX motherboard as I believe it supports 12 drives via SAS.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Ask cyber if he'd be uncomfortable adding another 10x6TB to his rig. ;) My money says he'd smile, take the drives, and continue to be happy. I'll test them for you as well.

The reason for the flexibility is how a typical media user accesses their data. ZFS uses almost all of your RAM for the ARC, so on a write-once, read-infrequently-and-randomly workload, how often are you actually going to hit the cache? In addition, the pool is much, much faster than your network, so it makes little difference whether you hit the cache; the pool can keep up easily.

Where it changes is if you add a bunch of users and shift the workload to include a bunch of VMs or file accesses whose usage patterns actually benefit from the ARC.

Point blank: I'd fill my 24 bays with 6TB drives all day long on only 32GB for media and home use. But even just 12TB feeding a bunch of users and VMs might call for 64GB or much more.
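(If you want to see how much the ARC is actually doing for a workload like that, here is a rough sketch that reads the FreeBSD ARC counters and prints a hit ratio. The kstat.zfs.misc.arcstats sysctl names are what FreeBSD normally exposes for ZFS, but verify them on your own FreeNAS version; this is an illustration, not a tuning recommendation.)

```python
# Rough sketch: read the ZFS ARC statistics FreeBSD exposes via sysctl and print
# the ARC size and hit ratio. The OIDs (kstat.zfs.misc.arcstats.*) are assumptions
# to verify on your own FreeNAS/FreeBSD version.
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

hits = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
size = sysctl("kstat.zfs.misc.arcstats.size")
total = hits + misses

print("ARC size:      %.1f GiB" % (size / 2 ** 30))
print("ARC hit ratio: %.1f%%" % (100.0 * hits / total if total else 0.0))
```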
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you are going to go with 12 drives you should do 2 vdevs instead of one.

Your RAM needs are going to be a function of disk usage and pool size. In my house I'm the only user, so I can definitely bend the 1GB-of-RAM-per-TB-of-storage rule. If I had a dozen users in the house, I would probably have problems.

Be careful about getting overzealous. I don't know what "overzealous" means for your situation, but I can tell you from experience that if you don't have enough RAM, your pool will work great until, one day, you find that pool performance is terrible (I went from over 100MB/sec to less than 5MB/sec overnight). Your *only* options at that point will be to do ZFS tweaking (good luck with that) or to buy a system that can take more than 32GB of RAM.

I would not, under any circumstances, add a second vdev of 10x6TB drives and expect 32GB of RAM to stay sufficient. I might do it in the short term, and I might do it "until I have problems". But the reality is that I'd expect it to catch up with me at some point. When it does, your only option is to spend more money, a lot more, and get yourself enough RAM to make your pool perform. For me, there weren't blatant signs that I needed more RAM. Only after my server had been useless for two weeks and I couldn't narrow down the problem did I buy more RAM (at the time, I upgraded from 12GB to 20GB with a 24TB pool).
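(One way to watch for the kind of warning signs cyberjock describes is to track pool capacity and fragmentation over time. A minimal sketch follows, assuming a box where the zpool command is on the PATH and the pool is new enough to report the fragmentation property.)

```python
# Minimal sketch: print pool capacity and fragmentation so a slowdown like the one
# described above is less likely to arrive unannounced. Assumes the zpool command
# is on the PATH and that the pool reports the "fragmentation" property.
import subprocess

out = subprocess.check_output(
    ["zpool", "list", "-H", "-o", "name,size,allocated,capacity,fragmentation"],
    text=True,
)
for line in out.strip().splitlines():
    name, size, alloc, cap, frag = line.split("\t")
    print("%s: %s of %s allocated (%s full, %s fragmented)" % (name, alloc, size, cap, frag))
```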

As long as you accept the potential risk that your FreeNAS server could become utterly useless because of how slow it is, and won't improve until you spend money on a new CPU, a new motherboard, and lots of RAM, just be careful how far you stretch your luck. ;)
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Cyber, did you chalk your performance hit up to the super-wide Z3, RAM, or a combination?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Cyber, did you chalk your performance hit up to the super-wide Z3, RAM, or a combination?

Different time frames. They weren't all at the same time.

First was the RAM issue, with the RAIDZ2 pool. Almost a year later came the super-wide Z3 (which, thank god, is gone).

Not the same problem, not at the same time, and I can't comment on how ugly it would have been if both had hit at once. But since the RAM shortage made even streaming SD video unusable on the properly laid-out RAIDZ2, I can't imagine things would have been any better with the Z3.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Interesting, hadn't seen details on the full progression. Just knew you'd "been there." I'd love to test that problem.... just need someone to spring for a couple dozen 6TB drives ;)
 

billhickok

Dabbler
Joined
Oct 8, 2014
Messages
36
Some great information in here. I'll probably go for 10 x 4TB drives instead of 12 and shoot for one vdev. I don't believe the computer case I want to use can fit 12 anyway, but I should be able to do 10. I'll primarily be the only user, so based on the responses here I should be fine; I don't want to cut it close.

Also, I just delved into your guide, cyberjock, and it's awesome; it looks like it should be really helpful. This is my first foray into FreeNAS and building my own server, so I have lots to learn...
 

esamett

Patron
Joined
May 28, 2011
Messages
345
cyberjock: I am maxed out with 32GB of ECC RAM and approximately 42TB in two Z3 pools. Is there a workaround (short of a new motherboard) for the eventual slowdown due to maxed-out RAM that you described? Moving files from pool to pool to reduce fragmentation, or something like that? What is a practical storage limit for home use? I am OK at the moment but read your post with interest. I intend to eventually expand my pool by replacing my 2TB drives with larger ones when the time comes.

Thanks,
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Speaking as the guy who bumped the hardware requirements from 6GB to 8GB ... there was a time when FreeNAS would actually panic if it lacked sufficient RAM, but lately it seems that cutting it close merely causes it to suck real bad. I don't recall having heard about any pool loss events on anything more than 4GB lately. (I feel evil canine eyeglare trying to melt my head... hey there cyberjock, how's it goin!)

But there's an important bit there. Too little RAM will usually cause ZFS to suck. In the cases I've looked at, it seems to struggle with metadata vs. real data and does a poor job at both. Increasing the amount of data in your pool increases the amount of stuff ZFS needs to keep track of. The search for free blocks is not free, and you can cause ZFS to store things suboptimally, which will impact read speeds as well. I haven't looked too carefully at the specifics. But what I HAVE noticed is that it gets better rapidly as you bump RAM. It isn't linear; I suspect that a system with 32GB of RAM is likely to be fine with 64TB of disk, maybe even a little more! The old 1GB:1TB rule of thumb works fine for smallish 2011-era systems where 8GB was big and 16GB was as-big-as-most-people-could-afford.
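(For anyone curious how much of their ARC is being spent on metadata versus data, here is a rough sketch using the counters FreeBSD has historically exposed; treat the exact sysctl names as assumptions to verify on your own system.)

```python
# Rough sketch: compare how much of the ARC is currently holding metadata against
# its metadata limit. The sysctl names (kstat.zfs.misc.arcstats.arc_meta_used and
# vfs.zfs.arc_meta_limit) are the ones FreeBSD has historically exposed; verify
# them on your own system before relying on this.
import subprocess

def sysctl(name):
    return int(subprocess.check_output(["sysctl", "-n", name]).strip())

arc_size = sysctl("kstat.zfs.misc.arcstats.size")
meta_used = sysctl("kstat.zfs.misc.arcstats.arc_meta_used")
meta_limit = sysctl("vfs.zfs.arc_meta_limit")

print("ARC size:          %.1f GiB" % (arc_size / 2 ** 30))
print("ARC metadata used: %.1f GiB (limit %.1f GiB)" % (meta_used / 2 ** 30, meta_limit / 2 ** 30))
```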

The E3 32GB cap is going to become a bit of a bottleneck for people looking to build storage platforms out of a dozen 10TB drives, though, and it isn't clear what the best fix is. My observation has been that the overall size of the pool (total disk space) is less relevant than the amount of space already consumed by the pool, but both seem to contribute. That means you probably can't just cram a 120TB array that's only got 8TB in use onto an 8GB or 16GB system and expect it to work well, but I *suspect* it would be fine on a 32GB system ... right up to a certain point, maybe somewhere around 64-80TB, where gradually (or perhaps suddenly!) things would take a turn for the worse.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Nope, no real good fix. The fix is to get more hardware. I'm in the same boat. Enjoying my X9SCM-F with 32GB (the max) and the next "step" is to go socket 2011. Not my idea of a great situation, but it is what it is. :(

The practical limit depends on too many factors to really throw out numbers. The rule of thumb of 1GB of RAM per TB of storage seems to be a pretty good bet, even for home users. The real test is whether the server is able to satisfy your performance needs. If it does, then that's all that matters. If it doesn't, then you clearly need more hardware. And if you read any of the many documents around on ZFS, they always say the same thing: the best way to make a pool faster is to add more RAM.

Me personally, I'm trying to hold off on upgrading until DDR4 has been out a little longer and socket 2011-3 hardware is more abundant (with maybe a better selection of boards), and then I'll do some upgrades.

I will say that Plex in a jail seems to use about 1GB of RAM even when no Plex clients are connected. That's roughly 3% of a 32GB system's memory going to Plex. So if you are looking for ways to stretch your system, cutting out jails is one place to consider.
 

esamett

Patron
Joined
May 28, 2011
Messages
345
Thank you both. I did some reading (Google) and "fragmentation" is a known issue with ZFS. A defragmentation routine is not available. The more free space in your pool, the better; there are certain circumstances where performance lags even with less than 80% of capacity used. Enlarging your pool with bigger drives is helpful. The definitive "treatment" is to copy the data to a new pool and destroy/recreate the original. More RAM is better, but there is a potential issue going above 128GB. As I stated above, my pools are new and I am not having problems. A couple of forward-looking questions come to mind:
  1. Is moving suspected fragmented data back and forth between pools helpful in reducing fragmentation, e.g. with the cp command? This would be less disruptive than moving entire datasets and recreating pools.
  2. What is the impact of the type of files and usage on fragmentation?
    1. One active user, vs. few or many?
    2. Large vs. small files?
    3. Usage:
      1. long-term file storage, e.g. a media archive?
      2. more active PC/disk interaction, e.g. word processing, torrents, databases, or other "scratch" drive uses. I have previously used my "ancient" Promise NS4300 4-bay NAS for this purpose, but it is showing signs of senility and probably needs to be retired. (I recently replaced the power supply and retired one of the drives. Now it is showing quirky behavior requiring warm/cold restarts.) Will I rapidly fragment my ZFS pool with this type of activity? (There is a medical phrase: "We may not be able to make things better, but we can certainly make things worse." :()
Regards,
evan
 

SirMaster

Patron
Joined
Mar 19, 2014
Messages
241
FWIW, I have a friend with a 71TB usable array that's about half full with 16GB of RAM, though this is on ZFS on Linux. He gets about 2.5GB/s reads and 1.9GB/s writes, so performance is great under his usage. Considering he is just using four bonded 1Gbit NICs, which results in about 450MB/s transfers, his pool is still far faster than his LAN.

Also of note: his usage is media storage, so his average file size is several MB.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Mr. Greco could pretty much be quoting straight out of my head. I find it really kinda cool that we can bump into these things. It will also be interesting to see how the trends change. I've been hoarding data since forever, but there is now almost nothing I can't stream instantly if it's for entertainment. So why store 200 years of 4K video? ;) It just becomes unnecessary cost (even though I love it). My kids will probably store almost nothing locally, not to mention we're at 200GB on a phone and rising.

I'm pretty geeky, but I have no idea how to fill 24 x 10 TB and that is just one little server. Guess I better get on that. :)
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
There is some work going on in FreeBSD to improve ZFS+system behavior in tight RAM. Some might hit 10.1, the rest probably in 10.2.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
There is some work going on in FreeBSD to improve ZFS+system behavior in tight RAM. Some might hit 10.1, the rest probably in 10.2.

That's good news. Hopefully there'll be no need to grow the minimum requirement in the mid-term future.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thank you both. I did some reading (Google) and "fragmentation" is a known issue with ZFS. A defragmentation routine is not available. The more free space in your pool, the better; there are certain circumstances where performance lags even with less than 80% of capacity used. Enlarging your pool with bigger drives is helpful. The definitive "treatment" is to copy the data to a new pool and destroy/recreate the original. More RAM is better, but there is a potential issue going above 128GB. As I stated above, my pools are new and I am not having problems. A couple of forward-looking questions come to mind:
  1. Is moving suspected fragmented data back and forth between pools helpful in reducing fragmentation, e.g. with the cp command? This would be less disruptive than moving entire datasets and recreating pools.
  2. What is the impact of the type of files and usage on fragmentation?
    1. One active user, vs. few or many?
    2. Large vs. small files?
    3. Usage:
      1. long-term file storage, e.g. a media archive?
      2. more active PC/disk interaction, e.g. word processing, torrents, databases, or other "scratch" drive uses. I have previously used my "ancient" Promise NS4300 4-bay NAS for this purpose, but it is showing signs of senility and probably needs to be retired. (I recently replaced the power supply and retired one of the drives. Now it is showing quirky behavior requiring warm/cold restarts.) Will I rapidly fragment my ZFS pool with this type of activity? (There is a medical phrase: "We may not be able to make things better, but we can certainly make things worse." :()
Regards,
evan

Moving fragmented data back and forth between pools is helpful in specific instances. One of the prerequisites would seem to be that you'd need a pool that had lots of free space, so that you weren't actually taking contiguous data out of one pool and shotgun spamming it around the other pool because of insufficient space. One of the cases where it'll be particularly helpful is where you have data such as VM virtual disk storage or database files, where constant updates to random blocks within the data file are fighting the system; ZFS will make a good effort to lay down contiguous blocks for sequentially written data, and copying the data forces it into that model. Note that this may not even require moving it between pools.

Any spindle-based storage system is oriented towards large-file storage, especially as the size of the disk increases. The best use case for ZFS with 4TB+ HDDs is archival large-file storage, where you are mostly laying down large items and letting them sit there for years. Small-file storage is more difficult for spinning media because of the seek overhead. At some point you lose the ability to store and retrieve the files within the drive's expected service life... if you do the math: storing 4KB files on an 8TB HDD, assuming you can write 100 of them per second (limited by seek speed) and doing nothing else, it'd take about 250 days to fill the disk. I don't care to discuss the fact that it is a contrived example and not realistic from some points of view - it demonstrates the problem spinny rust faces when storing small files over the long term. Also, the ZFS scrub mechanism is a metadata traversal, so it tends to suffer on pools storing lots of smaller files.
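(Making that back-of-the-envelope math explicit, using 8 TiB and the 100-writes-per-second figure from the post as the assumptions:)

```python
# Back-of-the-envelope version of the small-file example above. Assumptions taken
# from the post: an 8 TiB drive, 4 KiB files, and roughly 100 seek-limited small
# writes per second, doing nothing else.
drive_bytes = 8 * 2 ** 40        # 8 TiB
file_bytes = 4 * 2 ** 10         # 4 KiB per file
writes_per_second = 100          # seek-limited small-file write rate

files = drive_bytes // file_bytes
seconds = files / writes_per_second
print("files needed to fill the drive: %d" % files)
print("days needed to fill the drive:  %.0f" % (seconds / 86400))  # roughly 250 days
```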

But to make a long story short, if you design a pool to have lots of free space, like maybe 50% free space on a 60TB pool, and you're not doing anything stressy like database or VM storage, and you've got 32GB of RAM, I wouldn't expect to see any significant performance degradation on ZFS due to fragmentation. At what point things would start going downhill is a good question, though.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I just ran into this myself. I had two servers running an E3-1230v2 with 32GB of ECC RAM and 12x3TB HDDs, and performance tanked. I did some testing and determined I needed more RAM but was capped by the motherboard. So I had to upgrade; I went with an E5-1620v2 with 64GB of RAM and things are faster than they have ever been. The board will support 512GB of RAM, but I doubt I'll put more than 128GB in it. It cost me quite a bit, but it was still cheaper than buying something off the shelf.

Sorry, I should point out that these servers aren't for home use; they are deployed in a business. My home server is set up with 1GB of RAM per 1TB of storage and has operated without issue for years now.
 