ZFS Memory Requirements for large storage


neils

Dabbler
Joined
Oct 29, 2012
Messages
46
I've put together one of those Backblaze Storage Pods, populated with 45 3-TB drives.
I've installed 64-bit FreeNAS 8.3.0-RELEASE-p1 and 8 GB of ECC UDIMMs.

Then I heard that ZFS needs about 1 GB of RAM per TB of disk in the ZFS pool(s), followed by the ZFS Wikipedia write-up saying that's old news and it's not really true in more recent versions.
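For scale, here's the arithmetic that rule implies for this build (a rough sketch only, using raw rather than usable capacity):

```python
# Quick arithmetic on the "1 GB RAM per 1 TB of pool" guideline,
# applied to the build described above. Raw capacity, not usable.
drives = 45
drive_tb = 3
pool_tb = drives * drive_tb        # 135 TB raw
suggested_ram_gb = pool_tb * 1     # 1 GB per TB -> ~135 GB
print(f"{pool_tb} TB raw -> ~{suggested_ram_gb} GB RAM by the rule of thumb")
```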

The Supermicro X8SIL motherboard has an Intel Core i3-540 CPU, which limits the UDIMM maximum to 16 GB across its four slots.

Is the 8 GB I've already got sufficient for this amount of ZFS storage? Will 16 GB make that much of a difference? Should I swap the CPU for a Xeon to get the larger 32 GB maximum?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll put it to you like this: my 30TB zpool needed 20GB of RAM to provide decent performance for one machine and one user at any given time. It's something you have to gauge for yourself. I will tell you that I wouldn't even bother working on that machine without at least 24GB of RAM, and you will likely need quite a bit more.
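Rather than guessing blind, you can also watch what the ARC is actually doing on your own box. A minimal sketch, assuming the standard FreeBSD sysctl counter names:

```python
# Read the ZFS ARC counters FreeBSD exposes via sysctl to see how
# much RAM the ARC is using versus its configured ceiling.
import subprocess

def sysctl_bytes(name: str) -> int:
    # 'sysctl -n' prints only the value
    out = subprocess.check_output(["sysctl", "-n", name], text=True)
    return int(out.strip())

arc_used = sysctl_bytes("kstat.zfs.misc.arcstats.size")
arc_max = sysctl_bytes("kstat.zfs.misc.arcstats.c_max")
print(f"ARC: {arc_used / 2**30:.1f} GiB used of {arc_max / 2**30:.1f} GiB max")
```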

I'd trust the FreeNAS manual long before some generic Wikipedia page...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So... you want to handle ~135TB of space on a machine with 8GB of RAM, less than 1/16th of the recommended amount.

Here's the scoop. Expect problems. Possibly even spectacular failure. Definitely poor performance.

The hardware recommendations for FreeNAS are geared toward a busy fileserver. At the size we're discussing, 96GB, 128GB, and 192GB ought to perform similarly for a workload with a small working set; it wouldn't shock me if that extended down to 64GB. The 1GB-per-1TB rule of thumb is just that, a rule of thumb, and isn't going to be the correct fit in every case. Once you get into sufficiently large systems, the extra space the rule suggests is less about making ZFS actually work and more about handling the workload implied by massive storage. So if you had massive storage that was infrequently accessed, 64GB might be fine.
32GB? That I'd be worried about. Might work. Might not.
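To put numbers on that distinction, here's a rough sketch contrasting the two estimates; the working-set figure is a made-up assumption for illustration, not a measurement:

```python
# Contrast the pool-size rule of thumb with a working-set-based
# estimate. All figures illustrative, not FreeNAS guidance.
POOL_TB = 135          # ~45 x 3 TB raw, as discussed in this thread
WORKING_SET_GB = 200   # hypothetical amount of "hot", frequently re-read data

rot_estimate_gb = POOL_TB * 1        # 1 GB of RAM per TB of pool
ws_estimate_gb = 8 + WORKING_SET_GB  # base system plus room to cache the hot data

print(f"rule of thumb: {rot_estimate_gb} GB, working-set based: {ws_estimate_gb} GB")
```

If the working set really is small, the second number is what matters, which is why the big-memory configurations converge in performance at this scale.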

See, I've got a 48TB pool here under ESXi, and I can vary the memory size of the VM. At 8GB I notice strange performance issues that I haven't localized (I'd like to, though). If I run into problems, like not being able to import the pool, I'm safe, because I can tell ESXi to assign 64GB to the machine and then it'll work. But if you purposely build a machine that is too small and cannot be suitably expanded, then: pain.

Suggestion: get yourself something like an X9SRi-F, a low-end E5, and 32GB of high-density RAM, so that you have a path forward if 32GB isn't sufficient for stable operation.
 

neils

Dabbler
Joined
Oct 29, 2012
Messages
46
Thanks, folks.
Now I'm curious about the mechanics of it all:
if the recommendation is 1 GB of RAM per 1 TB of storage in a zpool, surely this doesn't mean I could just split my 45 x 3-TB drive JBOD into multiple 16 TB zpools, which could then be served by a single FreeBSD install with 16 GB of RAM?

And on a related note:
does implementing SSDs for the log (SLOG) and cache (L2ARC) make up for constrained system memory?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If the recommendation is 1 GB of RAM per 1 TB of storage in a zpool, surely this doesn't mean I could just split my 45 x 3-TB drive JBOD into multiple 16 TB zpools, which could then be served by a single FreeBSD install with 16 GB of RAM?

It does mean that would be possible, but then you get to deal with the inconvenience of having to import a pool, access its data, and export it before you can access a different pool.
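In practice that juggling looks something like the following (a minimal sketch; the pool names are hypothetical, and every switch interrupts access to the other pool):

```python
# Switching between pools so only one is imported (and consuming
# ARC) at a time, as described above. Pool names are made up.
import subprocess

def switch_pool(active: str, wanted: str) -> None:
    # Detach the pool you're done with...
    subprocess.check_call(["zpool", "export", active])
    # ...then attach the one you need next.
    subprocess.check_call(["zpool", "import", wanted])

switch_pool("tank1", "tank2")
```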

And on a related note:
does implementing SSDs for the log (SLOG) and cache (L2ARC) make up for constrained system memory?

No. Adding L2ARC puts further stress on the ARC, which has to maintain pointers into the L2ARC. One typically adds an L2ARC device to augment the IOPS capacity of a pool whose working set is too large to hold in ARC. For example, if you had 1-2TB of working-set data and the pool was running at capacity, it would be extremely difficult to find a system with enough RAM to hold the working set; but a 256GB server is not hard to come by, and four 512GB SSDs as L2ARC would hold the working set and substantially increase performance. You cannot, however, add four 512GB SSD L2ARC devices to a system with 8GB of RAM: you simply won't have the ARC size to support the necessary pointers into the L2ARC, and the system won't be able to make good use of them.
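To see the scale of that pointer overhead, here's a back-of-the-envelope sketch; the ~180 bytes of ARC header per L2ARC record is the figure commonly cited for ZFS of this era, and the 8 KiB average record size is an assumption:

```python
# Estimate how much RAM-resident ARC is consumed just by headers
# pointing into a large L2ARC. Header size and record size are
# assumptions for illustration.
l2arc_bytes = 4 * 512 * 2**30      # four 512 GB SSDs as L2ARC
avg_record = 8 * 2**10             # assume 8 KiB average cached record
header_bytes = 180                 # assumed ARC header per L2ARC record

records = l2arc_bytes // avg_record
overhead_gib = records * header_bytes / 2**30
print(f"~{overhead_gib:.0f} GiB of ARC consumed by L2ARC headers alone")
```

That works out to roughly 45 GiB of ARC for headers alone, which is why a 256GB server can drive that much L2ARC and an 8GB machine cannot.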
 