Lower RAM requirement for RAID1?


Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My aim is to store the data for my new small business, which is primarily graphics and video work.
I understand the financial strain of making a go of a small business. If you need to cut some corners now to get usable storage, you can upgrade the server later. One of the great things about FreeNAS and ZFS is that they transplant easily to a new system board with better features. No need to reinstall; just pull the drives and move them into the new system. Here is a video about it:
https://www.youtube.com/watch?v=_DmMpETyBsY
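If you ever do the move from the command line rather than through the GUI, the core of it is just a pool export and import. A minimal sketch, assuming a pool named 'tank' (the name is only an example):

    # On the old system, cleanly detach the pool:
    zpool export tank
    # Move the drives, then on the new system:
    zpool import tank

On FreeNAS you would normally do the equivalent through the GUI and carry your saved config over, but the pool itself moves just like this.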
 

John Nas

Dabbler
Joined
Jul 26, 2018
Messages
35
With regard to the general topic, here is some of the math I have considered. Cost per TB generally goes down as drive size goes up; e.g., a 10TB drive is not 10x the price of a 1TB drive. A mirror layout is still more expensive per GB than a parity-based one, but not necessarily by the same factor as the difference in disk utilization (17 percentage points in the example below).

For example, using some current pricing for NAS drives:

3x4TB = (3 x $121) = $363 ---Mirror
4x2TB = (4 x $80) = $320 ---RaidZ2

So for 4TB of storage (double redundancy), the mirror has 33% usable-vs-available space and the RaidZ2 has 50%, but the Z2 solution is only 11% cheaper. You also now have a cheaper upgrade path, should you be willing to migrate between the schemes: add one more 4TB drive and move over to RaidZ2 and you double the available space, whereas you would need to buy two more 2TB drives to do the same with the Z2 pool, i.e. $121 vs. $160.
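As a quick sanity check of that arithmetic (POSIX shell, prices as above; both layouts yield 4TB usable):

    # Total cost and cost per usable TB for each layout:
    echo "Mirror, 3x4TB: $((3 * 121)) USD, $((3 * 121 / 4)) USD per usable TB"   # 363 USD, ~90 USD/TB
    echo "RaidZ2, 4x2TB: $((4 * 80)) USD, $((4 * 80 / 4)) USD per usable TB"     # 320 USD, 80 USD/TB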
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
There was a time when NAS4Free was using UFS instead of ZFS. Do you know that it was ZFS? If it was 'years ago', it was very likely an earlier version of ZFS.

John Nas

Dabbler
Joined
Jul 26, 2018
Messages
35
There was a time when NAS4Free was using UFS instead of ZFS. Do you know that it was ZFS? If it was 'years ago', it was very likely an earlier version of ZFS.

It was nas4free 9 and it used ZFS; I remember learning the vdev and pool basics during setup, but it could well have been an earlier version of the filesystem. What changed, as far as I can tell, is the FreeBSD version nas4free is based on. My 9.x system was too old to be updated because of this rebasing, and instead I would need to do a fresh install.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
It’s based on practical experience. When the minimum was 6GB, servers had a tendency to freeze up unexpectedly.

It’s probably caused by a bug ;)
 

John Nas

Dabbler
Joined
Jul 26, 2018
Messages
35
I've seen the same figure used with regard to Ubuntu Server and ZoL, so I assume it just makes for a good rule of thumb, especially since memory pressure depends on the factors Chris Moore and others have cited.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
(The 8 GB minimum is) based on practical experience. When the minimum was 6GB, servers had a tendency to freeze up unexpectedly.
Right--a FreeNAS thing. Yes, ZFS loves its RAM, but freezing, reboots, pool loss, etc., were a FreeNAS problem.
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
There was a time when ZFS did not require 8GB as a minimum, but I am not aware of when this changed.

I used to run it on much less RAM... ~1GB or so, on Solaris 10 back around 2007... But even then the recommendation was to give it as much RAM as possible.

Consider that the Solaris kernel developers almost always had the latest workstations to play with, and labs full of every possible supported system... Jeff Bonwick had an Ultra 2 on his desk when I worked in his building, but I'll hazard a guess (and only a guess... the Menlo Park campus got shut down and I moved to Texas, so I didn't get to wander the halls of the kernel gurus much after Solaris 8...) that he got upgraded to an Ultra 60 or Ultra 80 workstation for the period he was working heavily on ZFS. The Ultra 80 with the memory mezzanine board tops out at all of 4GB. That doesn't sound like much, but consider that Sun memory was 576 bits wide: 512 bits for data and 64 bits for ECC. Suffice it to say, ZFS originally ran on much less memory.

But that was a whole other universe on a different OS, with features like the ARC not yet fully developed for use, etc...
 

John Nas

Dabbler
Joined
Jul 26, 2018
Messages
35
Are there newer features of ZFS not related specifically to the data-security aspects, i.e. ones more for the sake of performance, like compression, that can be disabled to reduce memory utilization?
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
Are there newer features of ZFS not related specifically to the data-security aspects, i.e. ones more for the sake of performance, like compression, that can be disabled to reduce memory utilization?

The big tuneable is probably zfs_arc_max. I'm not sure what the defaults are on FreeNAS. On Solaris it used to be:

  • 75% of memory on systems with less than 4 GB of memory
  • physmem minus 1 GB on systems with greater than 4 GB of memory

Obviously, as previously discussed, FreeNAS has a hard requirement for 8GB. I currently have 16GB, and I've been meaning to investigate this tunable to reserve some space for VMs.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The big tuneable is probably zfs_arc_max.
I think they call it vfs.zfs.arc_max in the tunables on FreeNAS, and the default, as I understand it, is to reserve 2 or 4GB for the core OS and allocate the rest of RAM to ARC. If you set that tunable, you have to work out the value, in bytes, of the RAM to give to ARC; so on a 32GB system it would be something like 30064771072 (28GiB, leaving 4GB for the OS).
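If you want to try it, here is a rough sketch (the byte value assumes reserving 4GB of a 32GB system for the OS; whether the sysctl can be changed live depends on the FreeBSD version under your FreeNAS):

    # 28GiB in bytes: 28 * 1024 * 1024 * 1024 = 30064771072
    sysctl vfs.zfs.arc_max=30064771072
    # To make it persistent, add vfs.zfs.arc_max with that value
    # as a tunable under System -> Tunables in the GUI instead.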
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Are there newer features of ZFS not related specifically to the data-security aspects, i.e. ones more for the sake of performance, like compression, that can be disabled to reduce memory utilization?
You can set the arc_max, but you can't get away from the 8GB minimum requirement.

Hardware Requirements
http://www.freenas.org/hardware-requirements/

[screenshot of the FreeNAS hardware requirements page, listing the 8GB RAM minimum]


So important they said it twice.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I would say mostly because ZFS is free, while ECC RAM is not, especially considering that it is not usually just a case of buying the RAM, but also a motherboard and CPU that support it. I think most recognise that FreeNAS and ZFS in general are enterprise technologies, unlike solutions such as unRaid, which are built specifically to support enthusiast hardware recycling. Obviously data security/integrity is of different priority to different users, but I doubt anyone who goes to the trouble of setting up centralized storage doesn't care if they lose what they're storing, and so systems like ZFS entice.
...
Yes, ZFS is generally considered enterprise-quality technology. To make it work well, it generally requires more resources: specifically more memory, (8GB versus 1GB for simple non-ZFS RAID), and more CPU resources, (because every block, data or metadata, is checksummed). Plus, potentially even more CPU time is needed for compression or de-duplication. So, more cost than other NAS implementations.
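(For reference, compression and de-duplication are per-dataset properties you can check and, if you wish, turn off; "tank/data" below is just a hypothetical dataset name.)

    # Show the current settings:
    zfs get compression,dedup tank/data
    # Turn them off for that dataset, (affects newly written data only):
    zfs set compression=off tank/data
    zfs set dedup=off tank/data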
...
With regard to creating backup copies of a mirror, is there any data on which technique is more stable/less prone to error? Rsync vs. disk splitting vs. ZFS replication?
I don't know...

One advantage of the split mirror backup, (and I have used EMC BCV split mirror backups on the job, in a Fancy Data Center), is that if you leave the split mirrors attached until you are ready to split, you get more read performance, (reads striped across more disks), and less risk, (more redundancy). But, in my case, the disks never left the EMC disk array. They were just for getting extremely quick and clean copies that could be sent to tape by a different host, (the backup host).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Once you go to a larger storage pool with striped vdevs, you can't do that any more.
Incorrect!

You CAN split a ZFS pool made up of multiple Mirrored vDevs. As long as each Mirrored vDev has at least 2 disks per vDev.
Here is the manual page entry;

zpool split [-gLnP] [-o property=value]... [-R root] pool newpool [device ...]
Splits devices off pool creating newpool. All vdevs in pool must be
mirrors and the pool must not be in the process of resilvering. At
the time of the split, newpool will be a replica of pool. By default,
the last device in each mirror is split from pool to create newpool.
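So on a pool of mirrors, the whole operation can be as simple as this sketch, (pool and disk names are hypothetical):

    # Detach the last disk of each mirror into a new, independent pool:
    zpool split tank tankbackup
    # The new pool is left exported by default; import it to use or check it:
    zpool import tankbackup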
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Are there newer features of ZFS not related specifically to the data-security aspects, i.e. ones more for the sake of performance, like compression, that can be disabled to reduce memory utilization?
Basically, when the minimum was 6GB, there were problems with FreeNAS, so the minimum was changed to 8GB.

As for reducing memory below 8GB, don't. In essence, if you have problems, the first thing we will tell you, (after we learn you have less than 8GB of memory), is to get to 8GB of memory. And if your pool is unrecoverable, we will say sorry, restore from backups, and use 8GB of memory.

There are corner cases where a ZFS pool can't be imported with too little memory. Some of those have been fixed or worked around. But, there is no telling if, (or when), another may show up, nor whether it would destroy data.

If you use less than 8GB of memory, you are in uncharted waters... that few of us want to explore.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You CAN split a ZFS pool made up of multiple Mirrored vDevs.
That's a pretty cool capability, though I'd expect it'd be a rare case where it'd actually be useful for a home user.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
That's a pretty cool capability, though I'd expect it'd be a rare case where it'd actually be useful for a home user.
Agreed.

I can see a use, (perhaps more small business than home), where a weekly off-site backup is performed. For example, (a command sketch follows the list):
  • Friday morning, a prior backup is retrieved from off-site storage, (like the owner's home).
  • Friday afternoon, the 3rd disk in the Mirrored vDevs is split off and removed.
  • The old backup is put in place as the 3rd disk of each Mirrored vDev.
  • After work, the owner takes the new backup disks off-site and stores them.
  • By Sunday night or Monday, the re-sync is complete and they again have all 3-way Mirrored vDevs.
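In commands, the Friday steps might look like this sketch, (pool and disk names are hypothetical; "ada3" is last week's backup disk going back in):

    # Split the current 3rd disks off as a new backup pool:
    zpool split tank tankbackup
    # Re-attach last week's disk as the new 3rd way of the mirror that
    # contains ada1, (the re-silver starts automatically):
    zpool attach tank ada1 ada3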
There is a ZFS on Linux feature request in progress to allow only one disk to re-silver at a time, with any remaining re-silvers queued up awaiting a prior completion. (They are still discussing if that should be per vDev... like for RAID-Z2 or -Z3.)
 

John Nas

Dabbler
Joined
Jul 26, 2018
Messages
35
Basically, when the minimum was 6GB, there were problems with FreeNAS, so the minimum was changed to 8GB.

Any advice on how a user should think about that minimum? I understand you should use no less, but is it now regarded as perfectly sufficient for a NAS up to a certain size, or merely as the bare minimum that is acceptable? Also, any idea, as has already been discussed here a little, whether it applies to ZFS in general (ZoL etc.) or is more specific to the FreeNAS software?

There is a ZFS on Linux feature request in progress to allow only one disk to re-silver at a time, with a queue.

What is the cross-pollination like between ZoL and ZFS on FreeBSD/FreeNAS? I know there's a license issue, but does a feature like this, when added to one, tend to make its way over to the other codebase?

if you leave the split mirrors attached until you are ready to split, you get more read performance, (reads striped across more disks), and less risk, (more redundancy)

Not sure I completely follow how this provides for more redundancy, since isn't the idea that you split off the 3rd disk before you add its replacement? If I understand correctly, you have 4 disks (a, b, c, d) with c & d in periodic rotation? So it's a, b, c in the mirror and d off-site; then you split off c, insert d, c goes off-site, and d resilvers?

I can see a use, (perhaps more small business than home), where a weekly off-site backup is performed. For example:
  • Friday morning, a prior backup is retrieved from off-site storage, (like the owner's home).
  • Friday afternoon, the 3rd disk in the Mirrored vDevs is split off and removed.
  • The old backup is put in place as the 3rd disk of each Mirrored vDev.
  • After work, the owner takes the new backup disks off-site and stores them.
  • By Sunday night or Monday, the re-sync is complete and they again have all 3-way Mirrored vDevs.

Am I wrong in assuming a resilver isn't performed via a delta, but instead every sector of the disk is rewritten? If so, does that not add a lot of unnecessary wear compared to an incremental backup?

With this technique, would there be any point in balancing wear by rotating which disk is moved off-site?

Week no. : [nas] (off-site)

1 : [abc] (d)
2 : [bcd] (a)
3 : [acd] (b)
4 : [abd] (c)
5 : [abc] (d)
..
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Am I wrong in assuming a resilver isn't performed via a delta, but instead every sector of the disk is rewritten?
Only the data is copied. There is no reason to resilver unused space of the disk.
With this technique, would there be any point in balancing wear by rotating which disk is moved off-site?

Week no. : [nas] (off-site)

1 : [abc] (d)
2 : [bcd] (a)
3 : [acd] (b)
4 : [abd] (c)
5 : [abc] (d)
Sure, this would work, but I think @Arwen was talking about using 3-way mirrors, not 4-way. That said, mirrors can be n-way, and the more disks in the mirror, the more read capacity.
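Widening or narrowing a mirror is one command each way; a minimal sketch with hypothetical pool and disk names:

    # Turn the mirror containing ada0 into a 3-way mirror by adding ada2:
    zpool attach tank ada0 ada2
    # Later, drop back to a 2-way mirror:
    zpool detach tank ada2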
 