danb35 · Hall of Famer · Joined Aug 16, 2011 · Messages: 15,504
> There is no way of mirroring vdevs
Correct; you mirror disks, not vdevs. Multiple vdevs are always striped.
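A minimal sketch of what that looks like in practice, with hypothetical pool and device names: creating a pool from two mirror vdevs means ZFS stripes data across the two mirrors automatically.

    # two mirror vdevs; writes are striped across them (names are hypothetical)
    zpool create tank mirror da0 da1 mirror da2 da3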
Note that ZFS has a feature to split off a mirror from a pool made of Mirrored vDevs.
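That feature is zpool split. A minimal sketch, assuming a pool of mirrors named tank: it detaches one disk from each mirror vdev and turns those disks into a new, independent pool.

    # split one disk out of each mirror into a new pool named tank2 (names are hypothetical)
    zpool split tank tank2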
> Does that mean Arwen too was wrong earlier in this thread?
As a matter of terminology, yes. Substitute "disks" for "vdevs" in her statement and it's correct.
> Ahh, I was under the impression vdevs effectively cannot be altered once created.
That is mostly correct. Adding a disk to create (or add to) a mirror (i.e., turn a single disk into a mirror, or a 2-way mirror into a 3-way mirror), or removing a disk from a mirror to reduce redundancy, are the exceptions.
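Those exceptions correspond to zpool attach and zpool detach. A minimal sketch, with hypothetical pool and device names:

    # attach da1 to the single disk da0, turning it into a two-way mirror
    zpool attach tank da0 da1
    # detach da1 again, reducing the mirror back to a single disk
    zpool detach tank da1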
> Does this mean you can in fact remove and reattach a drive from a mirror vdev without affecting the pool the vdev belongs to?
Yes, but it's really only practical if you only have one vdev. If you have, say, four disks in striped mirrors (which would be two vdevs), the data is going to be striped across the two vdevs, and you won't then be able to take out one disk and have anything usable on it.
> Would this be a more or less advisable strategy for periodically syncing a drive to be stored in a secondary location, compared to something like a simple file copy via rsync?
My preference would be to create a separate backup pool, and use ZFS replication to keep it in sync with your main pool.
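Replication works from snapshots. A minimal sketch, assuming a main pool named tank and a backup pool named backup (all names hypothetical):

    # initial full copy: snapshot everything and send it to the backup pool
    zfs snapshot -r tank@backup1
    zfs send -R tank@backup1 | zfs receive -F backup
    # later runs send only the changes since the previous snapshot
    zfs snapshot -r tank@backup2
    zfs send -R -i tank@backup1 tank@backup2 | zfs receive -F backup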
But there's nothing about ZFS that makes it more risky than any other filesystem with non-ECC RAM.
> due to the checksumming aspect, especially during scrubs, I don't think this is true.
I'm aware of that thread. Here's why it isn't true:
...
As you can see, this is a highly unlikely series of events.
Edit: I'll add that many of the pro-ECC arguments, especially the ones about the "scrub of death", also border on FUD.
ECC RAM should be used, IMO, in any server, under any OS, with any filesystem: if you care about your data, you should be using ECC RAM as much as possible.
> I assume there's no direct migration path that lets you add a fourth drive to a 3-drive mirror, and convert it into a four-drive RAIDZ2, even in the proposed vdev expansion update?
There is no such current path, and I'm not aware of anything under consideration that would do this either.
> So I understand your position clearly: you think that the particular risks of a ZFS scrub with non-ECC RAM have been exaggerated, but the general risks of using non-ECC RAM are significant enough to warrant people investing in ECC chips, RAM, and boards?
That's an accurate summary. If you're willing to consider used server hardware, the investment need not be great.
...
> Does that mean Arwen too was wrong earlier in this thread?
> As a matter of terminology, yes. Substitute "disks" for "vdevs" in her statement and it's correct.
Okay, okay, I give in. Less than ideal wording.
If you care about your data, make an effort to use ECC memory.
If you care enough about your data to use ZFS, why aren't you using ECC memory?
> If you care enough about your data to use ZFS, why aren't you using ECC memory?
I think this sums it up about perfectly.
> Are the memory requirements dictated only by drive size and the ZFS file system, and thus have less to do with the complexity of parity calculations?
But I could not help saying: ZFS uses memory for cache (Adaptive Replacement Cache, or ARC), and it has nothing to do with the way the pool is configured (drive layout); it has to do with the amount of data, the number of users, the kinds of transactions (large files / small files), and whether the same data is being handled repeatedly or it is always handling different data. A lot goes into it, which is why the general rule is 1GB of memory per 1TB of storage. It gets you in the ballpark of where you should be, but you should never even try to use less than 8GB of memory with ZFS, and ZFS is the file system for FreeNAS.
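As a worked example of that rule of thumb: a 16TB pool suggests roughly 16GB of RAM, while a small 4TB pool still gets the 8GB floor. On a FreeBSD-based system like FreeNAS you can see what the ARC is actually using; a minimal sketch using the standard FreeBSD ZFS sysctls:

    # current ARC size, in bytes
    sysctl kstat.zfs.misc.arcstats.size
    # configured ARC ceiling
    sysctl vfs.zfs.arc_max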
> With regards to creating backup copies of a mirror, is there any data on which technique is more stable/less prone to error: rsync vs. disk splitting vs. ZFS replication?
Rsync does a comparison of the source and destination to see if the file is the same or has changed, and only copies the file if there is a difference. ZFS replication looks at the time markers in the file system and, if there is new data, it copies over the new data. Both of those systems are reasonably reliable and have been used in the enterprise for years, and they can be automated to keep two servers, or even two pools inside the same server, synchronized. Splitting a disk out of a vdev is a manual process. I would not suggest that as a way of making a backup, but it does have its purposes and I have used it before. As @danb35 already said, it would only work in a pool with a single vdev. Once you go to a larger storage pool with striped vdevs, you can't do that any more.
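For comparison with the replication example above, an rsync copy is a plain file-level sync; a minimal sketch with hypothetical paths:

    # archive mode; --delete removes files from the destination that no longer exist on the source
    rsync -avh --delete /mnt/tank/data/ /mnt/backup/data/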
> I'm also curious how that 8GB threshold is reached, as it is often mentioned against the 1GB for 1TB rule. How can the minimum recommendation be a fixed value when it has a correlation to the pool capacities?
Because ZFS is software, and it is designed to expect a certain amount of memory to be available. My understanding is that ZFS can work with less than 8GB of memory, but it turns caching off. I read that in a post on the forum a couple years ago and I can't cite the source.
> Is it something to do with the particular version of ZFS?
There was a time before when ZFS did not require 8GB as a minimum, but I am not aware of when this changed.
> I ask since years ago I tinkered with a 2-drive nas4free box on a little low-powered machine. At the time FreeNAS still had the 8GB recommendation, but the 2GB I had was well within nas4free's recommendations.
There was a time when NAS4Free was using UFS instead of ZFS. Do you know that it was ZFS? If it has been 'years ago', it is very likely an earlier version of ZFS. If I recall correctly, there have been at least 3 upgrades of the ZFS features since I have been using it, with another coming in FreeNAS 11.2.
> When I recently considered unmothballing the build, I went to update to the latest nas4free and noted their requirements have now jumped up to the same 8GB.
Probably because of the ZFS upgrades I mentioned.