Stux
MVP
Joined: Jun 2, 2016
Messages: 4,419
No idea, really--to even speculate would require a lot more knowledge of ZFS internals than I have.
> No idea, really--to even speculate would require a lot more knowledge of ZFS internals than I have.

Yeah, me too :)
> Any idea if there is a significant performance penalty when an array is degraded like that?

I would think that depends on whether ZFS continually calculates the parity for the full stripe width of the array, including the missing disk. Perhaps it's smart enough to reduce the stripe width by the one missing drive and still store the second parity.
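For what it's worth, the general principle behind parity reconstruction in a degraded array can be sketched in a few lines. This is a toy single-parity (XOR) example, not ZFS's actual RAIDZ implementation, and the block values are made up for illustration:

```python
# Toy single-parity (RAID-style XOR) reconstruction sketch.
# NOT ZFS-specific -- just the general principle that parity lets you
# rebuild one missing block from the survivors.
data = [0b1010, 0b0110, 0b1100]          # three "data disk" blocks (made-up values)

parity = 0
for block in data:
    parity ^= block                      # parity = XOR of all data blocks

# Simulate losing the second disk, then rebuild it from parity + survivors:
missing_index = 1
survivors = [b for i, b in enumerate(data) if i != missing_index]

rebuilt = parity
for block in survivors:
    rebuilt ^= block                     # XOR-ing out the survivors leaves the lost block

assert rebuilt == data[missing_index]    # reconstruction recovers the lost block
```

The cost hinted at above is visible even here: serving a read from the degraded "disk" requires touching every surviving disk plus the parity, rather than a single device.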
> Yes, but as currently planned, you'd need to add them one at a time.

Apart from external phenomena like catastrophic multiple HDD failures during resilvering, can we expect there to be any innate risks associated with the new expansion feature?
> How far is this feature? Is it implemented?

There was supposed to be a demo at the ZFS Dev Summit in October 2018, and it was expected to land in FreeBSD 12. That being said, FreeBSD 12 has been released, but as far as I know RAIDZ expansion wasn't included. More work needs to be done.
> There was supposed to be a demo at the ZFS Dev Summit in October 2018 and expected to land in FreeBSD 12. That being said, FreeBSD 12 has been released but as far as I know RAIDZ expansion wasn't included. More work needs to be done.

I have not been able to find any way to track the progress of this. Does OpenZFS have a system (bug/issue tracking) so I (and others) can follow the progress?
> I have not been able to find any way to track the progress of this, does OpenZFS have a system (bug/issue/tracking) so I (and others) can follow the progress?

You can find all of the developer resources at http://open-zfs.org/wiki/Developer_resources
> We'll also be doing a Call for Testers in the next few weeks for https://zfsonfreebsd.github.io/ZoF/ with what's in now, the currently known caveats, and a great big warning that testing needs to be on experimental systems only at this stage.

Is that going to be incorporated into a FreeNAS beta release?
> We'll also be doing a Call for Testers in the next few weeks for https://zfsonfreebsd.github.io/ZoF/ with what's in now, the currently known caveats, and a great big warning that testing needs to be on experimental systems only at this stage.

Is this call for testers going to be for a build that includes 'RAIDZ expansion' (the topic of this thread)?
> There is an edge case when recovering blocks from parity, IIRC during the expansion process. The expansion process itself also takes up IO, obviously, but that's about it.

All Greek to me at the moment. Biggest question: if I currently have 100TB in my system and I add another 100, after expanding will I have 200TB, or will I lose some with this feature? How about read/write performance after expanding?
> Matthew Ahrens @mahrens1 · 23 hours ago:
> It will rebalance the data so that it's evenly across all disks in the RAIDZ group, and a big chunk of free space at the end of each disk.

This point is critical... I had totally discounted the idea of using it until I saw this. Now (when it finally arrives in FreeNAS) it will be an option on my list.

> biggest question is if I currently have 100tb in my system. I add another 100, after expanding will I have 200tb or will I loose some with this feature? How about read/write performance after expanding?

Let's say you have 10x10TB drives today, making your 100TB... RAIDZ2 means you lose 2 of those disks, so 80TB... let's not get into padding and keeping 20% free for CoW.
If you add 10x10TB, that will mean a RAIDZ2 with 20 disks: 18 for data and 2 for parity (keep in mind the recommended width of a RAIDZ2 is not to go over 12), meaning 180TB.
With the quote above, you may find performance is a little better after the addition due to the rebalance work done by the expand operation, but the performance of RAIDZ2 is the performance of a single VDEV, so effectively as slow as the slowest single disk in it in many scenarios.
Since you asked the question, you probably care about performance, so the recommendation would be not to do that (for many reasons).
If your only concern is capacity (and your data is important enough to keep too, hence using RAIDZ2), you could consider the adrenalin rush of "living on the edge" with a 20-wide RAIDZ2 VDEV (against the recommendation of all the clever people around here).
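The performance point above boils down to the usual rule of thumb: random IOPS scale with the number of vdevs, not the number of disks, so one huge RAIDZ2 vdev delivers roughly a single disk's worth of random IOPS. A rough sketch, where the per-disk figure is an assumed example value, not a measurement:

```python
# Rule-of-thumb estimate only: random IOPS scale with vdev count, not disk count.
disk_random_iops = 150  # assumed example value for a 7200rpm HDD, not a measurement

def pool_random_iops(vdevs: int, per_disk_iops: int = disk_random_iops) -> int:
    """Very rough pool-wide random IOPS estimate: one disk's worth per vdev."""
    return vdevs * per_disk_iops

print(pool_random_iops(1))   # one 20-wide RAIDZ2 vdev  -> ~150 IOPS
print(pool_random_iops(2))   # two 10-wide RAIDZ2 vdevs -> ~300 IOPS
```

Same 20 disks either way, but splitting them into two vdevs roughly doubles the random IOPS estimate, which is one reason the clever people around here recommend against a single 20-wide vdev.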