BUILD C226, i3, 10 drive build

Status
Not open for further replies.

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
It's not just because of performance; it's also a result of the variable stripe size. Even if you did not try to move the parity around, writes that are not aligned to the number of data drives would move the parity to a different drive. I'll use the image from the first blog post:
[Image: raid5-vs-raidz11.png]

The A write is "aligned", so the parity is on the last drive (Ap, Ap'). However, the B stripe is smaller, so it only uses two data drives and the parity moves to the third drive (Bp). The next stripe (C) is again aligned, but because of B its parity is now on the third drive too.
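A toy model can make the drift visible. This is a sketch only, not actual ZFS code: sectors are numbered row-major across the disks, and each block writes its data sectors followed by one parity sector, so where the parity lands depends on where the previous write ended. (Real RAID-Z places parity first within a stripe; data-then-parity here is chosen simply to match the figure's layout.)

```python
def allocate(blocks, ndisks=5):
    """blocks: list of (label, data_sectors); returns (label, parity_disk)."""
    pos = 0  # next free sector, numbered row-major across the disks
    layout = []
    for label, ndata in blocks:
        pos += ndata                          # data sectors fill in first
        layout.append((label, pos % ndisks))  # then one parity sector
        pos += 1
    return layout

# A is "aligned" (4 data sectors + parity = one full 5-disk row),
# B is a small 2-sector write, C is aligned again:
for label, pdisk in allocate([("A", 4), ("B", 2), ("C", 4)]):
    print(f"{label}: parity on disk {pdisk}")
# A's parity lands on disk 4 (the last drive), but B's short stripe
# pushes both B's and C's parity onto disk 2 (the third drive).
```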

Yes, the verification happens on all reads. However, you need to understand that checksum and parity are two different things. A checksum is just a hash (a single number) that tells you whether the block is consistent. The parity is a bigger chunk of data that allows you to actually reconstruct corrupted information. The checksum is stored in the block pointer (parent block) and is used to verify that the read block is OK. Only when the checksum doesn't match does ZFS read the parity and try to reconstruct the data.
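The division of labor can be sketched in a few lines. SHA-256 and single-parity XOR here are illustrative stand-ins (ZFS defaults to fletcher4 for data checksums, and RAID-Z parity math is more involved), but the roles are the same: the hash detects, the parity repairs.

```python
from functools import reduce
from hashlib import sha256

def xor(chunks):
    """Bytewise XOR of equal-length chunks (single-parity reconstruction)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks))

data = [b"D0", b"D1", b"D2"]                # data chunks of one stripe
parity = xor(data)                          # parity: a chunk-sized XOR
checksum = sha256(b"".join(data)).digest()  # checksum: one hash, kept in
                                            # the parent block pointer

# Normal read: only the checksum is verified; parity is never read.
assert sha256(b"".join(data)).digest() == checksum

# Corrupted read: the checksum DETECTS it (it cannot repair anything)...
data[1] = b"XX"
assert sha256(b"".join(data)).digest() != checksum

# ...and only then is the parity used to RECONSTRUCT the bad chunk.
data[1] = xor([data[0], data[2], parity])
assert sha256(b"".join(data)).digest() == checksum
print(data[1])  # -> b'D1'
```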

Ahaaa... :) Thanx again!
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Still thinking about 10x 2TB vs 6x 4TB...
Might there be a chance that ZFS could support vdev expansion by adding disks at some point in the future?
There is a nice article from Oracle, but it's old and I guess not much has happened since then:
https://blogs.oracle.com/ahl/entry/expand_o_matic_raid_z

(No seals will be clubbed in this thread)
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
No, I do not believe vdevs will ever be expandable. It breaks way, way too much in ZFS land.

That being said, I really don't see the need. Sets of 6 drives work out very well. In Z2, that's a decent amount of parity (2 out of 6), so resiliency is good. Z2 also gives you an 'optimal' vdev width for those that are random-IOPS challenged. 6 is not a huge number of drives for expanding, and 6 divides evenly into 24, which is a popular 4U chassis size.

I had a single 6 drive z2 pool that I expanded with another 6 drive z2. When I need to expand more, I'll move to a 24 drive chassis and add another 6. And still be able to go to the full 24 with 6 more.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Anything you read from Oracle about future features is useless. Oracle has closed the source code for their branch of ZFS, so after v28 Oracle and the open-source community part ways. If you need the Oracle stuff, better be ready to buy it!
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Ahaaa... :) Thanx again!

I'm getting the distinct impression, after looking at those perdy colored thangs der that look so close to the ones in Wikipedia, that distributed parity is a tad bit bloomin' more efficient than dedicated parity.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
Anything you read from Oracle about future features is useless. Oracle has closed the source code for their branch of ZFS, so after v28 Oracle and the open-source community part ways. If you need the Oracle stuff, better be ready to buy it!
Is Oracle still developing ZFS? I thought most/all of the developers left after the takeover, when Solaris went closed-source again.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
They sure are working on it again!

Here's v29 through v34 (all closed):
29 – RAID-Z/mirror hybrid allocator
30 – ZFS encryption
31 – Improved 'zfs list' performance
32 – One MB block support
33 – Improved share support
34 – Sharing with inheritance
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Yeah, it would be erroneous to think that a company that large would not train whatever replacements were needed to continue work on a project of that magnitude or importance, no matter how many developers left three years ago.
 

ZFS Noob

Contributor
Joined
Nov 27, 2013
Messages
129
Yeah, it would be erroneous to think that a company that large would not train whatever replacements were needed to continue work on a project of that magnitude or importance, no matter how many developers left three years ago.
I listened to a screencast of a talk by one of their former developers and got the impression they'd moved elsewhere because they couldn't reincorporate ZFS changes back into Solaris due to GPL concerns.
Never mind.
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
I'm getting the distinct impression, after looking at those perdy colored thangs der that look so close to the ones in Wikipedia, that distributed parity is a tad bit bloomin' more efficient than dedicated parity.

Yeah... but it's still kinda strange that a market leader like NetApp doesn't. o_O
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Yeah... but it's still kinda strange that a market leader like NetApp doesn't. o_O

That's because they're NAS dweeby wannabees.
Actually, I'm sure they have a reason... like it's easier to support or fix a customer's issues, and/or you can break the data apart from the RAID more easily than with distributed parity. But then I don't know their system, or even whether they employ ZFS, so I'm just talkin' outa my pie discharge hole. o_O
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Actually, I'm sure they have a reason... like it's easier to support or fix a customer's issues, and/or you can break the data apart from the RAID more easily than with distributed parity. But then I don't know their system, or even whether they employ ZFS, so I'm just talkin' outa my pie discharge hole. o_O
They definitely do not use ZFS. I checked some of their documents: they use a modification of RAID 4 they call RAID-DP (double parity) -- RAID 4 uses a dedicated parity disk, and they extended it to two parity disks.
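The dedicated-parity idea is easy to sketch. This is a simplified illustration of the RAID 4 half only: every stripe's parity lands on the same physical disk instead of rotating as in RAID 5. (RAID-DP adds a second disk holding diagonal parity on top of this, which is not modeled here.)

```python
def xor(chunks):
    """Bytewise XOR of equal-length chunks."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, byte in enumerate(c):
            out[i] ^= byte
    return bytes(out)

# RAID 4: N data disks plus ONE dedicated parity disk.
stripes = [[b"a0", b"b0", b"c0"],   # stripe 0 across data disks 0..2
           [b"a1", b"b1", b"c1"]]   # stripe 1 across data disks 0..2
parity_disk = [xor(s) for s in stripes]  # every parity chunk on disk P

# Data disk 1 dies: rebuild each of its chunks from survivors + parity.
rebuilt = [xor([s[0], s[2], p]) for s, p in zip(stripes, parity_disk)]
print(rebuilt)  # -> [b'b0', b'b1']
```

One consequence of the fixed parity disk: it is written on every stripe update, which is why RAID 4 setups typically pair it with write-caching tricks like NetApp's WAFL.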
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
They definitely do not use ZFS. I checked some of their documents: they use a modification of RAID 4 they call RAID-DP (double parity) -- RAID 4 uses a dedicated parity disk, and they extended it to two parity disks.

Whaaaaat! o_O What kinda hokie wankie hib gibble is that? That tells me one thing: they wanted double redundancy, like RAID 6, but for some reason definitely wanted dedicated parity. :rolleyes:
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
Whaaaaat! o_O What kinda hokie wankie hib gibble is that? That tells me one thing: they wanted double redundancy, like RAID 6, but for some reason definitely wanted dedicated parity. :rolleyes:

Well, they most likely have their reasons, and they do have some features you would appreciate as an enterprise user.
Remember, these guys deliver storage to CERN, among others...

You create raidgroups with double parity, and as with ZFS's zpool you can have several raidgroups in one aggregate.
However, you can still expand after creation, hot-adding disk shelves.
You can add disks to existing raidgroups and/or add new raidgroups, expanding the aggregate.
And this can be done hot, even after you have volumes, vFilers and LUNs in production.

Another great thing is that you can reallocate ALL existing data/volumes/LUNs across the expanded aggregate. Hot.

Disk management is just great.

Also, with their RAID-DP they promise little to no performance loss over single parity.

Snapshotting and deduplication are also supported, and even recommended, in production environments.

I could go on and on but... ;)
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
No, I do not believe vdevs will ever be expandable. It breaks way, way too much in ZFS land.
Bit of wishful thinking on my part, I guess; the article seemed to hint at a possible solution, though.
Still, it would be the greatest thing IMO to just start with a RAID-Z2 of 4 large disks and keep expanding as the need arises.
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Bit of wishful thinking on my part, I guess; the article seemed to hint at a possible solution, though.
Still, it would be the greatest thing IMO to just start with a RAID-Z2 of 4 large disks and keep expanding as the need arises.

Isn't that almost the same as a mirror? Not a very efficient use of drive space: 4x 4TB HDDs would only give you 8TB of storage. At that point you may as well just add 2 disks mirrored, and then add another 2 disks mirrored when you need more space. There isn't double redundancy, but as I understand it, mirrored disks don't face the same threat of two dying at the same time, since you don't go through a long, laborious, work-intensive resilvering process. I think if you're devoting 2 disks to redundancy you should do a 6-disk vdev, or at least 5 disks.
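The capacity arithmetic above can be put in a quick sketch. These are raw figures only; a real pool loses a bit more to metadata and slop space, and the `mirror-pairs` layout name is just a label for this example.

```python
def usable_tb(n_disks, disk_tb, layout):
    """Rough usable space; ignores ZFS metadata and slop overhead."""
    if layout == "raidz2":
        return (n_disks - 2) * disk_tb   # 2 disks' worth of parity
    if layout == "mirror-pairs":
        return (n_disks // 2) * disk_tb  # half of everything is a copy
    raise ValueError(layout)

print(usable_tb(4, 4, "raidz2"))        # 8  -- same 50% as mirrors
print(usable_tb(4, 4, "mirror-pairs"))  # 8
print(usable_tb(6, 4, "raidz2"))        # 16 -- 2/3 usable at 6 disks
```

This is why a 4-disk Z2 is the break-even point: only at 5 or 6 disks does double parity start beating mirror pairs on space.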
 

indy

Patron
Joined
Dec 28, 2013
Messages
287
Yeah, it was just hypothetical.
If you could add drives to a vdev, starting with a 4-drive RAID-Z2 would make a lot of sense (for me, at least).
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
You almost did and .................... um ............. I was looking for a NetApp server in your signature line and couldn't find it. ;):D

Yeah, still haven't got that at home... yet :rolleyes: and my "toys@work" list would be too long for the signature anyway :D
 