BUILD C226, i3, 10 drive build

Status
Not open for further replies.

Richman

Patron
Joined
Dec 12, 2013
Messages
233
I wasn't asking that question as to how it relates to FreeNAS. It was a general RAID question, so go club another, no, two baby seals :p
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
But both hardware and software RAID are discussed there.

In fact, I'm gonna club 2 seals now! Wanna try for 3 Mr. Smart mouth? :D
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Make it a half dozen. There are way too many baby seals in the ocean.

I forget I need to preface all of my questions with 4 pages of lead-in info. I have asked this particular question, which seems simple enough to be a yes or no, and never gotten an actual answer. I will do 45 more minutes of research (maybe one day, if I get curious again) and I am sure, even though it is hard to find, I will run across it. Geez, it seems like these are RAID trade secrets. Actually it was just a curiosity question. Like if I bought an old server and had 24x3TB disks on RAID 5, how much space would I have and how much would be given up to parity? It's probably a stupid elementary question. I am just going to figure I would have 69TB and only give up 3TB to parity. I don't really care any more, since I most likely will never do that. I asked that question 5-6 different ways, both in the forum and by personal message, and never saw a yes or no yet, so I lost all enthusiasm and it has drained my last ounce of energy.

I did however realize that I have had a dozen ASUS M5A78L-M LX Plus boards here, as I built systems for other people, and never realized that it can use ECC.
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Just looked at the manual again and didn't see anything that answers my question about parity.
Just tried to google it again, but seriously, I am past the point of curiosity where I am going to try and google it with 20 or more different wordings in order to get the right info to pop up. Lost interest and sorry I asked. Can't believe nobody in this forum knows the answer.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Here are the portions of the sentences that answer your questions:

...can tolerate the loss of one disk...

...can tolerate the loss of two disks...

Also you could try reading my presentation. The visual aids will help. :)
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
No, that didn't answer my original question at all. Nice try. I already know that info and have known it for two years now.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So what are you trying to find out? I'm confused...
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
Ok, I will try one more time (7th try at least; how's the saying go? Seventh time's the charm...) and try to think of the simplest way to ask, since I must have confused the hell out of everyone thus far. Simple question, and the first person answered many questions while dancing all around my original question, almost like they were deliberately trying to evade it. Don't ask who.

Here it goes:
I will ask it as an example.
i.e.: I have a 24 disk RAID 5 array. All are 4TB. How many disks or how much capacity do I lose to parity? 1 disk? And I have 92TB of usable space, correct? Yes or no?
i.e.: I have a 24 disk RAID 6 array. All are 4TB. How many disks or how much capacity do I lose to parity? 2 disks? And I have 88TB of usable space, correct? Yes or no?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Ok, I will try one more time (7th try at least; how's the saying go? Seventh time's the charm...) and try to think of the simplest way to ask, since I must have confused the hell out of everyone thus far. Simple question, and the first person answered many questions while dancing all around my original question, almost like they were deliberately trying to evade it. Don't ask who.

Here it goes:
I will ask it as an example.
i.e.: I have a 24 disk RAID 5 array. All are 4TB. How many disks or how much capacity do I lose to parity? 1 disk? And I have 92TB of usable space, correct? Yes or no?
i.e.: I have a 24 disk RAID 6 array. All are 4TB. How many disks or how much capacity do I lose to parity? 2 disks? And I have 88TB of usable space, correct? Yes or no?

You lose the number of disks used for parity.

If you have a single 24 disk RAID5 array, you will have 23 disks' worth of storage space, or about 92TB.
If you have a single 24 disk RAID6 array, you will have 22 disks' worth of storage space, or about 88TB.

I thought this was obvious from my comment above...

...can tolerate the loss of one disk...

...can tolerate the loss of two disks...
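
If you want to sanity-check that arithmetic yourself, here's a minimal sketch in Python (nothing FreeNAS-specific; the disk counts and sizes are just the numbers from the example above, and it ignores filesystem overhead and TB-vs-TiB differences):

Code:
def usable_capacity(total_disks, disk_size_tb, parity_disks):
    # Plain "disks minus parity" arithmetic for a single array/vdev.
    return (total_disks - parity_disks) * disk_size_tb

# 24 x 4TB disks:
print(usable_capacity(24, 4, 1))  # RAID5 (one parity disk)  -> 92 (TB)
print(usable_capacity(24, 4, 2))  # RAID6 (two parity disks) -> 88 (TB)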
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
I thought this was obvious from my comment above...
Now that I have confirmed what I thought, your comment does look obvious. Someone said something once that made me think that the parity data grows with the number of disks in an array, but I must have misunderstood. I had thought that parity could be just so many bits or bytes per block or chunk of data. I just wasn't entirely sure. Thanks for that confirmation. Now I can go to sleep and not have nightmares. ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You realize that whole conversation confuses the heck out of me. You seemed to know what you were talking about, but apparently you didn't know what you knew. LOL.
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
You know... if you bring your baby seals to Norway... we can make some nice gloves and jackets out of them for you :D

About the parity disk discussion, I'm not sure Dusan is completely right about the parity data being spread across all drives in the array.
Sure, on some systems/controllers that is true, but I'm not sure that is the general rule on hardware RAID controllers.

On NetApp the parity data is on specific drives. You can even see which ones are data drives and which are parity drives.
I assume this would be true on FreeNAS as well.

It does not matter though.
 

KevinM

Contributor
Joined
Apr 23, 2013
Messages
106
My first build idea was a 6x 4TB.
I agree, compared with 10x 2TB it would fit a smaller case (especially that nice Node 304) and consume less energy.
However, it has a slightly higher initial cost and can't really be expanded much just by swapping out the disks.
In general, 'wasting' 1/3 of the disk space on parity... idk, it just seems too much.
Pretty tough decision.
Thanks for the welcome btw :)

I look at RAID lossage as just the cost of doing business. The case I'm using in my home system will hold 7 hard drives and I considered RAIDZ3, but I didn't want to take the performance hit. Instead I bought a seventh drive and threw it in a drawer.

I have a couple of supermicro e1r36n systems at work, a primary and a replication target, and even there I went with 6 RAIDZ2 vdevs. It just didn't seem like there was a better way of carving up 36 drives, considering production best practice is 2n+2 for RAIDZ2 where n should not exceed 6, hot spares don't really work, etc.
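
As a rough illustration of how that carving works out (just back-of-the-envelope arithmetic, not FreeNAS tooling; the 4TB drive size is an assumption, and ZFS metadata/padding overhead is ignored):

Code:
# 36 drives split into 6 x 6-wide RAIDZ2 vdevs, as described above.
drives_per_vdev = 6
vdevs = 6
parity_per_vdev = 2          # RAIDZ2
drive_tb = 4                 # assumed drive size

data_disks = vdevs * (drives_per_vdev - parity_per_vdev)      # 24 data disks
print(f"~{data_disks * drive_tb} TB usable before overhead")  # ~96 TB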
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
About the parity disk discussion, I'm not sure Dusan is completely right about the parity data being spread across all drives in the array.
Sure, on some systems/controllers that is true, but I'm not sure that is the general rule on hardware RAID controllers.

On NetApp the parity data is on specific drives. You can even see which ones are data drives and which are parity drives.
I assume this would be true on FreeNAS as well.
The assumption is wrong. Maybe NetApp does it that way for some reason, but ZFS does distribute the parity. Spreading the parity across all devices also gives you better read performance. With ZFS' dynamic stripe size it would not even make sense to store the parity on a specific device. Here's a diagram that demonstrates how ZFS dynamic stripe size + parity works: https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ (the p blocks are parity).
 

KMR

Contributor
Joined
Dec 3, 2012
Messages
199
Seal Flipper Pie

Recipe
Ingredients
  • 4 Seal flippers – Paws & fat removed
  • Cured salt pork fat – Diced as well as 4 small strips
  • 3 large carrots – Diced
  • 1 Parsnip – Diced
  • 1 cup of peas – uncooked
  • 1 Stalk of celery – Diced
  • 1/2 Small turnip – Diced
  • 1 Large Onion – Diced
  • 1 Regular Beer
  • Water
  • 4 Tbs Vinegar
  • 2 Tbs Worcestershire Sauce
  • Flour
  • Other ingredients for a pastry (not going into detail on the pie crust here, google it)
Source: http://www.codenewfie.com/food/seal-flipper-pie
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
You realize that whole conversation confuses the heck out of me. You seemed to know what you were talking about, but apparently you didn't know what you knew. LOL.

Sometimes I think I've forgotten more than I actually thought I knew, or learned more than I thought I knew.......... errr uhh, or something. o_O

I have a couple of supermicro e1r36n systems at work, a primary and a replication target, and even there I went with 6 RAIDZ2 vdevs. It just didn't seem like there was a better way of carving up 36 drives, considering production best practice is 2n+2 for RAIDZ2 where n should not exceed 6, hot spares don't really work, etc.
I never fully understood that whole math thing with the n + something, something, even though I read many things and looked at wikis and Wikipedia numerous times.
Seal Flipper Pie
Recipe
And finally forums.freenas.org turns into www.food.net
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
The assumption is wrong. Maybe NetApp does it that way for some reason, but ZFS does distribute the parity. Spreading the parity across all devices also gives you better read performance. With ZFS' dynamic stripe size it would not even make sense to store the parity on a specific device. Here's a diagram that demonstrates how ZFS dynamic strip size + parity works: https://pthree.org/2012/12/05/zfs-administration-part-ii-raidz/ (the p blocks are parity).

Alright, maybe I am. I read the blog post you linked to, but are you sure this guy is correct?

I assume FreeNAS does parity the same way as NetApp, because NetApp Data ONTAP is running FreeBSD. Also, the "structure" of their volumes/pools is very similar.

I would really like to see some public documentation on this. It's very interesting indeed :)
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Alright, maybe I am. I read the blog post you linked to, but are you sure this guy is correct?

I assume FreeNAS does parity the same way as NetApp, because NetApp Data ONTAP is running FreeBSD. Also, the "structure" of their volumes/pools is very similar.
OK, so my word and an independent blog are not enough :). I'm not familiar with NetApp, but I'm 100% sure about ZFS. The only definitive "documentation" is the source code itself. However, let's try an experiment you can repeat yourself.

I hope you agree that when you write a file (let's assume a big file, not a few bytes) it makes sense (performance-wise) to spread the write across as many devices as possible. In a system that uses a dedicated parity drive you should see write activity on all drives (all data + parity). However, when reading there is no need to read the parity unless you are resilvering. So, now try to run "zpool iostat -v 1" and do some bigger reads and writes. If there is a dedicated parity drive you should see one drive idle when doing reads. In reality you will see almost equal activity across all the drives.

If this is still not convincing proof, let's look at the source code. You do not need to understand C, as there is this interesting comment in vdev_raidz.c: https://github.com/trueos/trueos/bl...ensolaris/uts/common/fs/zfs/vdev_raidz.c#L560
Code:
* If all data stored spans all columns, there's a danger that parity
* will always be on the same device and, since parity isn't read
* during normal operation, that that device's I/O bandwidth won't be
* used effectively. We therefore switch the parity every 1MB.

As you can see, ZFS prefers not to store the parity on the same device, as this hurts performance during reads -- the I/O bandwidth of one device would go unused since parity is not needed for reads.
You can also check this blog post: http://mbruning.blogspot.sk/2009/12/zfs-raidz-data-walk.html
It is a bit technical (the author is exploring the ZFS on-disk structures), but just search for "parity". You'll notice that the parity shows up on all disks.
 

Sir.Robin

Guru
Joined
Apr 14, 2012
Messages
554
Great! Thanks!! Gonna read that :)

I'm more asking than arguing, because I don't know for sure myself. And so I appreciate your info :)
Also... if I took $.50 every time someone told me how stuff worked or is, only for me to later discover they were dead wrong... well... I wouldn't be sitting with pennies ;)
And so I need to dig a little to make sure :p

If this is done because of performance it is strange that NetApp has parity on dedicated drives.
I'll look more into that myself...

However, you just gave me reason for another question.

I read somewhere that ZFS always checksums what it reads.... so that it is sure what it delivers is correct.
How can that be if parity is not read during read operations??
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
If this is done because of performance it is strange that NetApp has parity on dedicated drives.
I'll look more into that myself...
It's not just because of performance. It is also the result of the variable stripe size. Even if you did not try to move the parity around, writes that are not "aligned to the number of data drives" would move the parity to a different drive. I'll use the image from the first blog post:
[Image: raid5-vs-raidz11.png -- RAID5 vs. RAIDZ stripe/parity layout diagram]

The A write is "aligned", so the parity is on the last drive (Ap, Ap'). However, the B stripe is smaller so it only uses two data drives and the parity moves to the third drive (Bp). The next stripe (C) is again aligned, but because of B the parity is now on the third drive too.
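
Here's a toy sketch of that effect (not real RAIDZ allocation logic, just an illustration of how variable-width stripes shift which disk the parity block lands on; the disk count and stripe widths are made up):

Code:
# 4 disks; each stripe is written starting at the next free column,
# with its parity block first (purely illustrative).
disks = 4
cursor = 0                     # next free column, wrapping across disks
stripe_widths = [4, 3, 4, 4]   # parity + data blocks per stripe

for name, width in zip("ABCD", stripe_widths):
    parity_disk = cursor % disks
    print(f"stripe {name}: width {width}, parity lands on disk {parity_disk}")
    cursor += width

# The 3-wide stripe B is not a multiple of the disk count, so the
# aligned stripes written after it end up with their parity on a
# different disk than before (disk 3 instead of disk 0).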
I read somewhere that ZFS always checksums what it reads.... so that it is sure what it delivers is correct.
How can that be if parity is not read during read operations??
Yes, the verification happens on all reads. However, you need to understand that checksum and parity are two different things. A checksum is just a hash (a single number) that tells you whether the block is consistent. The parity is a bigger chunk of data that allows you to actually reconstruct corrupted information. The checksum is stored in the block pointer (the parent block) and is used to verify that the read block is OK. Only when the checksum doesn't match does ZFS read the parity and try to reconstruct the data.
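
A toy way to see the difference (plain Python; this has nothing to do with ZFS' actual checksum algorithms or RAIDZ math, it only illustrates "checksum detects, parity repairs" using single XOR parity and made-up equal-length blocks):

Code:
import hashlib
from functools import reduce

# Three equal-length "data blocks" and one XOR parity block.
data = [b"block-0", b"block-1", b"block-2"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data))

# Checksums are small hashes stored elsewhere; they only DETECT corruption.
checksums = [hashlib.sha256(d).digest() for d in data]

# Corrupt block 1, detect it via its checksum, then rebuild it from parity.
blocks = list(data)
blocks[1] = b"garbage"
bad = [i for i, d in enumerate(blocks) if hashlib.sha256(d).digest() != checksums[i]]
print("corrupted blocks:", bad)                         # -> [1]

survivors = [d for i, d in enumerate(blocks) if i != bad[0]]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(parity, *survivors))
print("reconstructed:", rebuilt)                        # -> b'block-1'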
 