FreeNAS works with Raid or Replaces it?

Status
Not open for further replies.

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
If you try to limit yourself to designs that do not include hard drive upgrades, it would be a lot easier...
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
Ok, so if I do 6 drives in Z2 right now so I can get my data out of this JBOD and in at least some safety, that would be one vdev. Then later I can add another 6 drive Z2 as a separate vdev and put them together as a single zpool right?

So I'll have 2 drives in each vdev for fault tolerance, and so long as I don't let either vdev die the data in the entirety of the pool will be usable. Sound right?

So currently I'm looking at two 6-drive 12 TB vdevs (~15-16 TB usable space in total).
Is there any advantage to doing three 4-drive 8 TB vdevs over the two 6-drive 12 TB vdevs in a zpool?
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
If you try to limit yourself to designs that do not include hard drive upgrades, it would be a lot easier...
I just bought a new car, I don't have the money to get 12 drives plus all the hardware I need to actually get the box up and running right now.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Ok, so if I do 6 drives in Z2 right now so I can get my data out of this JBOD and in at least some safety, that would be one vdev. Then later I can add another 6 drive Z2 as a separate vdev and put them together as a single zpool right?

Exactly right ;)

So I'll have 2 drives in each vdev for fault tolerance, and so long as I don't let either vdev die the data in the entirety of the pool will be usable. Sound right?

Yep, exactly, again.

Is there any advantage to doing three 4-drive 8 TB vdevs over the two 6-drive 12 TB vdevs in a zpool?

Less power used and less noise but the 8 TB drives are a big question mark for now (some members have problems with them, others don't).
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
Well, the drives are 2 TB; each vdev will have an effective 8 TB of space.

So 3 vdevs with 4 2 TB drives each,
or 2 vdevs with 6 2 TB drives each.

Both would net me about 24 TB raw in total, but the zpool would have a different number of vdevs of different sizes. I was wondering if there's an advantage to one configuration over the other?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
With 3 RAID-Z2 vdevs of 4 drives you'll have 6 drives used for parity and 6 for data.

With 2 RAID-Z2 vdevs of 6 drives you'll have 4 drives used for parity and 8 for data.

So in the first case you use 50 % for parity and in the second case you use only 33 %.
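The parity arithmetic above can be sketched in a few lines of Python (illustrative only; the function name is made up for this example):

```python
def parity_fraction(vdevs, drives_per_vdev, parity_per_vdev=2):
    """Fraction of all drives in the pool consumed by RAID-Z2 parity."""
    return (vdevs * parity_per_vdev) / (vdevs * drives_per_vdev)

print(parity_fraction(3, 4))  # 3 x 4-drive Z2 vdevs -> 0.5  (50 % parity)
print(parity_fraction(2, 6))  # 2 x 6-drive Z2 vdevs -> 0.333... (33 % parity)
```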
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Assuming RAID-Z2, the first combination would yield ~10.5 TiB of space, whereas the second option would give you ~14 TiB.

Option #2 is more efficient.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
One thing you probably don't realize is that calling those drives 2 TB is just a marketing ploy.

In reality, you'll only see about 1.8 TiB of usable space. It doesn't matter whether you are using FreeNAS, Windows, or ???. As hard drive sizes grow, the difference is more noticeable than in the olden days when drive sizes were much smaller.
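The decimal-vs-binary arithmetic behind that "missing" space, as a quick sketch (the helper name is made up for this example):

```python
def tb_to_tib(tb):
    """Convert marketing terabytes (1000^4 bytes) to tebibytes (1024^4 bytes)."""
    return tb * 1000**4 / 1024**4

print(round(tb_to_tib(2), 2))  # a "2 TB" drive shows up as ~1.82 TiB
print(round(tb_to_tib(4), 2))  # a "4 TB" drive shows up as ~3.64 TiB
```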
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
One thing you probably don't realize is that calling those drives 2 TB is just a marketing ploy.

In reality, you'll only see about 1.8 TiB of usable space. It doesn't matter whether you are using FreeNAS, Windows, or ???. As hard drive sizes grow, the difference is more noticeable than in the olden days when drive sizes were much smaller.
I scared my family when they noticed 6 drives of 4 TB each giving less than 12 TiB usable!

With 6 drives in RAID-Z2, only 4 have data, 2 have parity.
4 * 4*1000^4 * 0.80 / 1024^4 ≈ 11.64

4 disks * 4 TB * 80% ≈ 11.64 TiB

P.S.
So I bent the rules, and the volume had 12 TiB. Still looking improbable...
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
There is a HUGE difference between using FreeNAS and Windows. The two are not comparable, because in Windows there is no way to grow an array: if I wanted to add space to a RAID I would have to rebuild it, which means moving all the data elsewhere while I do that, and as I'm accumulating more movies, music, and television shows, I simply don't have the space for that. So yeah, it makes a huge difference, because as I want to grow the pool I work with, I'm going to need a file system that can handle a variable array. ZFS and Windows RAID are like apples and oranges, so yeah, it does matter.

Yes, I am fully aware of how much space is actually usable on a drive, and no, it's not a "marketing ploy". It's actually the drive space. Your computer calculates drive space in powers of 1024 while the manufacturer calculates it in powers of 1000. Both are actually correct, just measured in different ways for different reasons. This isn't a ploy, it's a misunderstanding.

I'm new to this type of file system, but not new to computing...
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
Well, TB is terabyte, which is 1000^4 bytes, while TiB is tebibyte, which is 1024^4 bytes.

Manufacturers are using terabytes, while your OS measures in tebibytes even when its GUI labels them "TB". That's why your size looks smaller.

Also, with your example of using 4 TB drives, shouldn't you be more around 19 TB of space, not 12?
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
They use TB as a label, and when they advertise, they advertise the total decimal space. The computer works in TiB, but most OS GUIs still slap the "TB" label on the binary value, which is why it looks like less than the advertised space. The computer knows exactly what it's dealing with; the people who make the labels, and even write the GUIs, often don't. I've never seen the FreeNAS GUI, so if it uses TiB over TB it's one of the first user interfaces I've ever seen that does it properly. Kudos to them.
 

tethlah

Dabbler
Joined
Nov 6, 2015
Messages
23
I think I'm settled on it then. But here's the next question: I heard that you can increase the size of the disks so long as you increase all the sizes in the vdev at once, so if the information is on the current disks, how do you increase the size of the vdev without having to find a way of moving the data? Adding vdevs makes sense for expansion, but how do you increase the current vdev size through disk replacement if there is no way to juggle the data?
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
My comment about the two OS's was based on usable drive space, not operational differences.

Based on the way I read your earlier messages, I didn't think you understood the difference in numbering systems being used.

There is a HUGE difference between using FreeNAS and Windows.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
You replace the disks one at a time and wait for the resilvering to complete. Then rinse and repeat. Once you've done it for all the disks in the vdev, you'll see the new volume size.

This procedure is documented in the manual.

... I heard that you can increase the size of the disks so long as you increase all the sizes in the vdev at once, so if the information is on the current disks, how do you increase the size of the vdev without having to find a way of moving the data?
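A toy model of why the extra space only appears after the last replacement: a RAID-Z vdev can only use min(disk size) from every member, so larger replacement disks stay invisible until all drives match. (Simplified sketch; the real behavior also depends on the pool's autoexpand setting.)

```python
def vdev_data_tb(disk_sizes_tb, parity=2):
    """Data capacity: smallest member disk times the number of data drives."""
    return min(disk_sizes_tb) * (len(disk_sizes_tb) - parity)

disks = [2] * 6                      # 6 x 2 TB RAID-Z2: 8 TB of data space
sizes = []
for i in range(6):
    disks[i] = 4                     # swap one disk for a 4 TB one, resilver
    sizes.append(vdev_data_tb(disks))
print(sizes)                         # [8, 8, 8, 8, 8, 16]
```

Capacity stays at 8 TB through the first five replacements and only jumps to 16 TB once the sixth disk is in.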
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
[...] Also, with your example of using 4 TB drives, shouldn't you be more around 19 TB of space, not 12?
In a RAID-Z2 with 6 drives of 4 TB each, 2 disks are used only for parity.

So only 4 disks of 4 TB each contain data, thus 16 TB in total.

Now 16 TB in total is the same as 14.5519 TiB.

With ZFS (this is a ZFS characteristic, not a FreeNAS one) you really want to stay below 90% used, and closer to a maximum of 80% used.

90% * 16 TB = 90% * 14.5519 TiB = 13.0967 TiB

80% * 16 TB = 80% * 14.5519 TiB = 11.6415 TiB

So in the ideal world, you would like your users to never use more than 11.6415 TiB.

P.S.
The last part of the calculation applies to any case where only 4 disks of 4 TB each store the data:
With RAID 0 with 4 disks of 4 TB each (= 16 TB), users should never use more than 11.6415 TiB.
With RAID-Z1 with 5 disks of 4 TB each (= 20 TB), users should never use more than 11.6415 TiB.
With RAID-Z2 with 6 disks of 4 TB each (= 24 TB), users should never use more than 11.6415 TiB.
With RAID-Z3 with 7 disks of 4 TB each (= 28 TB), users should never use more than 11.6415 TiB.
With RAID 10 with 8 disks of 4 TB each (= 32 TB), users should never use more than 11.6415 TiB.
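The whole calculation reduces to one small function (the name is made up for this sketch; 80 % is the fill rule of thumb from the post):

```python
def recommended_max_tib(data_disks, disk_tb, fill=0.80):
    """Usable target: data disks only, decimal TB -> TiB, times fill limit."""
    return data_disks * disk_tb * 1000**4 * fill / 1024**4

print(round(recommended_max_tib(4, 4), 4))  # 11.6415 TiB for 4 data disks
```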
 
Last edited:

Deleted47050

Guest
This makes no sense. Originally someone said you have to have either 3-, 4-, or 5-disk arrays. I saw that 3 was for Z1 (2+1) and 4 was for Z2 (2+2). Then someone else said you can do a Z2 in 6 (4+2). So I decided to do the Z2 in 6 drives and do 2 arrays (I guess striping the arrays? No clue, this is way more convoluted than a simple RAID array).

Now people are saying that I'll lose 50% of the space, some are saying as little as 35%.

Seriously? How variable is this? Is it even worth trying to get into? The entire thing seems really uncertain when it comes to how it's actually supposed to work, because I have yet to get the same answer out of 2 different people. I just want to put all my media in a central server that has some fault tolerance without having to spend 2-5k to do it. (And no, I'm not going above a 2 TB drive, because that completely defeats the whole affordability thing; I don't need 12 6 TB drives.)

I understand it can be confusing at first, but it's not uncertain in any way. The available space is variable only because it depends on the number of drives you decide to put in a vdev. With RAIDZ2 for example, you ALWAYS use 2 drives for parity, so if you build a vdev with only 4 drives, you will have only half of your total storage, while if you use 10 drives, only 20% of them will be used for parity.
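The fixed two-drive parity cost per RAID-Z2 vdev can be tabulated directly (a throwaway sketch):

```python
for width in (4, 6, 8, 10):          # candidate RAID-Z2 vdev widths
    pct = 2 / width * 100            # always 2 parity drives, any width
    print(f"{width}-drive Z2 vdev: {pct:.0f} % of raw space is parity")
```

A 4-drive vdev loses half its raw space to parity; a 10-drive vdev loses only 20 %.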
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710