Why are additional vdevs the recommended method of growing a volume?

Status
Not open for further replies.

Michael Hanna

Dabbler
Joined
Jun 17, 2017
Messages
43
I've been using FreeNAS for about 6 months now and began looking into the proper way to grow my existing volume. It seems there are two methods... swap out drives for larger-capacity ones, or add an additional vdev and stripe it into the existing volume. It would appear the "recommended" method is adding additional vdev(s). I'm just curious as to why this is the recommended method. Is it because of the risk of losing additional drives during the resilvering process required when replacing drives with larger-capacity ones, or is there some other reason?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The answer depends a little on what hardware you have. Please share your hardware in accordance with this guide:

Updated Forum Rules 4/11/17
https://forums.freenas.org/index.php?threads/updated-forum-rules-4-11-17.45124/

The general reason for adding more vdevs is that more vdevs give you more performance at the same time that they add storage, without the need to replace existing drives. I have done several hardware migrations over the years. One of those was when I went from a five-drive RAIDz1 pool to a six-drive RAIDz2 pool, and another time I added a second vdev to the RAIDz2 pool. I have also migrated the drives in my pool from 1TB to 2TB to 4TB, so that is a completely valid option as well, no matter how many vdevs you have. You only need to replace all the drives in one vdev to get the additional space that the upgrade provides. In my main NAS I upgraded vdev-0 one year and vdev-1 the next year when I was going from 2TB to 4TB drives. There are many options.
If you want to replace your existing drives, and you have healthy drives, it should be no riskier than replacing a drive after a failure. In all of my upgrades, I ensured that I had a current backup first, but I did not lose any data.
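In case it is useful to see what that looks like under the hood, here is a rough sketch of the replace-and-grow steps at the zpool level (the pool name "tank" and the da* device names are just placeholders, and on FreeNAS you would normally do this through the GUI rather than the shell):

Code:
# Let the pool grow automatically once every drive in the vdev is bigger:
zpool set autoexpand=on tank
# Replace one drive at a time, waiting for each resilver to finish:
zpool replace tank da0 da8
zpool status tank      # watch the resilver; then repeat for the next drive
# If autoexpand was off, expand onto each new drive manually afterwards:
zpool online -e tank da8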
I hope I answered your question, but if you still wonder about something, please ask.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
It would appear the "recommended" method is adding additional vdev(s).
It's not at all clear what you're basing this on, but I'd disagree that there is one "recommended" method of expanding a pool. Both methods work, both are reliable, both are pretty straightforward. If you're short on drive bays, replacing your existing disks is likely to be the better route. If you have drive bays to spare, adding a vdev is the more attractive option, since replacing the existing disks means you're (in most cases) throwing out perfectly good disks just to replace them with larger ones.
 

Michael Hanna

Dabbler
Joined
Jun 17, 2017
Messages
43
The answer depends a little on what hardware you have. Please share your hardware in accordance with this guide:

Updated Forum Rules 4/11/17
https://forums.freenas.org/index.php?threads/updated-forum-rules-4-11-17.45124/

The general reason for adding more vdevs is that more vdevs give you more performance at the same time that they add storage, without the need to replace existing drives. I have done several hardware migrations over the years. One of those was when I went from a five-drive RAIDz1 pool to a six-drive RAIDz2 pool, and another time I added a second vdev to the RAIDz2 pool. I have also migrated the drives in my pool from 1TB to 2TB to 4TB, so that is a completely valid option as well, no matter how many vdevs you have. You only need to replace all the drives in one vdev to get the additional space that the upgrade provides. In my main NAS I upgraded vdev-0 one year and vdev-1 the next year when I was going from 2TB to 4TB drives. There are many options.
If you want to replace your existing drives, and you have healthy drives, it should be no riskier than replacing a drive after a failure. In all of my upgrades, I ensured that I had a current backup first, but I did not lose any data.
I hope I answered your question, but if you still wonder about something, please ask.

Thanks, I've updated my sig to include my hardware info... sorry about that. With regards to adding additional vdevs, should they be the same layout as the existing one? I.e., in my case, would I need to add another 8-drive vdev to my primary volume, or would any RAIDZ2 layout be sufficient?
 

Michael Hanna

Dabbler
Joined
Jun 17, 2017
Messages
43
It's not at all clear what you're basing this on, but I'd disagree that there is one "recommended" method of expanding a pool. Both methods work, both are reliable, both are pretty straightforward. If you're short on drive bays, replacing your existing disks is likely to be the better route. If you have drive bays to spare, adding a vdev is the more attractive option, since replacing the existing disks means you're (in most cases) throwing out perfectly good disks just to replace them with larger ones.

I thought I read the "recommended" method in the official user guide... perhaps I was mistaken. I've been reading a lot on the forums over the last several days. Thanks for the input. I really don't want to pull drives just because I'm getting larger drives. I do have spare bays; it was just the multiple warnings that the loss of any vdev would kill the volume that kept getting my attention. I guess in my mind more vdevs mean more chances of failure... even though I understand that with each being RAIDZ2 I would need to lose multiple drives in each vdev for that to happen. Guess I just need to change my thinking on that one.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
With regards to adding additional vdevs, should they be the same layout as the existing one?
"Should"? Yes, additional vdevs should be the same layout. They don't have to be, but it's best if they are.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I thought I read the "recommended" method in the official user guide.
Both methods are described in the manual, but I don't recall that I've seen one recommended over the other.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I guess in my mind more vdevs mean more chances of failure... even though I understand that with each being RAIDZ2 I would need to lose multiple drives in each vdev for that to happen.
A couple of times over the years I have had two drives in the same vdev start throwing errors around the same time. It is a little worrisome, and it is part of the reason I have two full backups locally. I am paranoid about losing my data. Remember, a RAID array of any kind is not a backup of the data on the array. You need a separate copy of that data somewhere else, even if it is on a single big drive. When your data is in question, one instance of the data equals no copies (one is none), two instances of the data is the minimum (two is one), and a best practice is to have another copy off-site, a third instance of the same data. More is often better. I worked for a company that maintained daily backup tapes for an entire month so they could roll back to any nightly backup from the previous month, but that is a little extreme for home. However, you could still do snapshots to roll back to a previous point in time if you accidentally delete a file. That doesn't protect you from catastrophic system failure, though.
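As a rough illustration of the snapshot idea (the dataset name below is a placeholder; in FreeNAS you would normally set this up as a periodic snapshot task in the GUI):

Code:
# Take a snapshot, then recover from it later if needed:
zfs snapshot tank/data@before-cleanup
zfs list -t snapshot        # see which snapshots exist
# Copy a single accidentally deleted file back out of the snapshot:
cp /mnt/tank/data/.zfs/snapshot/before-cleanup/file.txt /mnt/tank/data/
# Or roll the whole dataset back (discards everything since the snapshot):
zfs rollback tank/data@before-cleanup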
Have fun.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
@Michael Hanna, you may want to update your signature from:

Pool: 8X 6TB WD Red Pro RAIDZ2, 4X 3TB WD Red RAIDZ1

to:

Pools: 8X 6TB WD Red Pro RAIDZ2; 4X 3TB WD Red RAIDZ1

I had a moment of panic when I read it as one pool with two vDevs mixing RAID-Zx levels (which, while it works, is not recommended). But I kept reading your post, which indicated two separate volumes / pools.
 

Michael Hanna

Dabbler
Joined
Jun 17, 2017
Messages
43
@Michael Hanna, you may want to update your signature from:

Pool: 8X 6TB WD Red Pro RAIDZ2, 4X 3TB WD Red RAIDZ1

to:

Pools: 8X 6TB WD Red Pro RAIDZ2; 4X 3TB WD Red RAIDZ1

I had a moment of panic when I read it as one pool with two vDevs mixing RAID-Zx levels (which, while it works, is not recommended). But I kept reading your post, which indicated two separate volumes / pools.

Thanks. I can see now how that was confusing. I fixed the signature.
 

Evi Vanoost

Explorer
Joined
Aug 4, 2016
Messages
91
You can attach more drives to a RAIDZ VDEV or remove drives from it, too. You may have to upgrade to 11.2, but the functionality is there now. It has some drawbacks (memory usage, primarily), so I wouldn't recommend it.

I recommend attaching more, smaller VDEVs to the pool. Writes get striped over the VDEVs (and subsequently reads too). Each VDEV, however, will have roughly the speed of a single drive (since it needs to read/write across all devices in the VDEV to get a single block of data out), so multiple VDEVs mean your speed increases a lot more. According to Nexenta sales engineers, on SAS systems the upper limit of the VDEV speed scaling is ~12-13 VDEVs, regardless of their configuration.

Also, rebuild times are a lot faster for smaller VDEVs. If your data needs to be resilvered, a VDEV of 10x10TB drives over SATA may take a week or so to recover, and over a window that long the chance that another failure (and thus pool loss) sets in is incredibly high.
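You can watch the striping across VDEVs for yourself (pool name is a placeholder); with two VDEVs you should see the I/O spread over both:

Code:
# Per-vdev I/O statistics, refreshing every 5 seconds:
zpool iostat -v tank 5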
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Also, rebuild times are a lot faster for smaller VDEVs. If your data needs to be resilvered, a VDEV of 10x10TB drives over SATA may take a week or so to recover, and over a window that long the chance that another failure (and thus pool loss) sets in is incredibly high.
Not if you are using RAIDz2. Nobody should be using RAIDz (single drive parity) with drives 2TB and larger.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You can attach more drives to a RAIDZ VDEV or remove drives from it, too. You may have to upgrade to 11.2, but the functionality is there now.
No, neither functionality is there with 11.2. Adding drives to RAIDZn is a WIP. Removing vdevs (not disks) from a pool is in 11.2 (but not earlier versions), but only if all vdevs in the pool are either single disks or mirrors. AFAIK, removing individual disks from RAIDZn vdevs is not being seriously considered at this time, nor is removing vdevs from pools containing a RAIDZn vdev.
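Where it is supported, the 11.2 feature looks roughly like this (pool and vdev names are placeholders; the vdev name comes from zpool status):

Code:
# Evacuate and remove a top-level vdev; this only works if every vdev in
# the pool is a single disk or a mirror:
zpool remove tank mirror-1
zpool status tank     # shows the evacuation/remapping progress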
 

Evi Vanoost

Explorer
Joined
Aug 4, 2016
Messages
91
Not if you are using RAIDz2. Nobody should be using RAIDz (single drive parity) with drives 2TB and larger.
I agree, but I've had a read/checksum error happen 3 days into a rebuild when another drive in RAIDZ2 failed.

I think the consensus amongst commercial vendors is RAIDZ2 for up to 6-8 drives and RAIDZ3 for up to 8-12 drives, but most don't recommend more than 8 drives in a VDEV.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I agree, but I've had a read/checksum error happen 3 days into a rebuild when another drive in RAIDZ2 failed.
Even having two drives fail simultaneously in a RAIDz2 pool (I have had that happen), the odds of actually losing data are low. I have resilvered two replacement drives into the same pool at the same time due to a dual drive failure (within minutes of each other) and lost no data, not even any corrupt files. If you lost data, I would say it is a statistical anomaly.
I think the consensus amongst commercial vendors is RAIDZ2 for up to 6-8 drives and RAIDZ3 for up to 8-12 drives, but most don't recommend more than 8 drives in a VDEV.
Typically, the forum here is for home users, even though we do have a fair number of cost-conscious businesses that come here looking for guidance to build their own storage. The forum guidance is no more than 10 disks in a RAIDz2 and 11 disks in a RAIDz3, and best practice is usually either 6 or 8 disks. This has been reiterated thousands of times on the forum, and you would know that if you had spent any time reading before commenting.
 

pro lamer

Guru
Joined
Feb 16, 2018
Messages
626
don't want to pull drives just because I'm getting larger drives
it should be no riskier than replacing a drive after a failure
Even less risky, in fact, if one does an in-place replacement, since the pool keeps its full redundancy level the whole time (compared to replacing a failed drive, when redundancy is already reduced).
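That is exactly what a replace does when the outgoing drive is still healthy and connected (device names below are placeholders): the old drive stays active in the vdev until the resilver onto the new one finishes, so redundancy never drops:

Code:
zpool replace tank da3 da11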

Sent from my phone
 

LimeCrusher

Explorer
Joined
Nov 25, 2018
Messages
87
Nobody should be using RAIDz (single drive parity) with drives 2TB and larger.
I read a lot about this recently, and it made me question the builds I was considering (mostly in favor of mirrored vdevs).
Nowadays, Seagate specifies a URE rate of 1 in 10^15 bits read for their IronWolf hard drives, and Western Digital specifies "<1 in 10^14", which sounds more conservative. So I was wondering, does that recommendation still hold in late 2018?

I mean, I did the maths. The chance of losing data to a URE while resilvering a few TB after a disk loss, even at a rate of 1 in 10^15, is still not totally satisfying. I am more looking for an "expert opinion" here.
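For anyone who wants to check the maths, the usual back-of-the-envelope model (my sketch, assuming bit errors are independent and that the vendor figure is an actual rate rather than an upper bound) is:

P(at least one URE) = 1 - (1 - p)^b ≈ 1 - e^(-p*b)

where p is the URE rate per bit and b is the number of bits read. Reading one full 6TB drive means b ≈ 4.8 x 10^13 bits, so:

p = 10^-15 (Seagate): P ≈ 1 - e^(-0.048) ≈ 4.7%
p = 10^-14 (WD): P ≈ 1 - e^(-0.48) ≈ 38%

And a RAIDZ resilver has to read every surviving drive in the vdev, not just one, so the real exposure scales with the vdev size.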
 