Question about a potentially smaller-capacity vdev being added to a pool.

Abby Normal

Cadet
Joined
Jan 4, 2021
Messages
2
Greetings all.

I have a question about adding another vdev to my pool to expand its storage. I was going to add another 6-drive raidz2 vdev, but drives identical to the ones in my original vdev have disappeared from retail. I was shucking the WD 10TB Easystores that are exclusive to Best Buy, but they are no longer on their site.

I have heard that the WD Elements also contain EMAZ white-label drives, but if they go out of stock before I can finish acquiring these hard drives, then I run into the crux of my problem.

Assuming that I have to shuck the drives out of a MyBook, or use a different brand of 10TB drives with a different platter density and possibly a slightly smaller capacity than my current EMAZ white labels, would that create a problem? I know that one lineup is helium-sealed and the other is air-filled, so I imagine there is a difference in platter density, but I am unfamiliar with how WD does anything anymore after their SMR fiasco and their labeling of 7,200 RPM drives as 5,400 RPM. I was shucking these drives specifically because they were listed as 5,400 RPM and were cheap, but I haven't even followed up to see if they really are.

I was under the impression that if I added a vdev of six 12TB drives in raidz2 to the 6-disk raidz2 vdev of 10TB drives I already have, I would only get the usable space of a 10TB-based vdev added to the pool. That is, unless I replaced the 10TB drives in the first vdev with matching 12TB drives, at which point the rest of the space would become usable in the pool. Am I wrong in that assumption so far?

Now, building on that assumption, here is the issue behind my dilemma. Say the drives I add for the new vdev have a true capacity of 8.9TB each, while the first vdev's drives have a true capacity of 9.1TB per disk. Would that create problems, or would the usable space on the first vdev simply get cut back a bit to accommodate it?

If it is the latter, that would not be a problem for me, since I would still be gaining a sizable increase from the added vdev. I also don't care about an immediate performance increase, since I know the new vdev would have to fill first before I would see the gains, and that is only a bonus for me, not a necessity. If mismatched vdevs would create problems, though, I would have to re-evaluate my approach.

This is only a cloud hanging over my head, since I'm acquiring these drives on a monthly basis and running stress tests on them in the meantime to make sure they are up to snuff. I don't have the money to go out and get a bunch of the Elements-series external drives to shuck all at once, and I especially don't have the money to go out and snag the Red drives themselves right now; that would take quite a bit longer to acquire. I may even have to get the 12TB Elements/Easystores, since they cost about the same as most 10TB MyBooks unless you get a good sale.

I appreciate any help in this matter.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
then I run into the crux of my problem.
It isn't really that much of a problem.
a different brand of 10TB drives with a different platter density and possibly a slightly smaller capacity than my current EMAZ white labels, would that create a problem?
You can use any 10TB drive. I highly recommend the Seagate Exos drives. I have over a hundred of them in servers at work and couldn't be more pleased with the reliability and performance. The vdevs do not need to be the same exact capacity.
I was under the impression that if I added a vdev of six 12TB drives in raidz2 to the 6-disk raidz2 vdev of 10TB drives I already have, I would only get the usable space of a 10TB-based vdev added to the pool.
No, you are a little confused. Capacity of the pool is additive per vdev, so if you have one 40TB vdev and one 44TB vdev, the pool capacity is 84TB. That is not exact math; it is just an illustration. The way you were thinking is correct within a vdev: a vdev with two 10TB drives and two 12TB drives would only use 10TB of the space on the 12TB drives, and if you upgraded the two 10TB drives to 12TB drives, the vdev capacity would expand. All drives in a RAIDZ2 vdev need to be the same capacity to make use of the full drive capacity. Writes are spread across the vdevs based on which vdev is ready for data next, so a faster vdev will gobble data more quickly and fill up sooner than a slow one. The pool can't survive the failure of any vdev, but you can think of each vdev as a separate array within a stripe set.
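To put rough numbers on that (a naive parity-only estimate with made-up drive sizes; real usable space comes in lower once ZFS metadata and padding overhead are counted, which is why the figures above are only an illustration), each raidz2 vdev gives you roughly its smallest drive size times the number of drives minus two, and the pool is just the sum of its vdevs:
Code:
# Rough estimate only; actual ZFS capacities will be lower.
# vdev 1: six 10TB drives in raidz2 -> (6 - 2) x 10 = 40TB usable
# vdev 2: six 12TB drives in raidz2 -> (6 - 2) x 12 = 48TB usable
echo $(( (6 - 2) * 10 + (6 - 2) * 12 ))   # approximate pool total, in TB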
Now, building on that assumption, here is the issue behind my dilemma. Say the drives I add for the new vdev have a true capacity of 8.9TB each, while the first vdev's drives have a true capacity of 9.1TB per disk. Would that create problems, or would the usable space on the first vdev simply get cut back a bit to accommodate it?
No, that is not how ZFS works at all. The capacity of each vdev is independent of the others, and the capacity numbers for each vdev get added together to give the total pool capacity. I have a system at work with 80 drives, ten vdevs of 8 drives each, and there are vdevs consisting of 2TB drives, other vdevs with 4TB drives, and yet other vdevs with 6TB drives... That system is retiring soon, but it still works like it should. The folks that designed the file system were pretty smart about how it handles these things.
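In practice, adding the second raidz2 vdev is a single command. The pool and device names below are placeholders, not your actual setup, and on FreeNAS/TrueNAS you would normally do this through the web UI instead so the disks get partitioned and labeled the usual way:
Code:
# Hypothetical pool/device names; substitute your own.
# Dry run first, to see how the new vdev would be laid out:
zpool add -n tank raidz2 da6 da7 da8 da9 da10 da11
# If the layout looks right, add it for real:
zpool add tank raidz2 da6 da7 da8 da9 da10 da11
# Each vdev then reports its own SIZE, and the pool totals them up:
zpool list -v tank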
snag the Red drives
I would stay away from all WD-branded drives. I don't like the way they fail. All drives fail, but the way they fail matters. It is like the SMR fiasco: WD has not been telling the whole story about how their drives work for a long, LONG time. Their drives have errors, like any other drive; they just hide it by not reporting the full SMART stats. The Seagate drives I run provide twice the metrics on drive health, so I can see a failure before it happens, but a WD just quietly rots away until one day it blows up in your face. I replaced seven WD drives in the month of December alone, but only one Seagate. I don't buy WD any more, and have not for a couple of years. The last batch I bought were WD Gold 6TB drives, about three years ago, and since then we have probably bought 100 or so of the 10TB Seagate Exos drives and almost 300 of the 12TB Seagate Exos drives. The way it is going, I don't see my organization buying anything from WD in the future.
unless you get a good sale.
I understand the choice to shuck drives for the cost savings; I have done that myself at home. Good luck.
 

Abby Normal

Cadet
Joined
Jan 4, 2021
Messages
2
I can't begin to thank you enough for how thorough that explanation was. It makes perfect sense, and I've walked away with a much better understanding of how ZFS handles multiple vdevs. I thought the smallest-drive-in-the-vdev rule carried over to the other vdevs in a situation like this, and it turns out to be the opposite.

I would have looked long and hard for an alternative to WD after their recent fiascos, but I had already shucked those drives before the news broke. Just when you thought you could actually find a cheap source of 5,400 RPM drives, even that doesn't turn out to be the case. How I miss Hitachi being independent.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I can't begin to thank you enough for how thorough that explanation was.
You're welcome.

Right now, NewEgg has these listed for $215:


That is only about $40 more than the WD Elements:


Considering that the Seagate comes with a 5-year warranty and I know they work well, I think they are worth the extra money.

I know it can be difficult to afford. In the early days of building my home storage systems, I was buying used drives from eBay. I had a high failure rate and changed a lot of drives; back then, I don't think a month went by that I wasn't changing a drive or two because of some type of failure. I had a rig at one time with two SAS controllers, each running half the drives. One controller failed and took the pool offline, and once I replaced the controller, the pool came back with no lost data. Based on my past experience with hardware RAID, that really impressed me. Now that I think about it, probably everyone should start with hardware from eBay, just to get practice replacing failed components; it was a good learning experience.

It has been very nice for the last two years: since I invested in better equipment, I have only changed two drives in my home NAS.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I've walked away with a much better understanding of how ZFS handles multiple vdevs.
I thought you might like to see this, to give you a little more insight:

Code:
root@Emily-NAS:~ # zpool list -v Test

NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Test                                            36.6T  14.6T  22.1T        -         -     0%    39%  1.00x    ONLINE  /mnt
  mirror                                        9.09T  3.66T  5.43T        -         -     0%  40.3%      -  ONLINE
    gptid/9fd0fe59-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
    gptid/a57a0eed-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
  mirror                                        9.09T  3.69T  5.41T        -         -     0%  40.5%      -  ONLINE
    gptid/a234816c-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
    gptid/a457202e-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
  mirror                                        9.09T  3.64T  5.46T        -         -     0%  40.0%      -  ONLINE
    gptid/a2239f0a-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
    gptid/a1b53121-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
  mirror                                        9.09T  3.57T  5.52T        -         -     0%  39.3%      -  ONLINE
    gptid/a0410a6b-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
    gptid/a6510554-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
dedup                                               -      -      -        -         -      -      -      -  -
  gptid/a63d35b7-4fe7-11eb-922f-00074306773b      93G      0    93G        -         -     0%  0.00%      -  ONLINE
  gptid/a674c88d-4fe7-11eb-922f-00074306773b      93G      0    93G        -         -     0%  0.00%      -  ONLINE
special                                             -      -      -        -         -      -      -      -  -
  mirror                                          93G  19.3G  73.7G        -         -     5%  20.8%      -  ONLINE
    gptid/a4be1960-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE
    gptid/a5cf31c2-4fe7-11eb-922f-00074306773b      -      -      -        -         -      -      -      -  ONLINE

I created that pool to test the new special vdev for metadata. That vdev is on SSD along with the vdev for dedup. I am not sure why I didn't get any benefit from dedup. More testing to do.
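For anyone curious, a layout like that can be sketched out in a single create command. The device names here are hypothetical (da0-da7 standing in for HDDs, ada0-ada3 for SSDs), and on TrueNAS the web UI would normally handle the partitioning for you:
Code:
# Hypothetical devices; this mirrors the shape of the Test pool above.
zpool create Test \
  mirror da0 da1  mirror da2 da3  mirror da4 da5  mirror da6 da7 \
  special mirror ada0 ada1 \
  dedup ada2 ada3
# Note: special and dedup vdevs hold pool-critical data, so losing one
# loses the pool. Single striped dedup devices like this are fine for a
# test pool, but for real data they should be mirrored too.
zpool list -v Test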
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
[..] I would stay away from all WD-branded drives. I don't like the way they fail. All drives fail, but the way they fail matters. It is like the SMR fiasco: WD has not been telling the whole story about how their drives work for a long, LONG time. Their drives have errors, like any other drive; they just hide it by not reporting the full SMART stats. The Seagate drives I run provide twice the metrics on drive health, so I can see a failure before it happens, but a WD just quietly rots away until one day it blows up in your face. I replaced seven WD drives in the month of December alone, but only one Seagate. I don't buy WD any more, and have not for a couple of years. The last batch I bought were WD Gold 6TB drives, about three years ago, and since then we have probably bought 100 or so of the 10TB Seagate Exos drives and almost 300 of the 12TB Seagate Exos drives. The way it is going, I don't see my organization buying anything from WD in the future. [..]
Although the numbers are hugely different here, the basic experience has been exactly the same. In my old FreeNAS with 6 * 4 TB WD Red CMR drives, I had to replace 4 disks over the years. I don't know how many disks I have had over the last 30 years, but those were literally the only ones that failed in all those decades. I have also gone for Seagate Exos drives (8 * 16 TB) for the new NAS; at the beginning of October 2020 they were the cheapest drives I could get here in Germany (much cheaper than IronWolf or IronWolf Pro). No long-term experience yet, but the assumption had simply been that the top-of-the-line drives (i.e. the data center model) should really be solid. No guarantee of course, but I guess that's my best chance.
 

blanchet

Guru
Joined
Apr 17, 2018
Messages
516
I would stay away from all WD-branded drives. I don't like the way they fail. All drives fail, but the way they fail matters. It is like the SMR fiasco: WD has not been telling the whole story about how their drives work for a long, LONG time. Their drives have errors, like any other drive; they just hide it by not reporting the full SMART stats. The Seagate drives I run provide twice the metrics on drive health, so I can see a failure before it happens, but a WD just quietly rots away until one day it blows up in your face. I replaced seven WD drives in the month of December alone, but only one Seagate. I don't buy WD any more, and have not for a couple of years. The last batch I bought were WD Gold 6TB drives, about three years ago, and since then we have probably bought 100 or so of the 10TB Seagate Exos drives and almost 300 of the 12TB Seagate Exos drives. The way it is going, I don't see my organization buying anything from WD in the future.

I understand the choice to shuck drives for the cost savings; I have done that myself at home. Good luck.

Many vendors (including iXsystems) sell servers with HGST DC disks (HGST is now a subsidiary of WD).
Are these hard disks a poor choice? Do they lie in their SMART reports?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Are these hard disks a poor choice? Do they lie in their SMART reports?
It isn't so much that they lie as that they don't tell the whole story. I will try to get a couple of SMART report outputs I can upload so you can see the difference. It is about the number of data points provided by WD and HGST: if they just don't report a value, the number can't look bad, right?
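In the meantime, one way to see the difference yourself is simply to compare how many attributes each drive reports. The device names here are placeholders; `smartctl --scan` will show what your system actually calls them:
Code:
# List the vendor SMART attribute table for each drive:
smartctl -A /dev/da0     # e.g. a Seagate Exos
smartctl -A /dev/da1     # e.g. a WD/HGST white label
# Quick and dirty comparison of how many attribute rows each drive exposes:
smartctl -A /dev/da0 | wc -l
smartctl -A /dev/da1 | wc -l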

Where I work, I am going through some of our servers, installing new Seagate drives to replace the WD drives.

Pictures or it didn't happen? OK

[attached screenshot: 1610040817481.png]


HGST drives were historically great drives. That is one of the reasons WD bought them: to get their technology, because it was really good. Many of the current WD drives are just HGST drives with a different label and firmware, and even the firmware isn't that different.
In fairness, not that I feel all that charitable toward them, the drives we have had such a high failure rate with were produced between three and six years ago, so current production drives could be much different. That said, I don't like the failure mode. The drives will be humming along, acting like they are fine, then you come in on Monday morning and have five failed drives: catastrophic failures where the drive is just offline.
With the Seagate drives, we start seeing indicators that they are going, like bad sectors, which lets us know we need to change the drive before it completely fails on a long holiday weekend when we are out of town...
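If it helps, those kinds of early-warning indicators can be checked with something like this (the device name is a placeholder, and these are just the common pre-failure SMART attributes):
Code:
# Pull the usual pre-failure indicators out of the SMART attribute table:
smartctl -A /dev/da0 | grep -Ei 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|Reported_Uncorrect'
# Non-zero raw values here are a good cue to line up a replacement
# before the drive fails outright.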
If you have WD and have had good luck with them, don't let me stop you, but I won't be buying them.
 