Is there any way to determine storage used on each vdev?


philhu

Patron
Joined
May 17, 2016
Messages
258
I now have a 71TB unit using one vdev of 11x4TB drives in RAIDZ3 and one vdev of 11x6TB drives in RAIDZ3.

The unit was at 74% when it only had the 4TB vdev. After adding the second vdev, usage dropped to 31%.

Is there any way to see the used % of each vdev, not just a total? Looking to balance use a little better. Right now I assume something like 74% of vdev0 and 4% of vdev1 is used, because I just added it.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Looking to balance use a little better.
ZFS will take care of that for you automatically as you write to the pool.
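If you want to watch that happen, one option (just a sketch; substitute your own pool name for volume1) is to let zpool iostat print per-vdev capacity and activity at a fixed interval:
Code:
# Per-vdev capacity and I/O, refreshed every 5 seconds; "volume1" is an example pool name
zpool iostat -v volume1 5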
 

philhu

Patron
Joined
May 17, 2016
Messages
258
OK. Does it reallocate existing data to the empty vdev, or does it just always write new data to the empty vdev? QNAP, as an example, considers each RAID group a disk and treats them as a JBOD in append mode, meaning it fills up one before writing to the other! Then it keeps writing to the 'new' one until it is full, and only then goes back to the first one. A very bad concept, and the reason I left QNAP.

After reading, I assume that ZFS writes to all vdevs, which makes it very efficient.

So there is no way to see vdev allocated/used space?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
After reading, I assume that ZFS writes to all vdevs, which makes it very efficient.
From my understanding (not 100% sure and willing to be corrected)...
  • Data at rest will stay at rest. By this I mean that if you start off with one vdev and later add another vdev to the pool/volume:
    • Any data/file that has not been modified will still reside only on the first vdev. It is not re-written to span both vdevs unless it is changed.
  • Newly created/added data will span the vdevs, though.
As far as having existing data re-written to span all vdevs, there is nothing I know of (short of backup, wipe, and copy back; a sketch of one approach follows below) to "restructure" all the data at once... Again, if there is a simple or recommended way to do this, I am all ears.
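A minimal sketch of that wipe-and-copy-back idea, assuming a hypothetical dataset named volume1/data (not a name from this thread) and enough free space in the pool to hold a second full copy while both exist:
Code:
# Sketch only -- "volume1/data" is an example dataset name; adjust to your layout.
# Needs enough free pool space for a complete second copy of the dataset.
zfs snapshot volume1/data@rebalance                            # freeze a point-in-time copy to send
zfs send volume1/data@rebalance | zfs recv volume1/data_new    # rewrite every block; new blocks land mostly on the emptier vdev
zfs destroy -r volume1/data                                    # drop the original dataset (and its snapshots)
zfs rename volume1/data_new volume1/data                       # put the rewritten copy where the original was
zfs destroy volume1/data@rebalance                             # clean up the snapshot that rode along with the send
Pause writes to the dataset while this runs, and note that locally set properties and share settings won't come along unless you send with -p, so double-check those afterwards.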

So there is no way to see vdev allocated/used space?
Good question; I have never pondered this myself, so while I check, maybe someone else will chime in. My server rack is powered down right now (trying to pull some 10-gauge wiring myself... what a pain), so I can't really test anything.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'm not sure there is a way to see the vdev usage, but the best way to rebalance is to move all of your data to a different dataset (maybe to a temp spot on the same pool and then back to original - normally fragmentation is a large issue, but you seem to have a decent amount of space). Of course, ZFS will balance things for you as you use the pool, and there really isn't a need to worry about it.
For more info, I highly recommend watching Matt Ahrens explain some of the inner workings of ZFS (from BSDCan #13 June 2016):
https://www.youtube.com/watch?v=AOidjSS7Hsg#
 

philhu

Patron
Joined
May 17, 2016
Messages
258
Well, I think I see a way to force writing to the new vdev.

Build a new folder and copy a chunk of data into it. This will force the new copy onto the emptier vdev. Then delete the old data and mv the new copy back to where it started, so the data shows up in its original place but now lives mostly in the new vdev. A sketch of the idea is below.
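A minimal sketch of that copy/delete/move cycle, assuming a hypothetical folder layout under /mnt/volume1 (paths are examples, not from this thread); keeping the scratch folder on the same dataset makes the final mv a cheap rename:
Code:
# Paths are examples only -- adjust to your own pool/dataset layout.
cd /mnt/volume1/media
mkdir ../rebalance_tmp
cp -RPpa somefolder ../rebalance_tmp/    # fresh copy; its blocks land mostly on the emptier vdev
rm -rf somefolder                        # delete the originals (snapshots may still hold the old blocks)
mv ../rebalance_tmp/somefolder .         # same dataset, so this is just a rename back into place
rmdir ../rebalance_tmp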
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I'm not sure there is a way to see the vdev usage, ......

Sure there is:

Code:
root@nas ~ # zpool list -v nas1pool
NAME                                           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
nas1pool                                      48.8T  33.7T  15.0T         -    27%    69%  1.00x  ONLINE  /mnt
  raidz2                                      16.2T  11.2T  5.01T         -    27%    69%
    gptid/f6d4e358-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/d21837de-1e96-11e6-bd02-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/f81deaca-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/f8cf0397-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/c94c54a5-ff6b-11e5-902c-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/cc7cc42d-f675-11e5-902c-0cc47a1d9eb6      -      -      -         -      -      -
  raidz2                                      16.2T  11.2T  5.01T         -    27%    69%
    gptid/fac7b2d0-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/fb742f2d-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/9d258c1e-f62f-11e5-902c-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/fcb32a63-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/38033124-2aa8-11e6-8308-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/fe1f7d10-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
  raidz2                                      16.2T  11.2T  5.01T         -    27%    69%
    gptid/fec3047d-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/ff679bf9-92d4-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/00027f4c-92d5-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/00a4d077-92d5-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/015e8d38-92d5-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -
    gptid/0221a8ac-92d5-11e5-addd-0cc47a1d9eb6      -      -      -         -      -      -


Three vdevs, each with equal allocated and free amounts. Expected, as this pool was created with all three vdevs from the start.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Ahh, thanks. I forgot about that. :smile:
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
If there's a severe imbalance in the vdev utilization, ZFS will heavily favor the emptier vdev until things balance out.

philhu

Patron
Joined
May 17, 2016
Messages
258
If there's a severe imbalance in the vdev utilization, ZFS will heavily favor the emptier vdev until things balance out.

Good, that's what I was hoping:
Code:
[root@freenas] ~# zpool list -v volume1
NAME                                     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
volume1                                  100T  34.0T  66.0T         -    18%    34%  1.00x  ONLINE  /mnt
  raidz3                                  40T  30.6T  9.39T         -    42%    76%
    gptid/c998b63d-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cabe3c7b-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cb560878-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cbef8b60-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cc7ea314-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cd0f5146-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cd9eac0e-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/ce2a118e-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cebb4f95-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cf4ba2ec-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
    gptid/cfdd85b3-290d-11e6-b18e-0025902ae8aa      -      -      -         -      -      -
  raidz3                                  60T  3.43T  56.6T         -     3%     5%
    gptid/4979d7f2-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4a4c80cc-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4b12e2e9-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4bd76874-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4c972f39-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4d535eb0-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4e7c520d-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/4fa6ccf1-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/50c3d025-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/51e73130-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
    gptid/52a39fee-38c4-11e6-b3f6-0025902ae8aa      -      -      -         -      -      -
log                                         -      -      -         -      -      -
  gptid/e3ff287f-3189-11e6-b68d-0025902ae8aa  14.9G   246M  14.6G         -    43%     1%
cache                                       -      -      -         -      -      -
  gptid/d53c311a-3189-11e6-b68d-0025902ae8aa   224G   224G  5.13M         -     0%    99%
 

philhu

Patron
Joined
May 17, 2016
Messages
258
OK, so I'm watching the numbers. As you said, it is writing mostly to vdev1, the almost-empty storage, as shown below. But it IS writing about 10% of the new data to vdev0, which is getting very close to the 80% mark, which is not the best. I know it is automatic, but it doesn't seem quite automatic. The old vdev has gone from 9.78T free to 8.1T free. New data written: about 8.1T, 1.7T to the old vdev and 6.4T to the new vdev.

Should I manually move about 5TB to the new storage? Easy to do: copy to a new dataset, which will mostly write to vdev1, delete the old data, then 'mv' the data back to the current folders.
 

Attachments

  • zpool1.jpg

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I know it is automatic, but it doesn't seem quite automatic.

It is; it's just that it still writes a small part of the data to the old vdev rather than writing nothing to it at all. It's not a brick-wall rule.

Should I manually move about 5TB to the new storage?

That's what I'd do ;)
 

philhu

Patron
Joined
May 17, 2016
Messages
258
OK, first test.

Moved 1.3T to a new folder using 'cp -RPpa A* ../newA' and watched it write mostly to vdev1. Good.
Deleted the old data, which was on vdev0, with 'rm -rf ./A*'.
Moved the data back using 'mv ../newA/A* .', which worked fine.

So here is the result: vdev1 DID increase by 1.3T, but vdev0 did not decrease, probably due to the snapshot (shadow copy) setup. Will the old data just expire, or do I need to do something to force it to be really deleted?
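If snapshots are what's holding the old blocks, a quick way to check (a sketch; pool and snapshot names below are examples) is to list per-snapshot space usage. The space is only freed once every snapshot referencing the deleted files expires or is destroyed:
Code:
# Show each snapshot and how much space only it is holding (USED), sorted smallest to largest
zfs list -t snapshot -o name,used,refer -s used -r volume1
# If a pre-delete periodic snapshot is holding the space and you no longer need it:
# zfs destroy volume1/media@auto-20160701.0000-2w    # example snapshot name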
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
OK, so I'm watching the numbers. As you said, it is writing mostly to vdev1, the almost-empty storage, as shown below. But it IS writing about 10% of the new data to vdev0, which is getting very close to the 80% mark, which is not the best. I know it is automatic, but it doesn't seem quite automatic. The old vdev has gone from 9.78T free to 8.1T free. New data written: about 8.1T, 1.7T to the old vdev and 6.4T to the new vdev.

Should I manually move about 5TB to the new storage? Easy to do: copy to a new dataset, which will mostly write to vdev1, delete the old data, then 'mv' the data back to the current folders.

Unless you are absolutely desperately in a crisis due to IOPS imbalance, you're really better off just not worrying about it and letting ZFS do its magic over time. Storage admins have gone insane obsessing over IOPS balancing.
 

philhu

Patron
Joined
May 17, 2016
Messages
258
Well, what I think I am seeing is ZFS NOT using a 'set in stone' rule. For it to write 10% of the data to a vdev at the redline of space usage (78%+), when the other is 93% empty, seems wrong. I am moving over a bunch this weekend. I do NOT want to get into a situation where vdev0 goes to 90% and then all hell breaks loose.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, what I think I am seeing is ZFS NOT using a 'set in stone' rule. For it to write 10% of the data to a vdev at the redline of space usage (78%+), when the other is 93% empty, seems wrong. I am moving over a bunch this weekend. I do NOT want to get into a situation where vdev0 goes to 90% and then all hell breaks loose.

No. You can be at 99.9999723% full on a single-vdev pool and unable to allocate new blocks because ZFS can't find any, then add a second vdev, and you're golden. There is no "redline of space usage" per vdev; it is per pool.

ZFS knows that it should try to balance writes between devices where at all possible, so even though there is a space imbalance, it still tries to allocate some space on that first vdev, knowing that when deletions happen from the pool, that's likely to mostly affect the first vdev. Over time, this causes balancing to happen much more quickly. No need to fight the process. ZFS knows what it is doing.
 

philhu

Patron
Joined
May 17, 2016
Messages
258
OK, that sounds reasonable. I will run for a week and see where we get. :)

OK, I feel better. vdev0 got to approximately 80% and ZFS stopped doing any writes to it; as far as I can tell, it is only writing to vdev1 at this point.
 