I vote for a third RAIDZ2 vdev. Of course, it looks like you may be out of drive slots in that case, so a JBOD is under consideration? That, or get a bigger case that houses more drives, has room for bigger PSUs, and can take your current parts.
As far as JBODs go, I prefer to give them their own pool/volume, just for the reassurance that no vdev spans two enclosures. But that is just me...
You will have 18 drives, correct? For three 6-drive RAIDZ2 vdevs?
You should split the vdevs so that a third of each vdev's drives (2 of 6) sit in the JBOD. Since RAIDZ2 can tolerate losing two drives per vdev, you will still be okay if you lose the JBOD.
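Purely as an illustration, a create-time sketch of that split (pool and device names are made up; substitute your own):

Code:
# Hypothetical: da0-da11 in the main chassis, da12-da17 in the JBOD.
# Each RAIDZ2 vdev gets 4 chassis drives + 2 JBOD drives, so losing the
# whole JBOD costs each vdev only 2 drives and the pool stays online.
zpool create tank \
  raidz2 da0 da1 da2  da3  da12 da13 \
  raidz2 da4 da5 da6  da7  da14 da15 \
  raidz2 da8 da9 da10 da11 da16 da17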
Once the drives come back, there will be a very quick resilver to catch up on the writes that were missed during the outage. It might even report that you lost X bytes because it had no place to write them.

I currently have 12 drives split as 2 x 6-drive RAIDZ2 vdevs. All of those drives are in my first chassis. My second chassis has 12 more drive slots. I am going to add 6 more drives right now and eventually another 6 down the road.
My only hesitation about changing my current setup the way you recommend is: what happens if I lose power to the JBOD, for example?
Right now, if I leave it as a third 6-drive RAIDZ2 vdev, add it to my current pool, and it blinks offline, I lose my pool until I can get the drives back online... no biggie, it's for my Plex server anyway. When I power it back up (and maybe reboot), the pool should see all the drives and come back online.
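And if it doesn't come back cleanly on its own, I assume a manual nudge like this would do it (device name is just a placeholder):

Code:
# Bring a missing drive back into the pool, then clear the error counters
zpool online vol1 da12
zpool clear vol1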
If I had split my vdevs as you suggest and then lost the JBOD, the pool would stay online but in a degraded state. Once I bring the JBOD back online, I think I would have 6 drives that need to resilver, correct? I'm thinking that might take a very long time, and I would have zero redundancy in any of my vdevs while that was happening.
Or do I have it wrong?
Thanks!
Yes, but basically ZFS keeps track of what changed while the drives were offline and very quickly rewrites just that delta.
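If you want to watch it, zpool status shows the resilver; a catch-up resilver like that normally touches only the data written during the outage, not the whole vdev:

Code:
# Shows resilver progress; a delta resilver after a brief outage should
# scan only the blocks written while the drives were away
zpool status -v vol1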
But yeah, if you lose two drives in every vdev (a third of your drives), it's possible for the pool to stay online, just with no remaining redundancy.
Thanks everyone for the great info. Based on my configuration, I decided to extend my pool to include the vdev in the JBOD, now giving me 65.2TB raw and 42TB usable.

Code:
[root@plexnas] ~# df -h
Filesystem    Size    Used    Avail   Capacity    Mounted on
vol1/media    42T     23T     20T     53%         /mnt/vol1/media

zpool list should be used instead of df most of the time.
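The reason: df only sees the mounted dataset's view after parity and compression, while zpool list reports raw pool capacity. Roughly (assuming the pool is named vol1, per the mount path above):

Code:
# Raw pool size and allocation, including parity overhead
zpool list vol1
# Per-dataset usable space, closer to what df reports
zfs list vol1/media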