Striping data across new Vdevs

Status
Not open for further replies.

akotter

Cadet
Joined
Jan 20, 2013
Messages
4
I had a zpool made up of 6 vdevs (12 disks in mirrored pairs). I needed more space and performance, so I added a JBOD that gave me 45 more disks. I expanded the zpool by adding the new disks as mirrored pairs, so now I have 27 mirrored vdevs in a single zpool (volume). Life is good, except that I originally created a dataset on the 6-vdev pool and used iSCSI to mount the volume on my Hyper-V host. I want that data to stripe across all the vdevs for performance reasons.
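
For anyone following along, here is a minimal sketch of the expansion step, assuming a pool named tank and hypothetical device names (your da numbers will differ):

    # sketch only: pool name "tank" and device names are placeholders
    # each "mirror daX daY" adds one new mirrored vdev to the existing pool
    zpool add tank mirror da12 da13
    zpool add tank mirror da14 da15
    # ...repeat for the remaining pairs...
    zpool status tank    # confirm the new vdevs appear alongside the old ones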

No problem, I thought. I'll just create a new dataset and copy the data from the old dataset to the new one. This I did, except when I ran gstat (expecting to see all the drives responding) I saw the 12 original disks reading and writing in concert, with all the remaining reads/writes going to just one of the new drives. Instead of spreading the data across 27 vdevs, it was hitting a single drive in one vdev.
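
In case it helps anyone reproduce this, per-vdev activity can also be watched from ZFS itself rather than gstat (the pool name "tank" is a placeholder):

    # per-vdev read/write statistics, refreshed every 5 seconds;
    # new writes to the pool should show activity on every vdev
    zpool iostat -v tank 5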

At least that explained why I was getting such poor performance. Does anyone know a way to spread/stripe my existing data across all the vdevs in a zpool? I'm in a time crunch on this one, so I very much appreciate any thoughts. We are running the latest release of FreeBSD.
 

akotter

Cadet
Joined
Jan 20, 2013
Messages
4
I realized I just made a mistake. The drive getting all the hits is our cache, and the other two drives are the ZIL in a mirrored pair.

So we're not hammering one drive in a vdev (which would be really unusual). But the original question remains: even with a new dataset, all the moved data is striping across the same original vdevs. How can I force existing data to spread across an expanded zpool, and does doing so put the data at risk?
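
For reference, cache and log devices are listed under their own headings, so they are easy to tell apart from the data vdevs (pool name again hypothetical):

    # data vdevs, "logs" (the ZIL mirror) and "cache" (L2ARC)
    # appear as separate sections in the output
    zpool status tank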

Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There is no way to reorganize the data except to move it off the zpool, then back on.
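
A minimal sketch of that move, assuming a dataset tank/vm and a second pool named backup with enough free space (all names are placeholders):

    # snapshot the dataset and stream it to the second pool
    zfs snapshot tank/vm@migrate
    zfs send tank/vm@migrate | zfs receive backup/vm
    # destroy the original, then stream it back; the rewritten
    # copy is allocated across all 27 vdevs
    zfs destroy -r tank/vm
    zfs send backup/vm@migrate | zfs receive tank/vm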

With that many mirrored vdevs I surely hope you have a backup. All it takes is one disk failing and its mirror partner having bad sectors, and you are potentially in a very nasty situation.
 

akotter

Cadet
Joined
Jan 20, 2013
Messages
4
Thanks for the feedback. I was hoping there was a way to force a re-stripe, but it doesn't seem to exist.

And yes, we replicate the entire pool to another FreeNAS system next to it, and then to one off-site. In this particular case I was going for performance and relying on replication for extra reliability. Thank you for pointing that out; it is a very real concern.
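
For completeness, the kind of replication described here is typically an initial full zfs send followed by periodic incrementals over ssh; hostnames, dataset names, and snapshot names below are placeholders:

    # full initial copy to the replica, then incrementals between snapshots
    zfs send tank/vm@monday | ssh replica zfs receive tank/vm
    zfs send -i tank/vm@monday tank/vm@tuesday | ssh replica zfs receive tank/vm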

As a side note, I love your PowerPoint guide. Very helpful.
 