I had a zpool made up of 6 vdevs (12 disks in mirrored pairs). I needed more space/performance, so I added a JBOD that gave me 45 more disks. I expanded the zpool, adding the new disks as mirrored pairs, so now I have 27 mirrored vdevs in a single zpool (volume). Life is good, except that I originally created a dataset on the 6 vdevs and used iSCSI to mount the volume on my Hyper-V host. I want that data to stripe across all the vdevs for performance reasons.
No problem, I thought. I'll just create a new dataset and copy the data from the old dataset to the new one. This I did, except that when I ran gstat (expecting to see all the drives responding) I saw the 12 original disks reading and writing in concert, and all the reads/writes going to just one of the new drives. Instead of spreading the data across 27 vdevs, it was hitting a single drive in one vdev.
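For reference, here's roughly what I did (the pool/dataset names below are placeholders, and I'm paraphrasing the exact copy command):

```shell
# Create a new dataset on the now-27-vdev pool
# ("tank", "olddata", and "newdata" are placeholder names):
zfs create tank/newdata

# Copy the data over from the old dataset -- I expected these
# writes to stripe across every vdev in the pool:
cp -a /tank/olddata/. /tank/newdata/

# Meanwhile, watch per-disk I/O; this is where I saw only the
# original 12 disks plus a single new drive doing any work:
gstat
```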
At least that explained why I was getting such poor performance. Does anyone know a way to spread/stripe all my existing data across all the vdevs in a zpool? I'm in a time crunch on this one so I very much appreciate any thoughts. We are running the latest release of FreeBSD.