StrangeWill
Cadet
Joined: May 5, 2016
Messages: 4
Added 3 new SSDs to a vdev, fresh out of the package. When I clicked "add" the UI froze (stuck loading for a long time), all my VMs were stunned because the iSCSI target disappeared, and logging into the box and typing "zpool status" would just hang.
In daemon.log:
Aug 9 16:05:21 freenas-01 notifier: 32+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+0 records out
Aug 9 16:05:21 freenas-01 notifier: 33554432 bytes transferred in 0.098177 secs (341775029 bytes/sec)
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da4: short write on character device
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da4: end of device
Aug 9 16:05:21 freenas-01 notifier: 33+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+1 records out
Aug 9 16:05:21 freenas-01 notifier: 34299904 bytes transferred in 0.091098 secs (376516253 bytes/sec)
Aug 9 16:05:21 freenas-01 zfsd: DEVFS: Notify cdev=da4p1 subsystem=CDEV timestamp=1470791121 type=CREATE
Aug 9 16:05:21 freenas-01 zfsd: DEVFS: Notify cdev=da4p2 subsystem=CDEV timestamp=1470791121 type=CREATE
Aug 9 16:05:21 freenas-01 notifier: 32+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+0 records out
Aug 9 16:05:21 freenas-01 notifier: 33554432 bytes transferred in 0.098692 secs (339991613 bytes/sec)
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da5: short write on character device
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da5: end of device
Aug 9 16:05:21 freenas-01 notifier: 33+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+1 records out
Aug 9 16:05:21 freenas-01 notifier: 34299904 bytes transferred in 0.091141 secs (376338963 bytes/sec)
Aug 9 16:05:22 freenas-01 zfsd: DEVFS: Notify cdev=da5p1 subsystem=CDEV timestamp=1470791122 type=CREATE
Aug 9 16:05:22 freenas-01 zfsd: DEVFS: Notify cdev=da5p2 subsystem=CDEV timestamp=1470791122 type=CREATE
Aug 9 16:05:22 freenas-01 notifier: 32+0 records in
Aug 9 16:05:22 freenas-01 notifier: 32+0 records out
Aug 9 16:05:22 freenas-01 notifier: 33554432 bytes transferred in 0.104720 secs (320420120 bytes/sec)
Aug 9 16:05:22 freenas-01 notifier: dd: /dev/da6: short write on character device
Aug 9 16:05:22 freenas-01 notifier: dd: /dev/da6: end of device
Aug 9 16:05:22 freenas-01 notifier: 33+0 records in
Aug 9 16:05:22 freenas-01 notifier: 32+1 records out
Aug 9 16:05:22 freenas-01 notifier: 34299904 bytes transferred in 0.089526 secs (383127096 bytes/sec)
I watched the system run through each disk one by one, zeroing it out. I understand wiping a disk before adding it, but making the entire pool hang while it happens was really rough. I'm on 9.10-STABLE. Am I missing something that's causing this? Other ZFS systems I've used have never done this.
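For anyone reading the log above: the notifier lines look consistent with a head-and-tail wipe, i.e. dd'ing the first and last 32 MiB of each new disk before partitioning. That's my inference from the 33554432-byte (32 MiB) transfers, not the actual middleware commands. A safe sketch of the idea, run against a 64 MiB image file instead of a real device (DISK, MB, and the sizes here are all mine):

```shell
DISK=demo.img            # stand-in for /dev/da4; do NOT point this at a real disk
MB=1048576               # 1 MiB in bytes

# create a 64 MiB image filled with random data, standing in for a disk
# that still has old metadata on it
dd if=/dev/urandom of="$DISK" bs=$MB count=64 2>/dev/null

# wipe the head: first 32 MiB
dd if=/dev/zero of="$DISK" conv=notrunc bs=$MB count=32 2>/dev/null

# wipe the tail: last 32 MiB (device size in MiB minus 32)
dd if=/dev/zero of="$DISK" conv=notrunc bs=$MB seek=$((64 - 32)) count=32 2>/dev/null
```

On a real character device the tail pass apparently runs one block past the end, which is exactly what produces the "33+0 records in / 32+1 records out" and the harmless "short write on character device" / "end of device" messages in daemon.log, so those lines aren't errors in themselves.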