Adding vdev To Pool Causes iSCSI/zpool Hang

Status
Not open for further replies.

StrangeWill

Cadet
Joined
May 5, 2016
Messages
4
I added 3 new SSDs as a new vdev, fresh out of the package. When I clicked "add", the UI froze (loading for a long time), all my VMs stunned due to iSCSI disappearing, and logging into the box and typing "zpool status" would just hang there...

In daemon.log:

Aug 9 16:05:21 freenas-01 notifier: 32+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+0 records out
Aug 9 16:05:21 freenas-01 notifier: 33554432 bytes transferred in 0.098177 secs (341775029 bytes/sec)
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da4: short write on character device
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da4: end of device
Aug 9 16:05:21 freenas-01 notifier: 33+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+1 records out
Aug 9 16:05:21 freenas-01 notifier: 34299904 bytes transferred in 0.091098 secs (376516253 bytes/sec)
Aug 9 16:05:21 freenas-01 zfsd: DEVFS: Notify cdev=da4p1 subsystem=CDEV timestamp=1470791121 type=CREATE
Aug 9 16:05:21 freenas-01 zfsd: DEVFS: Notify cdev=da4p2 subsystem=CDEV timestamp=1470791121 type=CREATE
Aug 9 16:05:21 freenas-01 notifier: 32+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+0 records out
Aug 9 16:05:21 freenas-01 notifier: 33554432 bytes transferred in 0.098692 secs (339991613 bytes/sec)
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da5: short write on character device
Aug 9 16:05:21 freenas-01 notifier: dd: /dev/da5: end of device
Aug 9 16:05:21 freenas-01 notifier: 33+0 records in
Aug 9 16:05:21 freenas-01 notifier: 32+1 records out
Aug 9 16:05:21 freenas-01 notifier: 34299904 bytes transferred in 0.091141 secs (376338963 bytes/sec)
Aug 9 16:05:22 freenas-01 zfsd: DEVFS: Notify cdev=da5p1 subsystem=CDEV timestamp=1470791122 type=CREATE
Aug 9 16:05:22 freenas-01 zfsd: DEVFS: Notify cdev=da5p2 subsystem=CDEV timestamp=1470791122 type=CREATE
Aug 9 16:05:22 freenas-01 notifier: 32+0 records in
Aug 9 16:05:22 freenas-01 notifier: 32+0 records out
Aug 9 16:05:22 freenas-01 notifier: 33554432 bytes transferred in 0.104720 secs (320420120 bytes/sec)
Aug 9 16:05:22 freenas-01 notifier: dd: /dev/da6: short write on character device
Aug 9 16:05:22 freenas-01 notifier: dd: /dev/da6: end of device
Aug 9 16:05:22 freenas-01 notifier: 33+0 records in
Aug 9 16:05:22 freenas-01 notifier: 32+1 records out
Aug 9 16:05:22 freenas-01 notifier: 34299904 bytes transferred in 0.089526 secs (383127096 bytes/sec)


I watched the array run through each disk one by one, zeroing it out. I understand zeroing the disks before adding them, but having the entire pool hang over it was really rough. I'm on 9.10-STABLE. Am I missing something that's causing this? Other ZFS systems I've been on have never done this.
 

Sakuru

Guru
Joined
Nov 20, 2015
Messages
527
How is your system behaving today?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
This sounds like the known problem related to TRIM on device initialization.

I know it doesn't help now, but since you seem to have an all-SSD solution, set the tunable vfs.zfs.vdev.trim_on_init=0

This should mitigate the stall on new vdev adds. As to why FreeNAS stalls, that's going to be a dev question about the order of events on SSD device addition, and whether or not it's possible for the UI to do an asynchronous TRIM first, put the volume in an "extending, please wait" state, and then actually do the "zpool add" later.
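For reference, here is one way to apply that tunable from a shell on a 9.10 box (a sketch; on FreeNAS the System → Tunables page is the supported way to make it persistent, the commands below only assume a standard FreeBSD sysctl interface):

```shell
# Disable TRIM-on-init so newly added SSD vdevs attach without
# the long per-disk TRIM pass that stalls the pool:
sysctl vfs.zfs.vdev.trim_on_init=0

# Confirm the running value:
sysctl vfs.zfs.vdev.trim_on_init

# To persist across reboots, add a sysctl-type tunable in the FreeNAS UI
# (System -> Tunables: variable vfs.zfs.vdev.trim_on_init, value 0),
# or on plain FreeBSD append this line to /etc/sysctl.conf:
#   vfs.zfs.vdev.trim_on_init=0
```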
 

StrangeWill

Cadet
Joined
May 5, 2016
Messages
4
How is your system behaving today?
Perfectly fine and healthy.

This sounds like the known problem related to TRIM on device initialization.

I know it doesn't help now, but since you seem to have an all-SSD solution, set the tunable vfs.zfs.vdev.trim_on_init=0

This should mitigate the stall on new vdev adds. As to why FreeNAS stalls, that's going to be a dev question about the order of events on SSD device addition, and whether or not it's possible for the UI to do an asynchronous TRIM first, put the volume in an "extending, please wait" state, and then actually do the "zpool add" later.
I'll go ahead and make that change and test it next time I add a new vdev.

I do have another pool that is not flash, would this present a problem when expanding that?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I do have another pool that is not flash, would this present a problem when expanding that?

Nope. TRIM is not supported (or necessary) on platter devices, so the commands won't be passed to them.
 