Enlightend · Dabbler · Joined Oct 30, 2013 · Messages: 15
Hey guys,
I updated my test server to 9.2-BETA yesterday and everything ran well. Then I went looking around and noticed there were firmware updates available for nearly all my drives, SSDs, and controllers, so I figured I'd run those while I was at it.
I removed my cache devices from the pool, updated them, secure erased them, and tried to re-add them to the pool, which promptly hangs the zpool.
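For reference, the remove/re-add was done with the standard zpool commands, roughly like this (pool and device names here are examples, not my actual ones):

    zpool remove tank da1        # detach the cache device from the pool
    # firmware update and secure erase happened in between
    zpool add tank cache da1     # re-add it as L2ARC; this is the step that hangs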
Once this happens I cannot use "zfs" or "zpool" any longer unless I reboot the entire system.
Nor can I cd into the mounted volumes without my console hanging (at that point I can still SSH into the system and do things, as long as I don't touch anything ZFS related).
At this point my NFS, iSCSI, and CIFS mounts also become unreachable.
There is no CPU usage in top to indicate zpool is doing anything to prep the drives.
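For what it's worth, disk-level activity can be checked too; on FreeBSD gstat shows live per-device I/O, e.g. (example device name):

    gstat -f da1    # live I/O stats, filtered to one device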
I was thinking the firmware update had messed up the SSDs, but then I did a fresh reboot and added those drives to a new zpool as member disks instead, and that operation worked without issue and blazing fast.
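That test was just something like this (example names, not my actual ones):

    zpool create testpool da1 da2    # same SSDs as plain data vdevs; completed instantly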
I removed the drives from the system, did another secure erase under Windows, created partitions, and ran tests to verify that I could read, write, and delete data on them.
After all the tests I have done, I can only conclude that the SSDs are perfectly fine and the problem is with FreeNAS.
I tried adding the SSDs as cache devices both one at a time and all together.
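Concretely, something like (example device names again):

    zpool add tank cache da1             # one at a time
    zpool add tank cache da1 da2 da3     # all at once

Both variants hang the pool the same way.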
Are there any logs, outputs, files, or other info I can post to help diagnose this problem?
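Since I can still SSH in while it's hung, I could for example grab output from commands like:

    ps axl | grep zpool       # state / wait channel of the stuck zpool process
    procstat -kk <pid>        # kernel stack of that process (pid from ps above)
    dmesg | tail              # recent kernel messages
    tail /var/log/messages    # same, from the log file

Just let me know what would be useful.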
Has anyone else maybe run into this with the BETA or RC versions?
EDIT: Additional information:
When I reboot, the system comes up and my zpool is accessible as normal, just without the cache drives added.
I also tried adding other disks to the zpool as cache, and the same thing happens.
The disks I tried ranged from a 320GB HDD to 128GB SSDs and even some USB sticks.
So the problem is clearly not with the disks, but with adding cache devices to an existing zpool.
Also, right after the update, FreeNAS was nagging me that my L2ARC drives weren't using the correct sector size: 512 bytes instead of 4K.
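In case it's relevant, the sector sizes the drives report can be checked on FreeBSD with diskinfo, e.g. (example device name):

    diskinfo -v /dev/da1 | grep -E 'sectorsize|stripesize'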