Hi,
I have a backup computer I use to receive ZFS replication onto backup volumes (usually 3 disks in RAIDZ1). I primarily have two main backups that I rotate over time: one goes offsite while the other is receiving replication.
The backup computer has been running FreeNAS 9.10.2-U1, and the backup volumes were created on FreeNAS 9 as well.
I swap backup volumes on a regular basis, but a couple of weeks ago one of the drives became unavailable.
The first thing that came to mind was a bad drive. To make sure, I swapped the power rails and SATA ports, and the same drive was still shown as unavailable. The pool and its data remained accessible. In the meantime, I ordered new drives to provide an extra backup set.
The new drives have been installed in the backup computer and replication to them has completed. They are also encrypted.
After replication to the new drives was complete, I swapped in the other backup volume, and it is also showing one unavailable drive.
I then realized that for a while I had been running a mirrored backup to test the newer FreeNAS 11. I am currently on FreeNAS 11.1-RELEASE, and both the mirror and the newer drives were created under FreeNAS 11.
Thinking about it, the issue is identical to what happened when I tried to migrate my server to FreeNAS Corral: the spare drives would always remain unavailable.
I had been waiting for FreeNAS 11 to reach the RELEASE state before synchronizing all my backup volumes, but it seems I will have to wait until I get to the bottom of this encryption migration issue.
My backup computer is now back on 9.10.2-U1 and all 3 drives are recognized. Resilvering is in progress, which suggests the issue is not hardware related.
It is going to be a while before resilvering is complete; a few days, I would think.
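For anyone following along, the resilver progress and disk state can be checked from the FreeNAS shell with commands along these lines (the pool name `tank` here is a placeholder for the actual volume name):

```shell
# Show pool health, per-disk state, and resilver progress/ETA
zpool status -v tank

# List the disks the kernel currently sees (useful after swapping SATA ports)
camcontrol devlist

# Show which GELI providers are attached, since the volumes are encrypted
geli status
```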
In the meantime, I am trying to understand what is causing FreeNAS 11.1-RELEASE to break redundancy and make drives unavailable.
Is this a bug?
Should I detach and re-attach the volume and hope everything will be fine?