Hello all,
I'm looking for some guidance on replacing a failed drive. The FreeNAS docs walk through the process, and I've read a few forum posts about other people's experiences. One thing I haven't gotten a clear picture on: can I 'offline' the disk, pull it, and replace it in two weeks (I'll be RMA'ing the disk)? Right now the NAS is a backup target for servers, with four drives in RAIDZ2, so it can stumble along in a degraded state. I'm under the impression it's safe to run degraded, and that even across a few reboots the pool stays aware of its current status: one drive missing, since it was offlined and pulled.
* FreeNAS-11.0-U2 (e417d8aa5)
* msg from system: The volume ds01 state is DEGRADED: One or more devices has been removed by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state.
* and a big &^%$!# since all of the drives are less than 3 months old
* the drives are running off the SATA ports on the mobo, no RAID controllers (running in JBOD)
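For reference, here is the sequence I'm picturing at the command line. The pool name ds01 comes from the alert above; the device name ada2 is just a placeholder for whichever disk actually failed. I realize the FreeNAS docs steer you toward the GUI (Storage → Volume Status → Offline / Replace) so the middleware tracks the swap, so this is only a sketch of what I assume happens underneath:

```shell
# Take the failing disk out of the pool before physically pulling it
# (ds01 = pool name from the alert, ada2 = placeholder for the bad disk)
zpool offline ds01 ada2

# Pool should now show DEGRADED but keep serving the backup traffic
zpool status ds01

# ...two weeks later, after the RMA replacement is installed in the same bay...
zpool replace ds01 ada2

# Watch the resilver progress until the pool returns to ONLINE
zpool status ds01
```

If that's roughly right, my main question is just whether anything about the two-week gap (with reboots in between) changes the picture.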