Just a heads-up. Earlier today, one of my FreeNAS servers (current 9.3 stable) reported a degraded boot volume. I ran a scrub from the CLI, which failed and removed the failed device. I shut the system down, replaced the "bad" USB flash drive, and the system rebooted normally, reporting a missing mirror on the boot volume. Again through the CLI, I told the system to replace the failed drive with the new one, which was larger (16GB vs. 8GB). Everything appeared to complete normally (no new error messages), so I opened a shell to clear the error. When I executed the zpool clear command, the server locked up. After a hard power cycle, neither boot drive would boot the system.

At that point, I pulled both USB drives and installed 9.3 stable on completely fresh drives. The system rebooted normally, and I told it to restore the config from the last save, dated 4/2015. On reboot, the pool showed an "unable to get free space" error. At that point, I deleted the old pool and recreated it. The system seems fine, and I'm copying data over now from another server.

The only thing I can think of to explain the restore error is some config difference. From now on, I'll save the config any time the OS updates.
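For anyone hitting the same situation, here's a rough sketch of the CLI sequence I'm describing. The pool and device names (freenas-boot, da1) are placeholders, not necessarily what your system uses; check "zpool status" for the real ones before running anything.

```shell
# Check the boot pool and identify the failed member.
# Pool/device names here are examples -- substitute your own.
zpool status freenas-boot

# Scrub the pool (this is where mine failed and dropped the device).
zpool scrub freenas-boot

# After physically swapping the bad USB stick, tell ZFS to replace
# the failed member with the new drive (old-device new-device).
# A larger replacement drive is fine.
zpool replace freenas-boot da1 da1

# Once the resilver finishes, clear the error state.
# This is the step where my server locked up.
zpool clear freenas-boot
```

Obviously, given how this went for me, have a saved config and a second copy of anything important before trying step 4.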