Hi,
I'm pretty new to FreeNAS and ZFS (though not to storage in general; I work with DataCore and Data Domain professionally), and I'm using it for my personal lab.
So, I built a little NAS and put 2x 2 TB disks in it some time ago. I've read all the docs I could, but it looks like I missed the whole part about RAID :)
Now I have a faulty disk (read errors, and some files on my storage are corrupted). I bought two other disks (2 TB as well), put them in the NAS and... that's all! I have two questions:
- With 2 disks, I'm pretty sure I don't have any RAID by default. That's something I missed when I built the NAS (shame on me). I need one. Can I add it now?
- With the 2 additional disks, I think I could create a new pool and duplicate the data, but that still wouldn't give me RAID (again, only 2 disks). I'd need to include the healthy "old" disk without losing data.
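From what I've read since, ZFS can turn a single-disk vdev into a mirror by attaching a new disk to it, so since my pool is a stripe of two disks, I'm hoping something like the following is possible. This is only a sketch: gptid/NEW1 and gptid/NEW2 are placeholders for my new disks' labels, and I assume the faulty disk would have to be dealt with first (zpool replace) before mirroring anything onto it.

```shell
# Sketch only -- gptid/NEW1 and gptid/NEW2 are placeholders, not real labels.
# Attach one new disk to each existing single-disk vdev, turning the
# 2-disk stripe into a striped mirror (RAID 10-like). Each attach
# triggers a resilver onto the new disk.
zpool attach Pool_1 gptid/ae929bad-4ff0-11e9-b7bf-00012e23ba5f gptid/NEW1
zpool attach Pool_1 gptid/d7fc7ef3-570d-11e9-8567-00012e23ba5f gptid/NEW2
```

Is that the right approach, or am I completely off track?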
For information (I've launched a scrub after powering the NAS down and back up):
root@nas[~]# zpool status
  pool: Pool_1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Sat Dec 7 17:57:41 2019
        681G scanned at 581M/s, 211G issued at 180M/s, 1.44T total
        0 repaired, 14.30% done, 0 days 02:00:03 to go
config:

        NAME                                          STATE     READ WRITE CKSUM
        Pool_1                                        ONLINE       0     0     2
          gptid/ae929bad-4ff0-11e9-b7bf-00012e23ba5f  ONLINE       0     0     4
          gptid/d7fc7ef3-570d-11e9-8567-00012e23ba5f  ONLINE       0     0     0

errors: 9 data errors, use '-v' for a list

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:31 with 0 errors on Mon Dec 2 03:45:31 2019
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada0p2      ONLINE       0     0     0

errors: No known data errors
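For the record, here's how I plan to get the list of the 9 damaged files mentioned above (just the verbose flag on the same status command):

```shell
# List the individual files affected by the checksum errors
zpool status -v Pool_1
```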
Messages I've received before powering the NAS down and back up (to add my 2 new disks):
Checking status of zfs pools:

NAME          SIZE   ALLOC  FREE   CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH    ALTROOT
Pool_1        3.62T  1.44T  2.18T  -        -         10%   39%  1.00x  DEGRADED  /mnt
freenas-boot  55.5G  4.54G  51.0G  -        -         -     8%   1.00x  ONLINE    -

  pool: Pool_1
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 0 days 03:45:22 with 1 errors on Sun Nov 24 03:45:30 2019
config:

        NAME                                          STATE     READ WRITE CKSUM
        Pool_1                                        DEGRADED     0     0   123
          gptid/ae929bad-4ff0-11e9-b7bf-00012e23ba5f  DEGRADED     0     0   210  too many errors
          gptid/d7fc7ef3-570d-11e9-8567-00012e23ba5f  ONLINE       0     0    36

errors: 9 data errors, use '-v' for a list

-- End of daily output --
FreeNAS @ nas.domain.lan
New alert:
* Pool Pool_1 state is DEGRADED: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
The following alert has been cleared:
* Pool Pool_1 state is ONLINE: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
Current alerts:
* Device: /dev/ada2, 16 Currently unreadable (pending) sectors
* Device: /dev/ada2, 16 Offline uncorrectable sectors
* Device: /dev/ada2, Self-Test Log error count increased from 5 to 6
* Pool Pool_1 state is DEGRADED: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
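Since the alerts point at /dev/ada2, this is what I've been using to check its SMART data (smartmontools ships with FreeNAS); I can post the full output if that helps:

```shell
# Full SMART attributes and self-test log for the suspect disk
smartctl -a /dev/ada2
# Kick off an extended self-test if more evidence is needed
smartctl -t long /dev/ada2
```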