Hello! I've searched through the forums and never found an answer to this issue.
First, some background. I've upgraded my 10-drive raidz2 before to increase storage.
This time, I'm replacing 4TB drives with 10TB drives. The first drive was offlined, replaced, and resilvered, but it kept having errors.
Thinking the drive (or its connection) might be bad, I tried another drive. It wasn't recognized at all. So, to start over and make sure my pool was complete, I put the original 4TB drive back in, and it resilvered quickly, about 9GB. I scrubbed the pool and restarted it, but it still shows degraded, with two offline drives (the two I tried as upgrades) alongside the online original.
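For context, the replacement procedure I've been following is essentially the standard ZFS cycle from the CLI. A rough sketch (the pool name Media is real, but the gptid values here are placeholders, not my actual disks):

```shell
# Rough sketch of the upgrade cycle (placeholder gptids, not the real ones):
zpool offline Media gptid/OLD-4TB-DISK    # take the old 4TB drive offline
# ...physically swap in the 10TB drive, then:
zpool replace Media gptid/OLD-4TB-DISK gptid/NEW-10TB-DISK
zpool status Media                        # monitor the resilver
```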
See my zpool status output (currently scrubbing again):
Code:
root@freenas:~ # zpool status
  pool: JailsSSD
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:22:26 with 0 errors on Sun Oct 20 00:22:27 2019
config:

	NAME                                          STATE     READ WRITE CKSUM
	JailsSSD                                      ONLINE       0     0     0
	  gptid/225c5dcc-cdbf-11e8-8739-0cc47a40699d  ONLINE       0     0     0

errors: No known data errors

  pool: Media
 state: DEGRADED
  scan: scrub in progress since Thu Nov  7 11:18:17 2019
	2.44T scanned at 1.37G/s, 1.01T issued at 581M/s, 32.1T total
	0 repaired, 3.15% done, 0 days 15:33:07 to go
config:

	NAME                                              STATE     READ WRITE CKSUM
	Media                                             DEGRADED     0     0     0
	  raidz2-0                                        DEGRADED     0     0     0
	    gptid/ad45702a-85a8-11e5-975c-0cc47a40699d    ONLINE       0     0     0
	    gptid/adfbdbc6-85a8-11e5-975c-0cc47a40699d    ONLINE       0     0     0
	    gptid/aea4dab7-85a8-11e5-975c-0cc47a40699d    ONLINE       0     0     0
	    gptid/af4f2962-85a8-11e5-975c-0cc47a40699d    ONLINE       0     0     0
	    gptid/b0001833-85a8-11e5-975c-0cc47a40699d    ONLINE       0     0     0
	    gptid/7bb4f9ec-c0e2-11e6-b93d-0cc47a40699d    ONLINE       0     0     0
	    replacing-6                                   DEGRADED     0     0     0
	      gptid/e6fa7798-c669-11e6-bb0c-0cc47a40699d  ONLINE       0     0     0
	      4485633875194652128                         OFFLINE      0     0     0  was /dev/gptid/9a68b9a4-fb48-11e9-a319-0cc47a40699d
	      8645082184063384523                         OFFLINE      0     0     0  was /dev/gptid/6503962f-fea5-11e9-bae6-0cc47a40699d
	    gptid/dbf15c07-c31c-11e6-9f56-0cc47a40699d    ONLINE       0     0     0
	    gptid/bf2787af-d12b-11e6-9507-0cc47a40699d    ONLINE       0     0     0
	    gptid/73dd3ee2-d2f6-11e6-80bc-0cc47a40699d    ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 01:07:51 with 0 errors on Thu Nov  7 04:52:51 2019
config:

	NAME                                          STATE     READ WRITE CKSUM
	freenas-boot                                  ONLINE       0     0     0
	  gptid/78ecb373-aa82-11e4-b2d8-0cc47a40699d  ONLINE       0     0     0

errors: No known data errors
I'd like my pool to be back online before I try to upgrade the drives again.
Adding in devlist:
Code:
root@freenas:~ # camcontrol devlist
<ATA TOSHIBA MG03ACA4 FL1A>           at scbus0 target 0 lun 0 (pass0,da0)
<ATA TOSHIBA MG04ACA5 FP1A>           at scbus0 target 3 lun 0 (pass1,da1)
<ATA TOSHIBA MG03ACA4 FL1A>           at scbus0 target 4 lun 0 (pass2,da2)
<ATA HGST HMS5C4040AL A3W0>           at scbus0 target 9 lun 0 (pass3,da3)
<ATA HGST HMS5C4040AL A3W0>           at scbus0 target 11 lun 0 (pass4,da4)
<ATA HGST HMS5C4040AL A3W0>           at scbus0 target 12 lun 0 (pass5,da5)
<ATA HGST HMS5C4040AL A3W0>           at scbus0 target 14 lun 0 (pass6,da6)
<ATA HGST HMS5C4040AL A3W0>           at scbus0 target 15 lun 0 (pass7,da7)
<TOSHIBA MG03ACA400 FL1A>             at scbus1 target 0 lun 0 (pass8,ada0)
<TOSHIBA MG03ACA400 FL1A>             at scbus4 target 0 lun 0 (pass9,ada1)
<Samsung SSD 860 EVO 500GB RVT01B6Q>  at scbus5 target 0 lun 0 (pass10,ada2)
<SanDisk Cruzer Fit 1.27>             at scbus8 target 0 lun 0 (pass11,da8)