Mike Gomon
Dabbler
Joined: Mar 10, 2016
Messages: 16
This is the first time I've had a drive fail since building this SAN about three years ago. A couple of days back a drive appears to have failed: it is in the UNAVAIL state, and when I try to ONLINE it I get "remains in faulted state ... no longer present", so I assume it's toast. I do have the serial number, but I can't see the front of the drives, so I assume I'll need to power down the SAN and pull drives one by one. That's something I can do, so it's not a blocker.
Some questions:
1. Is powering off the SAN and looking at each drive one by one the only way to ID the failed drive?
2. Do I have to replace the drive 1-for-1? Meaning, does it need to be the exact same drive, or can I buy a larger drive (which is cheaper at this point)?
a. If I buy a larger drive, will the pool use all of the space or only the 3TB from the original disk set?
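On question 1, for reference, here is one way I think the drive could be identified without powering down, by matching serial numbers against the one from the alert. This is just a sketch assuming smartctl (from smartmontools, which ships with FreeNAS) and a `da?` device naming scheme like the one in my dmesg output:

```shell
# The alert email gives the failed drive's serial (WD-WMCXXXXXXXX).
# List the serial of every disk still attached; the bay whose serial
# does NOT appear below should be the dead one.
for disk in /dev/da?; do
  printf '%s: ' "$disk"
  smartctl -i "$disk" | grep 'Serial Number'
done

# If the enclosure speaks SES, the slot LED can be blinked instead
# (sesutil ships with FreeBSD 11+ / recent FreeNAS):
#   sesutil map              # show which slot holds which device
#   sesutil locate da5 on    # blink the locate LED on da5's slot
```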
From dmesg:
da4 at mps0 bus 0 scbus0 target 4 lun 0
da4: <ATA WDC WD3001FFSX-6 0A81> s/n WD-WMCXXXXXXXX detached
GEOM_MIRROR: Device swap1: provider da4p1 disconnected.
(da4:mps0:0:4:0): Periph destroyed
From the status email:
Checking status of zfs pools:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
freenas-boot 111G 6.21G 105G - - - 5% 1.00x ONLINE -
volgrp1 16.2T 13.8T 2.43T - - 26% 85% 1.00x DEGRADED /mnt
volgrp2 3.62T 442G 3.19T - - 37% 11% 1.00x ONLINE /mnt
volgrp3 7.25T 4.94T 2.31T - - 0% 68% 1.00x ONLINE /mnt
pool: volgrp1
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://illumos.org/msg/ZFS-8000-2Q
scan: scrub repaired 0 in 0 days 10:48:08 with 0 errors on Sun Nov 10 10:48:13 2019
config:
NAME STATE READ WRITE CKSUM
volgrp1 DEGRADED 0 0 0
raidz1-0 DEGRADED 0 0 0
gptid/456156e2-f956-11e7-abc1-003048d6d386 ONLINE 0 0 0
gptid/461b1637-f956-11e7-abc1-003048d6d386 ONLINE 0 0 0
gptid/46d7d732-f956-11e7-abc1-003048d6d386 ONLINE 0 0 0
14232926012089479196 UNAVAIL 0 266 0 was /dev/gptid/479efb2d-f956-11e7-abc1-003048d6d386
gptid/48f05162-f956-11e7-abc1-003048d6d386 ONLINE 0 0 0
gptid/4a47c6b4-f956-11e7-abc1-003048d6d386 ONLINE 0 0 0
errors: No known data errors
-- End of daily output --
# zpool online volgrp1 14232926012089479196
warning: device '14232926012089479196' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
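Given that warning, it sounds like the path is a physical swap followed by `zpool replace`. A sketch of what I think the commands would look like; the new device node `/dev/da4` is my guess, and on FreeNAS the GUI (Volume Status → Replace) is supposed to be the safer route since it also handles the swap partition and gptid:

```shell
# 14232926012089479196 is the GUID zpool status reports for the missing
# disk; /dev/da4 is a guess at the new disk's device node.
zpool replace volgrp1 14232926012089479196 /dev/da4

# Watch the resilver progress:
zpool status volgrp1

# Re: question 2a -- as I understand it, a larger disk only contributes
# 3TB until EVERY disk in the raidz1 vdev is larger and autoexpand is on:
zpool get autoexpand volgrp1
# zpool set autoexpand=on volgrp1
```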
Thanks!