I run a server with a 64-bit Intel Core 2 Duo CPU, 2 GB RAM, and autotune enabled.
Code:
vfs.zfs.arc_max      1073741824   Generated by autotune
vm.kmem_size_max     1207959552   Generated by autotune
vm.kmem_size          966367641   Generated by autotune
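As a quick sanity check on those autotune numbers (my own arithmetic, not something autotune prints), the ARC cap works out to exactly half of the 2 GB of RAM:

```shell
# Cross-check vfs.zfs.arc_max against total RAM (values from the sysctls above).
arc_max=1073741824                   # vfs.zfs.arc_max
ram=$((2 * 1024 * 1024 * 1024))      # 2 GB of RAM
pct=$((arc_max * 100 / ram))
echo "arc_max is ${pct}% of RAM"     # prints: arc_max is 50% of RAM
```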
Version: FreeNAS-8.3.1-RELEASE-p2-x64 (r12686+b770da6_dirty)
ZFS28
Initial disk setup:
ADA0, 1000 GB SATA drive
ADA1, 500 GB SATA drive
I did an upgrade last month: I offlined the 500 GB disk in the GUI, then swapped the physical disk for a 620 GB one.
I attached ADA1 to the pool and the resilver started.
Disk setup:
ADA0, 1000 GB SATA drive
ADA1, 620 GB SATA drive
A few hours later, the resilver completed with no errors. The old drive was still listed in the pool, so I removed it, and ended up with a healthy two-way mirror running fine.
The server ran for a week. Then I manually restarted the machine (shutdown from the GUI).
After restart, one disk was missing from the mirror.
An email arrived during the next night:
Code:
Checking status of zfs pools:

NAME             SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH    ALTROOT
zfs_root_volume  592G   359G  233G  60%  1.00x  DEGRADED  /mnt

  pool: zfs_root_volume
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist
        for the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
  scan: resilvered 365G in 12h9m with 0 errors on Sat Apr 20 01:15:37 2013
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfs_root_volume                                 DEGRADED     0     0     0
          mirror-0                                      DEGRADED     0     0     0
            10142334550473523414                        UNAVAIL      0     0     0  was /dev/dsk/ada0
            gptid/218e035e-9c9b-11e2-a687-001a6b4f161a  ONLINE       0     0     0
I took an extra backup and began troubleshooting.
I ran gpart list and found a fault on ADA0.
I thought it was a software error, since a scrub had completed fine a few days earlier and this happened during a reboot, so I ran gpart recover on ada0.
gpart list then showed that ADA0 was OK, with two partitions: ada0p1 (2 GB swap) and ada0p2 (929 GB freebsd-zfs).
I detached 10142334550473523414 from the pool via the CLI, and ended up with a healthy single-drive pool.
Then I attached ada0p2 to the pool, expecting it to form a mirror, resilver, and become healthy again.
I ran: zpool attach zfs_root_pool [old drive id] /dev/ada0p2
The pool started resilvering; it took all night. During the night I got this email:
Code:
  pool: zfs_root_volume
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Apr 29 21:50:08 2013
        296G scanned out of 359G at 16.2M/s, 1h7m to go
        296G resilvered, 82.23% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        zfs_root_volume                                 ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/218e035e-9c9b-11e2-a687-001a6b4f161a  ONLINE       0     0     0
            ada0p2                                      ONLINE       0     0     0  (resilvering)

errors: No known data errors
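The ETA in that output checks out, by my own back-of-the-envelope arithmetic: 359G minus 296G remaining, at the reported 16.2 MB/s, is about an hour:

```shell
# Cross-check the resilver ETA from the status output: (359G - 296G) at 16.2 MB/s.
remaining_mib=$(((359 - 296) * 1024))          # 63 GiB remaining, in MiB
rate_x10=162                                   # 16.2 MB/s, scaled by 10 to stay in integers
eta_min=$((remaining_mib * 10 / rate_x10 / 60))
echo "about ${eta_min} minutes to go"          # ~66 min, consistent with the reported 1h7m
```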
I confirmed the status of the pool the next morning with zpool status.
It showed a mirror with both drives online and healthy: "resilver completed in 10 hours".
Everything was fine, except that I had manually imported the pool from the CLI, so it had no mount point and didn't show up in the web GUI.
So, as a last step to bring the web GUI back in sync and get the pool imported at the correct mount point, I simply exported the pool from the CLI:
Code:
zpool export zfs_root_volume
And then hit "auto import volume" in GUI.
It failed.
And now I have a situation where I cannot import my pool.
I tried a restart; both drives enumerated as ADA0 and ADA1.
I ran a short S.M.A.R.T. test on both drives with smartctl; both PASSED.
camcontrol devlist:
Code:
<ST31000528AS CC38>            at scbus2 target 0 lun 0 (pass0,ada0)
<SAMSUNG HD642JJ 1AA01108>     at scbus3 target 0 lun 0 (pass1,ada1)
<General USB Flash Disk 1.0>   at scbus8 target 0 lun 0 (pass2,da0)
sysctl kern.disks:
Code:
kern.disks: da0 ada1 ada0
gpart list:
Code:
[root@freenas] /dev# gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   rawuuid: f28b9762-a8df-11e2-9b2f-001a6b4f161a
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada0p2
   Mediasize: 998057319936 (929G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   rawuuid: f295e0dc-a8df-11e2-9b2f-001a6b4f161a
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 998057319936
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 1953525134
   start: 4194432
Consumers:
1. Name: ada0
   Mediasize: 1000204886016 (931G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: ada1
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 1250263694
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 65536
   Mode: r1w1e1
   rawuuid: 217764ac-9c9b-11e2-a687-001a6b4f161a
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: ada1p2
   Mediasize: 637987462656 (594G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2147549184
   Mode: r0w0e0
   rawuuid: 218e035e-9c9b-11e2-a687-001a6b4f161a
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 637987462656
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 1250263694
   start: 4194432
Consumers:
1. Name: ada1
   Mediasize: 640135028736 (596G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7831551
first: 63
entries: 4
scheme: MBR
Providers:
1. Name: da0s1
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0
   rawtype: 165
   length: 988291584
   offset: 32256
   type: freebsd
   index: 1
   end: 1930319
   start: 63
2. Name: da0s2
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 988356096
   Mode: r1w0e1
   attrib: active
   rawtype: 165
   length: 988291584
   offset: 988356096
   type: freebsd
   index: 2
   end: 3860639
   start: 1930383
3. Name: da0s3
   Mediasize: 1548288 (1.5M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1976647680
   Mode: r0w0e0
   rawtype: 165
   length: 1548288
   offset: 1976647680
   type: freebsd
   index: 3
   end: 3863663
   start: 3860640
4. Name: da0s4
   Mediasize: 21159936 (20M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1978195968
   Mode: r1w1e2
   rawtype: 165
   length: 21159936
   offset: 1978195968
   type: freebsd
   index: 4
   end: 3904991
   start: 3863664
Consumers:
1. Name: da0
   Mediasize: 4009754624 (3.8G)
   Sectorsize: 512
   Mode: r2w1e4

Geom name: da0s1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0s1a
   Mediasize: 988283392 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 40448
   Mode: r0w0e0
   rawtype: 0
   length: 988283392
   offset: 8192
   type: !0
   index: 1
   end: 1930256
   start: 16
Consumers:
1. Name: da0s1
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 32256
   Mode: r0w0e0

Geom name: da0s2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 1930256
first: 0
entries: 8
scheme: BSD
Providers:
1. Name: da0s2a
   Mediasize: 988283392 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 988364288
   Mode: r1w0e1
   rawtype: 0
   length: 988283392
   offset: 8192
   type: !0
   index: 1
   end: 1930256
   start: 16
Consumers:
1. Name: da0s2
   Mediasize: 988291584 (942M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 988356096
   Mode: r1w0e1
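As a quick cross-check of the gpart output (my own arithmetic on the Mediasize byte counts it reports), the two freebsd-zfs partitions do come out at the GiB figures shown, and the mirror is necessarily capped by the smaller member:

```shell
# Convert the freebsd-zfs partition Mediasize values (bytes) to GiB.
gib=1073741824
ada0p2=998057319936     # Mediasize of ada0p2 from gpart list
ada1p2=637987462656     # Mediasize of ada1p2 from gpart list
ada0_g=$((ada0p2 / gib))
ada1_g=$((ada1p2 / gib))
echo "ada0p2: ${ada0_g}G"   # 929G
echo "ada1p2: ${ada1_g}G"   # 594G
# A mirror is only as large as its smaller member (~594G here),
# consistent with the 592G pool SIZE reported by zpool status earlier.
```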
swapinfo:
Code:
Device          1K-blocks     Used    Avail Capacity
/dev/ada0p1.eli   2097152        0  2097152     0%
/dev/ada1p1.eli   2097152        0  2097152     0%
Total             4194304        0  4194304     0%
zdb:
Code:
[root@freenas] /dev# zdb
cannot open '/boot/zfs/zpool.cache': No such file or directory
Cannot import pool:
Code:
[root@freenas] /dev# zpool import
   pool: zfs_root_volume
     id: 397881850367711172
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing devices and try again.
    see: http://www.sun.com/msg/ZFS-8000-3C
 config:

        zfs_root_volume           UNAVAIL  insufficient replicas
          mirror-0                UNAVAIL  insufficient replicas
            10142334550473523414  UNAVAIL  cannot open
            18197332204305252530  UNAVAIL  cannot open
I don't know how to proceed with importing this volume. I have a feeling that at least one drive in the mirror is just fine, if not both, since it resilvered without errors five minutes before the export/import procedure.
Please help. I really don't want to restore from backup, since I will lose a month of CPU time re-rendering a PNG dataset stored on the pool that hasn't been backed up in a while.