Can't import ZFS pool after failed Time Machine backup delete

Status
Not open for further replies.

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
Hey hey everybody,

I have FreeNAS 9.2 running with six hard disks. Every three HDs form one RAID 5 unit, so two RAID 5 arrays are running in my NAS.
"halone" is built from three 1 TB HDs; "haltwo" is built from three 2 TB HDs.

After trying to delete a Time Machine backup for my MacBook from "halone", the NAS crashed completely. The only thing I could do was a hardware restart. Since then, booting stops while trying to mount "halone", with the message:

"Solaris: WARNING: can't open objset for halone/xanumacbackup"

I have read this:
https://forums.freenas.org/index.php?threads/raidz2-hung-after-zvol-delete.15759/

So I decided to remove the halone HDs from my NAS and delete the pool in the GUI.
My NAS booted up fine and I deleted halone in the GUI. Then I tried to auto-import it back.
Whenever I do, the message from above appears.

I have no idea how to solve this problem. Please help. Thanks
 

dlavigne

Guest
Are you really running hardware RAID?

Also, if you deleted the pool, it's gone...
 

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
No, I'm not running a hardware RAID.
Maybe I am using the wrong term. FreeNAS is running a ZFS pool with three 1 TB HDs. I think this means it has to be a RAID 5. If not, please correct me.
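
For reference, what FreeNAS builds from three disks with single parity is a raidz1 vdev, ZFS's rough equivalent of RAID 5. Assuming the pool is visible to the system, the layout can be confirmed with:

Code:
# Show the vdev layout; a three-disk raidz1 vdev is single-parity (RAID 5-like)
zpool status halone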

First I disconnected the HDs, and FreeNAS was able to boot fine again. In the GUI (with the HDs still disconnected!) I deleted the pool under the "Storage" section.
Then I restarted with the HDs connected. The system booted fine, but when I tried to import the pool, the error message appeared again. The import fails and the GUI doesn't respond anymore.
I could still restart the NAS directly with a keyboard plugged in.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
We'll need more information to have the best chance of helping you, but my wild guess is that your system is short of RAM.

What is the output of zfs import -fFn halone?

Please use CODE tags to post any command output.
 


pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
Thanks a lot!

RAM: 16 GB
System: FreeNAS-9.2.0-RELEASE-x64 (ab098f4)
CPU: AMD C-60 APU with Radeon(tm) HD Graphics
Mainboard: Asus C60M1I

My disks: (screenshot attached, Bildschirmfoto 2016-06-26 um 18.47.06.png)


"zfs import -fFn halone" seems to be a wrong command. Do you mean "zpool" instead of "zvs"?
This is the output:

Code:
unrecognized command 'import'
usage: zfs command args ...
where 'command' is one of the following:

    create [-p] [-o property=value] ... <filesystem>
    create [-ps] [-b blocksize] [-o property=value] ... -V <size> <volume>
    destroy [-fnpRrv] <filesystem|volume>
    destroy [-dnpRrv] <snapshot>[%<snapname>][,...]

    snapshot [-r] [-o property=value] ... <filesystem@snapname|volume@snapname> ...
    rollback [-rRf] <snapshot>
    clone [-p] [-o property=value] ... <snapshot> <filesystem|volume>
    promote <clone-filesystem>
    rename [-f] <filesystem|volume|snapshot> <filesystem|volume|snapshot>
    rename [-f] -p <filesystem|volume> <filesystem|volume>
    rename -r <snapshot> <snapshot>
    rename -u [-p] <filesystem> <filesystem>
    list [-rH][-d max] [-o property[,...]] [-t type[,...]] [-s property] ...
        [-S property] ... [filesystem|volume|snapshot] ...

    set <property=value> <filesystem|volume|snapshot> ...
    get [-rHp] [-d max] [-o "all" | field[,...]] [-t type[,...]] [-s source[,...]]
        <"all" | property[,...]> [filesystem|volume|snapshot] ...
    inherit [-rS] <property> <filesystem|volume|snapshot> ...
    upgrade [-v]
    upgrade [-r] [-V version] <-a | filesystem ...>
    userspace [-Hinp] [-o field[,...]] [-s field] ...
    [-S field] ... [-t type[,...]] <filesystem|snapshot>
    groupspace [-Hinp] [-o field[,...]] [-s field] ...
    [-S field] ... [-t type[,...]] <filesystem|snapshot>

    mount
    mount [-vO] [-o opts] <-a | filesystem>
    unmount [-f] <-a | filesystem|mountpoint>
    share <-a | filesystem>
    unshare <-a | filesystem|mountpoint>

    send [-DnPpRv] [-i snapshot | -I snapshot] <snapshot>
    receive [-vnFu] <filesystem|volume|snapshot>
    receive [-vnFu] [-d | -e] <filesystem>

    allow <filesystem|volume>
    allow [-ldug] <"everyone"|user|group>[,...] <perm|@setname>[,...]
        <filesystem|volume>
    allow [-ld] -e <perm|@setname>[,...] <filesystem|volume>
    allow -c <perm|@setname>[,...] <filesystem|volume>
    allow -s @setname <perm|@setname>[,...] <filesystem|volume>

    unallow [-rldug] <"everyone"|user|group>[,...]
        [<perm|@setname>[,...]] <filesystem|volume>
    unallow [-rld] -e [<perm|@setname>[,...]] <filesystem|volume>
    unallow [-r] -c [<perm|@setname>[,...]] <filesystem|volume>
    unallow [-r] -s @setname [<perm|@setname>[,...]] <filesystem|volume>

    hold [-r] <tag> <snapshot> ...
    holds [-r] <snapshot> ...
    release [-r] <tag> <snapshot> ...
    diff [-FHt] <snapshot> [snapshot|filesystem]

    jail <jailid|jailname> <filesystem>
    unjail <jailid|jailname> <filesystem>

Each dataset is of the form: pool/[dataset/]*dataset[@name]

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow




dmesg output:

Code:
Copyright (c) 1992-2013 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
    The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 9.2-RELEASE #0 r+2315ea3: Fri Dec 20 12:48:50 PST 2013
    root@build.ixsystems.com:/tank/home/jkh/checkout/freenas/os-base/amd64/tank/home/jkh/checkout/freenas/FreeBSD/src/sys/FREENAS.amd64 amd64
gcc version 4.2.1 20070831 patched [FreeBSD]
CPU: AMD C-60 APU with Radeon(tm) HD Graphics (999.99-MHz K8-class CPU)
  Origin = "AuthenticAMD"  Id = 0x500f20  Family = 0x14  Model = 0x2  Stepping = 0
  Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
  Features2=0x802209<SSE3,MON,SSSE3,CX16,POPCNT>
  AMD Features=0x2e500800<SYSCALL,NX,MMX+,FFXSR,Page1GB,RDTSCP,LM>
  AMD Features2=0x35ff<LAHF,CMP,SVM,ExtAPIC,CR8,ABM,SSE4A,MAS,Prefetch,IBS,SKINIT,WDT>
  TSC: P-state invariant, performance statistics
real memory  = 17179869184 (16384 MB)
avail memory = 16104341504 (15358 MB)
Event timer "LAPIC" quality 400
ACPI APIC Table: <ALASKA A M I>
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 1 package(s) x 2 core(s)
cpu0 (BSP): APIC ID:  0
cpu1 (AP): APIC ID:  1
WARNING: VIMAGE (virtualized network stack) is a highly experimental feature.
ACPI Warning: Optional field Pm2ControlBlock has zero address or length: 0x0000000000000000/0x1 (20110527/tbfadt-586)
ioapic0: Changing APIC ID to 0
ioapic0 <Version 2.1> irqs 0-23 on motherboard
kbd1 at kbdmux0
cryptosoft0: <software crypto> on motherboard
aesni0: No AESNI support.
padlock0: No ACE support.
acpi0: <ALASKA A M I> on motherboard
ACPI Error: [RAMB] Namespace lookup failure, AE_NOT_FOUND (20110527/psargs-392)
ACPI Exception: AE_NOT_FOUND, Could not execute arguments for [RAMW] (Region) (20110527/nsinit-380)
acpi0: Power Button (fixed)
cpu0: <ACPI CPU> on acpi0
cpu1: <ACPI CPU> on acpi0
attimer0: <AT timer> port 0x40-0x43 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
atrtc0: <AT realtime clock> port 0x70-0x71 irq 8 on acpi0
Event timer "RTC" frequency 32768 Hz quality 0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 550
Event timer "HPET1" frequency 14318180 Hz quality 450
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <32-bit timer at 3.579545MHz> port 0x808-0x80b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
vgapci0: <VGA-compatible display> port 0xf000-0xf0ff mem 0xd0000000-0xdfffffff,0xfeb00000-0xfeb3ffff irq 18 at device 1.0 on pci0
pcib1: <ACPI PCI-PCI bridge> irq 16 at device 4.0 on pci0
pci1: <ACPI PCI bus> on pcib1
em0: <Intel(R) PRO/1000 Network Connection 7.3.8> port 0xe020-0xe03f mem 0xfeaa0000-0xfeabffff,0xfea80000-0xfea9ffff irq 16 at device 0.0 on pci1
em0: Using an MSI interrupt
em0: Ethernet address: 00:1f:29:54:b0:20
em1: <Intel(R) PRO/1000 Network Connection 7.3.8> port 0xe000-0xe01f mem 0xfea40000-0xfea5ffff,0xfea20000-0xfea3ffff irq 17 at device 0.1 on pci1
em1: Using an MSI interrupt
em1: Ethernet address: 00:1f:29:54:b0:21
ahci0: <ATI IXP700 AHCI SATA controller> port 0xf140-0xf147,0xf130-0xf133,0xf120-0xf127,0xf110-0xf113,0xf100-0xf10f mem 0xfeb4b000-0xfeb4b3ff irq 19 at device 17.0 on pci0
ahci0: AHCI v1.20 with 6 6Gbps ports, Port Multiplier supported
ahcich0: <AHCI channel> at channel 0 on ahci0
ahcich1: <AHCI channel> at channel 1 on ahci0
ahcich2: <AHCI channel> at channel 2 on ahci0
ahcich3: <AHCI channel> at channel 3 on ahci0
ahcich4: <AHCI channel> at channel 4 on ahci0
ahcich5: <AHCI channel> at channel 5 on ahci0
ohci0: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfeb4a000-0xfeb4afff irq 18 at device 18.0 on pci0
usbus0 on ohci0
ehci0: <AMD SB7x0/SB8x0/SB9x0 USB 2.0 controller> mem 0xfeb49000-0xfeb490ff irq 17 at device 18.2 on pci0
usbus1: EHCI version 1.0
usbus1 on ehci0
ohci1: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfeb48000-0xfeb48fff irq 18 at device 19.0 on pci0
usbus2 on ohci1
ehci1: <AMD SB7x0/SB8x0/SB9x0 USB 2.0 controller> mem 0xfeb47000-0xfeb470ff irq 17 at device 19.2 on pci0
usbus3: EHCI version 1.0
usbus3 on ehci1
pci0: <serial bus, SMBus> at device 20.0 (no driver attached)
pci0: <multimedia, HDA> at device 20.2 (no driver attached)
isab0: <PCI-ISA bridge> at device 20.3 on pci0
isa0: <ISA bus> on isab0
pcib2: <ACPI PCI-PCI bridge> at device 20.4 on pci0
pci2: <ACPI PCI bus> on pcib2
ohci2: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfeb46000-0xfeb46fff irq 18 at device 20.5 on pci0
usbus4 on ohci2
pcib3: <ACPI PCI-PCI bridge> at device 21.0 on pci0
pci3: <ACPI PCI bus> on pcib3
ohci3: <AMD SB7x0/SB8x0/SB9x0 USB controller> mem 0xfeb45000-0xfeb45fff irq 18 at device 22.0 on pci0
usbus5 on ohci3
ehci2: <AMD SB7x0/SB8x0/SB9x0 USB 2.0 controller> mem 0xfeb44000-0xfeb440ff irq 17 at device 22.2 on pci0
usbus6: EHCI version 1.0
usbus6 on ehci2
acpi_button0: <Power Button> on acpi0
atkbdc0: <Keyboard controller (i8042)> port 0x60,0x64 irq 1 on acpi0
atkbd0: <AT Keyboard> irq 1 on atkbdc0
kbd0 at atkbd0
atkbd0: [GIANT-LOCKED]
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
orm0: <ISA Option ROMs> at iomem 0xce800-0xcf7ff,0xcf800-0xd07ff on isa0
amdsbwd0: <AMD SB8xx Watchdog Timer> at iomem 0xfec000f0-0xfec000f3,0xfec000f4-0xfec000f7 on isa0
sc0: <System console> at flags 0x100 on isa0
sc0: VGA <16 virtual consoles, flags=0x300>
vga0: <Generic ISA VGA> at port 0x3c0-0x3df iomem 0xa0000-0xbffff on isa0
ppc0: cannot reserve I/O port range
wbwd0: HEFRAS and EFER do not align: EFER 0x2e DevID 0xff DevRev 0xff CR26 0xff
acpi_throttle0: <ACPI CPU Throttling> on cpu0
acpi_throttle1: <ACPI CPU Throttling> on cpu1
acpi_throttle1: failed to attach P_CNT
device_attach: acpi_throttle1 attach returned 6
Timecounters tick every 1.000 msec
ipfw2 (+ipv6) initialized, divert enabled, nat enabled, default to accept, logging disabled
DUMMYNET 0xfffffe0003e55080 with IPv6 initialized (100409)
load_dn_sched dn_sched QFQ loaded
load_dn_sched dn_sched RR loaded
load_dn_sched dn_sched WF2Q+ loaded
load_dn_sched dn_sched FIFO loaded
load_dn_sched dn_sched PRIO loaded
usbus0: 12Mbps Full Speed USB v1.0
usbus1: 480Mbps High Speed USB v2.0
usbus2: 12Mbps Full Speed USB v1.0
usbus3: 480Mbps High Speed USB v2.0
usbus4: 12Mbps Full Speed USB v1.0
usbus5: 12Mbps Full Speed USB v1.0
usbus6: 480Mbps High Speed USB v2.0
ugen2.1: <ATI> at usbus2
uhub0: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus2
ugen1.1: <ATI> at usbus1
uhub1: <ATI EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus1
ugen0.1: <ATI> at usbus0
uhub2: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus0
ugen4.1: <ATI> at usbus4
uhub3: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus4
ugen3.1: <ATI> at usbus3
uhub4: <ATI EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus3
ugen6.1: <ATI> at usbus6
uhub5: <ATI EHCI root HUB, class 9/0, rev 2.00/1.00, addr 1> on usbus6
ugen5.1: <ATI> at usbus5
uhub6: <ATI OHCI root HUB, class 9/0, rev 1.00/1.00, addr 1> on usbus5
uhub3: 2 ports with 2 removable, self powered
ada0 at ahcich0 bus 0 scbus0 target 0 lun 0
ada0: <WDC WD20EZRX-00DC0B0 80.00A80> ATA-9 SATA 3.x device
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada0: quirks=0x1<4K>
ada0: Previously was known as ad4
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <WDC WD10EAVS-00D7B1 01.01A01> ATA-8 SATA 2.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada1: Previously was known as ad6
ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <WDC WD20EARS-00MVWB0 51.0AB51> ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada2: quirks=0x1<4K>
ada2: Previously was known as ad8
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <WDC WD10EAVS-00D7B1 01.01A01> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada3: Previously was known as ad10
ada4 at ahcich4 bus 0 scbus4 target 0 lun 0
ada4: <WDC WD20EZRX-00DC0B0 80.00A80> ATA-9 SATA 3.x device
ada4: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4: quirks=0x1<4K>
ada4: Previously was known as ad12
ada5 at ahcich5 bus 0 scbus5 target 0 lun 0
ada5: <WDC WD10EZRX-00L4HB0 01.01A01> ATA-8 SATA 3.x device
ada5: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada5: Command Queueing enabled
ada5: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada5: quirks=0x1<4K>
ada5: Previously was known as ad14
SMP: AP CPU #1 Launched!
Timecounter "TSC" frequency 999992997 Hz quality 800
uhub6: 4 ports with 4 removable, self powered
uhub0: 5 ports with 5 removable, self powered
uhub2: 5 ports with 5 removable, self powered
uhub5: 4 ports with 4 removable, self powered
uhub1: 5 ports with 5 removable, self powered
uhub4: 5 ports with 5 removable, self powered
Root mount waiting for: usbus1
Root mount waiting for: usbus1
ugen1.2: <TDK LoR> at usbus1
umass0: <TDK LoR Micro, class 0/0, rev 2.00/1.00, addr 2> on usbus1
Trying to mount root from ufs:/dev/ufs/FreeNASs2a [ro]...
mountroot: waiting for device /dev/ufs/FreeNASs2a ...
da0 at umass-sim0 bus 0 scbus7 target 0 lun 0
da0: <TDK LoR Micro PMAP> Removable Direct Access SCSI-4 device
da0: 40.000MB/s transfers
da0: 7381MB (15116736 512 byte sectors: 255H 63S/T 940C)
da0: quirks=0x2<NO_6_BYTE>
GEOM_RAID5: Module loaded, version 1.1.20130907.44 (rev 5c6d2a159411)
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
GEOM_ELI: Device ada0p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: software
GEOM_ELI: Device ada2p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: software
GEOM_ELI: Device ada4p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: software
GEOM_ELI: Device ada1p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: software
GEOM_ELI: Device ada3p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: software
GEOM_ELI: Device ada5p1.eli created.
GEOM_ELI: Encryption: AES-XTS 256
GEOM_ELI:     Crypto: software
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
Yes, it is zpool import...
 

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
After "zpool import -fFn halone" I could hear the HDs working. For shure because of the mount process.
Then the error message appears again:
"Solaris: WARNING: can't open objset for halone/xanumacbackup"

GUI and CLI react, but you cannot give a command, like "reboot".
Using the ON/OFF button turns the NAS savely off. I can see the outputs (like stopping deamons) before shutdown.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
It seems your pool is corrupted beyond what -F can repair, and you should expect that your pool is lost.

I'm seeing discussion elsewhere online of an undocumented -X flag that may enable import of a corrupted pool. The command described is:
zpool import -FX <poolname>
I don't know if FreeBSD's version of zpool will recognize it, or what the effect will be, but it might be worth a try, unless someone else has a better idea.
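
For completeness, a sketch of the usual escalation with the pool name from this thread; the -n dry run comes first because -X performs an extreme rewind and may discard even more data:

Code:
# Dry run: show what recovery mode (-F) would have to discard
zpool import -fFn halone
# Real recovery import, rolling back recent transactions if needed
zpool import -fF halone
# Last resort: undocumented extreme rewind; may lose additional data
zpool import -fFX halone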
 

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
Thanks, but it doesn't work.
zpool import -FX halone produces the same error message:
Solaris: WARNING: can't open objset for halone/xanumacbackup

I am grateful for every idea, because I didn't make a backup :(
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
At this point, the only thing you can try is to add 'force' to the above suggested command.
Code:
zpool import -fFX halone


Or mount read-only, and if that works, copy your data ASAP.
Code:
zpool -f -o readonly=on import halone


If it were me, I would also try the '-n' or '-N' options. Look them up in the man page.
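
To spell those out: -n (together with -F) is a dry run, and -N imports the pool without mounting any datasets. A sketch combining them with the read-only idea, plus -R to relocate all mountpoints under a writable directory (an assumption, not something tried here yet):

Code:
# Import read-only without mounting any dataset
zpool import -f -N -o readonly=on halone
# Or import read-only with all mountpoints relocated under /mnt
zpool import -f -R /mnt -o readonly=on halone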
 

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
By the way: I ran a memtest for 10 hours with 0 errors.

Code:
 zpool import -fFX halone


gives me the same error message as usual.


Code:
 zpool -f -o readonly=on import halone


produces:

Code:
unrecognized command '-f'
usage: zpool command args ...
where 'command' is one of the following:

    create [-fnd] [-o property=value] ...
        [-O file-system-property=value] ...
        [-m mountpoint] [-R root] <pool> <vdev> ...
    destroy [-f] <pool>

    add [-fn] <pool> <vdev> ...
    remove <pool> <device> ...

    labelclear [-f] <vdev>

    list [-Hv] [-o property[,...]] [-T d|u] [pool] ... [interval [count]]
    iostat [-v] [-T d|u] [pool] ... [interval [count]]
    status [-vx] [-T d|u] [pool] ... [interval [count]]

    online [-e] <pool> <device> ...
    offline [-t] <pool> <device> ...
    clear [-nF] <pool> [device]

    attach [-f] <pool> <device> <new-device>
    detach <pool> <device>
    replace [-f] <pool> <device> [new-device]
    split [-n] [-R altroot] [-o mntopts]
        [-o property=value] <pool> <newpool> [<device> ...]

    scrub [-s] <pool> ...

    import [-d dir] [-D]
    import [-d dir | -c cachefile] [-F [-n]] <pool | id>
    import [-o mntopts] [-o property=value] ...
        [-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]] -a
    import [-o mntopts] [-o property=value] ...
        [-d dir | -c cachefile] [-D] [-f] [-m] [-N] [-R root] [-F [-n]]
        <pool | id> [newpool]
    export [-f] <pool> ...
    upgrade [-v]
    upgrade [-V version] <-a | pool ...>
    reguid <pool>

    history [-il] [<pool>] ...
    get <"all" | property[,...]> <pool> ...
    set <property=value> <pool> 



I guess you mean:

Code:
zpool import -f -o readonly=on halone


it produces:

Code:
cannot mount '/halone': failed to create mountpoint
cannot mount '/halone/jails': failed to create mountpoint
cannot mount '/halone/jails/.warden-template-pluginjail': failed to create mountpoint
cannot mount '/halone/jails/owncloud_1': failed to create mountpoint
cannot mount '/halone/xanumacbackup': failed to create mountpoint



If I retry the command:
Code:
cannot import 'halone': a pool with that name is already created/imported,
and no additional pools with that name were found


/mnt/ and /media/ are empty; I can't find halone mounted anywhere.
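
So the pool apparently imported, but nothing got mounted, presumably because the mountpoints under / could not be created (the FreeNAS root filesystem is read-only by default, if I'm not mistaken). A sketch of how to verify that state:

Code:
# Is the pool imported, and is it read-only?
zpool list halone
zpool get readonly halone
# Which datasets are mounted, and where do they want to mount?
zfs get -r mounted,mountpoint halone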
 

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
Code:
 zpool status


prints:

Code:
  pool: halone
state: ONLINE
status: One or more devices are configured to use a non-native block size.
    Expect reduced performance.
action: Replace affected devices with devices that support the
    configured block size, or migrate data to a properly configured
    pool.
  scan: scrub repaired 0 in 23h31m with 0 errors on Sun May 29 23:31:55 2016
config:

    NAME                                            STATE     READ WRITE CKSUM
    halone                                          ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/48f4bd37-e8e0-11e2-87a5-60a44c3fda01  ONLINE       0     0     0  block size: 512B configured, 4096B native
        gptid/49c0a22d-e8e0-11e2-87a5-60a44c3fda01  ONLINE       0     0     0  block size: 512B configured, 4096B native
        gptid/4ad0fb61-e8e0-11e2-87a5-60a44c3fda01  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

  pool: haltwo
state: ONLINE
status: The pool is formatted using a legacy on-disk format.  The pool can
    still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
    pool will no longer be accessible on software that does not support feature
    flags.
  scan: scrub repaired 0 in 25h55m with 0 errors on Mon Jun  6 01:55:41 2016
config:

    NAME                                            STATE     READ WRITE CKSUM
    haltwo                                          ONLINE       0     0     0
      raidz1-0                                      ONLINE       0     0     0
        gptid/b481edbf-f131-11e2-9be1-60a44c3fda01  ONLINE       0     0     0
        gptid/b5275eae-f131-11e2-9be1-60a44c3fda01  ONLINE       0     0     0
        gptid/b61c4c8d-f131-11e2-9be1-60a44c3fda01  ONLINE       0     0     0

errors: No known data errors



Code:
zfs list


prints:

Code:
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
halone                                    1.76T  25.5G  1.56T  /halone
halone/jails                              7.12G  25.5G  65.9K  /halone/jails
halone/jails/.warden-template-pluginjail  1.14G  25.5G  1.13G  /halone/jails/.warden-template-pluginjail
halone/jails/owncloud_1                   5.98G  25.5G  7.11G  /halone/jails/owncloud_1
halone/xanumacbackup                       200G      0   200G  /halone/xanumacbackup
haltwo                                    3.01T   566G  3.01T  /mnt/haltwo
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
What are you expecting to find on halone? Did you have most of your data stored at the top level? It looks like the pool was very nearly 100% full, which might explain the crash and resulting corruption.

One way you can try to recover is to use zfs send to transfer the data to another pool or storage device, then build a new pool with more capacity and zfs receive the data back.
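
A sketch of that approach, assuming a snapshot can still be created (which may not work on a read-only import) and that a destination pool with enough free space exists; "rescuepool" is a hypothetical name:

Code:
# Snapshot everything recursively (requires a writable pool)
zfs snapshot -r halone@rescue
# Replicate the whole pool into the destination, leaving it unmounted (-u)
zfs send -R halone@rescue | zfs receive -Fdu rescuepool
# After rebuilding a larger halone, reverse the send/receive to restore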

It looks like haltwo is more than 80% full too.
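
For reference, a quick way to keep an eye on that; ZFS performance and fragmentation tend to suffer once a pool passes roughly 80% capacity:

Code:
# Show size, allocated space, free space, and percent used per pool
zpool list -o name,size,allocated,free,capacity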
 

pillowplane

Dabbler
Joined
Sep 12, 2013
Messages
14
halone is my main data storage; all my pics and documents live there (hell yes, and I didn't make a backup).

halone was at 90% when I decided to delete halone/xanumacbackup (200 GB) because it was obsolete. The NAS crashed and my problem was born.

WTF, sounds like I have to make a huge investment in new HDs. Is there a temporary way to get access? It seems I only need to get rid of xanumacbackup to access the pool again.

haltwo is working fine.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
You can try to manually mount just one dataset via the zfs command (make sure the mountpoint exists prior to the mount). I would not try to get rid of that dataset yet; you can always try that as a last resort.
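
A sketch, assuming the FreeNAS root filesystem must first be remounted writable so the mountpoint can be created:

Code:
# FreeNAS keeps / read-only; make it writable first (assumption)
mount -uw /
# Create the mountpoint, then mount only the top-level dataset
mkdir -p /halone
zfs mount halone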

What is the output of
Code:
ls -al /halone


PS: I don't want to be that guy, but the only WTF here should be "where is my backup?". You should have invested in extra disks when the pool got close to 80% use.
 