MediJaster
Hello, thank you for taking the time to read this.
TL;DR: An SSD broke. When the replacement came, I shut down the server, installed the new SSD, and rebooted it, but the pool was empty. Now I can't import the pool.
The issue started about a month ago (almost two now): the SSD began showing up as degraded, then became faulted a few days later. Since the SSD reported as faulted was nearly new (maybe two months old), I assumed the fault was with my motherboard's SATA ports or controller. Because these were SSDs, I wanted to invest in a proper HBA (not the very cheap PCIe 1x SATA expansion card from Amazon I use for my main HDD storage pool, which has nonetheless been working for the past two years), and I have now successfully installed one in my NAS. When I saw that a drive was still missing, I suspected the SATA power splitter, so I waited for a new one, which arrived today. I have now confirmed that one of the SSDs is the actual culprit of all these issues (at least I now have proper hardware to connect all the SSDs; the motherboard SATA ports I had them on are mostly SATA 2).
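For reference, I confirmed the drive with SMART checks along these lines (the device node below is just a placeholder, not the actual one on my system):
Code:
# Placeholder device name; substitute the suspect SSD's actual node.
smartctl -a /dev/sdc
# Run a short self-test, then re-read the results a few minutes later.
smartctl -t short /dev/sdc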
The affected pool (called "PlatinumPool", yes, it's a JoJo reference) consists of a single raidz1 vdev of four SSDs and was used mainly for apps and VMs.
I currently have all five drives connected: the three working ones, the faulted one, and the replacement.
The web interface shows that I have four unassigned drives and that the pool has no drives, and under Storage > Disks the drives show up as <pool name> (Exported).
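In case it helps with debugging, this is the kind of check I can run to confirm the partitions and their ZFS labels are still visible (paths assume TrueNAS SCALE / Linux; the UUID is one of the member partitions from the import output below):
Code:
# List block devices with their partition UUIDs.
lsblk -o NAME,SIZE,PARTUUID
# Dump the ZFS label of one of the member partitions.
zdb -l /dev/disk/by-partuuid/4fd97ded-9c99-45b6-8f46-19bf3fc195d6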
This is the output of zpool import:
Code:
   pool: PlatinumPool
     id: 10427529320122961827
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
         The pool may be active on another system, but can be imported using
         the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
 config:

        PlatinumPool                              FAULTED  corrupted data
          raidz1-0                                DEGRADED
            4fd97ded-9c99-45b6-8f46-19bf3fc195d6  ONLINE
            3a7df339-2c15-40c9-ac11-0a0dadf2e8fd  ONLINE
            81f65718-db5f-4588-9ac9-ce3b2c2c7a3c  UNAVAIL
            8128c1e9-24f9-4644-8dd9-ec053be05551  ONLINE
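Since the members are listed by partition UUID, one thing I found in the zpool-import man page is the -d option to point the scan at a specific device directory; if I understand it correctly it only changes where zpool looks for devices (please correct me if that's wrong):
Code:
# Scan for importable pools using the by-partuuid device directory.
zpool import -d /dev/disk/by-partuuid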
I also tried running zpool import -f PlatinumPool:
Code:
cannot import 'PlatinumPool': I/O error
        Destroy and re-create the pool from a backup source.
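Based on my reading of the OpenZFS docs, my next step would be a rewind recovery attempt: -F combined with -n first, which is a dry run that changes nothing, and if that reports success, a read-only import so nothing gets written. I haven't run these yet; am I understanding the flags correctly, and is this safe to try?
Code:
# Dry run: with -F, -n only reports whether a rewind could make the
# pool importable; it does not modify any pool state.
zpool import -f -F -n PlatinumPool
# If the dry run looks good, attempt a read-only recovery import.
zpool import -f -F -o readonly=on -R /mnt PlatinumPool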
Is the data really all gone? It was far from mission critical, and I do have the stuff I care about backed up to my main storage pool, but it certainly would be nice to have my VMs back the way they were.
Thanks again for reading this; I sincerely look forward to any suggestions.
Feel free to ask for other information or other debugging outputs; I'll be more than happy to provide them.