FreeNAS advice needed, please.

Donovanvdb
Cadet · Joined: Sep 9, 2019 · Messages: 4

Hi All,

I'll try to explain and provide as much detail as possible. I'm still new to FreeNAS and know my way around a bit, but I still have a lot to learn.
That's why I'm here to ask the professionals.

Please see the details for the pool and disks below. Please let me know if anything more is needed, and if so, which command I should run.

No maintenance has been done in the past 3 years. I updated to the latest stable release and then started looking at hardware issues.

1 SSD has known sector errors (da1).
2 HDDs have plenty of checksum errors (da8/da21).

Before anything else, I removed the faulty SSD, which was used as a log device, from the pool.
I then offlined the da8 and da21 disks and replaced them.
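Whether done through the GUI or the shell, those operations map to something like this (a sketch; the gptid placeholders are illustrative, not the real values):

Code:
# remove the faulty SSD log device from the pool
zpool remove DG1 gptid/<gptid-of-log-ssd>

# offline each failing disk, then replace it with its new counterpart
zpool offline DG1 gptid/<gptid-of-da8>
zpool replace DG1 gptid/<gptid-of-da8> gptid/<gptid-of-new-disk>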

Then... it all started doing weird things.

Resilvering started at 3 GB/s and is now running at 5 MB/s; at that rate it will take 3 years to complete.
I suspected the faulty SSD (da1) might be causing the issues, and found that it is back in the pool and I'm unable to remove it.
I ran the offline and detach commands and got the error: no valid replicas.
It's not part of any of the RAIDZ2 vdevs; please see below.
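In shell terms, the failing attempts look something like this (a sketch; the gptid is a placeholder):

Code:
# both fail here ("no valid replicas"): offline and detach need surviving
# redundancy to fall back on, and a single-disk top-level vdev has none
zpool offline DG1 gptid/<gptid-of-da1>
zpool detach DG1 gptid/<gptid-of-da1>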

Lastly, I shut down the FreeNAS server and swapped the drive to another bay, but I was still unable to remove it.
I then pulled the SSD drive out, and FreeNAS would not boot up.

I hope this makes sense; sorry if my explanation is a bit rough. Are there any other options I can try?
Your help will be appreciated.



Details:
OS Version: FreeNAS-11.2-U5
Processor: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz (32 cores)
Memory: 512 GiB

Drives:
16 GB boot pool
4 x SSD
32 x 8 TB HDD

Pool Status
RESILVER · Status: SCANNING · Errors: 0

Code:
Name          Read  Write  Checksum  Status
DG1              0      0         0  ONLINE
  RAIDZ2         0      0         0  ONLINE
    da4p2        0      0         0  ONLINE
    da5p2        0      0         0  ONLINE
    da6p2        0      0         0  ONLINE
    da7p2        0      0         0  ONLINE
    da8p2        0      0         0  ONLINE
    da9p2        0      0         0  ONLINE
    da10p2       0      0         3  ONLINE
    da11p2       0      0         0  ONLINE
  HOLE           0      0         0  ONLINE
  RAIDZ2         0      0         0  ONLINE
    da12p2       0      0         0  ONLINE
    da13p2       0      0         0  ONLINE
    da14p2       0      0         0  ONLINE
    da15p2       0      0         0  ONLINE
    da16p2       0      0         0  ONLINE
    da17p2       0      0         0  ONLINE
    da18p2       0      0         0  ONLINE
    da19p2       0      0         0  ONLINE
  RAIDZ2         0      0         0  ONLINE
    da20p2       0      0         0  ONLINE
    da21p2       0      0         0  ONLINE
    da22p2       0      0         0  ONLINE
    da23p2       0      0         0  ONLINE
    da24p2       0      0         0  ONLINE
    da25p2       0      0         0  ONLINE
    da26p2       0      0         0  ONLINE
    da27p2       0      0         0  ONLINE
  RAIDZ2         0      0         0  ONLINE
    da28p2       0      0         0  ONLINE
    da29p2       0      0         0  ONLINE
    da30p2       0      0         0  ONLINE
    da31p2       0      0         0  ONLINE
    da32p2       0      0         0  ONLINE
    da33p2       0      0         0  ONLINE
    da34p2       0      0         0  ONLINE
    da35p2       0      0         0  ONLINE
  da1p2          0      0         0  ONLINE
cache
  da0p1          0      0         0  ONLINE
  da2p1          0      0         0  ONLINE
  da3p1          0      0         0  ONLINE
 

Attachments

  • Freenas Status.png (273 KB)

sretalla
Powered by Neutrality
Moderator · Joined: Jan 1, 2016 · Messages: 9,702
I would be much more concerned about the single-disk vdev at the end of the list, which will kill your whole pool if it dies. Perhaps that's da1, going from the list in your post.

Depending on your whole system setup, you may or may not be getting any benefit from all those cache drives, but that's a discussion for another time.

I suspect you will not be able to remove the drive now, as it is in your pool and potentially has data on it.

This leaves you with one option, which is to back up, destroy the pool, rebuild it, and restore (or two options, if you count losing all your data).

Depending on some of your system details (like which exact version of FreeNAS you are running), there may be a chance that you can remove the accidentally added vdev, but in any case, all your attention should now focus on having a full backup of that pool.
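For the backup route, a minimal sketch with zfs send/receive, assuming a second pool (here called backup) is reachable; the snapshot name is illustrative:

Code:
# take a recursive snapshot of the whole pool
zfs snapshot -r DG1@migrate1

# replicate the full pool, datasets and properties included, to the other pool
zfs send -R DG1@migrate1 | zfs receive -F backup/DG1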
 

Donovanvdb
Cadet · Joined: Sep 9, 2019 · Messages: 4
sretalla said:
I would be much more concerned about the single-disk vdev at the end of the list, which will kill your whole pool if it dies. …
Hi sretalla,

Thank you for the quick response.

You are correct, the single vdev is the da1 SSD drive; if I remove the drive, the pool is unavailable. :(

I tried removing one of the caching disks, and tried replacing the da1 drive with the spare. Example below from zpool status:

Code:
replacing-5
  da1
  da2

It seems like it's not doing anything. Could this be because it's waiting for the resilvering to complete?
Can I stop the resilvering and set a priority on the replacement?
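As an aside, on FreeBSD-based ZFS of this era a resilver cannot be paused or given a per-device priority, but sysctl tunables along these lines are commonly used to speed a scan up (a sketch; the values are illustrative, not recommendations):

Code:
# don't throttle the resilver when the pool sees other I/O
sysctl vfs.zfs.resilver_delay=0
# give the scan more time per transaction group (milliseconds)
sysctl vfs.zfs.resilver_min_time_ms=5000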

Unfortunately I have no space to save the 108 TB somewhere else, so I might have to go with option 2.
The FreeNAS version is FreeNAS-11.2-U5.
Which system details would you need in order to advise whether this would be possible?

Thanks a mil. Your help is appreciated.
 

sretalla
Powered by Neutrality
Moderator · Joined: Jan 1, 2016 · Messages: 9,702
Since you are on the latest stable FreeNAS, the feature to remove a VDEV should be available. You can perhaps try zpool remove -n DG1 gptid/<insert gptid of da1 here>

The -n in that command specifies a dry run, so it will just report whether the command would have done anything, without actually doing it.

If it indicates the VDEV would be removed, you can run it again without the -n.

It would be helpful to have the output of that command, and also of zpool status -v, in code tags, to help us understand the current status.
 

danb35
Hall of Famer · Joined: Aug 16, 2011 · Messages: 15,458
sretalla said: the feature to remove a VDEV should be available.
No, that's only available when all vdevs in the pool are either single disks or mirrors.
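A quick way to confirm why this pool doesn't qualify is to list its top-level vdev layout (a sketch; any raidz rows rule device removal out on this ZFS version):

Code:
# -v breaks the pool down by top-level vdev
zpool list -v DG1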
 

Donovanvdb
Cadet · Joined: Sep 9, 2019 · Messages: 4
sretalla said:
Since you are on the latest stable FreeNAS, the feature to remove a VDEV should be available. You can perhaps try zpool remove -n DG1 gptid/<insert gptid of da1 here> …
Just tried the above; please see attached.
Code:
root@afsjhb00zfs101:~ # zpool remove -n DG1 gptid/26875799-ade2-11e9-ab7f-0cc47a5ebf18
Memory that will be used after removing gptid/26875799-ade2-11e9-ab7f-0cc47a5ebf18: 2.05G
root@afsjhb00zfs101:~ # zpool remove  DG1 gptid/26875799-ade2-11e9-ab7f-0cc47a5ebf18
cannot remove gptid/26875799-ade2-11e9-ab7f-0cc47a5ebf18: invalid config; all top-level vdevs must have the same sector size and not be raidz.


Is there a way around this error?
 

Attachments

  • 2019-09-11_5-58-02.png (43.8 KB)

sretalla
Powered by Neutrality
Moderator · Joined: Jan 1, 2016 · Messages: 9,702
Donovanvdb said: Is there a way around this error?
I think not. This matches what @danb35 said already: it's not going to be an option with RAIDZ2 top-level vdevs in the pool.
 

Donovanvdb
Cadet · Joined: Sep 9, 2019 · Messages: 4
Thanks for the advice and assistance, gents.
I'll try to back up the data somewhere and start from scratch.
 