MeatTreats
Dabbler
- Joined: Oct 23, 2021
- Messages: 26
So this is a continuation of sorts of the issues documented in this thread: https://www.truenas.com/community/threads/3-bad-drive-all-from-the-same-vdev-please-help.98109/
Long story short, I have a 48-drive array with 3 vdevs: 16x6TB, 16x6TB, and 16x10TB (I know I should have made smaller vdevs). One of the 10TB drives "failed", and after I replaced it with a spare and resilvered the array, a couple of days later two more of the 10TB drives "failed" overnight, leaving my array (which is Z3, by the way) in a severely degraded state.
So I took all 3 drives to a local computer repair shop that does a lot of business with small and medium businesses, and he tested the drives and they all came up fine. Their partitions showed up and there didn't appear to be any issues with the drives.
So at this point I am not completely sure why FreeNAS kicked the drives out of the array, but a bad backplane, cable, or some other piece of hardware could be the cause, since this problem is only affecting the 10TB drives, which are all the same model and all sit in a row on the same backplane.
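For what it's worth, once the box is powered back up I plan to double-check the drives myself with smartmontools before trusting them again. This is just the sort of thing I'd run; the device name is a placeholder, since I don't know yet how the drives will enumerate:

    # quick overall health report and error counters for one of the kicked drives
    smartctl -a /dev/da10
    # kick off an extended self-test, then read back the results a few hours later
    smartctl -t long /dev/da10
    smartctl -l selftest /dev/da10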
Others have suggested moving drives around to see if that is the issue, and that is certainly a step I plan to take, but for now my only concern is getting my array back into a healthy state, because if I lose one more drive I lose everything. Since these drives aren't actually bad, or at least not bad enough that I want to replace them, my questions are these.
1. What happens if I put the rejected drives back into the server and power up the system? (It has been powered off since then, almost a year.) Will FreeNAS automatically take the drives back into the array, or do they have to be manually reinserted into the array?
2. Is there a way or command to force FreeNAS to take the drives back, and what is the risk in doing that? Since the drives are good and already part of the array, it seems better to just add them back in rather than wipe them and resilver from scratch, which increases the risk of another failure. (I've sketched the kind of commands I'm imagining just below this list.)
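To be clearer about what I mean in question 2, this is roughly what I have in mind from reading the ZFS docs. I haven't run any of it yet; "tank" and the device name are just placeholders for my pool and one of the kicked drives:

    # see which devices are FAULTED/UNAVAIL and why
    zpool status -v tank
    # bring a kicked drive back online if ZFS still recognizes it as a pool member
    zpool online tank da10
    # clear the error counters so the pool stops flagging it
    zpool clear tank
    # only if ZFS won't take the old disk back: resilver onto it as a "new" drive
    zpool replace tank da10

Is that the right general approach, or is there something FreeNAS-specific (GUI or middleware) I should be using instead of the raw zpool commands?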
Again, my goal here is to get my array healthy enough that I am no longer one drive away from failure, and then I will move drives around so that some of the 6TB drives sit where the 10s are now and monitor the system. If any of those 6TB drives get kicked out of the array, I'll know it is a hardware issue. I would still want to be able to reinsert those booted drives back into the array rather than rebuild.