Identify missing disk

Status
Not open for further replies.

philippn

Cadet
Joined
Sep 26, 2018
Messages
6
I am currently building a massive storage system with 36 HDDs (4 RAIDZ2 vdevs × 9 disks, combined into one volume).

I have now simulated a missing drive by removing one. The FreeNAS system sent me an email that the pool is now degraded. But under "View disks" in the storage tab, the missing drive isn't marked red or flagged in any way.

zpool status -v gives me the missing gptid - but neither the missing serial number nor the missing drive name (/dev/da[x]). Only the second message, after 10 minutes, showed me that /dev/da6 is missing - after visiting the storage tab I then got the serial number.

For a real outage it would be a bit complicated - isn't there a simple option to show which drives are missing or failing? Like marking them red under "View disks"?
 

philippn

Cadet
Joined
Sep 26, 2018
Messages
6
I have now removed more drives - don't I get messages for them, too?

I mean, losing one drive on a production system won't make me drive to work on a weekend - but if the system is losing more drives, I would like to get messages so I can react...
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am currently building a massive storage system with 36 HDDs (4 RAIDZ2 vdevs × 9 disks, combined into one volume).
That isn't tiny, but it's not massive either. At home I have a 48 bay chassis that is completely full right now. At work, I have a system with 122 drives attached via multiple disk shelves that takes up half of a 42U rack. Another department where I work has a system that spans two 42U racks (not completely filled) with around 300 drives, though I don't recall the exact number.
Sorry, I got distracted, back to the question...
For a real outage it would be a bit complicated - isn't there a simple option to show which drives are missing or failing? Like marking them red under "View disks"?
It can be a bit complicated depending on your hardware. Why don't you tell us what hardware you are using so we can give you some guidance?
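In the meantime, one way to map the gptid that zpool status -v reports back to a device node is glabel status - the last column of the matching line is the provider (e.g. da6p2). Here is a quick sketch against sample output; the ids below are made up, and on the actual FreeNAS box you would simply run glabel status | grep <gptid> as root:

```shell
# Map a gptid from `zpool status -v` to its daX device via `glabel status`.
# The ids below are made-up sample output in the FreeBSD format; on a real
# system you would just run:  glabel status | grep <gptid>
cat > /tmp/glabel.out <<'EOF'
                                      Name  Status  Components
gptid/1c5518b2-0000-0000-0000-000000000000     N/A  da6p2
gptid/2d6629c3-0000-0000-0000-000000000000     N/A  da7p2
EOF
# Print the provider (device + partition) for the gptid we are chasing:
grep '1c5518b2' /tmp/glabel.out | awk '{print $3}'
```

From the provider (da6p2) you can then pull the serial with smartctl -a /dev/da6 | grep Serial.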
Please review this post to get some suggestions: https://forums.freenas.org/index.php?threads/updated-forum-rules-8-3-16.45124/
I now removed more drives - don't I get messages for them, too?
You should, eventually, but there may be some small delay. How much time did you give it to react?
If you use decent quality drives and do proper burn-in testing on them, you shouldn't be losing many. I have a 60 drive system that lost 3 in the first 9 months, but hasn't lost any since.
Can you please be more specific about where on earth you are?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
PS. There are some scripts that you might want to run on your system. I use them; you can find them here:
https://forums.freenas.org/index.ph...d-identification-and-backup-the-config.27365/

Also, some useful commands: https://forums.freenas.org/index.php?threads/useful-commands.30314/#post-195192

A supplement to the official FAQ: https://forums.freenas.org/index.php?threads/the-faq.30209/#post-194036

Also, I have some other useful links listed under the button in my signature. There are a number of experienced system administrators that frequent the forum and will be happy to help you expand your knowledge and understanding of managing a FreeNAS system.
 

Joined
Jul 3, 2015
Messages
926
You could just name your drives at the start using glabel like I do below.

Code:
NAME					STATE	 READ WRITE CKSUM
   tank					ONLINE	   0	 0	 0
	 raidz2-0			  ONLINE	   0	 0	 0
	   label/1_50PV_2_0	ONLINE	   0	 0	 0
	   label/2_ZHZV_2_1	ONLINE	   0	 0	 0
	   label/3_52WV_2_2	ONLINE	   0	 0	 0
	   label/4_TL7V_2_3	ONLINE	   0	 0	 0
	   label/5_4JLV_2_4	ONLINE	   0	 0	 0
	   label/6_JP3Y_2_5	ONLINE	   0	 0	 0
	   label/7_GV7V_2_6	ONLINE	   0	 0	 0
	   label/8_4SJV_2_7	ONLINE	   0	 0	 0
	   label/9_69DV_2_8	ONLINE	   0	 0	 0
	   label/10_521V_2_9   ONLINE	   0	 0	 0
	 raidz2-1			  ONLINE	   0	 0	 0
	   label/11_TJMV_2_10  ONLINE	   0	 0	 0
	   label/12_6AWV_2_11  ONLINE	   0	 0	 0
	   label/13_4V0V_2_12  ONLINE	   0	 0	 0
	   label/14_65MV_2_13  ONLINE	   0	 0	 0
	   label/15_GVUV_2_14  ONLINE	   0	 0	 0
	   label/16_ZHAV_2_15  ONLINE	   0	 0	 0
	   label/17_65WV_2_16  ONLINE	   0	 0	 0
	   label/18_ZM7V_2_17  ONLINE	   0	 0	 0
	   label/19_0PMV_2_18  ONLINE	   0	 0	 0
	   label/20_4SSV_2_19  ONLINE	   0	 0	 0
	 raidz2-2			  ONLINE	   0	 0	 0
	   label/21_TLSV_2_20  ONLINE	   0	 0	 0
	   label/22_686V_2_21  ONLINE	   0	 0	 0
	   label/23_4ZMV_2_22  ONLINE	   0	 0	 0
	   label/24_ZJKV_2_23  ONLINE	   0	 0	 0
	   label/25_5NXV_2_24  ONLINE	   0	 0	 0
	   label/26_4VEV_2_25  ONLINE	   0	 0	 0
	   label/27_6GVV_2_26  ONLINE	   0	 0	 0
	   label/28_ZSLV_2_27  ONLINE	   0	 0	 0
	   label/29_X58V_2_28  ONLINE	   0	 0	 0
	   label/30_AMEV_2_29  ONLINE	   0	 0	 0
	 raidz2-3			  ONLINE	   0	 0	 0
	   label/31_TKUV_3_0   ONLINE	   0	 0	 0
	   label/32_ZRUV_3_1   ONLINE	   0	 0	 0
	   label/33_6A7V_3_2   ONLINE	   0	 0	 0
	   label/34_033V_3_3   ONLINE	   0	 0	 0
	   label/35_ZP3V_3_4   ONLINE	   0	 0	 0
	   label/36_6DXV_3_5   ONLINE	   0	 0	 0
	   label/37_H3PV_3_6   ONLINE	   0	 0	 0
	   label/38_HJ7V_3_7   ONLINE	   0	 0	 0
	   label/39_0RSV_3_8   ONLINE	   0	 0	 0
	   label/40_TLUV_3_9   ONLINE	   0	 0	 0
	 raidz2-4			  ONLINE	   0	 0	 0
	   label/41_5ZVV_3_10  ONLINE	   0	 0	 0
	   label/42_5R0V_3_11  ONLINE	   0	 0	 0
	   label/43_66RV_3_12  ONLINE	   0	 0	 0
	   label/44_ZPTV_3_13  ONLINE	   0	 0	 0
	   label/45_ZP5V_3_14  ONLINE	   0	 0	 0
	   label/46_5NYV_3_15  ONLINE	   0	 0	 0
	   label/47_03HV_3_16  ONLINE	   0	 0	 0
	   label/48_XUEV_3_17  ONLINE	   0	 0	 0
	   label/49_4R6V_3_18  ONLINE	   0	 0	 0
	   label/50_5EZV_3_19  ONLINE	   0	 0	 0
	 raidz2-5			  ONLINE	   0	 0	 0
	   label/51_69EV_3_20  ONLINE	   0	 0	 0
	   label/52_5D8V_3_21  ONLINE	   0	 0	 0
	   label/53_4T3V_3_22  ONLINE	   0	 0	 0
	   label/54_07GV_3_23  ONLINE	   0	 0	 0
	   label/55_4WZV_3_24  ONLINE	   0	 0	 0
	   label/56_3JWV_3_25  ONLINE	   0	 0	 0
	   label/57_4YBV_3_26  ONLINE	   0	 0	 0
	   label/58_4WDV_3_27  ONLINE	   0	 0	 0
	   label/59_5BVV_3_28  ONLINE	   0	 0	 0
	   label/60_U75V_3_29  ONLINE	   0	 0	 0

errors: No known data errors
 


kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I am currently building a massive storage system with 36 HDDs (4 RAIDZ2 vdevs × 9 disks, combined into one volume).
That's cute.
You could just name your drives at the start using glabel like I do below.

Code:
[zpool status output quoted from the post above]
I don't suppose you have a handy little script that does this?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Last month I decommissioned an IBM DS8800: two racks, half full of 2.5" drives - over 240 of them, plus over 48 SSDs. We also have about 8 racks FULL of Storwize V7000 arrays. We only drop about 3 drives every two months.
 
Joined
Jul 3, 2015
Messages
926
How does that work from the GUI?
In short, it doesn't. I create a pool from the GUI and then detach it, but don't wipe the disks, as this quickly sorts out the drive partitions for me. Then I manually name each drive's partition 2 using glabel. Then I create the pool from the CLI using the label/ names and -f (since essentially you already have a pool from earlier), and then export the pool. Then I import it via the GUI.

After that, everything is done via the GUI except for disk replacements, which need to be done via the CLI if you want to keep using names. zpool status from the CLI shows the drive names, but viewing from the GUI doesn't.

I needed a better solution than keeping a spreadsheet and/or putting stickers on drives. All my single-pathed systems are set up this way and have been for years, and it's bullet proof. You'll notice I even add the sas3ircu locate ID to the name so I can flash the bay if I wish.
 
Last edited:
Joined
Jul 3, 2015
Messages
926
I don't suppose you have a handy little script that does this?
Afraid not. I do this manually. My scripting knowledge is very weak, so by the time I would have figured it out, I could have done it manually :)
 
Joined
Dec 29, 2014
Messages
1,135
Afraid not. I do this manually. My scripting knowledge is very weak, so by the time I would have figured it out, I could have done it manually :)

Do you have notes of the commands you run and such? If so, please post them. Perhaps somebody here could take a shot at building a script.
 
Joined
Jul 3, 2015
Messages
926
Do you have notes of the commands you run and such? If so, please post it. Perhaps somebody here could take a shot at building a script.
Sure. These are the notes I wrote up for myself a while ago.

Building zpool on single pathed systems

Step 1

In Notepad or Word, make a list and identify your disk names using sas3ircu 0 display (example: 1_50PV_2_0)

The first number represents the drive's physical location in the system (1)

The next four characters are the last 4 characters of the drive's serial number (50PV)

The next number represents the enclosure (expander) the drive is attached to (2)

The final number represents the slot number (0)
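The naming scheme above can be composed mechanically. A minimal shell sketch, where the serial is a hypothetical example (on a real system the values come from sas3ircu 0 display and smartctl):

```shell
# Build a label name of the form <position>_<serial last 4>_<enclosure>_<slot>
position=1
serial="WD-WMC4N050PV"   # hypothetical full serial; we keep the last 4 chars
enclosure=2
slot=0

short=$(printf '%s' "$serial" | tail -c 4)   # last 4 characters -> 50PV
label="${position}_${short}_${enclosure}_${slot}"
echo "$label"                                # -> 1_50PV_2_0
```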

Step 2

Build a pool via the WebUI out of all your disks. It doesn't matter what type of pool at this point; essentially, we just want FreeNAS to partition the drives for swap and data and also set the alignment.

Step 3

Detach (export) the pool via the WebUI, but don't mark the disks as new, as that would wipe all of the partitions.

Step 4

Now it's time to label your disks. Using glabel, label each disk with the names you made earlier:

glabel label -v 1_50PV_2_0 /dev/da0p2

Note that only p2 needs naming, as this is our data partition; p1 will become swap.

You can confirm which drive is which by checking the serial number:

smartctl -a /dev/da0 | grep Serial

TIP: if building BIG pools with a 10 disk Z2 setup, I would suggest you label your first 10 disks and then move to Step 5 to create the pool. After that, work in blocks of 10 (the size of the vdev): label those drives, then add another vdev to the pool.
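There is no script in the original notes, but labeling a block of drives could be sketched roughly as below. The disks.map file and its contents are hypothetical, and the echo keeps this a dry run:

```shell
# Dry-run sketch: label one vdev's worth of disks from a mapping file.
# Each line of disks.map pairs a device with the name from Step 1, e.g.
#   da0 1_50PV_2_0
#   da1 2_ZHZV_2_1
# Only partition 2 gets a label (p1 will become swap, per the note above).
while read -r dev name; do
    echo "glabel label -v ${name} /dev/${dev}p2"
    # Drop the echo to actually apply the labels.
done < disks.map
```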

Step 5

Create a new zpool from the command line using the label names.

zpool create -f tank raidz2 label/1_50PV_2_0 label/2_ZHZV_2_1 label/3_52WV_2_2 label/4_TL7V_2_3

TIP: to add the next vdev to the existing pool, use zpool add with that vdev's labels, e.g. zpool add -f tank raidz2 label/11_TJMV_2_10 label/12_6AWV_2_11 label/13_4V0V_2_12 label/14_65MV_2_13

Step 6

Once the pool is created via the command line, set the following parameters.

From the command line set the zpool failmode to continue and autoexpand to on.

zpool set failmode=continue tank

zpool set autoexpand=on tank

export the pool.

zpool export tank

Then import the pool via the webui

Step 7

From the WebUI, set the top-level dataset compression to lz4.

Step 8

Future disk replacements must be done via the command line and follow this process in order to keep the drive labels maintained.

Step 9

Note: I like to prep my new replacement disk, including naming, on a test system, and then when ready simply fit the drive and replace. You don't have to do this, but if you choose to do it on your live system, please be very careful.

Disk replacement: make a note of the drive you are going to replace, i.e. its name (example: 1_50PV_2_0) and its associated da number. If possible, offline the faulty drive via the WebUI.

You can now physically remove the drive from the array. Attach the new drive to the drive caddy and insert it into the system, making a note of the assigned da number (which will often be the same as the old drive's). Now create a new pool out of that one drive via the WebUI. Then detach the new one-drive pool, but don't mark the disks as new. Now follow Step 4, but note that the new drive's name will be slightly different from the old drive's due to its different serial number. Use sas3ircu or smartctl to identify the new drive's serial number and, in turn, its new name, for example 1_16LE_2_0. Now, using the command below, replace the old drive with the new one.

zpool replace -f tank label/1_50PV_2_0 label/1_16LE_2_0
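Put together, the replacement boils down to the short sequence below, echoed here as a dry run for review. The labels are the examples from the post; the da number is hypothetical:

```shell
# Dry run of the disk replacement sequence above (labels from the post).
old="1_50PV_2_0"     # failed drive's label
new="1_16LE_2_0"     # new label: same position/enclosure/slot, new serial suffix
new_dev="da0"        # da number assigned to the replacement (hypothetical)

# Label partition 2 of the new drive, then swap it into the pool:
echo "glabel label -v ${new} /dev/${new_dev}p2"
echo "zpool replace -f tank label/${old} label/${new}"
```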
 
Joined
Jul 3, 2015
Messages
926
I am currently building a massive storage system with 36 HDDs
BTW, I call my 36-bay systems my small systems, even when using 10TB drives - medium being 60 bays and large being my 90 bays. I reserve the word 'massive' :D
 