lost ZFS pool after upgrade from 11.0-U4 to 11.1-RELEASE

Status
Not open for further replies.

dide

Cadet
Joined
May 2, 2018
Messages
7
If I try to upgrade my FreeNAS to the current version, I lose my ZFS pool.
It looks like the storage system underneath is no longer visible, and with it I lose the ttttt ZFS pool.

I already tried removing the ZFS pool from the system and reinitializing, but with the new software version I'm not able to even see the disks.


With 11.0-U4
Code:
[root@~]# zfs list
NAME                                                      USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                             3.16G   266G    64K  none
freenas-boot/ROOT                                        3.14G   266G    29K  none
freenas-boot/ROOT/11.0-U4                                 729M   266G   728M  /
freenas-boot/ROOT/11.1-RELEASE                            825M   266G   826M  /
freenas-boot/ROOT/11.1-U1                                 281K   266G   826M  /
freenas-boot/ROOT/11.1-U4                                1.62G   266G   837M  /
freenas-boot/grub                                        6.84M   266G  6.84M  legacy
ttttt                                                    56.5M  33.7T   281K  /mnt/ttttt
ttttt/.system                                            53.7M  33.7T   307K  legacy
ttttt/.system/configs-3c43883aef7a4eb0be1801105b684ae7    281K  33.7T   281K  legacy
ttttt/.system/cores                                       281K  33.7T   281K  legacy
ttttt/.system/rrd-3c43883aef7a4eb0be1801105b684ae7       51.5M  33.7T  51.5M  legacy
ttttt/.system/samba4                                      856K  33.7T   856K  legacy
ttttt/.system/syslog-3c43883aef7a4eb0be1801105b684ae7     575K  33.7T   575K  legacy


With 11.1-RELEASE or newer, including 11.1-U4
Code:
[root@ ~]# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                    3.16G   266G    64K  none
freenas-boot/ROOT               3.14G   266G    29K  none
freenas-boot/ROOT/11.0-U4       1.19M   266G   728M  /
freenas-boot/ROOT/11.1-RELEASE  1.52G   266G   826M  /
freenas-boot/ROOT/11.1-U1        281K   266G   826M  /
freenas-boot/ROOT/11.1-U4       1.62G   266G   837M  /
freenas-boot/grub               6.84M   266G  6.84M  legacy


I'm using the following hardware:
HP ProLiant DL380 G7
- 2 Intel processors with 6/6 cores
- 36 GiB memory
- controller HP Smart Array P410i (for local disks)
- controller LSI MPT SAS2 BIOS MPT2BIOS-7.05.04
- storage system 1: HP P2000 G3 SAS
- 12 * 2 TiB
- storage system 2: HP P2000 G3 SAS
- 12 * 2 TiB
All disks are presented to the host without RAID, so in total I have 24 disks with 48 TiB of space in the host for ZFS.
 

styno

Patron
Joined
Apr 11, 2016
Messages
466
When booted in the new boot environment, what is the output of camcontrol devlist & zpool import?
 

dide

Cadet
Joined
May 2, 2018
Messages
7
Code:
[root@~]# camcontrol devlist																							 
<HP RAID 1(1+0) OK>				at scbus0 target 0 lun 0 (pass0,da0)															 
[root@~]# zpool import																									
[root@~]#																											 
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
- controller HP Smart Array P410i (for local disks)
Is this what the disk shelves are attached to? I am confused by how you have this configured. That is a hardware RAID controller. How did you configure your drives at the hardware level? FreeNAS is not seeing the drives as present.
 

dide

Cadet
Joined
May 2, 2018
Messages
7
Me too :smile:
Yes, that's what I get back. In version 11.0-U4 the disk shelves were visible, and if I go back to the old version now I can see the shelves again.
 

jde

Explorer
Joined
Aug 1, 2015
Messages
93
Please post the outputs of the following commands when booted into 11.0 with your shelves visible:
Code:
zpool status

Code:
zdb -e -C ttttt 
 

dide

Cadet
Joined
May 2, 2018
Messages
7
@rs225
Code:
[root@~]# gpart show
=>       40  585871888  da0  GPT  (279G)
         40       1024    1  bios-boot  (512K)
       1064  585870856    2  freebsd-zfs  (279G)
  585871920          8       - free -  (4.0K)
 


no result
Code:
[root@~]# zpool import -D																								
[root@~]#	
 


@jde
Running on 11.1
Code:
[root@~]# zpool status																									
  pool: freenas-boot																												
 state: ONLINE																													
  scan: scrub repaired 0 in 0 days 00:03:11 with 0 errors on Wed May  2 03:48:11 2018											  
config:																															
																																  
	   NAME		STATE	 READ WRITE CKSUM																					
	   freenas-boot  ONLINE	   0	 0	 0																					
		 da0p2	 ONLINE	   0	 0	 0																					
																																  
errors: No known data errors	


Code:
[root@~]# zdb -e -C ttttt																								
zdb: can't open 'ttttt': No such file or directory


With 11.0
Code:
root@:~ # zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Wed May  2 03:48:11 2018
config:

		NAME		STATE	 READ WRITE CKSUM
		freenas-boot  ONLINE	   0	 0	 0
		  da0p2	 ONLINE	   0	 0	 0

errors: No known data errors

  pool: ttttt
 state: ONLINE
  scan: none requested
config:

		NAME											STATE	 READ WRITE CKSUM
		ttttt										   ONLINE	   0	 0	 0
		  raidz3-0									  ONLINE	   0	 0	 0
			gptid/e0c4d712-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e106856b-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e15d9287-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e1a1a5d0-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e1fa1979-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e2578a04-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e2bfaa9d-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e31c4637-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e3a58638-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e40341e2-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e48b3810-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e4e9e766-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e57eb98c-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e5fbc447-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e6836169-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e702e1c7-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e769ecd0-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e7b97004-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e819882a-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e87853e6-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e8f2f18c-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e94fe8d9-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/e9cfa260-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0
			gptid/ea3f1ffa-4df3-11e8-a296-3cd92b08dd26  ONLINE	   0	 0	 0

errors: No known data errors


Code:
root@:~ # zdb -e -C ttttt

MOS Configuration:
		version: 5000
		name: 'ttttt'
		state: 0
		txg: 972
		pool_guid: 5841313666661575293
		hostid: 1158538636
		hostname: ''
		com.delphix:has_per_vdev_zaps
		vdev_children: 1
		vdev_tree:
			type: 'root'
			id: 0
			guid: 5841313666661575293
			create_txg: 4
			children[0]:
				type: 'raidz'
				id: 0
				guid: 16206646194411039851
				nparity: 3
				metaslab_array: 61
				metaslab_shift: 38
				ashift: 12
				asize: 47909500354560
				is_log: 0
				create_txg: 4
				com.delphix:vdev_zap_top: 36
				children[0]:
					type: 'disk'
					id: 0
					guid: 4090656060401477966
					path: '/dev/gptid/e0c4d712-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 37
				children[1]:
					type: 'disk'
					id: 1
					guid: 18232579817984077031
					path: '/dev/gptid/e106856b-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 38
				children[2]:
					type: 'disk'
					id: 2
					guid: 13995313159442870693
					path: '/dev/gptid/e15d9287-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 39
				children[3]:
					type: 'disk'
					id: 3
					guid: 1252702404312543520
					path: '/dev/gptid/e1a1a5d0-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 40
				children[4]:
					type: 'disk'
					id: 4
					guid: 9996367706826749283
					path: '/dev/gptid/e1fa1979-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 41
				children[5]:
					type: 'disk'
					id: 5
					guid: 14430110327711285158
					path: '/dev/gptid/e2578a04-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 42
				children[6]:
					type: 'disk'
					id: 6
					guid: 3270102326117751032
					path: '/dev/gptid/e2bfaa9d-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 43
				children[7]:
					type: 'disk'
					id: 7
					guid: 13741411305928399932
					path: '/dev/gptid/e31c4637-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 44
				children[8]:
					type: 'disk'
					id: 8
					guid: 5464763589360011333
					path: '/dev/gptid/e3a58638-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 45
				children[9]:
					type: 'disk'
					id: 9
					guid: 11304022145464963891
					path: '/dev/gptid/e40341e2-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 46
				children[10]:
					type: 'disk'
					id: 10
					guid: 3547898590903050345
					path: '/dev/gptid/e48b3810-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 47
				children[11]:
					type: 'disk'
					id: 11
					guid: 8792813449632957169
					path: '/dev/gptid/e4e9e766-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 48
				children[12]:
					type: 'disk'
					id: 12
					guid: 6232606062441710058
					path: '/dev/gptid/e57eb98c-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 49
				children[13]:
					type: 'disk'
					id: 13
					guid: 11868209323288529796
					path: '/dev/gptid/e5fbc447-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 50
				children[14]:
					type: 'disk'
					id: 14
					guid: 15170206192189264557
					path: '/dev/gptid/e6836169-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 51
				children[15]:
					type: 'disk'
					id: 15
					guid: 5926634737694280053
					path: '/dev/gptid/e702e1c7-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 52
				children[16]:
					type: 'disk'
					id: 16
					guid: 12958922975604216297
					path: '/dev/gptid/e769ecd0-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 53
				children[17]:
					type: 'disk'
					id: 17
					guid: 10517342857078986096
					path: '/dev/gptid/e7b97004-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 54
				children[18]:
					type: 'disk'
					id: 18
					guid: 9667346789992921291
					path: '/dev/gptid/e819882a-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 55
				children[19]:
					type: 'disk'
					id: 19
					guid: 186158909596245655
					path: '/dev/gptid/e87853e6-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 56
				children[20]:
					type: 'disk'
					id: 20
					guid: 10874705454886735862
					path: '/dev/gptid/e8f2f18c-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 57
				children[21]:
					type: 'disk'
					id: 21
					guid: 16036457026610252919
					path: '/dev/gptid/e94fe8d9-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 58
				children[22]:
					type: 'disk'
					id: 22
					guid: 13686163976871981590
					path: '/dev/gptid/e9cfa260-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 59
				children[23]:
					type: 'disk'
					id: 23
					guid: 14393876824212976133
					path: '/dev/gptid/ea3f1ffa-4df3-11e8-a296-3cd92b08dd26'
					whole_disk: 1
					create_txg: 4
					com.delphix:vdev_zap_leaf: 60
		features_for_read:
			com.delphix:hole_birth
			com.delphix:embedded_data
 
Last edited:

jde

Explorer
Joined
Aug 1, 2015
Messages
93
Did you run the commands while booted into 11.0? My understanding was that you could see the disks/pool in 11.0, but not in 11.1. The results you posted don't reflect any storage pool, so I'm guessing you ran the commands in 11.1.
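If it helps narrow that down, the currently booted environment can be confirmed from the shell (assuming beadm is available, as it normally is on FreeNAS 11); this is just a sketch:
Code:
# the environment flagged 'N' (now) in the Active column is the one currently booted
beadm list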
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Yes, and is this the output from when you can see the drives in camcontrol devlist?
 

dide

Cadet
Joined
May 2, 2018
Messages
7
@jde
I updated the last post with the output from both OS versions.

@rs225:
Code:
root@bbsvla002:~ # camcontrol devlist
<HP RAID 1(1+0) OK>				at scbus0 target 0 lun 0 (pass0,da0)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 0 (pass1,ses0)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 1 (pass2,da1)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 2 (pass3,da2)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 3 (pass4,da3)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 4 (pass5,da4)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 5 (pass6,da5)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 6 (pass7,da6)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 7 (pass8,da7)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 8 (pass9,da8)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 9 (pass10,da9)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun a (pass11,da10)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun b (pass12,da11)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun c (pass13,da12)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun d (pass14,da13)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun e (pass15,da14)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun f (pass16,da15)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 10 (pass17,da16)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 11 (pass18,da17)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 12 (pass19,da18)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 13 (pass20,da19)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 14 (pass21,da20)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 15 (pass22,da21)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 16 (pass23,da22)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 17 (pass24,da23)
<HP P2000 G3 SAS T252>			 at scbus2 target 1 lun 18 (pass25,da24)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 0 (pass26,ses1)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 1 (pass27,da25)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 2 (pass28,da26)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 3 (pass29,da27)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 4 (pass30,da28)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 5 (pass31,da29)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 6 (pass32,da30)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 7 (pass33,da31)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 8 (pass34,da32)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 9 (pass35,da33)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun a (pass36,da34)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun b (pass37,da35)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun c (pass38,da36)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun d (pass39,da37)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun e (pass40,da38)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun f (pass41,da39)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 10 (pass42,da40)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 11 (pass43,da41)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 12 (pass44,da42)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 13 (pass45,da43)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 14 (pass46,da44)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 15 (pass47,da45)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 16 (pass48,da46)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 17 (pass49,da47)
<HP P2000 G3 SAS T252>			 at scbus2 target 2 lun 18 (pass50,da48)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I don't think it is related to the problem, but having all 24 drives in a single RAIDz3 vdev is probably not the best way to configure your pool, especially when the drives are split between two drive shelves.

It looks to me as if there must have been some underlying software / driver change in FreeNAS. I would suggest submitting a bug report:
https://redmine.ixsystems.com/projects/freenas
 

jde

Explorer
Joined
Aug 1, 2015
Messages
93
I agree with Chris Moore: the 24-disk-wide vdev is not recommended, but it's probably not what is causing your current headache. I believe the recommendation/conventional wisdom is to not exceed 12 disks per vdev.
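Purely as an illustration of what a narrower layout could look like (this is not how you would normally build a pool on FreeNAS, where the GUI handles it, and the da device names are placeholders):
Code:
# e.g. two 12-wide raidz2 vdevs instead of one 24-wide raidz3 (illustrative sketch only)
zpool create tank \
  raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 \
  raidz2 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 da24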

Looking at the camcontrol output, it looks like your disk enclosures are not in true HBA mode, but are instead passing through identical, generic "HP P2000" volumes in JBOD mode. Just to confirm, what make/model of disks are actually in the enclosures? If your issue is in fact caused by hardware RAID, which is highly discouraged, I'm doubtful the devs are going to spend time and resources restoring backwards compatibility with an unsupported configuration.
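If it isn't obvious what is actually sitting behind those LUNs, something along these lines (da1 is just an example device) should show the inquiry data and, where the enclosure passes it through, the real drive model:
Code:
camcontrol inquiry da1
smartctl -i /dev/da1   # may need a -d option depending on how the P2000 presents the drive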

Good luck
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
- controller HP Smart Array O410i (for local disks)
- controller LSI MPT SAS2 BIOS MPT2BIOS-7.05.04
What @jde said is why I asked about the hardware RAID controller before. If you set each drive up as an individual RAID-0 and passed all those RAID-0 volumes from the RAID controller to FreeNAS, that is not how it is supposed to be done, and it could very easily be the source of your sorrow now.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
A 24-wide vdev is pretty disastrous, but would not cause this problem.
 

dide

Cadet
Joined
May 2, 2018
Messages
7
Thank you all for your help.
First: yes, I will open a bug report, as I suspect it has to do with multipath on the disks.

I redesigned the underlying hardware RAID configuration. I now always have a 1:1 RAID 1 across the shelves (disk 1 of shelf A mirrored with disk 1 of shelf B), which ends up as 12 volumes. In 11.0 these 12 volumes are combined into the ZFS pool, and I can see the multipath button on the Storage tab.
After switching back to 11.1-U4, the ZFS pool is gone and the multipath button is not visible.
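A way to compare the multipath state between the two boot environments from the shell (rather than just the GUI button) should be something like:
Code:
# lists GEOM multipath devices and the status of each path
gmultipath status
gmultipath list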

Again: thank you for making me rethink my disk assignments. As soon as I get feedback on the bug report, I will update this thread.
 

vrod

Dabbler
Joined
Mar 14, 2016
Messages
39
The way you have explained the setup makes me think there could be a driver difference between the two OS versions. Are you able to run kldstat and post the output?
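If multipath is the suspect, it might also be worth checking in the new boot environment whether the GEOM multipath module is even loaded, roughly:
Code:
kldstat | grep geom_multipath   # check whether the module is loaded
kldload geom_multipath          # try loading it manually (prints an error if it is already loaded)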
 

dide

Cadet
Joined
May 2, 2018
Messages
7
Please find the output of kldstat below.

The conclusion: on the working 11.0 there is one additional module loaded:

fdescfs.ko

(A more systematic way to compare the two lists is sketched after the outputs.)

Version 11.1
Code:
[root@ ~]# kldstat																										 
Id Refs Address			Size	 Name																							
 1   75 0xffffffff80200000 20a0000  kernel																						 
 2	1 0xffffffff82631000 ffe3c	ispfw.ko																						
 3	1 0xffffffff82731000 7f2a	 freenas_sysctl.ko																			   
 4	1 0xffffffff82811000 84a6	 ipmi.ko																						 
 5	1 0xffffffff8281a000 ef2	  smbus.ko																						
 6	1 0xffffffff8281b000 333885   vmm.ko																						 
 7	1 0xffffffff82b4f000 3108	 nmdm.ko																						 
 8	1 0xffffffff82b53000 101c6	geom_mirror.ko																				 
 9	1 0xffffffff82b64000 46c3	 geom_stripe.ko																				 
10	1 0xffffffff82b69000 fbad	 geom_raid3.ko																				   
11	1 0xffffffff82b79000 16e56	geom_raid5.ko																				   
12	1 0xffffffff82b90000 59c9	 geom_gate.ko																					
13	1 0xffffffff82b96000 4d68	 geom_multipath.ko																			   
14	1 0xffffffff82b9b000 837	  dtraceall.ko																					
15	9 0xffffffff82b9c000 41e31	dtrace.ko																					   
16	1 0xffffffff82bde000 48f4	 dtmalloc.ko																					 
17	1 0xffffffff82be3000 5b4e	 dtnfscl.ko																					 
18	1 0xffffffff82be9000 67f3	 fbt.ko																						 
19	1 0xffffffff82bf0000 58e8a	fasttrap.ko																					 
20	1 0xffffffff82c49000 1741	 sdt.ko																						 
21	1 0xffffffff82c4b000 bf02	 systrace.ko																					 
22	1 0xffffffff82c57000 c082	 systrace_freebsd32.ko																		   
23	1 0xffffffff82c64000 5452	 profile.ko																					 
24	1 0xffffffff82c6a000 1bbc9	hwpmc.ko																						
25	1 0xffffffff82c86000 d006	 t3_tom.ko																					   
26	2 0xffffffff82c94000 4626	 toecore.ko																					 
27	1 0xffffffff82c99000 15e3a	t4_tom.ko																					   
28	1 0xffffffff82caf000 35a3	 ums.ko 


Version 11.0
Code:
[root@ ~]# kldstat																										 
Id Refs Address			Size	 Name																							
 1   77 0xffffffff80200000 2067000  kernel																						 
 2	1 0xffffffff825ef000 ffd1c	ispfw.ko																						
 3	1 0xffffffff826ef000 7151	 freenas_sysctl.ko																			   
 4	1 0xffffffff82811000 592f	 fdescfs.ko																					 
 5	1 0xffffffff82817000 3337ee   vmm.ko																						 
 6	1 0xffffffff82b4b000 30c4	 nmdm.ko																						 
 7	1 0xffffffff82b4f000 fabd	 geom_mirror.ko																				 
 8	1 0xffffffff82b5f000 47a1	 geom_stripe.ko																				 
 9	1 0xffffffff82b64000 ffc4	 geom_raid3.ko																				   
10	1 0xffffffff82b74000 16e03	geom_raid5.ko																				   
11	1 0xffffffff82b8b000 5915	 geom_gate.ko																					
12	1 0xffffffff82b91000 4e75	 geom_multipath.ko																			   
13	1 0xffffffff82b96000 829	  dtraceall.ko																					
14	9 0xffffffff82b97000 41821	dtrace.ko																					   
15	1 0xffffffff82bd9000 4883	 dtmalloc.ko																					 
16	1 0xffffffff82bde000 5aae	 dtnfscl.ko																					 
17	1 0xffffffff82be4000 67e1	 fbt.ko																						 
18	1 0xffffffff82beb000 58b81	fasttrap.ko																					 
19	1 0xffffffff82c44000 1769	 sdt.ko																						 
20	1 0xffffffff82c46000 cf1e	 systrace.ko																					 
21	1 0xffffffff82c53000 ce87	 systrace_freebsd32.ko																		   
22	1 0xffffffff82c60000 53b6	 profile.ko																					 
23	1 0xffffffff82c66000 8496	 ipmi.ko																						 
24	1 0xffffffff82c6f000 f08	  smbus.ko																						
25	1 0xffffffff82c70000 1bdde	hwpmc.ko																		   
26	1 0xffffffff82c8c000 cfe9	 t3_tom.ko																					   
27	2 0xffffffff82c99000 45be	 toecore.ko																					 
28	1 0xffffffff82c9e000 15c71	t4_tom.ko																					   
29	1 0xffffffff82cb4000 3620	 ums.ko
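The comparison above was done by eye; a more systematic way to diff the two module lists, sketched here with example filenames, would be to dump the sorted module names in each boot environment and compare the files somewhere both environments can reach:
Code:
kldstat | awk 'NR>1 {print $5}' | sort > kld-$(freebsd-version).txt
diff kld-11.0.txt kld-11.1.txt   # filenames are examples; use whatever the previous step produced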
 
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I redesigned the underlying hardware RAID configuration. I now always have a 1:1 RAID 1 across the shelves (disk 1 of shelf A mirrored with disk 1 of shelf B)
I don't know if it is a language barrier or not, but it sounds like you are intentionally using hardware RAID with FreeNAS/ZFS?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194