rporter117
Cadet
- Joined
- Mar 22, 2016
- Messages
- 5
Something’s up with my NAS after it moved with me across town. Suddenly none of the drives connected to one of the controllers is detected, even though both controllers show up in lspci. At least I still have 12 of the 20 bays remaining to work with to import my 10-disk array. Edit: Reseating connections lets me access all of my disks at once now. I may still have a bad SAS cable.
I'm running a fresh, unconfigured install of the latest FreeNAS 9.10.
Hardware:
Intel S5000PSL motherboard
2x Xeon 5150
32GB ECC memory
2x Dell PERC H200, crossflashed to LSI IT-mode firmware
16x Seagate ST31000340NS, 10 of which are in a mirrored array. These drives have a known firmware issue, but I’ve not seen any problems with them... yet
Initially, a few disks didn’t show up when I issued the zpool import command. I didn’t think much of this, since it’s not a production pool and the data isn’t tremendously important; I’d just rather not have to redownload a few terabytes. I wanted to grab about 60 GB off of it, but it was transferring at an abysmal 2 MB/s. I thought this could be because I was doing it over sftp, but top showed ssh using minimal CPU while the io-wait category sat at a whopping 75%. It’s worth noting that at this point I was running a fully updated Fedora server with ZFS on Linux.
I shut it down to deal with it later. When I turned it back on, the import failed to find one mirror pair. After trying many different things, I returned to FreeNAS. Here is the result of zpool import:
Code:
#zpool import -f -m -F -n
   pool: poolparty
     id: 3245950555948954969
  state: UNAVAIL
 status: The pool was last accessed by another system.
 action: The pool cannot be imported due to damaged devices or data.
    see: http://illumos.org/msg/ZFS-8000-EY
 config:

        poolparty                                       UNAVAIL  missing device
          gptid/8e64ca66-64e0-11e5-bd8b-0015176425ac    ONLINE   (ada0)
          da5p2                                         ONLINE
          mirror-2                                      DEGRADED
            10757988070345942881                        UNAVAIL  cannot open
            gptid/923a8769-64e0-11e5-bd8b-0015176425ac  ONLINE   (da4)
          mirror-4                                      ONLINE
            da2p2                                       ONLINE
            da3p2                                       ONLINE
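Next on my list to try (unless you guys say otherwise) is a non-destructive rescan that points import at an explicit device directory. This is just a sketch of the idea, not something I've run yet:

```shell
# Sketch: make import scan labels from the gptid device directory explicitly,
# and import read-only so nothing on the disks gets written.
# -d : directory to search for devices/labels
# -f : force, since the pool was last used by another system (the Fedora box)
zpool import -d /dev/gptid -f -o readonly=on poolparty
```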
Well, not so OK. To make sure the array drives hadn’t been swapped around in the move, I examined the rest of the drives with zdb, looking for any with the right pool name and id. These two are the only ones that turn up anything related. Some of the other drives had previously been used in a raidz array and, as I found out, iops aren’t so great with that. I have a creeping feeling I may have given away a couple of the wrong disks. Hopefully not a huge deal, as the array imported before.
Code:
#zdb -l /dev/da0p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'poolparty'
    state: 0
    txg: 3034666
    pool_guid: 3245950555948954969
    errata: 0
    hostname: 'mememachine.kewryan.com'
    top_guid: 16062381461016655633
    guid: 16062381461016655633
    hole_array[0]: 3
    vdev_children: 5
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 16062381461016655633
        path: '/dev/disk/by-uuid/3245950555948954969'
        whole_disk: 1
        metaslab_array: 39
        metaslab_shift: 33
        ashift: 12
        asize: 998051414016
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
<snip redundancies for brevity>

#zdb -l /dev/da1p2
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'poolparty'
    state: 0
    txg: 3031861
    pool_guid: 3245950555948954969
    hostid: 2283479323
    hostname: ''
    top_guid: 8003677620862493554
    guid: 11734464301926509782
    hole_array[0]: 3
    vdev_children: 5
    vdev_tree:
        type: 'mirror'
        id: 1
        guid: 8003677620862493554
        metaslab_array: 37
        metaslab_shift: 33
        ashift: 12
        asize: 998052462592
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 11734464301926509782
            path: '/dev/gptid/8fba9f8d-64e0-11e5-bd8b-0015176425ac'
            whole_disk: 1
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 3389215055776677682
            path: '/dev/gptid/9090d808-64e0-11e5-bd8b-0015176425ac'
            whole_disk: 1
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
<snip redundancies for brevity>
This is the point where I’m at a loss for where to go. Why does the import not include those two other drives? What more information can I provide to help you guys?