ASUS PIKE 2208 problem

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
Hi there! I recently set up FreeNAS at home, and I keep hitting a problem: disks connected through an ASUS PIKE 2208 RAID controller periodically drop off. The logs say:

Sep 18 12:29:02 rpc-nas (da1:(da0:mrsas0:0:mrsas0:0:1:0:0): 0): Invalidating pack
Sep 18 12:29:02 rpc-nas Invalidating pack
Sep 18 12:29:02 rpc-nas da1 at mrsas0 bus 0 scbus0 target 1 lun 0
Sep 18 12:29:02 rpc-nas da1: <LSI MRROMB 3.24> s/n 0047d83c15288efd2400ed06c062ea00 detached
Sep 18 12:29:02 rpc-nas da0 at mrsas0 bus 0 scbus0 target 0 lun 0
Sep 18 12:29:02 rpc-nas da0: <LSI MRROMB 3.24> s/n 00ec3a7fce02e9e52400ed06c062ea00 detached
Sep 18 12:32:05 rpc-nas mrsas0: Internal command timed out after 180 seconds.
Sep 18 12:32:05 rpc-nas mrsas0: DCMD timed out after 180 seconds from mrsas_issue_blocked_cmd
Sep 18 12:32:05 rpc-nas mrsas0: DCMD opcode 0x1101000
Sep 18 12:32:18 rpc-nas devd: notify_clients: send() failed; dropping unresponsive client
Sep 18 12:32:42 rpc-nas zfsd: Consumer::EventsPending(): POLLHUP detected on devd socket.

The last two lines repeat, and the system hangs until I force a reboot.

Any hints as to what the problem could be?

Hardware:
MB: ASUS Z8PE-D18
RAM: 98 GB ECC
RAID: ASUS PIKE 2208
Disks: KingSpec 720 GB SSDs (mirror)

Each disk is exposed by the RAID controller as a single-disk RAID 0 virtual drive. Because of this the controller does not pass TRIM through, which makes the SSDs slow; I plan to work around it with over-provisioning, i.e. reserving unpartitioned space.
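To illustrate what I mean by reserving space (the device name and sizes below are examples, not from my box):

Code:
# Check how the kernel deletes blocks on the drive; "DISABLE" or "NONE"
# here means TRIM is not being passed through (da0 is an example device)
sysctl kern.cam.da.0.delete_method
# Over-provision by partitioning only part of the disk and leaving
# the rest unallocated (sizes are examples for a 720 GB SSD)
gpart create -s gpt da0
gpart add -t freebsd-zfs -s 640G da0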
 
Joined
Oct 18, 2018
Messages
969
Hi @bormental, sorry you're having issues.

You may already know this, but using RAID controllers with FreeNAS is generally seen as a bad idea. At best they obscure important information needed to monitor disk health, such as what smartctl reports; at worst they interfere with how the system communicates with the device, add their own layer of indirection, and when the RAID controller fails you can be unable to recover your pool even from perfectly good disks. FreeNAS uses ZFS, which wants direct, unmodified access to the disks.
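That said, smartctl can sometimes reach disks behind MegaRAID-family controllers with its -d megaraid option; whether it works depends on the driver, and the device numbers below are examples only:

Code:
# Query SMART data for the physical disk at MegaRAID device ID 0
# behind /dev/da0 (both numbers are examples, not from this system)
smartctl -a -d megaraid,0 /dev/da0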

The advice around these forums with respect to RAID controllers is to either flash them to IT mode, not use the controller, or not use FreeNAS, because running hardware RAID under FreeNAS adds significant risk.
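For reference, on common LSI SAS2008-family cards the IT-mode flash usually looks roughly like this; whether the PIKE's SAS2208 chip supports it at all is exactly what you'd have to research, and the firmware file name is an example:

Code:
# From an EFI shell or DOS boot disk, with the IT firmware for the card:
sas2flash -listall          # identify the controller
sas2flash -o -e 6           # erase the existing (IR/RAID) flash
sas2flash -o -f 2118it.bin  # write the IT-mode firmware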

For your specific case I can't say what is wrong. I would suggest backing up your data and either flashing your controller to IT mode or picking up a used HBA off eBay to replace it. The time and little bit of money you'll spend doing this will be well repaid by a more stable system.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
Hi @bormental, sorry you're having issues.

... you'll spend doing this will be well repaid by a more stable system.

Thanks for chiming in. Yes, I'm aware of the problems with hardware RAID, but for now I have no other way to attach this many disks. The PIKE does allow various per-disk configurations, including passing each disk through individually. But smartctl doesn't work anyway, even though the partitioning stays the same with and without the controller. I'll experiment with the hardware. Where can I read about IT mode? Thanks.
 
Joined
Oct 18, 2018
Messages
969
Yes, I'm aware of the problems with hardware RAID, but for now I have no other way to attach this many disks.
How many drives do you have? Your board has plenty of SATA ports; you could likely get by with a $50-60 HBA to give you access to all the drives you need. If that isn't enough, another $30-40 gets you a SAS expander, which dramatically increases the possible drive count.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
How many drives do you have? Your board has plenty of SATA ports; you could likely get by with a $50-60 HBA to give you access to all the drives you need. If that isn't enough, another $30-40 gets you a SAS expander, which dramatically increases the possible drive count.
About 20 are planned. There are 14 SATA connectors on the board, but only 6 work without the PIKE; the other 8 work only when the controller is installed.

PCIe NVMe is planned for L2ARC and ZIL. The machine will host storage plus 10-15 virtual machines.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
How many drives do you have? Your board has plenty of SATA ports; you could likely get by with a $50-60 HBA to give you access to all the drives you need. If that isn't enough, another $30-40 gets you a SAS expander, which dramatically increases the possible drive count.
The ASUS PIKE is essentially just an HBA working over PCIe; only the connector is proprietary :smile:
 
Joined
Oct 18, 2018
Messages
969
About 20 are planned. There are 14 SATA connectors on the board, but only 6 work without the PIKE; the other 8 work only when the controller is installed.
You could go with a traditional HBA, then, to drive those extra disks. The advantage is that you'd be using an IT-mode HBA rather than any kind of hardware RAID. Though I admit I am unfamiliar with the ASUS PIKE.

PCIe NVMe is planned for L2ARC and ZIL. The machine will host storage plus 10-15 virtual machines.
NVMe is a great choice for the ZIL; I'm not 100% sure it's as important to have something that fast for the L2ARC. I only mention it in case you're low on PCIe slots when all is said and done. Another option is a PCIe-to-dual-M.2 adapter in an x8 slot, if that would save you a PCIe slot.
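If it helps, attaching those devices to an existing pool is a one-liner each; the pool and device names below are placeholders, not from your system:

Code:
# Add one NVMe device as a dedicated ZIL (SLOG) and another as L2ARC;
# "tank", nvd0 and nvd1 are example names only
zpool add tank log nvd0
zpool add tank cache nvd1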

Like I said, I'm not super familiar with your specific hardware, so take my advice with a grain of salt. The biggest thing I would suggest is to do whatever you can to avoid hardware RAID and go with an HBA in IT mode.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
You could go with a traditional HBA, then, to drive those extra disks. The advantage is that you'd be using an IT-mode HBA rather than any kind of hardware RAID. Though I admit I am unfamiliar with the ASUS PIKE.


NVMe is a great choice for the ZIL; I'm not 100% sure it's as important to have something that fast for the L2ARC. I only mention it in case you're low on PCIe slots when all is said and done. Another option is a PCIe-to-dual-M.2 adapter in an x8 slot, if that would save you a PCIe slot.

Like I said, I'm not super familiar with your specific hardware, so take my advice with a grain of salt. The biggest thing I would suggest is to do whatever you can to avoid hardware RAID and go with an HBA in IT mode.

Seems the problem was hardware: I swapped the SATA cables, and so far there are no glitches.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
But I still need to reflash the PIKE anyway; in the meantime the disks are running on the host SATA ports. I'll search for firmware.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
Enabled JBOD mode, and everything is all right: the drives are interchangeable with those on an HBA controller, TRIM works, SMART works.
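For anyone wanting to verify the same things, commands along these lines will confirm it (da0 is an example device):

Code:
# A real ATA_TRIM/UNMAP value here means TRIM is passed through
sysctl kern.cam.da.0.delete_method
# SMART data readable directly, no megaraid passthrough needed
smartctl -a /dev/da0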
 
Joined
Oct 18, 2018
Messages
969
Hi @jotauve. I don't know how to do it off-hand, but I bet a quick search will turn something up. Did you get a chance to google "ASUS PIKE 2108 flash to IT mode" or similar? If so, did nothing come up?
 

jotauve

Cadet
Joined
Oct 28, 2019
Messages
9
Hi @jotauve. I don't know how to do it off-hand, but I bet a quick search will turn something up. Did you get a chance to google "ASUS PIKE 2108 flash to IT mode" or similar? If so, did nothing come up?

Yes, I've done that search, but all the posts refer to the PIKE 2008, not the 2108.
 
Joined
Oct 18, 2018
Messages
969
Ah, I see. I did a search with "2108" in quotes and found this PDF; not sure if it helps or not.
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
I have an ASUS server with a PIKE 2108 card. How can I flash it to IT mode? Thanks.

Checking JBOD support:
Code:
megacli -AdpGetProp enablejbod -aALL

Enabling JBOD mode:
Code:
megacli -AdpSetProp EnableJBOD 1 -aALL

The latest firmware version is on asus.com. JBOD mode is supported by the standard PIKE 2208 firmware, so nothing else needs flashing; for the 2108 you'd need to check.

man megacli
 

jotauve

Cadet
Joined
Oct 28, 2019
Messages
9
Checking JBOD support:
Code:
megacli -AdpGetProp enablejbod -aALL

Enabling JBOD mode:
Code:
megacli -AdpSetProp EnableJBOD 1 -aALL

The latest firmware version is on asus.com. JBOD mode is supported by the standard PIKE 2208 firmware, so nothing else needs flashing; for the 2108 you'd need to check.

man megacli

Thanks.

Using the latest firmware from ASUS:

megacli -AdpSetProp EnableJBOD 1 -a0 -> JBOD Enabled

megacli PDMakeJBOD -PhysDrv[32:5] -a0 -> Failed to change PD state (same error after reboot)

Is this the correct way?
 

bormental

Dabbler
Joined
Sep 18, 2019
Messages
14
Thanks.

Using the latest firmware from ASUS:

megacli -AdpSetProp EnableJBOD 1 -a0 -> JBOD Enabled

megacli PDMakeJBOD -PhysDrv[32:5] -a0 -> Failed to change PD state (same error after reboot)

Is this the correct way?

If you previously configured RAID on these disks you'll need to break that configuration down (save your data first). Then look at the state of the disks; you should be able to set them to Good with megacli -PDMakeGood -PhysDrv[x:y] -Force -a0.
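A rough end-to-end sequence might look like this; the enclosure:slot values are placeholders, and the first two commands destroy any existing RAID config on the adapter, so back up first:

Code:
# WARNING: deletes all virtual drives (RAID config) on adapter 0
megacli -CfgLdDel -LAll -a0
# Clear any leftover foreign configuration
megacli -CfgForeign -Clear -a0
# Mark the physical drive Good, then expose it as JBOD
# ([x:y] is enclosure:slot, e.g. [32:5] from your earlier post)
megacli -PDMakeGood -PhysDrv[x:y] -Force -a0
megacli -PDMakeJBOD -PhysDrv[x:y] -a0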
 

jotauve

Cadet
Joined
Oct 28, 2019
Messages
9
If you previously configured RAID on these disks you'll need to break that configuration down (save your data first). Then look at the state of the disks; you should be able to set them to Good with megacli -PDMakeGood -PhysDrv[x:y] -Force -a0.


Doesn't work :-(

Enabled JBOD
Code:
root@freenas[~]# MegaCli "-AdpSetProp -EnableJBOD 1 -a0"
Adapter 0: Set JBOD to Enable success.
Exit Code: 0x00

Make Good
Code:
root@freenas[~]# MegaCli "PDMakeGood PhysDrv[6:0] -Force -a0"
Adapter: 0: Failed to change PD state at EnclId-6 SlotId-0.
Exit Code: 0x01

Make JBOD
Code:
root@freenas[~]# MegaCli "PDMakeJBOD PhysDrv[6:0] -a0"       
Adapter: 0: Failed to change PD state at EnclId-6 SlotId-0.
Exit Code: 0x01


Attached dmesg and adapter info.
 

Attachments

  • adapter.txt (9.9 KB)
  • dmesg.txt (36.4 KB)