SOLVED FreeNAS 9.3 FC Fibre Channel Target Mode DAS & SAN

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
having one large zvolume available (somehow, if it's safe from corruption) to all physical hosts
It requires a clustered file system to be running on top of that LUN. In the case of VMware, that is VMFS. What Proxmox can do about it is a question for them.
 
Joined
Apr 26, 2015
Messages
320
Hello,

I am trying to understand if and how it's possible to use FreeNAS with this FC configuration to provide storage to multiple Proxmox PVE hosts (or any host for the sake of discussion) without worrying about data corruption.

My configuration is as follow:

FreeNAS Setup:
1. Dell PowerEdge 2950 Server, Qlogic QLE2462 (qty 2)
2. Dell MD1000
3. Two zvolumes, mapped to two separate iSCSI extents.

Host setup:
1. HP Server with Qlogic QLE2462.

I have connected Server (A) point-to-point, followed the instructions at the top of this post, and I have working FC storage on my host from FreeNAS. Great! Now I want to set up Server (B), same configuration, and provide storage via FreeNAS / FC. This is where I am confused.

Server A and Server B both see the same iSCSI target when I go into the QLogic BIOS settings. Also, I notice that the QLogic HBA sees each extent as a LUN. I am unsure how to provide LUN1 to Server (A) and LUN2 to Server (B) when everything is visible to both.

Doesn't that create a risk of file corruption, even if I only mount one LUN (i.e. LUN1) on one physical host (i.e. Server A)? My end goal is to be able to set up separate zvolumes for each server, all on the same FreeNAS box.

Side question to that last statement: assuming there is a way to accomplish all this, would I be better off carving out separate zvolumes for each physical host, or having one large zvolume available (somehow, if it's safe from corruption) to all physical hosts?

Thank you very much in advance!

In my mind, using something like the OnStor NAS manager/controllers seems to be what you want. You would simply create one large storage pool on each of your FreeNAS servers then manage them all on one aggregator.

I used to use BlueArc and OnStor aggregators which allow you to connect all kinds of storage then manage it all by LUN. You can keep adding storage up to the license or physical hardware limit. Maybe an open source solution is out there without any limits too.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Experts,

I have been interested in conducting my own FC experiment, and now that I see you guys are having success, I'm thinking about giving it a try. After reading all your posts, I have some final questions and want to confirm some hardware choices:

1. I read you can only create 1 target as of 9.3. Has that changed in 9.3.1?

2. Can you use the iSCSI service on the same system running FC? (I'm already assuming you can't point the two services at the same extent)

3. I also assume multiple initiators can connect (R/W) to the same LUN (example: multiple VMware ESXi hosts connected to a LUN for a datastore)

4. Did you need to update the QLogic HBAs to a certain firmware version like P20 for SAS HBAs?

I understand the recommended HBA to use is the QLogic 2462; is this still the case?

Lastly, I am considering the dual port Qlogic-2462 for ESX hosts and FreeNAS host. I will be putting a switch in between. Has anyone had any concerns with the queuing of multiple datastores to one target? I ask because my environment has four datastores for VMware (two live on one pool, and two live on another)

Thanks for any input; looking forward to following in your footsteps. If this needs to be a new post and not a reply in this one, let me know.

FreeNAS 9.3-Stable 64-bit
FreeNAS Platform:
SuperMicro 826 (X8DTN+)
(x2) Intel(R) Xeon(R) CPU E5200 @ 2.27GHz
72GB RAM ECC (Always!)
APC3000 UPS (Always!)
Intel Pro 1000 (integrated) for CIFS
Two Intel Pro 1000 PT/MT Dual Port Card (Four total ports for iSCSI)
Two SLOGS (one for each iSCSI pool) - Intel 3500 SSD
IBM M1015 (IT Mode) HBA (Port 0) -> BPN-SAS2-826EL1 (12 port backplane with expander)
IBM M1015 (IT Mode) HBA (Port 0) -> SFF-8088 connected -> HP MSA70 3G 25 bay drive enclosure
HP SAS HBA (Port 0) -> SFF-8088 connected -> HP DS2700 6G 25 bay drive enclosure
Pool1 (VM Datastore) -> 24x 3G 146GB 10K SAS into 12 vDev Mirrors
Pool2 (VM Datastore) -> 12x 6G 300GB 10K SAS into 6 vDev Mirrors
Pool3 (Media Storage) -> 8x 3G 2TB 7200 SATA into 1vDev[Z2]
Network Infrastructure:
Cisco SG200-26 (26 Port Gigabit Switch)

Four separate vLANs/subnets for iSCSI
  • em2 - x.x.101.7/24
  • em3 - x.x.102.7/24
  • em0 - x.x.103.7/24
  • em1 - x.x.104.7/24
Separate vLAN/subnet for CIFS (Always!)
  • itb - x.x.0.7/24
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
1. I read you can only create 1 target as of 9.3. Has that changed in 9.3.1?
Actually, there is now FreeNAS 9.10. But no, nothing has changed on this front. There can be multiple LUNs, but only one target. More functionality is available in TrueNAS.

2. Can you use the iSCSI service on the same system running FC? (I'm already assuming you can't point the two services at the same extent)
Yes, you can. In fact, all iSCSI extents you create are automatically shared via FC as well.

3. I also assume multiple initiators can connect (R/W) to the same LUN (example: multiple VMware ESXi hosts connected to a LUN for a datastore)
Yes, you may connect as many initiators as you like, but they should use some clustered file system (such as VMware's VMFS).

4. Did you need to update the QLogic HBAs to a certain firmware version like P20 for SAS HBAs?
For all cards except the 16Gbps ones, FreeNAS automatically uploads the firmware bundled with it, so the version flashed onto the card is not important.

I understand the recommended HBA to use is the QLogic 2462; is this still the case?
The only benefit of the 24xx is its price; otherwise it is mostly the lowest entry point (though theoretically 22xx and 23xx cards should also work). I would recommend the more modern 8Gbps 25xx cards now. The newest 16Gbps 26xx cards should also work, but besides their price they are still somewhat new and experimental.
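
As a quick sanity check after setup, the FC side can be inspected from the FreeNAS shell with CTL's own tools. This is only a rough sketch; port numbers, WWNs and the firmware message will differ on every system.

Code:
# FC target ports show up as frontend "camtgt" (ispX), with the WWPNs of logged-in initiators:
[root@freenas] ~# ctladm portlist -i
# Every extent exported by CTL, i.e. what the FC initiators will see as LUNs:
[root@freenas] ~# ctladm devlist -v
# The isp(4) firmware version the driver uploaded to the HBA:
[root@freenas] ~# dmesg | grep -i isp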
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
aran kaspar / mav@,

Gentlemen,

Thanks for taking your time to post this information and for answering questions.

I followed the guide and was amazed at the lack of effort needed to make this work. It was a flawless victory, and I didn't even have to touch my ESX hosts to set up the datastores. They just appeared with everything intact (since they were previously accessed with iSCSI). What fun.

Edited: Removed a question I already asked above. LUNs can be presented over both iSCSI and FC at the same time.
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
The only benefit of the 24xx is its price; otherwise it is mostly the lowest entry point (though theoretically 22xx and 23xx cards should also work). I would recommend the more modern 8Gbps 25xx cards now. The newest 16Gbps 26xx cards should also work, but besides their price they are still somewhat new and experimental.

Hello mav@, I'm looking for recommended FC hardware too, but I'm not sure about the 26xx cards. There are the 2670 and 2690 models, and I'm not sure what the difference is.

And for Christ's sake, there are already 32Gbps FC cards, the 2700 series! :eek:

Finally only QLogic is recommended today on FreeNAS, right?

Thanks in advance,
V.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
Hello mav@, I'm looking for recommended FC hardware too, but I'm not sure about the 26xx cards. There are the 2670 and 2690 models, and I'm not sure what the difference is.
16Gbps cards (at least 2670) should work in FC mode now in FreeNAS 9.10. FCoE mode is still not supported, even though it should not be very complicated.

And for christ sake, there's already 32Gbps FC cards, the 2700 series! :eek:
Do you wish to donate a couple of those cards (~$1500 each) and sponsor a few weeks/months of development? If not, then all questions go to QLogic.

Finally only QLogic is recommended today on FreeNAS, right?
At least I don't know of any other drivers. The market is very small, so alternatives are limited by definition.
 

viniciusferrao

Contributor
Joined
Mar 30, 2013
Messages
192
16Gbps cards (at least 2670) should work in FC mode now in FreeNAS 9.10. FCoE mode is still not supported, even though it should not be very complicated.


Do you wish to donate a couple of those cards (~$1500 each) and sponsor a few weeks/months of development? If not, then all questions go to QLogic.


At least I don't know of any other drivers. The market is very small, so alternatives are limited by definition.

Thanks mav@. I was surprised about the 32Gb FC. I didn't even know that 16Gbps was already out. That's my surprise!

So you'll be looking at the 2670 models and skipping the 2690s.

Thanks,
V.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
So you'll be looking at the 2670 models and skipping the 2690s.
I have a couple of 2670s in our lab and they are working in FC mode. About the 2690 I cannot say anything; it may require some more driver updates.
 

Shi Pik

Cadet
Joined
Sep 21, 2016
Messages
1
Thanks for the manual. It works. But...

The question:
How to assign particular LUN to particular host?

I've tried the Sharing -> Block (iSCSI) -> Initiators tab, but it didn't work... All hosts still have access to all LUNs.
I've configured only one initiator. In the Initiators field I tried putting the IP and the WWNs, but nothing worked.

Here is my setup:
Two single port HBAs in target mode (works fine).
Two target LUNs: "fc-test-nl" and "tsm-ssd-pool".
Two hosts with two single port HBAs each.
Host which should have access (host 1): 500110a00017000e, 500143800630fec2
Host which should NOT have access (host 2): 210000e08b1c6349, 210000e08b1c434c

Some output from ctladm:
Code:
[root@freenas] ~# ctladm portlist -i
Port Online Frontend Name     pp vp
0    YES    tpc      tpc      0  0 
1    NO     camsim   camsim   0  0  naa.5000000177157b02
  Target: naa.5000000177157b00
2    YES    ioctl    ioctl    0  0 
3    YES    camtgt   isp0     0  0  naa.21000024ff06dfd9
  Target: naa.20000024ff06dfd9
  Initiator 0: naa.210000e08b1c434c
  Initiator 1: naa.500143800630fec2
4    YES    camtgt   isp1     0  0  naa.21000024ff06dfae
  Target: naa.20000024ff06dfae
  Initiator 0: naa.210000e08b1c6349
  Initiator 1: naa.500110a00017000e
5    YES    iscsi    iscsi    257 1  iqn.2005-10.org.freenas.ctl:fc-test-nl,t,0x0101
  Target: iqn.2005-10.org.freenas.ctl:fc-test-nl
6    YES    iscsi    iscsi    257 2  iqn.2005-10.org.freenas.ctl:tsm-ssd-pool,t,0x0101
  Target: iqn.2005-10.org.freenas.ctl:tsm-ssd-pool


Ports 3 and 4 are the ones of concern. As you can see, both hosts have access to both targets.
What I need is LUN-host mapping.

Thanks in advance!
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
The question:
How to assign particular LUN to particular host?

What I need is LUN-host mapping.

LUN-to-host mapping is not supported in FreeBSD/FreeNAS at all right now. LUN-to-target-port mapping in FreeNAS can be done from the command line via the ctladm tool. A UI for that is one of the TrueNAS features.
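
Roughly, it looks like this from the shell (a sketch based on ctladm(8); the assumption here is that -p is the CTL port number, -l the LUN ID shown to the initiator, -L the CTL LUN, and that once a port has an explicit map only the mapped LUNs remain visible on it). Note that this maps LUNs to target ports, not to hosts, so it only isolates hosts when each host is wired to its own port, and FreeNAS does not persist it across reboots:

Code:
# Port numbers come from "ctladm portlist -i", CTL LUN IDs from "ctladm devlist"
# Expose only CTL LUN 0 (as LUN 0) on port 3:
[root@freenas] ~# ctladm lunmap -p 3 -l 0 -L 0
# Expose only CTL LUN 1 (as LUN 0) on port 4:
[root@freenas] ~# ctladm lunmap -p 4 -l 0 -L 1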
 

Hoeser

Dabbler
Joined
Sep 23, 2016
Messages
23
Sorry to resurrect this thread - but I'm currently working on deploying this setup at home on the latest stable (FreeNAS-9.10.1 (d989edd)).

My current setup is very basic, a FreeNAS box with a QLE2562 and an ESXi 5.5 host with a QLE2562 as well. I followed the guide on page 2 of this thread.

I am running into an issue where LUN IDs get renumbered after rebooting FreeNAS, and ESXi doesn't like this at all. I've also noticed I don't even need to create extent/target associations, and when I do, they are ignored. As soon as I add a device or file extent, the storage is immediately visible on the ESX side, without any association. I do not know if this is intended behavior or not.

I really don't need anything fancy here - I only intend on ever running two hosts and one FreeNAS system. I'm fine with all LUNs being presented to both targets, but I do need to control the LUN IDs.

Hopefully an expert can shed a little more light on this subject for me.
-DC
 

Hoeser

Dabbler
Joined
Sep 23, 2016
Messages
23
I've been poking and prodding this setup for a few hours now, and I found what appears to work - but I'm not sure how to effectively implement it.

I still can't figure out why the extents just show up without any association being specified, but I can force a LUN ID to be consistent if I use the 'ctl-lun <lunid>' option in ctl.conf on the lun definition. I've tested this, but if I make any changes in the GUI (even cycling iSCSI off/on) it regenerates /etc/ctl.conf and hoses the setup.

I guess I'm wondering how to get an extended attribute such as "ctl-lun" added to the extent definition in the GUI so that it is not deleted each time the config is regenerated..?
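
For reference, the kind of lun definition being described looks roughly like this in ctl.conf(5) syntax (the target name, zvol path and IDs are made-up examples):

Code:
target iqn.2005-10.org.freenas.ctl:esx-store1 {
        portal-group pg1
        lun 0 {
                path /dev/zvol/tank/esx-store1
                blocksize 512
                # pin this extent to CTL LUN 5 so initiators keep seeing the same ID
                ctl-lun 5
        }
}


Since the middleware rewrites /etc/ctl.conf, a hand-edited version only survives if it lives in a separate file that ctld is started against instead of the generated one (the approach described a couple of posts below).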
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
The mentioned LUN reordering problem is not new; it is known. It is a result of Fibre Channel never being officially supported on FreeNAS and working just "by coincidence", since all the required technologies are present in FreeNAS from FreeBSD. In TrueNAS, where FC is a first-class citizen, this problem is solved -- LUN IDs can be mapped explicitly for each target port. The ctl-lun property you've mentioned is indeed related to the problem, but the proper solution is different (see ctladm lunmap). As for setting the ctl-lun property, we may set it in the config at some point soon, but for a different purpose.
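
For completeness, pinning an ID that way looks something like this (port and LUN numbers are examples only, and since FreeNAS does not persist the mapping it has to be re-applied after every reboot, e.g. from a post-init script):

Code:
# Map CTL LUNs 0 and 1 to fixed LUN IDs 5 and 6 on FC port 3 (numbers from portlist/devlist):
[root@freenas] ~# ctladm lunmap -p 3 -l 5 -L 0
[root@freenas] ~# ctladm lunmap -p 3 -l 6 -L 1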
 

Hoeser

Dabbler
Joined
Sep 23, 2016
Messages
23
I've arrived at a reasonable solution by disabling iSCSI in the services UI and simply launching ctld at boot with a config file kept somewhere else. Now I'm onto other problems, which I guess are worth posting:

FreeNAS Side:

Code:
Sep 27 17:23:17 freenas isp0: isp_target_start_ctio: [0x1243e4] data overflow by 524288 bytes
Sep 27 17:23:17 freenas isp0: isp_target_start_ctio: [0x124474] data overflow by 524288 bytes
Sep 27 17:23:17 freenas isp0: isp_target_start_ctio: [0x1244a4] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x124e34] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x124e64] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x1257c4] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x1257f4] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x125854] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x125884] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x126124] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x126184] data overflow by 524288 bytes
Sep 27 17:23:18 freenas isp0: isp_target_start_ctio: [0x1261b4] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11af34] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b114] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b174] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b1a4] data overflow by 524288 bytes
Sep 27 17:23:31 freenas isp0: isp_target_start_ctio: [0x11b1d4] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121114] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121144] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121b04] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x121b64] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x1228b4] data overflow by 524288 bytes
Sep 27 17:23:57 freenas isp0: isp_target_start_ctio: [0x1228e4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11b684] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11b6e4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11b744] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11c074] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11c0a4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11d094] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11e474] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11e4d4] data overflow by 524288 bytes
Sep 27 17:24:14 freenas isp0: isp_target_start_ctio: [0x11e504] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12a984] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12a9e4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12aa14] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12bc74] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12bca4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12bd04] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12c6c4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12c724] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12e0d4] data overflow by 524288 bytes
Sep 27 17:25:10 freenas isp0: isp_target_start_ctio: [0x12e104] data overflow by 524288 bytes
Sep 27 17:25:23 freenas isp0: isp_target_start_ctio: [0x1224c4] data overflow by 524288 bytes
Sep 27 17:25:23 freenas isp0: isp_target_start_ctio: [0x1224f4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x127414] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x127474] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1274a4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x128554] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1285b4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1285e4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x1297b4] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x12a144] data overflow by 524288 bytes
Sep 27 17:25:28 freenas isp0: isp_target_start_ctio: [0x12a1a4] data overflow by 524288 bytes


ESXi Side

Code:
2016-09-27T21:23:57.117Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x4136833a8080) 0x8a, CmdSN 0xbc from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:23:57.121Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x413683972ec0) 0x8a, CmdSN 0xd1 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:23:57.260Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x413683860300) 0x8a, CmdSN 0xd6 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:23:57.260Z cpu4:33578)ScsiDeviceIO: 2338: Cmd(0x41368405cbc0) 0x8a, CmdSN 0xf9 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:00.019Z cpu2:34302)World: 14302: VC opID hostd-b3af maps to vmkernel opID a4d908dd
2016-09-27T21:24:14.409Z cpu8:36432)WARNING: iodm: vmk_IodmEvent:193: vmhba0: FRAME DROP event has been observed 30 times in the last one minute. This suggests a problem with Fibre Channel link/switch!.
2016-09-27T21:24:14.410Z cpu1:33578)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6589cfc00000079083036d56cfd0cc88" state in doubt; requested fast path state update...
2016-09-27T21:24:14.410Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413685316800) 0x8a, CmdSN 0xe4 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.410Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413682312380) 0x8a, CmdSN 0x9d from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.410Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413683361c40) 0x8a, CmdSN 0x78 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.452Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413684d56ac0) 0x8a, CmdSN 0xc3 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.452Z cpu1:33578)NMP: nmp_ThrottleLogForDevice:2322: Cmd 0x8a (0x413684aeb380, 52422) to dev "naa.6589cfc00000079083036d56cfd0cc88" on path "vmhba0:C0:T0:L5" Failed: H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0. Act:EVAL
2016-09-27T21:24:14.452Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413684aeb380) 0x8a, CmdSN 0x92 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.705Z cpu2:33578)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6589cfc00000079083036d56cfd0cc88" state in doubt; requested fast path state update...
2016-09-27T21:24:14.936Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413682abe340) 0x8a, CmdSN 0x68 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.936Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413680416480) 0x8a, CmdSN 0xce from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:14.936Z cpu1:33578)ScsiDeviceIO: 2338: Cmd(0x413682896bc0) 0x8a, CmdSN 0x99 from world 52422 to dev "naa.6589cfc00000079083036d56cfd0cc88" failed H:0x7 D:0x2 P:0x0 Possible sense data: 0x2 0x4b 0x0.
2016-09-27T21:24:15.360Z cpu1:33578)WARNING: ScsiDeviceIO: 1223: Device naa.6589cfc00000079083036d56cfd0cc88 performance has deteriorated. I/O latency increased from average value of 3623 microseconds to 153093 microseconds.
2016-09-27T21:24:20.019Z cpu1:33986)World: 14302: VC opID hostd-89ef maps to vmkernel opID d2d6bcd3



It's interesting. It's not fatal, and it persists for up to a couple of hours. It seems to vary which VM triggers this, but it's always related to I/O activity - such as extracting files from a large archive.
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
It's interesting. It's not fatal, and it persists for up to a couple of hours. It seems to vary which VM triggers this, but it's always related to I/O activity - such as extracting files from a large archive.

It looks interesting, but without more data I am not sure which side caused it. Diagnosing it requires much more input, in particular what commands were executed, what data were sent over the link, etc. In the iSCSI case I would ask you to run tcpdump, but for FC there is no equivalent.
 

Hoeser

Dabbler
Joined
Sep 23, 2016
Messages
23
It looks interesting, but without more data I am not sure which side caused it. Diagnosing it requires much more input, in particular what commands were executed, what data were sent over the link, etc. In the iSCSI case I would ask you to run tcpdump, but for FC there is no equivalent.

I suspect the FreeNAS box. I have tried a few things on the ESXi side - I've swapped the HBA for an Emulex with some older firmware, then the latest firmware, then swapped the QLogic back in after updating it as well. So I've tried two different cards with two different firmwares each, with the same results. The FreeNAS box has a QLogic with the latest firmware. I am going to try updating ESXi to the latest patch set for 5.5 next.
 

Ron Watkins

Dabbler
Joined
Oct 27, 2016
Messages
13
I'm completely new to FreeNAS and have never installed/used it (yet).
Question...
We want to use FreeNAS to create multiple LUNs and present those LUNs to multiple hosts via 8Gbit FC.
One quirk is that several of these LUNs are to be used as RAW devices under Linux to host a clustered database.
Thus, these LUNs would be presented to multiple hosts; each host would create a raw device on each LUN, and the database engine would run on multiple hosts, each accessing the raw devices in a clustered configuration.
Does FreeNAS support:
1) 2x 8GBit FC ports (QLE 2562)?
2) Does FreeNAS allow a "LUN" to be mapped/presented to multiple hosts at the same time through the FC ports?
Our goal is to get around 2GByte/sec throughput using 2 FreeNAS servers, each with dual 8GBit FC cards (1.6GByte/sec per pair of 8GBit ports).
We are using a set of 48x 2.5in enterprise SSD drives, 25 drives in each FreeNAS box.
 

Ron Watkins

Dabbler
Joined
Oct 27, 2016
Messages
13
Thanks. Are there any walkthroughs or tutorials you can point me to? I'm a newbie to FreeNAS.
We have 30TB of SSD we want to build into a SAN array to be used for a high-performance application.
I tried contacting iXsystems; they turned down the project because we already have the SSD drives and they won't warranty any "non-supplied" equipment.
So I'm pretty much on my own for this and am looking for how to set up an FC SAN using FreeNAS.
 