FC target support in FreeNAS 9.1.0

aran kaspar (Explorer)
Special thanks to mav@ and cyberjock, who helped me immensely in completing this. Thanks for your posts/replies on the forums.
My ESXi/FreeNAS lab is complete. Thank you guys!!

This can be used as Direct Attached Storage in point-to-point or arbitrated-loop topology (2-3 nodes).
You should also be able to use it in a SAN environment with a fabric.

Here is a short guide for anyone who is looking to set this up.
----------------------------------------------------------------------------------------------------------------------------------------
FreeNAS Target side


1. Install the right version
2. Install your QLogic FC HBAs (QLogic is the only supported brand, to my knowledge.)
QLogic recommends manually setting the port speed in your HBA BIOS
(yes, reboot and press Alt+Q when prompted).
  • I'm using two QLogic QLE2462 HBA cards (1 for each server), 4 Gbps max.
3. Check FC link status after bootup
  • I have 2 ports on each of my cards, both cables plugged in (not required).
  • Check that the firmware/driver loaded for the card, shown by a solid port status light after full bootup.
  • On my QLE2462 a solid orange link light means 4 Gbps; check your HBA manual for color codes (see the quick check below).
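You can also confirm from the FreeNAS console that the driver attached to the HBA (a quick sanity check; isp(4) is the FreeBSD driver for QLogic cards):

  # look for the QLogic HBA in the boot messages
  dmesg | grep -i isp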
4. Add Tunables in "System" section (loader.conf equivalents are shown after this list for reference)
  • variable: ispfw_load, value: YES, type: Loader (loads HBA firmware)
  • variable: ctl_load, value: YES, type: Loader (loads the CTL target service)
  • variable: hint.isp.0.role, value: 0 (zero), type: Loader (target mode, FC port 1)
  • variable: hint.isp.1.role, value: 0 (zero), type: Loader (target mode, FC port 2)
  • variable: ctladm, value: port -o on -t fc, type: Loader (bind the ports)
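For reference, the first four tunables correspond to the following /boot/loader.conf lines (a sketch only; the FreeNAS GUI manages this file, so set them through System > Tunables rather than editing it by hand). Note that ctladm is a command rather than a true loader variable, which is presumably why the post-init task below is also needed.

  ispfw_load="YES"       # load the QLogic HBA firmware module
  ctl_load="YES"         # load the CAM Target Layer (CTL)
  hint.isp.0.role="0"    # put FC port 1 in target mode
  hint.isp.1.role="0"    # put FC port 2 in target mode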
[screenshot: TASKS.png]


Add a script in the "Tasks" section

  • Type: Command, Command: ctladm port -o on -t fc, When: Post Init
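After a reboot you can confirm that the FC ports came up in target mode from the FreeNAS shell (a quick check using the stock FreeBSD ctladm tool):

  # list CTL frontend ports; the isp ports should show as enabled
  ctladm port -l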
[screenshot: SCRIPT.png]


5. Enable iSCSI and then configure LUNs
Enable the iSCSI service and create the following...

Create a portal (do not select an IP; select 0.0.0.0)
Create an initiator (ALL, ALL)
Create a target (select your only portal and your only initiator) and give it a name (it doesn't much matter what)
Create an extent (a device extent is a physical disk; a file extent is a file on a ZFS volume of your choice). Research these!
Create an associated target (choose any LUN # from the list, and link the target and extent)

If creating a File extent...
Choose "File". Select a pool, dataset, or zvol from the drop-down tree,
then tack a slash onto the end of the path and type in the name of your file extent, to be created.
e.g. "Vol1/data/extents/CSV1"

If creating a Device extent...
Choose "Device" and select a zvol (it must be a zvol - not a dataset).
BE SURE TO SELECT "Disable Physical Block Size Reporting".
[ It took me days to figure out why I could not move my VMs' folders over to the new ESXi FC datastore... ]
[ They always failed halfway through, and it was due to the block size reported for the disk. Enabling this option fixed it. ]

REBOOT!
Now... sit back and relax - your Direct Attached Storage is set up as a target. The hard part is done.
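Once the box is back up, you can sanity-check the exported LUNs from the shell (again using the stock ctladm tool):

  # list the LUNs CTL is exporting
  ctladm lunlist
  # show the backing devices and their sizes
  ctladm devlist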
---------------------------------------------------------------------------------------------------------------------------------------------
ESXi Hypervisor Initiator side

1. Add to ESXi in vSphere
Go to Configuration > Storage Adapters, select your fibre card, and click Rescan All to check its availability.
If you don't see your card, make sure you have installed the drivers for it on ESXi. (A total PIA if done manually - google it.)

2. VMFS vs. raw SCSI
You can now use your FC extent as a VMFS datastore
(formatted with the VMware File System) so you can store multiple VM files and such.
Just "Add Storage" and use the Fibre Channel disk it found during the scan.

I was fine with this, but I figured that the fewer file systems involved between the server and the storage, the better. I could be wrong, but performance should theoretically take a hit for each additional file system. (Your input and experience are always welcome.)

If you want block-level access to the storage, you can use a pass-through method and present the LUN to a single VM as a raw SCSI hard disk. To my understanding this is much like connecting a SATA drive directly to the bus on a motherboard.
Unfortunately you can only present it to one VM using this method, but it does allow block-level access.

Using this method, I would rinse and repeat the steps above and dice up your FreeNAS zvols to make a LUN for each additional VM.

Adding a RAW disk in ESXi.
Edit Settings for your VM; when you add a new hard drive, you will see that the disk type "Raw Device Mappings" is no longer grayed out. Use this for your VM.
Remember it will be dedicated to only this VM guest.
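If you prefer the ESXi shell, an RDM pointer file can also be created by hand (a sketch; the naa.* device ID and datastore path below are placeholders for your own):

  # find the FC LUN's device identifier
  esxcli storage core device list
  # create a physical-mode RDM pointer on an existing datastore
  # (use -r instead of -z for virtual-mode RDM)
  vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXX /vmfs/volumes/datastore1/myvm/rdm-lun0.vmdk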

[screenshot: rdm2.jpg]


Multi-Port HBAs
Research MPIO for performance advantages and redundancy.
ESXi also has load balancing for VMFS datastores. I'm not entirely sure how advantageous this is, but feel free to experiment; I think you need extremely fast SSDs for it to matter. Create an ESXi datastore, right-click it, select "Properties...", and click "Manage Paths..." (see the sketch after this list for the shell equivalent).
  1. Change the "Path Selection" menu to Round Robin to load-balance with failover across both ports.
  2. Click the "Change" button and "OK" out of everything.
Good luck!

Please reply if this helped you! I've been trying to get this working for almost 6 months! Thanks again to everyone on the forum.

Tags: fiber channel, direct attached storage, storage area network, DAS, SAN
 

slushieken (Dabbler)
OMFG. I have been trying to get this running for over 6 months myself... Total PIA.

Your instructions are simply manna from heaven.

Everything worked as you detailed, even the dashes instead of the IP in portal creation. It showed red, but it still accepted it.

Now if I can get the other side running correctly - I am using XenServer 6.2 - I can tell you whether it worked end to end.

Thanks for the help so far. I will update this thread when I have completed some testing.
 

slushieken (Dabbler)
Holy frickin' crap! It showed up as an SR! I am formatting it now!

Somebody pinch me, I am dreaming!!

OK, I am a bit calmer now. It seems to be working as advertised. It shows up as HBA storage on my XenServer. I could just cry...

I have created a virtual disk now. I will install Windows to it and report back, but it looks great to me. Again, your instructions worked perfectly.
 

slushieken (Dabbler)
I was just this close to buying 2 quad-port NICs and aggregating them to get around all the FC troubles. This is so great now! I have 2 AMD systems I am using FC adapters on, and I am exporting the storage system to them. The 8-core AMD systems seemed great for running multiple desktops for the family, purpose-dedicated servers, etc., inexpensively.

Everything works great with the storage. I have created and wiped it out several times now. Performance generally seems to be very good. I have not done any tweaking at all yet, but I show 6.3 in the Windows 'performance measurement' for disk performance.

2 hints to add:

1. Creating your 'virtual disk' file for an extent - you do this through the extent creation page. It creates that file when you go through the extent creation process. This is not done by creating a zvol. If you try to create a zvol first and then point at it through the extent creation steps, you will find there is no file on the disk you can use.

2. I am not positive, but I am pretty sure I still had to issue the kldxref command to get the qla2xxx driver to load. Before that it would not seem to 'apply' the driver file even after adding those directives to the boot loader. The *.ko files in 9.3 are working great and there is no need to gather others, but I am pretty sure I still had to link them first. I remember using dmesg after reboots to see if it had done that properly.
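For anyone repeating this, the commands involved would look roughly like the following (a sketch; note that on FreeBSD the QLogic driver is isp(4)/ispfw rather than Linux's qla2xxx):

  # rebuild the kernel module index so the loader can find the .ko files
  kldxref /boot/kernel
  # after the next reboot, confirm the driver attached
  dmesg | grep -i isp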

I also have to remember to issue the ctladm command each time. Have you found a good way to automate that yet?
 

aran kaspar (Explorer)
slushieken said:
1. Creating your 'virtual disk' file for an extent - you do this through the extent creation page. [...] If you try to create a zvol first and then point at it through the extent creation steps, you will find there is no file on the disk you can use.
Hey there, regarding the file extent...

You have to specify the pool or dataset from the drop-down and ADD /extent_name to the path.

This is a weird quirk of FreeNAS, and it took me a while to figure out.

I'll update my instructions up top.
 

mav@ (iXsystems)
Instead of file extents, it is recommended to use zvol-based ones with FreeNAS 9.3 to get UNMAP support. Though I am not sure Xen supports it.
 

aran kaspar (Explorer)
slushieken said:
OMFG. I have been trying to get this running for over 6 months myself... Total PIA. [...] Now if I can get the other side running correctly - I am using XenServer 6.2 - I can tell you whether it worked end to end.

Took me 6 months as well!!!!
Glad to hear it helped someone!!!!
:D :D :D
There is an option for XenServer on the iSCSI extent creation page. Not sure if that's helpful to you, but it may be worthwhile to look into. It's a tick box.
 

slushieken (Dabbler)
Benches... I am using 5 2TB drives in RAIDZ, with 3 SATA II 200GB drives in a stripe as cache and 1 SATA II as log.

Over a 4Gb optical link.

[screenshot: speedtest1.JPG]


No tweaking at all so far. The cache is most likely warmed up by now - it's been a day or two.

It is more than usable as is; I may not bother to tweak it further as long as everything keeps working right.

Could anyone else post what you're seeing?

Anyone have any thoughts? Should this be better? Any suggestions, such as using only 1 SSD for cache instead?
 

aran kaspar (Explorer)
slushieken said:
Benches... I am using 5 2TB drives in RAIDZ, with 3 SATA II 200GB drives in a stripe as cache and 1 SATA II as log. Over a 4Gb optical link. [...] Anyone have any thoughts? Should this be better? Any suggestions, such as using only 1 SSD for cache instead?
I have been getting very similar speeds from both my pools.
Round 1: Z2, 4x 7.2K drives, file extent <- (not recommended)
Round 2: stripe, 2x 15K drives, file extent <- (not recommended)
I think round 2 was around 400+ in CrystalDiskMark.

I have not been able to figure out how to use a ZFS pool/vol as a device extent.
Show me that trick, please.
 

aran kaspar (Explorer)
mav@ said:
Instead of file extents, it is recommended to use zvol-based ones with FreeNAS 9.3 to get UNMAP support. Though I am not sure Xen supports it.
Use a zvol as a device extent instead of doing a file extent.
mav@ recommends it, and I stand by that.
 

mav@ (iXsystems)
aran kaspar said:
Have not been able to figure out how to use a ZFS pool/vol as a device extent. Show me that trick, please.
First go to Storage and create a zvol, then go to extent creation and you will see it in the list of devices for device extents.
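From the shell, the equivalent zvol creation would be (a sketch; the pool name, zvol name, and size are placeholders):

  # create a 100 GB zvol in pool "tank"; it then appears under
  # /dev/zvol/tank/vm-lun0 and in the device extent drop-down
  zfs create -V 100G tank/vm-lun0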
 

slushieken (Dabbler)
mav@ said:
First go to Storage and create a zvol, then go to extent creation and you will see it in the list of devices for device extents.
This is what I tried to do at first, and it did not appear there. Can you confirm this? Is there a way of setting this up that maybe I did not follow?

That this did not work for me is exactly why I later said in my hints:

1. Creating your 'virtual disk' file for an extent - you do this through the extent creation page. It creates that file when you go through the extent creation process. This is not done by creating a zvol. If you try to create a zvol first and then point at it through the extent creation steps, you will find there is no file on the disk you can use.
 

mav@ (iXsystems)
slushieken said:
This is what I tried to do at first, and it did not appear there. [...] If you try to create a zvol first and then point at it through the extent creation steps, you will find there is no file on the disk you can use.

Where and why are you looking for that FILE? Zvol devices reside under /dev/zvol/..., and you should see them as devices if you choose to create a DEVICE-backed extent. I am not at home right now to check the latest build, but I am sure it should work. Check the FreeNAS manual for additional description, and maybe even some screenshots.
 

slushieken (Dabbler)
mav, it works.

I created a new zvol, and I see it as a device choice on the extent creation page. That does explain it. This is where I was going wrong; I thought it would show up as a file.

Are there particular advantages inherent in creating and using a zvol vs. the way I was doing it with a file extent? Can you brief me on those?

Thanks mav.
 

mav@ (iXsystems)
slushieken said:
Are there particular advantages inherent in creating and using a zvol vs. the way I was doing it with a file extent? Can you brief me on those?

Support for UNMAP (which I already mentioned above, and which is hard to overestimate), support for space threshold warnings, and snapshots that are per-zvol/LUN instead of per-filesystem.
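For example, a per-LUN snapshot is a single command (the zvol name and snapshot label are placeholders):

  # snapshot just this LUN's zvol before risky guest changes
  zfs snapshot tank/vm-lun0@before-patch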
 

slushieken (Dabbler)
Thanks. I did not know about UNMAP; the threshold warnings make sense; and for snapshots I have a question.

To clarify for others:
UNMAP - a feature pretty much required when using thin provisioning. It allows reclamation of freed-up filesystem space when you delete a VM virtual disk in a thin-provisioned FS.

Snapshots - I noticed I did not have this option available from FreeNAS, but found it was supported within XenServer. I had planned to use snapshots in XenServer instead. Are there any obvious advantages or disadvantages to XenServer snapshots vs. FreeNAS snapshots?

Thanks again.
 

mav@ (iXsystems)
slushieken said:
Snapshots - I noticed I did not have this option available from FreeNAS,

Maybe you should finally read the FreeNAS documentation? Snapshots are the biggest feature of ZFS as a copy-on-write filesystem.

slushieken said:
but found it was supported within XenServer. I had planned to use snapshots in XenServer instead. Are there any obvious advantages or disadvantages to XenServer snapshots vs. FreeNAS snapshots?

ZFS snapshots are almost free from a performance point of view, and they occupy only as much space as needed. I don't know how snapshots are implemented in Xen, but I bet they are less efficient.
 

aran kaspar (Explorer)
mav@ said:
Maybe you should finally read the FreeNAS documentation? [...] ZFS snapshots are almost free from a performance point of view, and they occupy only as much space as needed.

Yeah man, that doesn't work for me. Are you sure it's possible? I created a brand-new zvol and it did not show up in the extent device list. Nothing does; it's just red. Can you post pics?
 

mav@ (iXsystems)
Here is a screenshot from the latest build of 9.3, showing creation of a device extent on top of a zvol.
What am I doing wrong? :)
 

[attachment: zvol.png]

aran kaspar (Explorer)
mav@ said:
Here is a screenshot from the latest build of 9.3, showing creation of a device extent on top of a zvol. What am I doing wrong? :)
Holy crap. Maybe I need to reinstall.

Mine doesn't show any devices unless I have a free, unformatted drive.
 