iSCSI Volume keeps showing up as SSD optimized when it's not? Also won't mount!

Status
Not open for further replies.

greengecko00

Cadet
Joined
Apr 5, 2015
Messages
7
Hi there

Having a weird issue here.

I'm running FreeNAS on an old IBM server. It has two 1GbE ports, 12GB of RAM, a Xeon CPU, and a 1.09TB RAID5 volume on IBM's ServeRAID controller.

Everything appears fine; when I was testing CIFS it worked great.

When I switched over to iSCSI so I could present it to my VMware lab, things got a bit strange. I got MPIO working just fine, but for some reason the volume can't be formatted as VMFS 5.

When I go to add the datastore (Add Storage...), the FreeNAS iSCSI disk pops up, but when I click Next for it to load the storage information, it takes forever (almost 10 minutes) and then times out. Sometimes I get a bit of info, but either way it keeps timing out on me, and I can't press Next to format it.

Anyone have any ideas?

Here is a screenshot: ZwQLyPi.png


Also note how the system thinks it's an SSD! Weird?! This is on ESXi 6.0, by the way.
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
The SSD question has already been widely discussed on this forum; please search for those threads. The next software update should include a respective option in the extent settings.

As for the connection problem, that sounds like something new. Maybe it is related to ESXi 6.0; I'm not sure how many of those have been tested, and I don't have one yet. Could you collect a full packet dump of that connection attempt, from adding the target through to the hang point, with `tcpdump -pvni <interface> -s 0 -w <file name> port 3260` and send me a link?
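For example, assuming the iSCSI traffic leaves FreeNAS on em0 (the interface name and output path here are only placeholders, so substitute your own):

```
# capture all iSCSI traffic (TCP port 3260) on em0 with full packets,
# writing it to a file for later analysis
tcpdump -pvni em0 -s 0 -w /mnt/tank/iscsi-capture.pcap port 3260
```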
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I'm running FreeNAS on an old IBM server. It has two 1GbE ports, 12GB of RAM, a Xeon CPU, and a 1.09TB RAID5 volume on IBM's ServeRAID controller.

Don't mix hardware RAID and ZFS. Bad Things can happen, ranging from "terrible performance" to "data loss." Use an HBA or non-RAID onboard SATA/SAS ports to present the disks directly to FreeNAS.

People have addressed the drive type, but check that your iSCSI extent is configured with a logical block size of 512 bytes. ESXi doesn't speak Advanced Format.
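If you want to double-check from the FreeNAS shell (this assumes the 9.3 CTL-based iSCSI target), the LUN listing shows the block size:

```
# list the configured CTL LUNs; the BS column / blocksize option
# should read 512 for an ESXi-friendly extent
ctladm devlist -v
```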
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
And if you are on ESXi 6, you should have the ability to change the device type so the datastore is listed as non-SSD. As others have mentioned, there are reasons for it to be identified as SSD (namely for Windows environments).
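From the ESXi shell that looks roughly like the following; the naa. identifier is a placeholder, and the disable_ssd claim-rule option may vary between 6.x builds:

```
# check which SATP currently claims the LUN (substitute your own naa. ID throughout)
esxcli storage nmp device list --device=naa.XXXXXXXXXXXX

# add a rule under that SATP tagging the device as non-SSD, then reclaim it
esxcli storage nmp satp rule add --satp=VMW_SATP_DEFAULT_AA --device=naa.XXXXXXXXXXXX --option=disable_ssd
esxcli storage core claiming reclaim --device=naa.XXXXXXXXXXXX
```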
 

greengecko00

Cadet
Joined
Apr 5, 2015
Messages
7
Thanks everyone for your input! I was able to fix the problem and this is how.

1. First, I noticed that for some reason one of the two aliases wasn't working. Weird? It would reply to pings, but it wasn't able to present a target as I'd expect, so I deleted the alias and re-created it. Simple.

2. I deleted the extent that was backed by a physical disk, then created a zvol with no compression and presented that as an extent instead (see the sketch after this list).

3. I rebooted my ESXi host, which cleared all the dead paths. After it came back, I rescanned, and voila: everything came up and formatted as expected. The datastore still thinks it's an SSD, but that doesn't matter to me. I then formatted the 1TB LUN as VMFS, ensured MPIO was set to round robin, and was off to the races.
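For anyone following along, I did all of this through the GUI, but the zvol and ESXi steps could also be done from the command line. This is only a rough sketch with made-up pool, zvol, and device names:

```
# on FreeNAS: create a 1TB zvol with compression disabled to back the iSCSI extent
zfs create -V 1T -o compression=off tank/vmware-zvol

# on ESXi: rescan storage, then set the path selection policy to round robin
esxcli storage core adapter rescan --all
esxcli storage nmp device set --device=naa.XXXXXXXXXXXX --psp=VMW_PSP_RR
```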

Here are the results of an HD Tune test with a 4K block size, run against a 45GB VMDK living on the FreeNAS iSCSI LUN, going through two 1GbE ports with MPIO in round robin.

http://i.imgur.com/uCyX2xt.png

Shmexy? I think so! :) Can't wait to really start running some VMs on this bad boy, and I'm also happy with the features a zvol affords me. Overall, very happy right now.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Your benchmark result does not make any sense; you did not disable compression...

You will need at least 64GB of RAM for iSCSI and FreeNAS to be happy, and even then you cannot reach the numbers your benchmark claims.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You will need at least 64GB of RAM for iSCSI and FreeNAS to be happy, and even then you cannot reach the numbers your benchmark claims.

Maybe not 64, but a good bit more than 12. Once fragmentation takes hold and it is having trouble allocating contiguous blocks, that seems to be when it all goes to performance hell.
 

greengecko00

Cadet
Joined
Apr 5, 2015
Messages
7
I'm not sure what you guys are talking about. Can you elaborate? How does my benchmark not make sense? It's only a 1TB zvol, that's it. I read somewhere we need 1GB of RAM per 1TB of storage. Either way, it's performing just great right now.

I'm going to enable compression since I have a lot of spare CPU cycles.
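If I do, it should just be something like this (the zvol name is a placeholder for mine):

```
# enable lz4 compression on the existing zvol; only newly written blocks get compressed
zfs set compression=lz4 tank/vmware-zvol
```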
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
...it's performing just great right now..

Oh, famous last words of those that didn't read up on iSCSI before trying to use it on ZFS. This should be fun!

/gets popcorn
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Those results are complete bunk because they're showing multiple-gigabyte-per-second speeds. To actually obtain the speeds you're posting there, you'd need to have a 20Gbps connection, not 2Gbps. Not to mention a system that could actually support it.

You've got caching enabled somewhere. Also, there's no indication of what that benchmark is showing. Reads? Writes? Both? Look at your burst rate for a more realistic indication of sustained throughput (44.5MB/s).

In addition, iSCSI is async by default, and you're doing ZFS on a hardware RAID controller, possibly one with cache. Your write speeds are going to be artificially fast because you've got no assurance of integrity.
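If you want to trade some of that apparent speed for actual write integrity, one knob worth trying (the zvol name is a placeholder, and expect a big throughput hit without a dedicated SLOG device) is forcing synchronous writes on the zvol backing the extent:

```
# force every write to hit stable storage before it is acknowledged to the initiator
zfs set sync=always tank/vmware-zvol
```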
 

greengecko00

Cadet
Joined
Apr 5, 2015
Messages
7
Should I revert to just presenting the physical disk as an extent rather than a zvol? Unfortunately I can't break the RAID sets, as that requires special software from IBM that I don't have. I just have to work with the RAID5 set until it eventually blows up. Doesn't matter to me, it's a lab.

Anyway, I'm going to keep testing this out, increasing the VM density on the datastore and continuing to run tests until I hit the wonderful brick wall, and then I'll go back to the drawing board.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Should I revert to just presenting the physical disk as an extent rather than a zvol? Unfortunately I can't break the RAID sets, as that requires special software from IBM that I don't have.

A physical disk as an extent may be better, but may not be. You are asking questions that professional file server admins answer 'in their spare time' because the question does not have a simple yes/no answer. It depends on many factors.

What the heck kind of ServeRAID card are you using? I have some ServeRAID cards, and their arrays can all be created and destroyed from within the card's firmware menus.
 

greengecko00

Cadet
Joined
Apr 5, 2015
Messages
7
I'm not sure, it's old. The server is an IBM x346.

Truth be told, I kind of want to push this until it breaks or blows up. The DAVG/cmd times I'm getting on this LUN are incredible. I know it might not be ideal according to you smart gents, but I really want the trial by fire :)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I think what you need is the ServeRAID Support CD, but it might not help you, as I don't think the ServeRAID 7K series can act as a dumb HBA. In that case I'd say present the physical device directly.

The problem with "trial by fire" in this case is a lot like what @cyberjock mentioned in another thread: it's not going to give you real-world results.

Try doing a sustained write test against it (25GB+).
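Something like this from a Linux guest sitting on that datastore would do it (the file path and size are just examples, and zeros only tell you anything here because compression is off on the zvol):

```
# write 25GB of sequential data, bypassing the guest page cache,
# so the iSCSI path and array actually have to absorb it
dd if=/dev/zero of=/tmp/write-test.bin bs=1M count=25600 oflag=direct
```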
 