iSCSI multiple logical disks from single array

Status
Not open for further replies.

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Hi Guys,

Please let me know if this is possible, and if so, how do I go about doing it?

I have 8x 300 GB SAS drives in an array, in a RAID 50. In FreeNAS 8.2, is it possible to set the drives up so that Windows can see them as multiple logical disks? My plan is to have 3-4 disks show up in Disk Management for failover clustering.

I know that this is possible by creating a ZFS or UFS disk and then making files to use as targets, but I read that this should not be done in production environments?

Any help you can give would be appreciated.

Thanks.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You're referring to this?

Unfortunately FreeNAS 8 doesn’t support device extents on a ZFS mirror so a file extent had to be used. A file extent is simply a single file that is shared and looks like a single disk to the ESXi host. This is not recommended in production environments because if something happens to that file all of the virtual machines could be lost.

I'm guessing the guy is clumsy or fat-fingered or something and is afraid his fingers will type "rm -fr *" or something dumb. But the reality is that modern computer systems have all sorts of dangerous things that you can do accidentally. You can screw up real quick using the FreeNAS GUI too, and wipe out a filesystem or even an entire pool in a moment of thoughtless carelessness. So... ignore that.

Personally I'd rather have a file, because if I have a problem, I can move the file over to a UFS filesystem and serve it from there, or move it to a Linux system and serve it from there, or whatever. It's a very portable, flexible format. Fat-fingerers can do "chflags uunlnk myextentfilename" and suddenly it's protected against inadvertent deletion too.
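
Just to illustrate - the path and size here are made up - setting up and protecting a file extent from the shell could look something like this:

# pre-allocate a sparse file to use as an iSCSI file extent
truncate -s 200G /mnt/tank/extents/extent0

# set the user-undeletable flag so a stray "rm" can't take it out
chflags uunlnk /mnt/tank/extents/extent0

# and clear it again later if you ever do need to delete the file
# chflags nouunlnk /mnt/tank/extents/extent0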

But as far as I know, FreeNAS 8 does support device extents just fine. We just don't use them because of sync write issues. You can go to Storage->Volumes->YourVolume->Create ZFS Volume and create one. Then you go over to Services->iSCSI->Device Extents and select it. I think the latest issue was that all writes went out sync, which means very low performance unless you have a ZIL, and even with a ZIL the performance is a bit less than the file-based model.
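
If you'd rather do that part from the shell instead of the GUI (pool and volume names below are just examples), the equivalent is roughly:

# create one zvol per logical disk you want Windows to see
zfs create -V 500G tank/iscsi-disk0
zfs create -V 500G tank/iscsi-disk1
zfs create -V 500G tank/iscsi-disk2

# the resulting block devices show up under /dev/zvol/<pool>/
ls /dev/zvol/tank/

Point each zvol at its own device extent and target under Services->iSCSI and Windows will see each one as a separate disk, which sounds like exactly what you want for the failover cluster.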
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oh, yes, never mind, this guy sounds a bit crazy:

However in a production environment I can not recommend the use of iSCSI in FreeNAS unless a hardware raid solution is used for the ZFS drivespace so that file extents are not used.

This is just a sequence of words he's pasted together that don't mean anything; using a hardware RAID solution underneath ZFS is highly detrimental in many cases, and even if you did that, so what? You still have a ZFS pool. I guess he thinks there's some magic whereby, if the pool isn't a mirror, he can do device extents? Then why not use RAIDZ? But really ZFS doesn't care about any of this: it's either capable of creating a volume/dataset/whatever on a pool, or it isn't. (And it _is_ capable.) The underlying storage tech is irrelevant.
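
To put it another way (device and pool names are placeholders), any of these layouts gives you a pool you can carve zvols out of:

# pick one; ZFS doesn't care which for this purpose
zpool create tank mirror da0 da1
zpool create tank raidz da0 da1 da2 da3
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# and this works exactly the same on any of them
zfs create -V 500G tank/iscsi-disk0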

Do not do what he suggests in any production environment where you care about your data, unless you have a RAID controller that's known to work well with FreeBSD and will properly handle notifications, drive replacements and all that stuff correctly.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Thanks for the reply, very helpful. I had to "Force 4096 bytes sector size" to create the ZFS drive. I have created multiple targets and am about to test this. I'm assuming that ZFS is better than UFS for features? If compression was turned off, would performance increase?

Sorry if I'm asking a lot of questions; I don't know much about Linux file systems.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Do not do what he suggests in any production environment where you care about your data, unless you have a RAID controller that's known to work well with FreeBSD and will properly handle notifications, drive replacements and all that stuff correctly.

So.. no RAID controller that we know of? LOL

Sorry if I'm asking a lot of questions; I don't know much about Linux file systems.

I did not know that FreeNAS/ZFS was Linux... WOW. I learn new stuff every day! /sarcasm
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So.. no RAID controller that we know of? LOL

Yeah, you know, the unfortunate bit about it all is that LSI makes nice hardware but crappy software.

We've got ... I want to say M1015's ... something cross-flashed to IR running boot and direct-attached datastores for ESXi boxes, because redundancy is just a basic requirement around here. So ESXi throws a warning about the datastore being degraded. Look at it with MegaRAID Manager, oh that's right, MRM will actually show the health as "green" on the main page - you have to log in on this awesome bit of damageware to see the actual status. Oh, look, bad disk. Ok, migrate all the VMs off the box, pull the disk, oh joy it's a Seacrate RMA (all our important stuff is SAN/iSCSI so I like to toss less-trustworthy disks in RAID1 as scratch storage). Put new drive in. Wait for migrations. Go twiddle around with MRM and ... oh look, it already started a rebuild on its own.

The cool bit? I *know* that if there had been a standby disk, it would have started rebuilding immediately. As it was, the thing doesn't wait around... the card knows a replaced disk means make it work. And it does. And it did, with no further effort.

The problem bits? Getting a useful notification via LSI's software support infrastructure. On ESXi, turns out, can't really be done (with LSI's MRM that is). You have to *rely* on vSphere to notice and warn you, and getting ESXi set up correctly with the right drivers and all is a Royal Pain. Hell, just *finding* the correct drivers and files and all that is a nightmare. That day I was b****in' about it in off-topic, I lost something like a day trying to get an updated system working, and document it, and get MRM working, because we need to be able to repeat the process for more ESXi boxes. And of course even though MRM is worthless for monitoring, you still need it to chat with the RAID controller while the system is running. Ugh. But let me tell you, as it stands, if a drive were to fail, I would prefer it to fail under the LSI controller, because:

1) I'll be informed of the situation immediately (unlike FreeNAS right now)

2) Fixing the problem is literally a matter of yank disk, cram new disk in, maybe remember to check on it in a few hours to make sure it had a happy ending (also unlike FreeNAS right now).

But of course we were talking about under FreeNAS, and I don't really have any idea about how that'd work out. I just want to make the point to you that FreeNAS has some things to work out, and with zfsd in FreeBSD 9, I'm hoping they do.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I had to "Force 4096 bytes sector size" to create the ZFS drive.

Why? You shouldn't have to.
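
If you're curious what's actually being reported, you can check from the shell (device and pool names are placeholders):

# what sector size the drive itself reports
diskinfo -v da0 | grep sectorsize

# what ashift the pool ended up with (9 = 512-byte sectors, 12 = 4096-byte);
# on FreeNAS you may have to point zdb at its cache file, e.g. zdb -U /data/zfs/zpool.cache
zdb | grep ashift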

I'm assuming that ZFS is better than UFS for features?

Yes, way more awesome, but along with it comes significant complexity. Do you need the features? If not, remember what Scotty says, "The more they overthink the plumbing, the easier it is to stop up the drain."

That is JUST so appropriate too. Heh.

If compression was turned off, would performance increase?

Probably quite a bit.
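
It's a one-liner per volume if you want to flip it (names are examples again), though it only affects data written after the change:

# check what it's currently set to
zfs get compression tank/iscsi-disk0

# turn it off
zfs set compression=off tank/iscsi-disk0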

Sorry if I'm asking a lot of questions; I don't know much about Linux file systems.

Hey, great, that makes two of us.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
I agree, I don't think you can replace hardware RAID with a software RAID just yet, even if it offers a lot of functionality. ESXi has a lot of pitfalls, and compatibility with Windows 8 is one of them.

So I suppose now I'll test with both ZFS and UFS to see the differences. I turned off compression as it is not required, and it would probably create overhead on the CPU.

Gonna put some files on it and do some DR scenarios!
 