Another person banging his head against iSCSI

Status
Not open for further replies.

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I understand most of the challenges and your points. I'm just looking for solutions that will benefit people here, and my clients. As soon as you use comparative analysis, the target OS is meaningless, and tweaking the tunables to get a particular number doesn't affect the results. You just need a consistent, repeatable setup and defined workloads. A live CD and a script can produce a measurable load every time. Or a specific deployment of VMs. I know that is lab-like and not home-user stuff, but we need known-good, repeatable scenarios in order to offer a viable solution to clients. Small and medium-sized businesses are the primary candidates. Enterprise should just pay iX, imho.
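For what it's worth, the script half of that is trivial. A minimal sketch, assuming a mounted test share at a made-up path (adjust path and size to taste), and assuming compression is off on the target dataset so the /dev/zero numbers mean something:

Code:
#!/bin/sh
# Hypothetical repeatable-load sketch: same block size, same total size,
# same mount point every run, so numbers are comparable between boxes.
TARGET=${1:-/mnt/testshare}   # assumed mount of the share/LUN under test
SIZE_MB=16384                 # 16 GiB; big enough to get past ARC on small boxes

echo "== sequential write =="
dd if=/dev/zero of="$TARGET/loadtest.bin" bs=1048576 count="$SIZE_MB"

echo "== sequential read =="
dd if="$TARGET/loadtest.bin" of=/dev/null bs=1048576

rm -f "$TARGET/loadtest.bin"

Run it from the live CD client against each protocol (NFS mount, iSCSI LUN formatted and mounted) and you get the same load every time.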

It does sound like it may not be useful for more than a specific person. My hope was that more 'rules of thumb' could be put in place. Literally 3 known platforms: Mini class (Avoton), achievable but strong (Haswell), TrueNAS class (E5+). I'm just ignoring under-spec for now. This forum is remarkably homogeneous in its hardware choices. You guys (mods) have been persuasive. I'm far from sheeple, and even I jumped onboard the ECC server-board train. I likely should have gone E5 even for testing, but I'm not sure that platform is viable for many smaller solutions.

I don't want to see cookie-cutter, but I do want a reasonable target. 1GbE should be a no-brainer to saturate via any protocol on modern hardware. Yes, I know we can crush throughput with certain workloads. But there are known scenarios: media, backup, small VM, large VM, etc.

Anyway.

tl;dr I'd love to put this in place and prove viability and repeatability. I enjoyed the conversation. Thanks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The real reasons we're so homogeneous are:

1. Supermicro works, and is pretty inexpensive in the big scheme of things.
2. Companies like Tyan and others that *do* sell server-grade boards don't give them out to be tested. I'd be more than happy to test stuff if I got it for free, but I'm not going to spend lots of money buying lots of boards just to find out what works and what doesn't.
3. Probably 95% of FreeNAS users are in a world where nothing exists outside of Windows. When it comes to building a FreeNAS box, you are jumping into the ocean and being told to swim to shore, and there is nothing but ocean as far as the eye can see. Many people have show-stopping obstacles to overcome. What hardware do I pick that's cheap? What hardware is even compatible? How do I install FreeNAS? What's this .img file for? How do I even use this OS? What happens if it doesn't boot? Who do I go to for help when this doesn't work? There are tons of questions and very few answers when you first start. Something as silly as forgetting to connect a network cable to your NIC can be a major disaster, because you might not think you'd do something so silly, and you have *zero* clue how to even find out whether the reason you don't have an IP is that the OS is not installed properly, the NIC isn't compatible with the OS, your NIC is bad, or you forgot something silly like plugging the darn thing in.

You *very* quickly jump off a cliff. As a newbie, when FreeNAS didn't boot the first time, I had zero clue where to even go to start troubleshooting the problem, let alone fix it once I identified it. I know I'm not alone, and the threads on this forum are often flashbacks to the problems I had. The whole concept of writing an image file to a USB stick to boot was so foreign that I was sure I was doing it wrong, because it was just "too easy" in my experience.

In short, the way to make FreeNAS work for those who want something that "just works" is to stick to hardware that "just works". Since Supermicro got into this forum first and doesn't have any significant downsides, it's probably here to stay unless Tyan or another company plans to mail the mods a bunch of free hardware "just to have".

Personally, I'd *love* to have more choices than just Supermicro. But I'm sure as heck not going to spend my money to find out how good/bad the hardware actually is.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's a complicated problem. We're not homogeneous by choice. Dell and HP make nice gear, but it typically doesn't "fit" the FreeNAS model well, because they're assuming use models built around Windows or ESXi. Intel makes some boards, but nothing compelling, and they're often hard to come by.

Supermicro is THE only manufacturer that has specialized in whitebox server hardware and been successful at it. "Specialized" meaning that far more than 50% of their offerings are clearly intended for non-desktop non-workstation use. "Successful" meaning that they've been doing it for at least a decade.

Even at that I'm not entirely thrilled with Supermicro, because a lot of their offerings are aimed in the same direction as Dell and HP... but at least with Supermicro, they sell the bare parts so you can roll your own things.

I've been VERY happy to see ASRock trying to enter the arena because they've actually been offering products that indicate that they've looked at the market and decided to try to go after the ZFS storage segment. That 2U, 12 3.5" storage server that also includes 6x 2.5" bays on the back ... frickin' genius. It was designed well enough that it looks like it ought to be able to run a virtualized FreeNAS on top of ESXi, and if so, then it will quite possibly be my platform of choice if and when we need to expand our environment. Especially if I can get it with the 4x 10GbE mainboard that they have available.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Has anyone come across any topics on file based extents vs zvol/device based extents?

OP mentioned using SSD and doing both, with the same result. I wonder if that's the norm. Obviously it's very setup-dependent, but are there any trends?

I have seen a few posts that seem to imply zvols are better?

So while we know iSCSI is likely going to suck to implement (maybe terrible performance or huge hardware cost), which way sucks the least (in most cases, generally, or any other way to avoid a million 'it depends')? Which way would you recommend for those trying to use it for ESX?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Currently? I'd try with the kernel implementation and zvols, because that's the way things are moving for new deployments.
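For anyone curious what that looks like underneath, here's a minimal sketch of creating a zvol from the shell. The pool and zvol names are placeholders, and on FreeNAS you'd normally do all of this through the GUI:

Code:
# Create a 200G zvol to back an iSCSI device extent.
# volblocksize is fixed at creation time and cannot be changed later.
zfs create -V 200G -o volblocksize=16k tank/vm-store
zfs get volsize,volblocksize,compression tank/vm-store

The device extent then points at /dev/zvol/tank/vm-store, which is what the kernel iSCSI target exports.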

Who knows, I might actually have some more interesting information in a month or three.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
I've been reading the rants/arguments between mjws00, wintermute000, and cyberjock, and I have to say I love reading all these iSCSI-related posts. They basically say the same thing: iSCSI+ZFS is going to suck (my non-technical description). My remaining question is: how bad does it suck, and can you make it suck less?

Every post about iSCSI/ZFS alludes to the same few things regurgitated in one way or another:


1. MEMORY: you need 64GB+ or it's going to suck hard. (I'd love to see the exact same iSCSI test run with 32GB, 64GB, and 128GB, just because I am curious.)

2. Better have a SLOG.
It had better be an enterprise SSD or you'll end up hurting performance, not helping.
Most of the decent-looking SSDs mentioned (enterprise or similar) look like 500MB/s+ write speed; NVRAM PCI cards start getting into 800MB/s speeds!

3. Disk config should probably be the ZFS equivalent of RAID 10.
More vdevs to spread the load was mentioned in some posts, if not this one; more vdevs support higher IOPS. (There's a rough layout sketch right after this list.)

4. It won't be as awesome as you want it to be.
Even after doing all this and spending way more money, I still bet people would say 'FreeNAS will still suck at being an iSCSI target for VMware' (for example)... 'Its performance will be terrible!'
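Here's the layout sketch referenced in point 3 above. It's purely illustrative: the device names are made up, and on FreeNAS you'd build the pool through the GUI rather than at the shell.

Code:
# Striped mirrors ("ZFS RAID 10") plus a dedicated SLOG device.
# da0..da5 and gpt/slog0 are example device names only.
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    log gpt/slog0
zpool status tank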

OK, but compared to what? Compared to a $100,000 EMC/NetApp/HP SAN, hell yeah it'll suck! I guess the question is: will it be better or worse than some other solution? Same hardware, same money, will it be worse? Who knows!

I can see merit on both sides of the argument about baseline benchmarks. Clearly there isn't a piece-of-cake perfect solution, but a lot of people want some way to know how good (or, more likely, how bad) their configuration is. Of course, that really depends on what you're going to do with it, but still...

I read and liked jgreco's post:
https://forums.freenas.org/index.ph...esting-your-freenas-system.17750/#post-148773

That taught me how to use dd to at least see how my disk system was doing compared to other people's posted results for the same test. For me, the results just confirmed my little play system is about as 'so-so' on raw disk action as I figured it would be. It also let me play with things like sync=always vs sync=standard and see how much the numbers changed, but that was just that: numbers. I could have compared that to mirrors or the other RAID-Z levels, but anyway...
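For anyone who wants to repeat that comparison, here's a rough sketch of the sequence. The dataset name is made up, and compression should be off on the test dataset or the /dev/zero numbers are meaningless:

Code:
# Baseline: sync writes only when the client asks for them
zfs set sync=standard tank/test
dd if=/dev/zero of=/mnt/tank/test/ddtest.bin bs=1048576 count=8192

# Force every write to be synchronous; this is where a SLOG earns its keep
zfs set sync=always tank/test
dd if=/dev/zero of=/mnt/tank/test/ddtest.bin bs=1048576 count=8192

zfs inherit sync tank/test   # restore the inherited default afterwards
rm /mnt/tank/test/ddtest.bin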

Since we know iSCSI on ZFS sucks, you start with that in mind and figure it's going to be terrible, but how terrible is yours compared to mine?

It would be very cool to see a set of generic standards for (arbitrarily) comparing iperf results and iSCSI initiator results, along the lines of the dd test mentioned.
You aren't going to be able to answer 'how will YOUR VMs perform?', because no one knows what YOUR VMs' workload will be. But you could suggest a basic config and a basic test that people could use to get a rough idea for comparison. There are just so many parts and pieces that come into play that it would be very difficult to get apples to apples. You'd just have to hope you're still comparing fruit at the end of the day.
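The network half of that is the easy part, since iperf is already available on FreeNAS. Something like this, with the hostname as a placeholder:

Code:
# On the FreeNAS box: run the server side
iperf -s

# On the client / ESXi-side test machine: 30-second run, 4 parallel streams
iperf -c freenas.example.lan -t 30 -P 4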

With all that said, I wonder if there is anyone on the forum who would argue that they have built a pretty awesome FreeNAS-based system for iSCSI/NFS? Then I'd ask: what hardware, memory, cost, and dd numbers, and can they post a generic initiator test, like a VM with IOmeter/SQLIO/etc.? You could try to compare yours to theirs and match capability in each area (disk, network, etc.).

SAN measuring contest...
 
Last edited:

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I'd love to see a documented set of standards and expectations. Unfortunately, it takes a whack of hardware to do this and to ensure that it is only the iSCSI being tweaked and tested, with no other hardware interfering. For me, without MPIO or 10GbE there is no point; DAS and local storage is just so much faster. It's lazy to just throw SSDs at things, but if the capacities needed are manageable, it's easy and FAST. :)

I'm trying to make it to Q1 for a new E5. Then I'd be in a position to test and document.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The problem is that if you build 10 servers to the exact same hardware specs and put 10 VMs on each one, every single server will perform differently based on the VMs you use. As time goes on, things like historical usage, fragmentation of your data, quantity of data, and other things that are unique to that one single server will play a part in its performance.

So I can post all the benchmarks in the world, but at the end of the day they will only give you a hint. It's not something you can truly lay down as cold hard fact and use to determine what hardware you need. It's very much like I keep telling people in the forum... if it's too slow, it's time to figure out if you need more RAM, more L2ARC, or both. And there are plenty of commands that will tell you exactly what YOUR setup needs.
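A few examples of the kind of commands I mean (not an exhaustive list, and exact tool names and paths can vary between FreeNAS versions):

Code:
# ARC and L2ARC hit/miss counters
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses

# Per-vdev load, including SLOG and cache devices, every 5 seconds
zpool iostat -v tank 5

# Live per-disk busy percentage (FreeBSD)
gstat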

Edit: I've helped a dozen or more people do VMs over iSCSI. In one case I helped someone build an iSCSI server for VMs. For a total of about $8k (including a 4U 24-disk chassis and all of the disks) he got a server so fast that his friend's NetApp looks poor by comparison; the friend is VERY upset, as he spent over $100k on his server. Needless to say, he's giving up on NetApp when his contract expires. His NetApp box runs about 20 VMs, and the server I helped build currently runs 53 VMs. :D

If you know what you are doing and are willing to spend money (but not spend it on stupid crap) you can get a very capable server without breaking the bank. The problem is too many people define "breaking the bank" as $2k, and that's just not realistic unless you have a bunch of spare hard drives and chassis already.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
The problem is that if you build 10 servers to the exact same hardware specs and put 10 VMs on each one, every single server will perform differently based on the VMs you use. As time goes on, things like historical usage, fragmentation of your data, quantity of data, and other things that are unique to that one single server will play a part in its performance.
I think this statement is exactly what I'm interested in. You do need a like-for-like test: not multiple VMs, just one. Maybe someone builds a template/OVF of a *nix/whatever guest with a standard IO generation tool, and then you run a predetermined set of tests. All the FreeNAS-to-ESXi people could use it. Apples to apples (or at least apples to oranges, still fruit). That sounds as close as possible, though it's still pretty hard unless people use a standard, simple network setup, etc. But it could be done to some extent. Again, it might not be perfect, but it might be close enough...
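As one possible starting point for that predetermined test set: something like fio inside the template guest. fio isn't part of FreeNAS and isn't what was mentioned above (IOmeter/SQLIO are Windows tools); it's just one IO generator that scripts well, and the profile below is an arbitrary example, not a blessed standard:

Code:
# 4k random write, queue depth 32, 2 minutes, run inside the guest VM.
# ioengine=libaio assumes a Linux guest; the target path is a placeholder.
fio --name=randwrite4k --filename=/test/fiofile --size=8G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=120 --time_based --group_reporting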

... if it's too slow, it's time to figure out if you need more RAM, more L2ARC, or both. And there are plenty of commands that will tell you exactly what YOUR setup needs.
A comprehensive guide to performance testing has been talked about before. I am starting to write down a lot of notes and commands as I learn from the forums; maybe someday this could be published as a document for other people: testing your drives, evaluating your RAM, commands to see whether a SLOG is helping or hurting, the same for L2ARC, iperf for the network, etc. It should follow a logical flow and be organized so people can follow along, understand how to interpret the results they are getting, and then make recommendations. It would be the next raved-about guide, like the newbie guide.

Edit: I've helped a dozen or more people do VMs over iSCSI. In one case I helped someone build an iSCSI server for VMs. For a total of about $8k (including a 4U 24-disk chassis and all of the disks) he got a server so fast that his friend's NetApp looks poor by comparison; the friend is VERY upset, as he spent over $100k on his server. Needless to say, he's giving up on NetApp when his contract expires. His NetApp box runs about 20 VMs, and the server I helped build currently runs 53 VMs. :D

If you know what you are doing and are willing to spend money (but not spend it on stupid crap) you can get a very capable server without breaking the bank. The problem is too many people define "breaking the bank" as $2k, and that's just not realistic unless you have a bunch of spare hard drives and chassis already.
Now that system would be AWESOME to see data on in the forum or somewhere appropriate. A hardware/config list, pool layout, various dd results, and some kind of standard VM template from the suggestion above with its internal results. Some numbers that quantify it as kicking the NetApp's butt.

I think if you could somehow publish that system's information/configuration, it might be a pretty decent default/generic answer to all these iSCSI/NFS-related 'how do I get my performance up' questions. The simple answer would be: 'be more like this example system,' and the closer you get to it, the closer you can expect to be to its ballpark results/capability.

Here's an example: I have 15 drives rather than 24, and they're 3Gb/s rather than 6Gb/s, probably on different controllers. I'd still like to configure my test server as close to that one as possible, just to compare raw dd results. It would be interesting (to me) to see how much they vary in sequential and random tests.
 
Last edited:

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you want to know, that box had 256GB of RAM, 1.25TB of L2ARC, and 12 mirrored vdevs. Not cheap.
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
sfcredfox: I like the idea of a basic benchmark, including that people would have to mention the fill grade of the pool or test on a completely fresh pool. This gives many of us a sensible starting point to optimize from, without putting too much guessing and trial & error into it.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
sfcredfox: I like the idea of a basic benchmark, including that people would have to mention the fill grade of the pool or test on a completely fresh pool. This gives many of us a sensible starting point to optimize from, without putting too much guessing and trial & error into it.
It's been discussed before. I think all the details go beyond the scope of the OP in this thread, but someone should start some kind of joint project, document, thread, whatever, in which we can combine everything into a comprehensive guide. jgreco has a few good posts about burn-in that outline a good amount of stuff, but no one has ever completed one that's all-inclusive and explains everything in a manner everyone can understand, rather than assuming the reader is already an expert.
 