BUILD Supermicro X9SRL-F Build Check

Status
Not open for further replies.
Joined
Nov 11, 2014
Messages
1,174
By the way, I still have trouble understanding what the LSI 9261 does with RAID1. I tried all the RAID configurations according to the manual, but I don't understand what it does when you select all 6 of your HDDs and put them in RAID1.

It's not making 6 identical drives and giving me the capacity of 1 drive; it gave me half the total capacity and half the speed, more like RAID10. But that's not how RAID10 is set up according to the manual?!

Do you know what I am talking about?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So you did say 2 datastores. I just read between the lines and thought it was one. That changes everything. If you have 2 separate datastores, each a RAID1 of 2x Intel 535 480GB (I hope I didn't misread again), then you're not gaining any write speed; all the RAID card is giving you is redundancy, right?

I get that nice speed bump you see there at the start of the copy as the RAID cache fills. Since very little of our VM traffic involves massive writes, that basically means writes almost always go to cache and then hit the disks (whether HDD or SSD) a tiny bit later, which gives the impression of very fast writes.
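
For intuition, here's a toy model of that bump. It's only a sketch with made-up numbers (a 512MB cache, 1000MB/s from the host, 150MB/s at the disks), not measurements from any real controller:

# Toy write-back cache model: the host writes at wire speed until the
# controller cache fills, then drops to the disks' commit speed. All
# figures below are illustrative assumptions, not measured values.
CACHE_MB = 512
WIRE_MBPS = 1000
DISK_MBPS = 150

def host_time(total_mb):
    """Seconds for the host to finish writing total_mb."""
    t_fill = CACHE_MB / (WIRE_MBPS - DISK_MBPS)  # cache fills at the rate difference
    fast_mb = WIRE_MBPS * t_fill                 # data pushed during the fast phase
    if total_mb <= fast_mb:
        return total_mb / WIRE_MBPS              # the whole copy fits in the "bump"
    return t_fill + (total_mb - fast_mb) / DISK_MBPS

for mb in (100, 500, 5000):
    print(f"{mb:5d} MB copy -> {mb / host_time(mb):4.0f} MB/s observed average")

Small copies finish entirely at "cache speed"; only a copy much larger than the cache reveals the disks underneath.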
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
By the way, I still have trouble understanding what the LSI 9261 does with RAID1. I tried all the RAID configurations according to the manual, but I don't understand what it does when you select all 6 of your HDDs and put them in RAID1.

It's not making 6 identical drives and giving me the capacity of 1 drive; it gave me half the total capacity and half the speed, more like RAID10. But that's not how RAID10 is set up according to the manual?!

Do you know what I am talking about?

Well, I know what you're talking about, but I just gave away the pair of LSI 9280's that I might have used to have a look and see what happens. I don't think we actually have any other spare LSI RAID in stock.

I don't really understand the point of RAID10 with SSD, aside from (maybe) convenience in terms of having a single larger datastore, in which case, I'd have suggested larger SSD's instead.

As you can see above, the I/O performance of a single VM probably isn't going to actually hit the RAID controller's or the SSDs' peak speeds, so RAID10 seems to me to be mostly useful on hard drives, where the speeds aren't fast enough.
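
That would also explain the six-drive RAID1 observation above: MegaRAID firmware generally builds a "RAID1" across more than two drives as a spanned mirror, effectively RAID10, so the capacity works out to mirrored pairs rather than one drive's worth. A quick sketch of the arithmetic, with an arbitrary placeholder drive size:

# Capacity arithmetic for the layouts discussed here. The RAID1/RAID10
# equivalence assumes MegaRAID's spanned-mirror behavior; the drive size
# is a placeholder, not the drives from this thread.
DRIVE_GB = 1000

def usable_gb(level, n_drives, drive_gb=DRIVE_GB):
    if level == "raid0":
        return n_drives * drive_gb
    if level in ("raid1", "raid10"):  # mirrored pairs either way on this firmware
        return (n_drives // 2) * drive_gb
    raise ValueError(f"unhandled level: {level}")

print(usable_gb("raid1", 2))  # 1000 -> the classic two-drive mirror
print(usable_gb("raid1", 6))  # 3000 -> half of the raw 6000, not one drive's worth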
 
Joined
Nov 11, 2014
Messages
1,174
I just gave away the pair of LSI 9280's

You are giving away stuff, and I keep buying stuff off eBay! What an inefficient recycling scheme that is! :smile:

I don't really understand the point of RAID10 with SSD, aside from (maybe) convenience in terms of having a single larger datastore, in which case, I'd have suggested larger SSD's instead.

Just to help the SSDs with write speed, and it would also double the read speed. The moment I start using 10Gb NICs, I want the underlying storage to be fast enough to enjoy the higher transfer speeds. After a long struggle with 1Gb limiting transfers to around 100MB/s, I just feel I owe it to the 10Gb lanes; I don't really need it that badly. You know how one thing leads to another :smile:

By the way, since you mention cache... the LSI controller has a disk drive cache option: enable/disable/unchanged. I read what it means, but I can't determine whether having it on or off is better for performance.

Would you share how your RAID1 SSD setup is configured for the following options:

Disk cache policy: (enable/disable/unchanged)
I/O policy: (Cached IO or Direct IO)
Read policy: (Always read ahead or No read ahead)
Strip size: (from 8KB to 1024 KB)
 
Joined
Nov 11, 2014
Messages
1,174
Perhaps jgreco is on vacation. Good thing I am not in the market for a switch this year.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You are giving away stuff, and I keep buying stuff off eBay! What an inefficient recycling scheme that is! :)

Works for me. I tend to give stuff to good causes.

Just to help the SSDs with write speed, and it would also double the read speed. The moment I start using 10Gb NICs, I want the underlying storage to be fast enough to enjoy the higher transfer speeds. After a long struggle with 1Gb limiting transfers to around 100MB/s, I just feel I owe it to the 10Gb lanes; I don't really need it that badly. You know how one thing leads to another :)

I guess I don't worry too much since I still remember the days when things took forever. Kernel compile? Go out to lunch. Make world? Overnight.

By the way, since you mention cache... the LSI controller has a disk drive cache option: enable/disable/unchanged. I read what it means, but I can't determine whether having it on or off is better for performance.

Would you share how your RAID1 SSD setup is configured for the following options:

Disk cache policy: (enable/disable/unchanged)
I/O policy: (Cached IO or Direct IO)
Read policy: (Always read ahead or No read ahead)
Strip size: (from 8KB to 1024 KB)

I'm not sure where you got that list from. The properties of a VD are a bit more extensive.

In writeback mode, disk cache policy shouldn't be a big deal. For HDD we usually set it to disabled, and for SSD we leave it at default.

We've traditionally used a 64KB stripe size, though I guess with VMFS5 maybe that's no longer the "best."

There's probably no real value to read ahead on a SSD datastore, except to rob cache from any HDD datastores. I'd say if you've got HDD on the same controller, disable read ahead for any SSD datastores and enable it for the HDD's.

There's no reason to think that this is optimal for other use cases.
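
Restating those preferences as data, one profile per datastore type. These are just the recommendations above under informal field names, not input to or output from any MegaRAID tool, and the I/O policy is left open since it wasn't addressed:

# Informal summary of the VD settings recommended above. Field names are
# ad hoc; "None" marks a question from the thread that wasn't answered.
VD_PROFILES = {
    "ssd_datastore": {
        "write_policy": "Write Back",        # controller cache is BBU-protected
        "read_policy": "No Read Ahead",      # don't rob cache from the HDD VDs
        "disk_cache_policy": "Default",      # leave SSDs at their default
        "strip_size_kb": 64,
        "io_policy": None,                   # not addressed in the reply above
    },
    "hdd_datastore": {
        "write_policy": "Write Back",
        "read_policy": "Always Read Ahead",  # HDDs get the readahead benefit
        "disk_cache_policy": "Disabled",
        "strip_size_kb": 64,
        "io_policy": None,                   # not addressed in the reply above
    },
}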
 
Joined
Nov 11, 2014
Messages
1,174
Some of us have real work to do. :)

Real work sucks! :smile:
I don't mean I've found a way to avoid it, but I keep looking for a solution.
 
Joined
Nov 11, 2014
Messages
1,174
I'm not sure where you got that list from. The properties of a VD are a bit more extensive.

On my LSI 9261 there are fewer, I assume, since yours is a much newer generation, but these options are the ones I struggle with.

Disk cache policy:
I read the LSI manual and read "best practices" from some server vendors. By the way, they state what you just said, that in writeback mode the disk cache policy shouldn't be a big deal, and that makes sense. They also point out that the RAID card cache is battery protected but the disk cache is not (though that covers the reliability standpoint only). But I am also thinking:

I don't even know how much internal cache my SSD has, or whether it's faster than my RAID card's cache, to decide which one to use. If this were 8x HDDs with 128MB of cache per HDD, that's 1GB of total cache, twice what the RAID card has (512MB in my case). Wouldn't it be better (for performance) to have both caches (RAID card and drive) in use?
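
Spelling out that cache arithmetic, with the sizes as stated:

# Aggregate drive cache vs. controller cache, per the figures above.
HDD_CACHE_MB, N_HDDS, CTRL_CACHE_MB = 128, 8, 512

total_drive_cache_mb = HDD_CACHE_MB * N_HDDS   # 1024 MB spread across the drives
print(total_drive_cache_mb / CTRL_CACHE_MB)    # 2.0 -> twice the card's 512MB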
 
Joined
Nov 11, 2014
Messages
1,174
Even if you are not sure what to answer, it's good to respond once in a while with something, so I know you are alright and I don't have to worry about you :)
 
Joined
Nov 11, 2014
Messages
1,174
Something I wanted to ask you for advice:

If you have a server (a router) in an SC813MTQ-441CB chassis that you really want to make redundant from a UPS-failure standpoint, would you put it in a dual-PSU chassis like the 813MTQ-R400CB, or would you buy an ATS and connect the server to the ATS, which is in turn connected to 2 UPS's?

P.S. My router burns only 22W now, sitting in a 1U chassis with a single 440W PSU (because there are no smaller Platinum ones), and putting it in a 1U with dual 400W PSUs seems even more wrong considering the low load. I wonder what you would do if you wanted to make your router redundant?
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
If you have a router motherboard which can run from a single 12v supply (which is true of some of the Rangeley boards) then the most reliable solution would be to arrange redundancy at the 12v level, using a 12v Lithium ion battery as the direct supply (like a laptop) and charging it from a UPS-protected charger. It should be easy to get a few hours battery life.
 
Joined
Nov 11, 2014
Messages
1,174
The motherboard is an Atom C2758, so that won't work. My main concern is not so much a PSU failure as a UPS failure. I just want to be able to bring one UPS down while the other keeps supplying power, so I can do a UPS swap or other reconfiguration.
 
Joined
Nov 11, 2014
Messages
1,174
Actually here it's the X10SRW and the E5-1650v3. The SRW is the "WIO" variant that allows more expansion especially in the 1U form factor:

[Image: SYS-1027R-N3RF rear view]


Two full height full length x8 slots and a half height as well, all in 1U! This puts some significant constraints on the choice of cooler; it *has* to be passive, but due to the design it actually gets redundant fans since the fan bulkhead is basically a big bad row of fans.

[Image: SYS-1018R-WC0R top view]


We're using these as hypervisors. Works out really nice. A nice 2GB LSI 3108 RAID controller for 8 of the bays provides HDD and SSD RAID1's, and two more non-RAID drives for less critical storage. Toss a dual 10GbE card in there and there's STILL a full slot free.

I've also got our VM filer here, which is a 2U SC216BE26 (26 2.5" drives), and a 2U SC213A-R740W hypervisor, which are all real nice, but I'm kinda wondering if I should've avoided WIO for those because it keeps them from being converted to Xeon D boxes, which I think they'd both excel as. That general 2U design also has its own narrow-ILM passive cooler. They're all stock SuperMicro. You should be able to see how these benefit from the multiple 40mm fans in the fan bulkhead for the 1U's, and the bigger fans in the 2U.

Would you believe that this picture has been haunting me for so many months? If it didn't have to look sooooooo... cool in a 1U chassis, it would be so much easier to make up one's mind :-(
 
Joined
Nov 11, 2014
Messages
1,174
In writeback mode, disk cache policy shouldn't be a big deal. For HDD we usually set it to disabled, and for SSD we leave it at default.

I revisited and rethought this, and have something to ask about that statement. Disregarding performance, isn't the idea that the disk cache gets disabled because, when a power loss occurs (on an SSD that doesn't have power-protection caps inside, like your 535 480GB), you will lose the data inside the drive's cache buffer? Sure, the RAID card cache will be protected by the BBU, but the data in the drive cache (if enabled) will be lost. Am I correct in this, or have I misunderstood something?
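
To restate the write path being described here as data; "protected" assumes a healthy BBU/CacheVault on the controller and no power-loss caps on the SSD:

# Where an acknowledged write can still be lost on a power cut, per the
# reasoning above. A restatement of the post, not vendor documentation.
WRITE_PATH = [
    ("controller write-back cache", "protected by BBU/CacheVault"),
    ("SSD internal DRAM cache",     "lost on power cut without PLP caps"),
    ("NAND flash",                  "persistent once written"),
]

for stage, fate in WRITE_PATH:
    print(f"{stage:28s} -> {fate}")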
 
Joined
Nov 11, 2014
Messages
1,174
I got myself a brand new LSI 9271-8i with CacheVault, and the "battery" status shows "GOOD" in ESXi 5.5, just like you said it would.

Now I need a little help with getting some SSDs for datastores. I was going to go with consumer ones like you did with the Intel 535, but then I realized that if I don't get SSDs with power-loss data protection, meaning caps inside, then I could easily suffer data loss.

I mean, what is the point of having a battery/capacitor on the RAID card if you leave your SSDs' internal cache without caps and lose the data there?! Am I wrong to be worried?
 
Joined
Nov 11, 2014
Messages
1,174
I don't really understand the point of RAID10 with SSD, aside from (maybe) convenience in terms of having a single larger datastore, in which case, I'd have suggested larger SSD's instead.

Believe it or not, it's been a year and this is still haunting my sleep. I have 6 (2.5") bays available for datastores and 5 Intel DC S3700 800GB drives, but I am only using 2 bays, with 2 of the Intel DC S3700 800GB in RAID1.

P.S. I am not sure if it's bad, from the administrators' point of view, to post in a thread this old, but if it is, I am only doing it because I don't see why it would be.
 