BUILD Supermicro X9SRL-F Build Check


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This is the picture I was looking for but couldn't find :)

Well sure, it'd be difficult for you to find it since I hadn't taken it yet.

I meant for this thing to be used as a riser in a regular 1U Supermicro so it can take 2 PCIe cards. The bottom side would just be a regular 3.0 x16 male connector, and it would divide the output into two 2.0 x8 slots for the 2 cards.

Again, that'd require PCIe bifurcation support. This support wouldn't exist for a random motherboard, but would probably be there for something like a Xeon D platform where they've already limited you to a single slot. A bunch of the X10SDV boards apparently support it.

Seems to me like the ideal would be for Supermicro to support Xeon D as a WIO, but we aren't that likely to see that.
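
(Side note: if you do end up with cards behind a bifurcated slot, a rough way to confirm what link width each card actually negotiated, from a Linux live environment for example. The 03:00.0 address below is just a placeholder; use whatever plain lspci shows for your card.)

Code:
lspci                                        # find the card's address, e.g. 03:00.0
sudo lspci -s 03:00.0 -vv | grep -i lnksta   # "Width x8" here means the card came up at x8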
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
You don't always have to do that yourself Bidule0hm :)
I know you probably can, but I was talking about buying one already made and soldered. :D

Of course, I was talking from the manufacturer's POV :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
By the way, I knew about the proprietary WIO motherboards/chassis before, but I've been trying to avoid them, mostly because they are proprietary and there are very limited setups they can work in. Unlike a board like the X9SRL-F, which could be an ESXi host, a FreeNAS box with a lot of memory, or anything you want it to be, including going into a desktop/tower case.
BUT
After @jgreco showed me his hypervisor and mentioned the possibility of having a RAID card, a 10Gb NIC, and still one slot left over, all in a 1U chassis, I started rethinking the WIO possibilities. It's just too cool for a 1U chassis to resist. In 2U or 3U it doesn't shine as much, because in 2U you can get most things in low profile, and in 3U you have standard-height slots, so it's easy to fill all the PCIe slots you have.

It's definitely somewhat more specialized, and if you don't have a good reason for getting it, may be the wrong choice compared to a more general purpose solution.

I'll study WIO more: what risers it takes, whether they're part of the MB or the chassis, and so on. But in general, what are the most important things I should be aware of when going with a WIO board and chassis (1U)?

Life will be easier if you get a prebuilt. For a single socket E5, that's the 1018R-WC0R. And that's probably THE option. You can do a dual board, any of the X10DRW's, but that nukes the half height PCIe slot because that's where the second CPU winds up.

Tradeoffs, tradeoffs.
 
Joined
Nov 11, 2014
Messages
1,174
You can do a dual board, any of the X10DRW's, but that nukes the half height PCIe slot because that's where the second CPU winds up.

Are you sure about that?
Because the X10DRW-i, for example, says:

4. Riser card support:
Left side - 1 PCI-E 3.0 x32
Right side - 1 PCI-E 3.0 x16

and the chassis it's paired with, like the SC815TQ-R706WB or 113TQ-R700WB, has 2 full-height and one half-height slot?!


P.S. I probably wasn't going to go with dual CPU anyway, but it's good to know whether dual CPU limits the 1U chassis to 2 full-height cards and disables the half-height slot on the right side.
 
Joined
Nov 11, 2014
Messages
1,174
For a single socket E5, that's the 1018R-WC0R. And that's probably THE option.

You have great taste. :smile:
Any of my friends IRL who want to buy me a nice surprise present should check with you for advice. :smile:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Are you sure about that?

Of course. Look at the board.

http://www.supermicro.com/products/motherboard/Xeon/C600/X10DRW-i.cfm

I *suppose* if you can find a PCIe card that's maybe one inch high you can make a fool out of me. Or you could claim that you don't have to populate the RAM/CPU, but then, what's the point of the dual socket board.

Because the X10DRW-i, for example, says:

4. Riser card support:
Left side - 1 PCI-E 3.0 x32
Right side - 1 PCI-E 3.0 x16

and the chassis it's paired with, like the SC815TQ-R706WB or 113TQ-R700WB, has 2 full-height and one half-height slot?!

Sure. However, you're only likely to be able to make use of the right riser in a 2U chassis, or with the X9/X10SRW boards. And with the X10SRW in the 2U, there's an additional caveat that you only get one PCIe socket on the right hand riser (RSC-R2UW-E8R-UP). With the X9SRW you could go with the RSC-R2UW-2E4R. Actually I *suspect* you can do that with the X10SRW too, even though Supermicro says X9SRW only, but I'm not in sufficient need of slots to give it a go.

So there's a lot to know before you go WIO.
 
Joined
Nov 11, 2014
Messages
1,174
I just had the assumption that if they recommend the chassis for the X9D and it has 3 slots, all of them would be usable. But the small card spot might only be usable with the X9S. I'm gonna have to think about this WIO thing again.
And I'll even go a step back: I'm gonna play with the 9261-8i RAID card first, in Windows and ESXi, to see what the results are, and then come back to the WIO choices. The need for a RAID card in a 1U ESXi box only came up after I learned about your hypervisor; then I got the urge for WIO, since I can't have more than one card in a regular 1U and that slot is already taken by the 10Gb card.


P.S. Adding a dual 10Gb NIC and an LSI 9261-8i to the test machine added 30W to the system power consumption :(
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
P.S. Adding a dual 10Gb NIC and an LSI 9261-8i to the test machine added 30W to the system power consumption :(

That's pretty normal, unfortunately. Especially earlier generation 10G ethernet cards... they tend to eat a little more power.

What I've kinda been hoping to use for new hypervisor deployments would be some dual slot Xeon D board, with an X710-DA4, and a nice LSI 3108 controller. The other thing that'd be pretty cool would be a Xeon D in a 2U form factor, such as the X10SDV-7TP4F, in a SC213AC-R920LPB with an X710-DA2 providing 10G #3 and #4, and an LSI 3108 driving the first eight bays, while the on-board 3008 drives the second eight bays as a passed-thru-to-FreeNAS controller. I think the whole system would eat less than 100W but I haven't actually tried one yet.
 
Joined
Nov 11, 2014
Messages
1,174
Probably half for the NIC and the other half for the RAID card. I put them in together, so I can't tell.

This is sooooooo... new, Supermicro doesn't even have pictures on their website for the chassis you mention :smile:


Speaking of FreeNAS, is FreeNAS faster than the LSI 9261? Same RAID10, same machine, same drives: which one is faster?


P.S. This RAID controller has "spike speeds" that really confuse me when I try to determine how fast it is. The speed jumps very high in the beginning, then it slows down; I can't even tell what is actually going on on the transfer side. I did enable write cache, knowing the importance of it, but everything else is at defaults.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This is sooooooo... new, Supermicro doesn't even have pictures on their website for the chassis you mention :)

That's odd, wonder what happened. It'd look like any of the other HH CSE216's. https://www.supermicro.com/products/chassis/2U/213/SC213A-R740LP.cfm

Speaking of FreeNAS, is FreeNAS faster than the LSI 9261? Same RAID10, same machine, same drives: which one is faster?

The LSI's probably faster, but it's limited in features/capabilities. ZFS is of course consuming the local CPU to get what it does done.

Also, in the context of providing ESXi datastores, the LSI has the advantage of being direct attach storage (DAS), so it should almost always win that hands down. FreeNAS has to be a separate machine over the network. It's very difficult to compete with local disk.

P.S. This RAID controller has "spike speeds" that really confuse me when I try to determine how fast it is. The speed jumps very high in the beginning, then it slows down; I can't even tell what is actually going on on the transfer side. I did enable write cache, knowing the importance of it, but everything else is at defaults.

Easy peasy. What's happening is write cache. Depending on the size of your particular controller's on-board cache, what happens is that you start writing to the RAID controller and those writes go in the cache and the controller says "OK written" right away. In the meantime the controller starts poking the hard drive and saying "hey slow sluggish sleepyhead, I need you to write this." And a bunch of this builds up, as long as the RAID controller has write cache free.

So if you start a large operation like, oh, let's say a dd to a VM disk, what'll happen is that you'll experience insane write speeds for the first few seconds while the cache on the RAID controller fills. Then you'll slam into a wall. The wall is actually the speed of the underlying disks. Here's an example from a VM to an 18GB VM disk.

[attached screenshot: lsi3108.PNG]


So you see here that the VM starts out for the first two or three seconds writing out at a blistering 400MB/sec, since the controller has 2GB of cache, of which I want to say it uses half for write cache. But by 4 seconds, we see the average speed reported has dropped to 265MB/sec, which is because what really happened is that around 3 seconds in, the write cache was full and speeds suddenly dropped to about 70MB/sec. The overall average reported speed starts dropping but in reality it just goes off a cliff:

[attached screenshot: lsi3108-2.PNG]


See, it runs at a high speed there for about 4 seconds, then suddenly slams into the ~60-70MB/sec wall that is sustainable. Now in theory the underlying drives are able to write at around ~100-110MB/sec so that's vaguely disappointing. But it shows the behaviour. Reads, on the other hand, are a whole different story. The drives just read at a sustained ~60-70MB/sec.
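
If you want to reproduce that burst-then-wall pattern yourself, something along these lines from inside a test VM will show it. Just a sketch: the path is made up, it assumes a Linux guest with a reasonably recent GNU dd, and it will happily fill the datastore if you let it run too long.

Code:
# ~18GB of writes, bypassing the guest page cache (oflag=direct) so the burst
# you see is the RAID controller's write cache filling, and the plateau after
# it is what the disks can actually sustain
dd if=/dev/zero of=/mnt/testdisk/ddtest.bin bs=1M count=18000 oflag=direct status=progress
rm /mnt/testdisk/ddtest.bin   # clean up the test file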

SSD via the LSI, on the other hand...

[attached screenshot: lsi3108-3.PNG]


So the weird thing there is that it writes faster than it reads. :smile:
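
(And if anyone wants to see what cache policy the controller is actually applying to a volume, here's a sketch assuming you have storcli installed for a MegaRAID card like the 3108 or 9261-8i; older cards may only ship with MegaCli, which uses different syntax.)

Code:
storcli64 /c0/vall show all      # list all virtual drives on controller 0; the Cache column shows WB (write back) or WT (write through)
storcli64 /c0/v0 set wrcache=wb  # example: switch virtual drive 0 to write back - only sensible with a healthy BBU/CacheVault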
 
Joined
Nov 11, 2014
Messages
1,174
I was expecting the cache to behave the way you describe, but it adds another layer of confusion, and data over the network moves in bursts. I can't tell if it's in the cache, or in RAM, or when (and most importantly at what speed) it will get to the HDD. I want to be able to make the HDD write as fast as the HDD is capable of, but like you said, that's not always the case.

The LSI's probably faster, but it's limited in features/capabilities. ZFS is of course consuming the local CPU to get what it does done.

The features and ZFS are all understandable. I just wonder about comparing the raw horsepower: the RAID card has an 800 MHz single core and 512 MB of DDR2 RAM (in my case), and my FreeNAS (using the system's resources) has 4 cores at 3.3 GHz and 32 GB of DDR3 RAM. So if I have 16 drives in RAID0 or RAID10, doesn't matter which, which one will provide faster throughput?

ZFS has much more overhead because of the cool features, but with a RAID card and no parity RAID like RAID5 or RAID6, how can I tell if the 800 MHz CPU is enough or whether it's a bottleneck?!


This reminds me how much I love FreeNAS, and it seems the RAID card will only be good for an ESXi datastore. I was thinking of opening the "Pandora's box" (FreeNAS as an iSCSI target), but when I heard you use a RAID card, I put the Pandora's box back and decided not to open it. What do you think about getting 8x 15K SAS drives and never worrying about TRIM or flash write blocks? Plenty of hypervisors in the past used 15K SAS drives for datastores, right? OR I could get 4 Intel 535 480GB for $119 each and put them in RAID10?

I just can't afford to buy both (8x 15K SAS and 4x Intel 535 480GB) just to try them.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I can't tell if it's in the cache, or in RAM, or when (and most importantly at what speed) it will get to the HDD.

Why do you care "what speed"? Figure out what your baseline speed is and then take anything above and beyond that as a gift from the computing gods.

FreeNAS over iSCSI is not likely to beat a high end LSI controller attached locally with SSD. There's too much latency in the whole thing. I want to say I see about 80-100MB/sec write speeds to the FreeNAS iSCSI filer under good conditions. However, it does so at tremendous cost ... 7TB of totally usable HDD-backed storage for about $7000.

By way of comparison, the LSI 3108 and some inexpensive SSD will get you there better. Just picked up a PNY 960GB SSD for $210 today at Best Buy. Even assuming we are talking about Samsung 850 EVO at $650/2TB, putting a bunch of EVO into a RAID6 for 8TB means 6 units at $650 = $3900 -> $4700 for 8TB of SSD storage.

My issue here has always been that I need kind of a modest level of shared storage for VM usage, but I have to look at the fact that while it's really nice to have a big whomping FreeNAS box, I can buy some modest Synology DS416slim's at $289 each, throw 2 x 2TB Spinpoint M9T's in there for $200, and then 2 x PNY 960GB's in there for $420, which means that for about $900 I can have 3TB of slowish dual tier storage. Individually these things aren't that great, but I don't need great performance for the vast majority of the VM's here... and when you cluster four or six of them together, well, that's a very compelling thing to contemplate. Runs around 15 watts per unit. With the SSD in RAID1, a VM can write at around 35MB/sec. But two of them can hit 60MB/sec aggregate. It's all about all the complexity adding latency... sigh.

So if you look at 4 of those, that's 60 watts for 8TB of HDD and 4TB of SSD storage. The SSD storage is basically almost as slow as the HDD storage except no seek related latency.

I was really hoping that FreeNAS would turn into a killer NAS platform for VM storage, but it has been five years and the only place it is truly winning is at large scale. If I need true speed, local datastore on a DAS disk array. If I need some shared storage, low power NAS can now handle SSD which is also large enough to be meaningful. Will it be fast? Not really. But it'll be fully redundant, and it won't be one big huge magic box that might someday take a crap and leave us hanging. That's a very powerful thing.

Compared to many of the big arrays like an EqualLogic or whatever, there's no doubt that FreeNAS can beat the crap out of that, but I am not seeing it as a major win here. I don't think there's any one right answer for VM storage. If you're doing massive levels of constant rewrites, SSD on direct attach storage is probably the wrong answer... and yes 15K SAS DAS HDD will be "kinda fast". But most people don't actually have the rewrite loads like that, so SSD might be a pleasant speed bump.
 
Joined
Nov 11, 2014
Messages
1,174
FreeNAS over iSCSI is not likely to beat a high end LSI controller attached locally with SSD. There's too much latency in the whole thing. I want to say I see about 80-100MB/sec write speeds to the FreeNAS iSCSI filer under good conditions. However, it does so at tremendous cost ... 7TB of totally usable HDD-backed storage for about $7000.

After hearing this... the "Pandora's box" (FreeNAS over iSCSI) is not just going back on the shelf, it's going in the dumpster. I didn't believe it was that bad (slow). You see, my FreeNAS in my signature is capable of around 500 MB/s over the network on a CIFS share. That's read, and almost the same for write. If presenting it to my workstation as an iSCSI target drops the speed to 100MB/s or worse, I'll get the sledgehammer out, and I'll even apologize for the noise I'm going to make :smile:
Why would anybody use iSCSI if it's that slow? Especially considering how expensive these SANs are.

VM can write at around 35MB/sec
WAIT! WHAT? What can write that slowly? Most SSDs, even in smaller capacities, can write at 100MB/s. Why would a VM write so slowly on a RAID1 SSD datastore that is capable of over 100MB/s per disk?! I remember you said you have 4x Intel 530 480GB in RAID10 capable of writing 560MB/s sustained, so a VM put there should be a little slower, but I would expect write speeds of... perhaps at least 400MB/s?
 
Joined
Nov 11, 2014
Messages
1,174
But most people don't actually have the rewrite loads like that, so SSD might be a pleasant speed bump.

You are right about that. I just have a few VMs that do a lot of writes and need that kind of speed, like a DVR/FTP server and a torrent box. For all other VMs, even a single SSD as a datastore (no redundancy is the price, but still) is fast and works great. I was hoping that by RAIDing a few SSDs like you did, I could find a solution and get the redundancy back.

Something very weird is happening with a VM that uses a single SSD as its datastore when it writes. Reads are almost as fast as the SSD natively is, around 480MB/s (Samsung 850 PRO 512GB), but when it's supposed to write it's like 10x slower and struggling; slower than a single mechanical HDD, which can write at 160 MB/s in a VM. I am hoping that putting a RAID card in between and adding a second SSD can change that. I just don't have the SSDs to try.
So I wonder, if you copy a file to a Windows VM on your (4xSSD) datastore, what write speed can it sustain?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think you're reading about every other word I write.

I can buy some modest Synology DS416slim's at $289 each, throw 2 x 2TB Spinpoint M9T's in there for $200, and then 2 x PNY 960GB's in there for $420, [...] With the SSD in RAID1, a VM can write at around 35MB/sec.

WAIT! WHAT? What can write that slowly? Most SSDs, even in smaller capacities, can write at 100MB/s. Why would a VM write so slowly on a RAID1 SSD datastore that is capable of over 100MB/s per disk?! I remember you said you have 4x Intel 530 480GB in RAID10 capable of writing 560MB/s sustained, so a VM put there should be a little slower, but I would expect write speeds of... perhaps at least 400MB/s?

I think I explained adequately. Read what I quoted again. Small Synology NAS. 35MB/sec.

So I wonder, if you copy a file to a Windows VM on your (4xSSD) datastore, what write speed can it sustain?

I don't have any 4xSSD datastores. We're pretty much exclusively mirroring for datastores. So you want to see something like this, is what I'm guessing?

[attached screenshot: ssd-to-ssd.png]


That's running from one datastore to another, both of which are RAID1 Intel 535's. You'll notice the characteristic ~800MB/sec for a second or two at the start, then things fall into around 360MB/sec for ongoing transfers, and then near the end speed bumps up a bit again, I'm guessing something to do with the amount of readahead dropping to zero, maybe freeing up space for more write cache again.
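
(If you'd rather watch that kind of transfer from the host side instead of from inside the guest, esxtop will do it; this assumes shell/SSH access to the ESXi host.)

Code:
esxtop                                 # interactive: press 'v' for the per-VM virtual disk view (MBREAD/s, MBWRTN/s)
esxtop -b -d 2 -n 30 > /tmp/io.csv     # or batch mode: 30 samples at 2-second intervals, for graphing later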
 
Joined
Nov 11, 2014
Messages
1,174
Small Synology NAS. 35MB/sec

My bad.:smile:

I don't have any 4xSSD datastores.

Here you misunderstood me. I didn't say "datastores", I said your (4xSSD) datastore. That implies a single datastore composed of 4 individual SSDs. Actually, I do remember they were actually 5 SSDs: 4 in RAID1 and one as a hot spare. I just didn't include the hot spare because in my case I'll be getting 4 SSDs, and a hot spare won't make any performance difference, so I left it out.

P.S. I got a little confused about RAID1 vs RAID10 after playing with the RAID card. When I hear RAID1 I always think of 2 HDDs in mirror mode (2 identical drives with identical data). When somebody says "I have 4 HDDs in RAID1", I think 4-way mirror: 4 HDDs with identical data on all four of them and the capacity of a single HDD.

I feel that's not what you have done. Perhaps you have 2 SSDs mirrored with identical data in one group (like a vdev), and then another group (vdev) of 2 SSDs mirrored with the same data between them but different from the first group, and both groups (vdevs) striped together like RAID0. If that's the case, isn't that exactly what RAID10 is supposed to be?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My bad.:)

Here you misunderstood me. I didn't say "datastores", I said your (4xSSD) datastore. That implies a single datastore composed of 4 individual SSDs. Actually, I do remember they were actually 5 SSDs: 4 in RAID1 and one as a hot spare.

...sigh...

I think you're reading about every other word I write.

there's two 480GB datastores made out of 5 Intel 535 480GB's, two sets of two mirrors and a standby.

so then you say

So either on the same datastore made of 4x Intel 535 SSDs, or on the other one made with 2x WD Red in RAID1?

*headache*

Actually, 4 x would be a bad design - in a crisis, it is better to have two separate mirror datastores, because if you were to have two drives fail in rapid succession, you could still shuffle things around to have a single SSD datastore with full redundancy.

and yet you come back to

I remember you said you have 4x Intel 530 480GB in RAID10 capable of writing 560MB/s

so clearly someone's not following along, and I don't think it's me...

...sigh...
 
Joined
Nov 11, 2014
Messages
1,174
So you did say 2 datastores. I just read between the lines and thought it was one. That changes everything. If you have 2 separate datastores, each a RAID1 of 2x Intel 535 480GB (I hope I didn't misread again), then you're not gaining any write speed; all the RAID card is giving you is the redundancy, right?
 