BUILD Supermicro X9SRL-F Build Check


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Depends on what your goals are. If you might have full height cards, or you want to be able to stock fewer spare parts, the WIO board is nice because the same exact board is in both units.

Part of that depends on understanding what your use case is. For example, I know that for a hypervisor, what we need is to be able to have 4x 10GbE and a RAID controller. Assuming X520-SR2 cards, that's two network cards and a RAID controller, which leaves two slots available. So I could potentially drop in another card or two if things change in the future and that's not a problem.
 
Joined
Nov 11, 2014
Messages
1,174
So there are three WD Red 2.5" 1TB HDD's, two in RAID1 and a standby. Those are for "slow" bulk storage. Then there's two 480GB datastores made out of 5 Intel 535 480GB's, two sets of two mirrors and a standby.
Are these WD Red 1TB drives SATA or SAS? Are they on the RAID card? Are the SSDs on the RAID card?


Now the thing is, the 535's have a relatively low write endurance ... 40GB/day

Oh, if you only knew how much I hate the 535. Intel spits in its customers' faces with the 535 series. The flash is so small and crappy it's getting into USB flash drive territory. Intel finally made me switch to the Samsung 850 PRO; its flash is so much bigger and more reliable, somewhere between the S5000 and S7000 as far as endurance goes, but that's beside the point.


For the X9 ESXi build I would like to step back a generation on the RAID controller because of the cost, but I still intend to use SSDs. Perhaps a SAS2208?
 
Joined
Nov 11, 2014
Messages
1,174
... the WIO board is nice because the same exact board is in both units.

That's a very good point I didn't think of.:smile:

As for card size (full- or half-height), it shouldn't be a problem to get any NIC or RAID card with a low-profile bracket. In fact, the only thing I can think of that you can't get in low profile is a 16i HBA?!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's a very good point I didn't think of.:)

As for card size (full- or half-height), it shouldn't be a problem to get any NIC or RAID card with a low-profile bracket. In fact, the only thing I can think of that you can't get in low profile is a 16i HBA?!

GPU's, HBA's with top-exit SFF8087's, Chelsio T580-CR ("no 40G for U!" ---- pun intended!), Addonics AD3M2SPX4, Intel I340-F4, etc.
 
Joined
Nov 11, 2014
Messages
1,174
GPU's, HBA's with top-exit SFF8087's, Chelsio T580-CR ("no 40G for U!" ---- pun intended!), Addonics AD3M2SPX4, Intel I340-F4, etc.

Damn, you are right again. :) I have not used a GPU or a 40GbE NIC and perhaps never will, but I already suffer from HBAs with top-exit SFF8087s. I can see why they make them this way, but it's not good for WIO either; the connectors should be on the back, facing their source (HDD, SSD, or backplane), unless the card is used in a consumer desktop or something. Damn! Just when I thought I had figured out what I need, I'm back at the drawing board.
But I appreciate it very much. Better to go back now than waste money on something I'll regret getting.
 
Joined
Nov 11, 2014
Messages
1,174
I was asking about your ESXi storage: is all of it connected to the RAID controller?

Right now I have this issue I am trying to resolve:
I have a Samsung 850 PRO connected directly to the SATA ports on the motherboard (no RAID card) and use it as a datastore for ESXi. When I test the drive speed inside a VM I get 492MB/s read but only 84MB/s write. The Samsung 850 PRO is capable of about 500MB/s read and write if I install Windows on it bare metal, without ESXi.

I am trying to figure out why my SSD's write speed is slower than a mechanical drive's when it is used as a datastore for ESXi.
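
(For what it's worth, the kind of test I mean can be reproduced inside a Linux guest with a couple of fio runs - this is only a sketch, fio has to be installed, and the file path and sizes are arbitrary:)

# sequential write, then sequential read, bypassing the guest page cache
fio --name=seqwrite --filename=/tmp/fiotest --rw=write --bs=1M --size=4g --direct=1
fio --name=seqread --filename=/tmp/fiotest --rw=read --bs=1M --size=4g --direct=1
rm /tmp/fiotest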
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I was asking about your ESXi storage: is all of it connected to the RAID controller?

<in his best Snape voice:> "Ob-vious-ly."

There's no way to do RAID1 in ESXi without a RAID controller.

Right now I have this issue I am trying to resolve:
I have a Samsung 850 PRO connected directly to the SATA ports on the motherboard (no RAID card) and use it as a datastore for ESXi. When I test the drive speed inside a VM I get 492MB/s read but only 84MB/s write. The Samsung 850 PRO is capable of about 500MB/s read and write if I install Windows on it bare metal, without ESXi.

I am trying to figure out why my SSD's write speed is slower than a mechanical drive's when it is used as a datastore for ESXi.

For a directly attached SATA device?

My offhand guess would simply be ESXi being finicky about the writes. Normally with a RAID controller, ESXi flags writes to happen synchronously, so if you have a low end RAID card (M1015 in IR mode, etc) you will end up seeing relatively low performance just like what you are describing. My experience with hooking disks up directly to mainboard ports isn't particularly vast, because for what we do here, the loss of a datastore is a significant operational issue. Plus, a RAID card lets me use lower quality disks and maintain spares, and just magically does the necessary things to keep that functioning.
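
(If you want a feel for how big a synchronous-write penalty can be in general, you can demo it inside any Linux guest with fio - purely an illustration, fio assumed installed, and the absolute numbers will vary all over the place:)

# the same sequential write, buffered vs. with an fsync forced after every write
fio --name=buffered --filename=/tmp/synctest --rw=write --bs=64k --size=1g
fio --name=synced --filename=/tmp/synctest --rw=write --bs=64k --size=1g --fsync=1
rm /tmp/synctest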

Anyways as an example of what I suspect you're facing:

I put two 8GB virtual disks on a VM. One of our hypes here without a caching RAID controller pumps a whopping 9MBytes/sec out to local RAID1 HDD and 50MBytes/sec to local RAID1 SSD.

Moving that same VM to a hypervisor with an LSI3108 and 2GB cache, I get a whopping 600MBytes/sec for the first two seconds while writing to RAID1 HDD, ending up at 75MB/sec. The underlying disks are capable of about 115MB/sec.

With the SSD, I get 750MBytes/sec for the first one second, and 595MBytes/sec for the remainder of the disk. That's far beyond the 520MB/sec the 850 Evo's rated for; I'm guessing the RAID controller still has some data cached at the end.

In any case, the way to get performance out of ESXi is a RAID controller with a BBU. One of the best bits of hardware I've found is the Supermicro 3108 with its supercap. This gets you a 2GB write cache and a supercap-with-flash setup instead of a battery, which is all sorts of better. But the damn thing is like ~$700 all told.

So you might or might not have noticed my ears perking up when I saw this thread, where @danb35 has acquired a massive storage chassis with a 2208. That's ALSO a great controller, and you'll notice I made what some would call a lowball offer. It is the previous generation equivalent to the 3108.

There are a number of good options for RAID controllers, though, such as http://www.ebay.com/itm/LSI-MegaRAI...-25121-85C-w-Battery-LSli-BBU07-/191832339212
 
Joined
Nov 11, 2014
Messages
1,174
<in his best Snape voice:> "Ob-vious-ly."

There's no way to do RAID1 in ESXi without a RAID controller.

You are right, I am sorry. :smile:
I went back and re-read what you said about your setup and it is indeed "obvious": since all of your drives are RAID-ed, they will be on the RAID card. But perhaps you have one drive connected to the motherboard as an ESXi boot drive, unless you prefer USB boot for the hypervisor?!



I've been running away from RAID like a vampire from sunlight for most of my life, but it's time to face it again. I'll get a RAID card in my next, bigger ESXi host, which will have room for more than one expansion card - either a 1U WIO like yours or a 2U. For this ESXi host, though, I chose to put a 10GbE NIC in the only expansion slot available.
This will be for light virtualization loads, where neither disk redundancy nor high performance is needed.

So in my four bays connected to the motherboard I have all the drives separate:

1. Intel 530 SSD 120GB - boot drive
2. Samsung 850 PRO 256GB - datastore1
3. Samsung 850 PRO 512GB - datastore2
4. 1TB WD - datastore3

My thinking is that if the performance of a single drive is good enough for a bare-metal OS, it should be good enough for a VM. For example, my 1TB WD can do 200MB/s read and 190MB/s write on bare metal. Now, as a datastore (VMFS5) running the same OS as a VM (eager zeroed thick provisioned), the performance inside is 176MB/s read and 175MB/s write - perfectly fine for the job, with very little slowdown.
So no performance issues, as far as I can see, with mechanical drives connected to the motherboard as separate datastores.

The problems start with the SSDs: they act weird (490MB/s read, 86MB/s write) when used as a datastore. I am investigating why this is happening. I read this on a forum about slow SSD writes: ".. was due to the disabled disk's private DRAM cache, and more precisely on how badly flash memory need it to give good sustained performance." Either that has something to do with it, or ESXi not having TRIM has made my SSDs slow as hell (mostly on writes). I wish I remembered how the drive behaved when it was new and freshly set up as a datastore.

For a directly attached SATA device?

My offhand guess would simply be ESXi being finicky about the writes. Normally with a RAID controller, ESXi flags writes to happen synchronously, so if you have a low end RAID card (M1015 in IR mode, etc) you will end up seeing relatively low performance just like what you are describing. My experience with hooking disks up directly to mainboard ports isn't particularly vast, because for what we do here, the loss of a datastore is a significant operational issue. Plus, a RAID card lets me use lower quality disks and maintain spares, and just magically does the necessary things to keep that functioning.

If the second one is true (about the TRIM), this can be solved with a RAID controller that supports TRIM and will keep the SSDs at peak performance.


You didn't mention whether the speeds you are getting are just write speeds?


P.S. In parallel with this issue I am already looking for a RAID card for the next ESXi host. And yes, I did recently read your post about getting an older card for the X9, but you mentioned the LSI 9270CV, not the LSI 9260. I will need to read a lot more about the LSI RAID controller lineup and features before I ask you questions about it. If it weren't for the cost I would just get one like yours. :smile: But I am going to have to settle for a generation back; plus, yours being so new, it would probably need drivers for ESXi 5.5u2 (which I use), and I like it when the drivers work out of the box.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You are right, I am sorry. :)
I went back and re-read what you said about your setup and it is indeed "obvious": since all of your drives are RAID-ed, they will be on the RAID card. But perhaps you have one drive connected to the motherboard as an ESXi boot drive, unless you prefer USB boot for the hypervisor?!

No, these are must-boot machines. If they cannot be booted, someone's on a plane a few hours later to go fix them. That is super-expensive, usually $400-$800. They boot from RAID. The drives on the RAID are protected against failures to a reasonable level.

I've been running away from RAID like a vampire from sunlight for most of my life, but it's time to face it again. I'll get a RAID card in my next, bigger ESXi host, which will have room for more than one expansion card - either a 1U WIO like yours or a 2U. For this ESXi host, though, I chose to put a 10GbE NIC in the only expansion slot available.
This will be for light virtualization loads, where neither disk redundancy nor high performance is needed.

So in my four bays connected to the motherboard I have all the drives separate:

1. Intel 530 SSD 120GB - boot drive
2. Samsung 850 PRO 256GB - datastore1
3. Samsung 850 PRO 512GB - datastore2
4. 1TB WD - datastore3

My thinking is that if the performance of a single drive is good enough for a bare-metal OS, it should be good enough for a VM. For example, my 1TB WD can do 200MB/s read and 190MB/s write on bare metal. Now, as a datastore (VMFS5) running the same OS as a VM (eager zeroed thick provisioned), the performance inside is 176MB/s read and 175MB/s write - perfectly fine for the job, with very little slowdown.
So no performance issues, as far as I can see, with mechanical drives connected to the motherboard as separate datastores.

Somewhat outside my area of expertise.

The problems start with the SSDs: they act weird (490MB/s read, 86MB/s write) when used as a datastore. I am investigating why this is happening. I read this on a forum about slow SSD writes: ".. was due to the disabled disk's private DRAM cache, and more precisely on how badly flash memory need it to give good sustained performance." Either that has something to do with it, or ESXi not having TRIM has made my SSDs slow as hell (mostly on writes). I wish I remembered how the drive behaved when it was new and freshly set up as a datastore.

My theory would be that they're trying to do something funky with the disk writes that is causing the SSD to behave differently. I don't really recall what exactly ESXi tries to do for write flushing out to SATA.

It looks like the Samsung 850 Pro has a write cache that's enabled, and it looks like there's an option to disable flush cache, so that's probably where I'd start chiseling away at this.

You didn't mention whether the speeds you are getting are just write speeds?

Yeah. Read speeds will always be pretty good. Your problems are write related.

P.S. In parallel with this issue I am already looking for a RAID card for the next ESXi host. And yes, I did recently read your post about getting an older card for the X9, but you mentioned the LSI 9270CV, not the LSI 9260. I will need to read a lot more about the LSI RAID controller lineup and features before I ask you questions about it. If it weren't for the cost I would just get one like yours. :) But I am going to have to settle for a generation back; plus, yours being so new, it would probably need drivers for ESXi 5.5u2 (which I use), and I like it when the drivers work out of the box.

One of the advantages to the LSI stuff is that it does work out of the box. There's the VMware provided driver and then there's also an LSI provided driver that might be better. Of course a new LSI 3108 isn't going to work out of the box with ESXi 5.0, but that's to be expected.
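
(If you ever want to check which driver the box actually picked up for a card, the ESXi shell will tell you - just a sketch, and the exact VIB/module names vary by card generation:)

# list installed storage VIBs and loaded kernel modules related to LSI / MegaRAID
esxcli software vib list | grep -i -e lsi -e megaraid
esxcli system module list | grep -i -e megaraid -e mpt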

The 9260 is the first-generation 6Gbps card, based on the LSISAS2108 RAID-on-a-Chip controller - basically a 2009-era PCIe 2.0 card. It's the "Intel X520" of the RAID world. It's not THE fastest, or THE newest, or THE most energy efficient, but damn, everything supports it.

The other cards are kind of a confusing mash of 2108 and the newer PCIe 3.0 based LSISAS2208 RoC controller. The 2208's a dual core processor, with faster memory, and is therefore a somewhat better choice if you've got a lot of disks or SSD or whatever that would pose a challenge to the 2108.

Then of course there's the 9361 which is based on the new 12Gbps LSISAS3108. Very $$$$$$$$$$$$$ but pretty sweet.

http://docs.avagotech.com/docs/12352108

So if you look at that, there are two directions you can go. You can go with battery backup, which provides a few days worth of power to the controller cache in case of power outage. This is "okay" but if you need to move the RAID card to another chassis for recovery, disconnecting the battery dumps the cache. The batteries also require periodic replacement (~3 years) and are not suitable for a situation where you have a hypervisor that isn't always powered on. Also, ESXi will stop caching writes and fall back to slow mode when the battery's nonoptimal in its learn mode (you can hack around that though).

The CacheVault module, on the other hand, is essentially a flash disk and supercapacitor arrangement that provides power to the RAID controller after a power failure. When power fails, the RAID controller quickly dumps its cache to flash before the capacitor discharges. This is a much nicer solution than the BBU, but is only supported on newer controllers.
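
(For reference, the learn-mode workaround I mentioned above is usually just pinning the cache policy to write-back regardless of BBU state. Assuming a MegaRAID-family card with the storcli utility available, it looks roughly like this - controller and VD numbers are placeholders, and forcing "always write back" is only sane on a UPS-protected box:)

# check battery / CacheVault health
storcli64 /c0/bbu show all
storcli64 /c0/cv show all
# keep write-back caching even while the BBU is degraded or in a learn cycle ("awb" = always write back)
storcli64 /c0/vall set wrcache=awb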

Every storage admin I know would happily take a clue bat to whoever designed the BBU system.

This barely scratches the surface of what there is To Know about the LSI controllers.
 
Joined
Nov 11, 2014
Messages
1,174
No, these are must-boot machines. If they cannot be booted, someone's on a plane a few hours later to go fix them. That is super-expensive, usually $400-$800. They boot from RAID. The drives on the RAID are protected against failures to a reasonable level.

It's not very clear to me what you are saying here, but I am assuming that ESXi is installed on one of the two "pools", since you said "they boot from RAID". So it's either on the datastore made of 4x Intel 535 SSDs, or on the other one made with 2x WD Red in RAID1?


In the meantime, while I do my extensive reading on LSI RAID controllers, I was wondering about these things:
OK, using SSDs with a RAID controller, for example the LSI 9271, is well supported, and even the Samsung 850 PRO is on the compatibility list. But I wonder how one can maintain SSD performance behind a RAID controller, on any SSD, without TRIM?
I read a lot of people asking whether ESXi supports TRIM, and then I said to myself: it's just not possible. The underlying OS needs to support TRIM and instruct the SSD to do it. With a RAID controller the OS can't see the SSDs anymore - it sees a new virtual storage device - and without RAID (as in my case) the VM still doesn't see an SSD to exercise TRIM on; it sees the virtual storage device that ESXi presents to it.

So again I wonder: how do you keep SSDs from degrading in performance?

I have one Samsung 850 PRO 512GB in my desktop (Windows 7) as a data drive connected to the motherboard, and after much longer use and around 20TB of writes I did a speed test, just as an example of how beautiful the numbers can look when performance is not degrading.


P.S. And keep in mind that this performance is from a single SSD.
 

Attachments

  • Samsung 850 PRO 512GB on Desktop.PNG

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's not very clear to me what you are saying here,

Imagine you live where you live, which I seem to recall as Chicago, and the machines you're managing are on the east coast of the United States.

Now imagine that you stick a USB thumb drive in them to boot, or a nonredundant hard disk, as suggested above:

perhaps you have one drive connected to the motherboard as an ESXi boot drive, unless you prefer USB boot for the hypervisor?!

Now imagine that for whatever dumb reason, that USB boot device, or nonredundant hard drive, fails. A hypervisor that is running dozens of virtual machines is now offline because of a stupid hardware failure. For a business, this is a crisis, and it means we spend lots of money to have someone address that issue right-damn-now. It's probably several hundred dollars for a same-day airline ticket from ORD to IAD, and in the meantime there's a downtime that shouldn't be happening, maybe losing money or business or causing other troubles. All because the boot device wasn't redundant.

So that's not the way you deploy critical resources in the data center. Instead, you deploy RAID. After all, a high quality hypervisor box costs ~$5K-$20K, and the cost to make sure that the thing can actually boot without being bottle-fed is pretty minor in comparison. So now when there's a disk failure, the RAID controller goes "frak, I gotta replace the drive" and rebuilds the RAID1. Then it can even lose ANOTHER drive and still continue running in degraded mode. In the meantime, a notification of the disk failure has been sent off, and someone schedules a trip to replace the disk, and in the meantime the virtualization host just keeps chugging along, running its VM's without any interruption. Someone slots in the new drive and it is ready to repeat the process the next time a disk fails. No downtime. Our hypervisors sometimes hit 4 or 5 years continuous uptime.

but I am assuming that ESXi is installed on one of the two "pools", since you said "they boot from RAID". So it's either on the datastore made of 4x Intel 535 SSDs, or on the other one made with 2x WD Red in RAID1?

Actually, a single 4x datastore would be a bad design - in a crisis, it is better to have two separate mirror datastores, because if you were to have two drives fail in rapid succession, you could still shuffle things around to have a single SSD datastore with full redundancy.

From a practical point of view, we boot from the HDD's because (1) it's cheaper storage and more suited to the task, and (2) ESXi is a bit of a bear about booting from SSD, since it's a bit more complicated to tag a boot datastore as SSD after creation.
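
(For the record, the tagging I'm talking about is done with a SATP claim rule; on ESXi 5.x it goes roughly like this, where the naa. identifier is a placeholder for your actual device:)

# mark a device ESXi doesn't auto-detect as SSD (e.g. one hiding behind a RAID controller)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.xxxxxxxx --option="enable_ssd"
esxcli storage core claiming reclaim -d naa.xxxxxxxx
esxcli storage core device list -d naa.xxxxxxxx | grep "Is SSD"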

In the meantime, while I do my extensive reading on LSI RAID controllers, I was wondering about these things:
OK, using SSDs with a RAID controller, for example the LSI 9271, is well supported, and even the Samsung 850 PRO is on the compatibility list. But I wonder how one can maintain SSD performance behind a RAID controller, on any SSD, without TRIM?

Don't fill it...? Sometimes the simple obvious solution is best.

The real thing here though is that the question is kind of up in the air even after all these years. The best summary I know of is the V-Front one: http://www.v-front.de/2013/10/faq-using-ssds-with-esxi.html

So there's reason to think it may be supported if enabled, but then this may cause other issues, and of course we've seen TRIM being poorly supported by SSD firmware. The pragmatic approach is to buy big SSD's and don't fill them.

I read a lot of people asking whether ESXi supports TRIM, and then I said to myself: it's just not possible. The underlying OS needs to support TRIM and instruct the SSD to do it. With a RAID controller the OS can't see the SSDs anymore - it sees a new virtual storage device - and without RAID (as in my case) the VM still doesn't see an SSD to exercise TRIM on; it sees the virtual storage device that ESXi presents to it.

ESXi is perfectly capable of presenting a virtual disk to a VM appearing to be an SSD. No idea if it handles TRIM or does anything differently. But there are so many layers going on that it is a little difficult to credibly believe that there's a reliable way to make this work right, especially with things like disk migrations.

So again I wonder: how do you keep SSDs from degrading in performance?

I have one Samsung 850 PRO 512GB in my desktop (Windows 7) as a data drive connected to the motherboard, and after much longer use and around 20TB of writes I did a speed test, just as an example of how beautiful the numbers can look when performance is not degrading.


P.S. And keep in mind that this performance is from a single SSD.

Yeah. So?

See, I don't see any big deal in just saying something like "well get a bigger SSD and don't fill the damn thing." From my perspective, virtualization significantly reduces the capex cost of servers, and if I've got to double the size of a cheap component like a HDD or SSD in order to make things work right, I'm so much money ahead that I really don't give a damn. As it is, I've settled for cheap SSD instead of the mid-range DC S3500's that we'd originally planned to deploy, and life is good.
 
Joined
Nov 11, 2014
Messages
1,174
Now imagine that you stick a USB thumb drive in them to boot, or a nonredundant hard disk, as suggested above:

I am not suggesting it is a good way. I was just asking for your opinion on how it should be done. I have no experience in the enterprise world, but I never liked the USB way. People in the enterprise world suggest the USB way for the obvious reasons, but they don't see the other reasons that also have to be considered - the ones you just pointed out. It really gives me great pleasure when someone with enterprise experience and understanding (a very important quality) confirms what I was only suspecting. :)

The real thing here though is that the question is kind of up in the air even after all these years. The best summary I know of is the V-Front one: http://www.v-front.de/2013/10/faq-using-ssds-with-esxi.html

The link you mention, and also this (JarryG), pretty much have the answer. I've been searching and reading the whole internet for the last two days, back and forward, many times even ending up in a loop by finding my own postings back here. :)
I don't know how you can pull this info from your sleeve so fast? White magic, I presume. This is what I did:

1. I deleted all my VMs from my datastore3 (Samsung 850 PRO 512GB).
2. I SSHed to the ESXi host and ran this command: esxcli storage vmfs unmap -l datastore3
3. Then I put my VM back in the datastore (deployed from an OVF file).

And here are the test results from inside the VM, before and after (see pics). AMAZING!!
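
(For anyone else trying this, the unmap command above also takes an optional reclaim-unit size - something like this, with the datastore label obviously being mine:)

# reclaim free VMFS blocks; -n is how many blocks get unmapped per pass (200 is the default)
esxcli storage vmfs unmap -l datastore3 -n 200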

Now let me ask you: how much space would you recommend leaving free? I read that some people even leave some space unpartitioned (JarryG's post), but I don't know if that would help.

Also, on an SSD used as a datastore, do you do thin or thick provisioning? I always do thick eager-zeroed on mechanical HDDs, but I don't know about SSDs?!
 

Attachments

  • before.PNG
  • after.PNG

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
but they don't see the other reasons that also have to be considered - the ones you just pointed out.

Well, you see, I'm largely with you on the "hate RAID" thing; RAID adds another entire level of complexity and annoyance to the process, especially if you screw up the first few times. People who don't do their homework up front wind up in the entry-level RAID cheap seats (M1015 in IR/MFI mode, etc) or making other mistakes. For example, for a hypervisor, I don't see much value in 3.5" drive bays or attached-to-MB storage because I tend to like to put SSD and RAID in there. But then I see stuff like the new HPE Proliant EC200a and I go W....T....F....?! Two 3.5" HDD on a 64GB hypervisor makes no sense to me. But then I think about it a little more and come to realize that these guys might only be putting three or four VM's on there, and they probably haven't bothered to RAID, so that's actually two datastores... and their VM footprint is probably not "a dozen high performance FreeBSD systems each in 256MB of RAM".

As long as we're talking hates, I also hate virtualization because instead of a physical piece of gear that you can unplug and go beat some clues into it with the ten pound sledge, you are instead left cursing and swearing at what is effectively a large computer's daydream.

But these things also bring with them interesting new possibilities.

The link you mention, and also this (JarryG), pretty much have the answer. I've been searching and reading the whole internet for the last two days, back and forward, many times even ending up in a loop by finding my own postings back here. :)
I don't know how you can pull this info from your sleeve so fast?

In the end, any insoluble technical problem around here ends up laid at my feet. I know the answers to these questions for the opposite reason you want to know them. You want to make it go faster (presumably). I have to be able to guarantee that we're not going artificially fast and are going to hit a wall.

I haven't been putting SSD's into hypervisors to make them as-fast-as-SSD's. I've been putting them in with the expectation that if I've got a bunch of VM's hammering away on I/O, that the VM's see something that's at least a reasonable approximation of HDD speeds. I mean, I'd *like* for ESXi and a RAID controller and a VM to be able to pass TRIM notifications back and forth, but the reality seems to be that (at least for RAID1, the use case I care about) the RAID controller is not able to deal with this at the current time. So I just shrug at that and don't worry too much.

It helps to have perspective. I used to use punch cards. "Fastest" is more fun, of course, but for actually getting work done, "merely fast" is quite sufficient.

White magic, I presume. This is what I did:

1. I deleted all my VMs from my datastore3 (Samsung 850 PRO 512GB).
2. I SSHed to the ESXi host and ran this command: esxcli storage vmfs unmap -l datastore3
3. Then I put my VM back in the datastore (deployed from an OVF file).

And here are the test results from inside the VM, before and after (see pics). AMAZING!!

Now let me ask you: how much space would you recommend leaving free? I read that some people even leave some space unpartitioned (JarryG's post), but I don't know if that would help.

That only helps if you're doing a lot of writes. The drive is already overprovisioned by some certain amount that the manufacturer doesn't tell us, and the drive manages to supervise its pool of available blocks from that. Leaving some space unprovisioned, or overprovisioning the drives, increases that pool of blocks. But if you're not doing a lot of writes, you're not burning through that pool at a mad pace, and the SSD controller ought to be able to keep pace with a leisurely level of garbage collection and recycling.

My theory on VM design is to avoid doing a lot of trite writes. And I'm usually not filling datastores up with max data anyways, so kinda the same thing in a way.

Also, on an SSD used as a datastore, do you do thin or thick provisioning? I always do thick eager-zeroed on mechanical HDDs, but I don't know about SSDs?!

I don't see a lot of value to thick provisioning, but part of that's because our VM's don't do a lot of writes, part of that is because the amount of space we have might not be sufficient to support EVERY virtual disk in a fully expanded role, and part of that is that it makes things like storage vMotion etc more problematic. Right now I'm actually transmitting VM's across the country to a client, and part of making that process happen in a reasonable amount of time is zeroing unused space, then migrating the VM from one datastore to another, and all of a sudden the ~25-30GB of vmdk files for the VM are down to around 3GB. I don't see the value in reserving 30GB of space for something that'll never actually use more than maybe 10, and it is a LOT more convenient to be slinging around the smaller amounts of data.
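
(The mechanics of that shrink, for anyone curious, are roughly: zero the free space inside the guest, then either storage-vMotion to a thin disk or punch the zeroes out of the existing thin vmdk. A sketch, assuming a Linux guest, with the paths as placeholders:)

# inside the guest: fill the free space with zeroes, then remove the filler file
dd if=/dev/zero of=/zerofill bs=1M || true
rm -f /zerofill && sync

# on the ESXi host, with the VM powered off: deallocate the zeroed blocks from a thin vmdk
vmkfstools --punchzero /vmfs/volumes/datastore1/myvm/myvm.vmdk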
 
Joined
Nov 11, 2014
Messages
1,174
For example, for a hypervisor, I don't see much value in 3.5" drive bays
The only reason in my case, with a 4x 3.5" chassis, is flexibility. In the 3.5" bays I can put either 2.5" drives (with an adapter) or 3.5" drives. In fact three of my 3.5" bays hold SSDs using adapters, and the fourth one holds the only 3.5" mechanical WD. But otherwise I see it the same way.


As long as we're talking hates, I also hate virtualization because instead of a physical piece of gear that you can unplug and go beat some clues into it with the ten pound sledge, you are instead left cursing and swearing at what is effectively a large computer's daydream.

But these things also bring with them interesting new possibilities.

That's very well put. That's exactly how I feel. It really does bring some possibilities and conveniences, but you don't have to virtualize everything. I, for example, would not make my main FreeNAS server a virtual one, even if everything were done properly so it works reliably. I just don't like the idea of one box for everything (FreeNAS, router, ESXi, washer, dryer, etc.), because everything can much more easily turn into nothing.
You know how well a plastic box that is a router, switch, modem, file server, etc. all in one will work. (Daily reboot* highly recommended.) :smile:


I don't see a lot of value to thick provisioning, but part of that's because our VM's don't do a lot of writes

When writes are no concern and an SSD is used, then yes, because there is not much you can do to help an SSD with writes. But on a mechanical HDD I think eager-zeroed thick is a must. There is no reason to kill writes from 170MB/s down to 18MB/s when the drive is perfectly capable of sustaining 170MB/s with thick eager provisioning. I put "eager" in bold because unless it is thick eager provisioned, there is not really any difference between thick lazy and thin except showing more occupied space. But I know you already know this. :smile:
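
(For reference, an eager-zeroed disk can also be created straight from the ESXi shell with vmkfstools - just a sketch, the size and path are made up:)

# create a 40GB eager-zeroed thick virtual disk (every block zeroed up front, so no first-write penalty)
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore3/testvm/testvm.vmdk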



By the way, despite my report from yesterday posting the good test numbers from inside a VM, the SSD still struggles on writes. I can still copy large files from a VM (read) 4-5 times faster than I can copy to a VM (write). I am giving up for now. I will still keep the VMs that don't do many writes on SSD, but for VMs that do large downloads and such I may have to use a mechanical HDD with a RAID controller. (Even without a RAID controller, a single mechanical HDD could be faster than an SSD for large sequential writes.)

Damn SSDs! Can't live with them, can't live without them, as they say.
I have never had a SAS drive, or anything above 7200rpm like a 10K or 15K drive, but I am really eager to play with them. I am reading about them and getting even more interested. Perhaps I should purchase some old ones from eBay just to get my feet wet. Is it true that SAS drives can use the "second path to the drive" for more bandwidth?


*Daily reboot - this term also describes the essence of Comcast phone troubleshooting help.
 

Attachments

  • 2.5 adapters.jpg
Joined
Nov 11, 2014
Messages
1,174

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Ahh, you don't know what it's like to chase a 15K Cheetah (Seagate). Although they were expensive for their size, I remember buying 450 and 600GB drives; those 15K drives were fast. I still manage an ESXi server that's running these drives.

I have never had a SAS drive, or anything above 7200rpm like a 10K or 15K drive,
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have some ST34501W's in stock if anyone happens to need 4.5GB SCSI UW drives.
 
Joined
Nov 11, 2014
Messages
1,174
Ahh, you don't know what it's like to chase a 15K Cheetah (Seagate). Although they were expensive for their size, I remember buying 450 and 600GB drives; those 15K drives were fast. I still manage an ESXi server that's running these drives.

So tell me, were they loud like an airport then? :smile:
I am trying to keep my server rack as quiet as possible. I never actually notice the noise from 7200rpm drives, because you'll hear the fans before the HDDs, but I don't know how 15K drives will be. And what about the temps - any numbers you can give?

P.S. No wonder a lot of Dell hypervisors come with 8x 15K SAS instead of SSDs. I would probably like that too.
 
Joined
Nov 11, 2014
Messages
1,174
I have some ST34501W's in stock if anyone happens to need 4.5GB SCSI UW drives.

Well, that Cheetah will perhaps need a walking stick to move. :smile: I could outpace one like that just by walking next to her.
 