It's not very clear what you're saying here.
Imagine you live where you live, which I seem to recall is Chicago, and the machines you're managing are on the east coast of the United States.
Now imagine that you stick a USB thumb drive in them to boot, or a nonredundant hard disk, as suggested above:
perhaps you may have one drive connected to the MB as the ESXi boot drive, unless you prefer USB boot for the hypervisor?!
Now imagine that for whatever dumb reason, that USB boot device, or nonredundant hard drive, fails. A hypervisor that is running dozens of virtual machines is now offline because of a stupid hardware failure. For a business, this is a crisis, and it means we spend lots of money to have someone address that issue right-damn-now. It's probably several hundred dollars for a same-day airline ticket from ORD to IAD, and in the meantime there's downtime that shouldn't be happening, maybe losing money or business or causing other trouble. All because the boot device wasn't redundant.
So that's not the way you deploy critical resources in the data center. Instead, you deploy RAID. After all, a high-quality hypervisor box costs ~$5K-$20K, and the cost to make sure the thing can actually boot without being bottle-fed is pretty minor in comparison. So now when there's a disk failure, the RAID controller goes "frak, I gotta replace the drive" and rebuilds the RAID1. It can even lose ANOTHER drive and still continue running in degraded mode. Meanwhile, a notification of the disk failure has been sent off and someone schedules a trip to replace the disk, while the virtualization host just keeps chugging along, running its VMs without any interruption. Someone slots in the new drive, and the array is ready to repeat the process the next time a disk fails. No downtime. Our hypervisors sometimes hit 4 or 5 years of continuous uptime.
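To make that failure sequence concrete, here's a toy sketch of the behavior, reading the setup as RAID1 plus a hot spare; the Raid1 class and the drive names are made up for illustration, not any real controller's API:

```python
# Toy model of the RAID1-plus-hot-spare behavior described above.
# Purely illustrative: names and behavior are assumptions, not a
# real controller's firmware or API.

class Raid1:
    def __init__(self, drives, spares):
        self.active = list(drives)   # mirrored members
        self.spares = list(spares)   # idle hot spares
        self.failed = []

    def fail_drive(self, drive):
        self.active.remove(drive)
        self.failed.append(drive)
        print(f"{drive} failed -> notify admin, schedule a swap")
        if self.spares:
            spare = self.spares.pop(0)
            self.active.append(spare)
            print(f"rebuilding mirror onto hot spare {spare}")
        elif self.active:
            print("no spare left: running degraded on", self.active)
        else:
            print("array offline: no surviving member")

    def online(self):
        return len(self.active) > 0


array = Raid1(drives=["sda", "sdb"], spares=["sdc"])
array.fail_drive("sda")   # spare takes over, VMs keep running
array.fail_drive("sdb")   # degraded, but still online
print("hypervisor still booted:", array.online())
```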
But I am assuming that ESXi is installed on one of the two "pools", since you said "they boot from RAID". So is it on the datastore made of 4x Intel 535 SSDs, or on the other one made of 2x WD Reds in RAID1?
Actually, a single 4x array would be a bad design: in a crisis, it is better to have two separate mirrored datastores, because if you were to have two drives fail in rapid succession, you could still shuffle things around to end up with a single SSD datastore with full redundancy.
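A quick enumeration makes the point; the drive names are hypothetical, and I'm reading "4x" as one four-drive parity array (e.g. RAID5) for the comparison:

```python
# Enumerate every two-drive failure combination for 4 SSDs.
# Assumption: the single-array alternative is 4-drive RAID5,
# which cannot survive a second concurrent failure.

from itertools import combinations

drives = ["ssd1", "ssd2", "ssd3", "ssd4"]
mirrors = [("ssd1", "ssd2"), ("ssd3", "ssd4")]  # two RAID1 datastores

for failed in combinations(drives, 2):
    # RAID5 across all four drives: two failures lose everything
    raid5 = "ALL DATA LOST"
    # Two RAID1 pairs: a datastore dies only if BOTH members fail
    dead = [m for m in mirrors if set(m) <= set(failed)]
    pairs = f"{len(dead)} of 2 datastores lost"
    print(f"fail {failed}: RAID5 -> {raid5}; 2x RAID1 -> {pairs}")
```

In the four cross-pair cases both datastores are merely degraded, and the two surviving drives can be re-mirrored into one fully redundant datastore, which is the shuffling described above.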
From a practical point of view, we boot from the HDDs because (1) it's cheaper storage and more suited to the task, and (2) ESXi is a bit of a bear about booting from SSD, since it's more complicated to tag a boot datastore as SSD after creation.
In the meantime, while I am doing my extensive reading on LSI RAID controllers, I was wondering about these things:
OK, using SSDs with a RAID controller, for example the LSI 9271, is well supported, and even the Samsung 850 Pro is listed there. But I wonder how one can maintain SSD performance behind a RAID controller, on any SSD, without TRIM?
Don't fill it...? Sometimes the simple obvious solution is best.
The real thing here though is that the question is kind of up in the air even after all these years. The best summary I know of is the V-Front one:
http://www.v-front.de/2013/10/faq-using-ssds-with-esxi.html
So there's reason to think it may be supported if enabled, but then this may cause other issues, and of course we've seen TRIM being poorly supported by SSD firmware. The pragmatic approach is to buy big SSDs and not fill them.
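The back-of-the-envelope version of "buy big and don't fill it" looks something like this; the 25% free-space figure is an assumed rule of thumb, not a vendor spec:

```python
# Rough sizing helper for manual over-provisioning: leave a chunk of
# the SSD unallocated so the controller always has spare flash to
# shuffle writes into, even when TRIM never reaches it.
# The 25% default is an assumption, not a manufacturer number.

def usable_capacity_gb(raw_gb, keep_free_fraction=0.25):
    """Capacity to actually allocate from an SSD of raw_gb size."""
    return raw_gb * (1 - keep_free_fraction)

for size in (256, 512, 1024):
    print(f"{size} GB SSD -> plan around "
          f"{usable_capacity_gb(size):.0f} GB of datastore")
```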
I read a lot of people asking if ESXi supports TRIM, and then I said to myself: it's just not possible. The underlying OS needs to support TRIM in order to instruct the SSD to do it. With a RAID controller, the OS can't see the SSDs anymore; it sees a new virtual storage device. And even without RAID (as in my case), the VM no longer sees an SSD to exercise TRIM on; it sees the virtual storage device that ESXi presents to it.
ESXi is perfectly capable of presenting a virtual disk to a VM that appears to be an SSD. No idea if it handles TRIM or does anything differently. But there are so many layers involved that it is difficult to credibly believe there's a reliable way to make this work right, especially with things like disk migrations.
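A toy sketch of that layering argument: a TRIM/UNMAP command has to survive every layer to reach the flash, and one uncooperative layer kills it. The layer behavior below is an illustrative assumption, not ESXi or controller internals:

```python
# Illustrative stack: each function is a layer a storage command
# passes through. The behavior is assumed for the sake of argument,
# not taken from ESXi or any RAID firmware.

def guest_os(cmd):
    return virtual_disk(cmd)

def virtual_disk(cmd):
    # The VM sees a virtual disk (a VMDK); whether UNMAP is forwarded
    # depends on the hypervisor's emulation, not on the real SSD.
    return hypervisor(cmd)

def hypervisor(cmd):
    return raid_controller(cmd)

def raid_controller(cmd):
    # Classic hardware RAID presents one logical volume and, in many
    # firmwares, has no way to map UNMAP across the member disks.
    if cmd == "UNMAP":
        return "dropped at RAID layer"
    return ssd(cmd)

def ssd(cmd):
    return f"{cmd} reached the flash"

print(guest_os("WRITE"))   # -> WRITE reached the flash
print(guest_os("UNMAP"))   # -> dropped at RAID layer
```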
So I wonder again: how does one keep SSDs from degrading in performance?
I have one Samsung 850 Pro 512 GB in my desktop (Windows 7) as a data drive connected to the MB, and after much longer use and around 20 TB of writes, I ran a speed test, just as an example of how beautiful the numbers can look when performance is not degrading.
P.S. And keep in mind that this performance is from a single SSD drive.
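A crude way to sanity-check sequential write speed without a GUI tool can be sketched in a few lines of Python; the scratch path is a stand-in, and this is nowhere near a substitute for a proper benchmark tool:

```python
# Rough sequential-write timing sketch. fsync keeps the OS page cache
# from faking the result, but a real benchmark does far more than this.

import os
import time

path = "bench.tmp"                   # hypothetical scratch file on the SSD
block = b"\0" * (4 * 1024 * 1024)    # 4 MiB per write
total_mib = 1024                     # write 1 GiB in total

start = time.monotonic()
with open(path, "wb") as f:
    for _ in range(total_mib // 4):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())             # force the data out to the device
elapsed = time.monotonic() - start

print(f"sequential write: {total_mib / elapsed:.0f} MiB/s")
os.remove(path)
```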
Yeah. So?
See, I don't see any big deal in just saying something like "well, get a bigger SSD and don't fill the damn thing." From my perspective, virtualization significantly reduces the capex cost of servers, and if I've got to double the size of a cheap component like an HDD or SSD in order to make things work right, I'm so much money ahead that I really don't give a damn. As it is, I've settled for cheap SSDs instead of the mid-range DC S3500s we'd originally planned to deploy, and life is good.