Ode to the Dell C2100/FS12-TY

crazyjpeters

Cadet
Joined
Feb 17, 2016
Messages
7
Has anyone run ESXi past 6.0? I'm pretty sure I've got the Dell-customized VMware image for 6.0, but is there any way to go past 6?
 
Joined
Dec 29, 2014
Messages
1,135
Depends on the hardware. I have an HP DL360G7 that works great on 6.0. I tried putting 6.5 on it, and it gave me a PSOD because the hardware is unsupported.
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
Has anyone run ESXi past 6.0? I'm pretty sure I've got the Dell-customized VMware image for 6.0, but is there any way to go past 6?

This is mostly going to depend on your processors. Support for the Xeon 55xx series was dropped in ESXi 6.5 (the last version that supported it was 6.0 U3); if you have a 56xx processor you should be able to run 6.5 (don't confuse this with it being 'supported' by VMware).

Depending on the exact hardware and BMC version, the system may get hung up on boot loading the IPMI drivers (I've had this happen with a lot of Dell's servers from this era with ESXi 6+). If this happens, hit Shift + O during boot-up and add the parameter noipmienabled, and it should boot. Once you get it up and running, you can edit this option in the host's advanced settings so it doesn't try to load the driver on future startups.
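Something along these lines from the ESXi shell should make that stick across reboots (a rough sketch only - the option name is case-sensitive, so confirm it against the kernel settings list on your build before setting anything):

esxcli system settings kernel list | grep -i ipmi                        # confirm the exact option name and current value
esxcli system settings kernel set --setting=noipmiEnabled --value=TRUE   # use the name exactly as listed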

Good luck!
 

crazyjpeters

Cadet
Joined
Feb 17, 2016
Messages
7
This is mostly going to depend on your processors. Support for the Xeon 55xx series was dropped in ESXi 6.5 (the last version that supported it was 6.0 U3); if you have a 56xx processor you should be able to run 6.5 (don't confuse this with it being 'supported' by VMware).

Depending on the exact hardware and BMC version, the system may get hung up on boot loading the IPMI drivers (I've had this happen with a lot of Dell's servers from this era with ESXi 6+). If this happens, hit Shift + O during boot-up and add the parameter noipmienabled, and it should boot. Once you get it up and running, you can edit this option in the host's advanced settings so it doesn't try to load the driver on future startups.

Good luck!

I’m running a pair of E5645 Xeons, so I suppose I should give it a go. Has anyone tried it on a C2100 specifically?
 

vcomtech

Cadet
Joined
Oct 2, 2018
Messages
1
I have a Dell FS12-TY that was formerly running ESXi with no issues. I formatted the machine, made a RAID 5 array, and installed Server 2012. It sees all the drives, but it won't let me make the OS partition bigger than 2TB. In Disk Management there is 5TB of other space sitting separate, and I can't combine it into one drive letter for some reason. I have 12 500GB drives installed.
There is no UEFI option in the BIOS for some reason. I have an H700 RAID controller. What am I doing wrong?
 

theowad

Cadet
Joined
Oct 4, 2018
Messages
1
Hi All,
I recently bought one of these C2100s, and there is one annoying thing I can't get rid of.
There are only 3 fans in the system, all working as they should - good RPM and quiet - but the FAN section in the Sensors tab shows errors stating that I'm missing a bunch of other fans in the system.

X System FAN6_1 0 RPM 0 RPM N/A 0 RPM N/A
X System FAN6_2 0 RPM 0 RPM N/A 0 RPM N/A
V System FAN5_1 3700 RPM 1000 RPM N/A 800 RPM N/A
X System FAN5_2 0 RPM 0 RPM N/A 0 RPM N/A
V System FAN4_1 4900 RPM 1000 RPM N/A 800 RPM N/A
X System FAN4_2 0 RPM 0 RPM N/A 0 RPM N/A
V System FAN3_1 3700 RPM 1000 RPM N/A 800 RPM N/A
X System FAN3_2 0 RPM 0 RPM N/A 0 RPM N/A
X System FAN2_1 0 RPM 0 RPM N/A 0 RPM N/A
X System FAN2_2 0 RPM 0 RPM N/A 0 RPM N/A
X System FAN1 0 RPM 0 RPM N/A 0 RPM N/A

I set the minimum thresholds to 0 via ipmitool to get rid of the blinking amber light, but when I disconnect power from the server it comes back with the default settings (throwing errors again).
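For reference, what I ran was along these lines (sensor names taken straight from the list above; the exact values are just what worked for me):

ipmitool sensor thresh FAN6_1 lower 0 0 0    # lower non-recoverable / critical / non-critical
ipmitool sensor thresh FAN6_2 lower 0 0 0
(and so on for each of the missing FANx_y sensors)

My guess is these changes only live in the BMC's volatile sensor data, which would explain why they come back after the box loses power.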
A couple of pages back in this thread I saw someone post a picture of the same section with only 3 sensors - just like it's supposed to be - so now I'm totally confused about why mine looks like this.
Does anyone know how to permanently change, disable, or remove the unnecessary fan sensors?

Here is my firmware configuration:

System Information
Manufacturer Dell
Product Name PowerEdge C2100

BMC Information

Firmware Version 1.86.24128
Firmware Updated Sat Sep 22 18:42:35 2018
Hardware Version 0.01

BIOS Version C99Q3B23
Product Name PowerEdge C2100
Manufacturer Dell
Manufacture Date 2011/05/04 13:04

Thanks for any help
 

crazyjpeters

Cadet
Joined
Feb 17, 2016
Messages
7
Not sure if anyone has tried this, but is anyone booting from a PCIe M.2 adapter? I’ve got one booting my P55 desktop, but it’s one of those Plextor M6 drives with a BIOS ROM extension, so it’ll work in a pre-UEFI BIOS.

And another thing: has anyone gotten a PCIe M.2 or NVMe drive to work as even an available device in ESXi or anything else?
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I have a Dell FS12-TY that was formerly running ESXi with no issues. I formatted the machine, made a RAID 5 array, and installed Server 2012. It sees all the drives, but it won't let me make the OS partition bigger than 2TB. In Disk Management there is 5TB of other space sitting separate, and I can't combine it into one drive letter for some reason. I have 12 500GB drives installed.
There is no UEFI option in the BIOS for some reason. I have an H700 RAID controller. What am I doing wrong?
Did you initialize the disk as GPT or MBR? MBR will limit you to a 2TB system drive.
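If it turns out to be MBR, here's a rough sketch of redoing a data disk as GPT from an elevated command prompt (the disk number is a placeholder - check list disk first, and note that clean wipes the selected disk):

diskpart
list disk
rem disk 1 below is a placeholder - select the data disk, not the OS disk
select disk 1
rem clean destroys everything on the selected disk
clean
convert gpt
exit

One caveat: without UEFI the box can't boot from a GPT disk, so the OS partition itself stays under 2TB either way - the usual workaround is a smaller virtual disk for the OS and a second, GPT-initialized virtual disk for data.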
 

TommyJohn

Cadet
Joined
Oct 26, 2018
Messages
3
Hi Everyone, thanks for all the incredible information here... brain hurts!

Well after 3 failed ReadyNAS devices I've decided to walk away from consumer devices that are prone to failure and try the DIY approach with the highly recommended Dell FS12-TY.

Based on my poor experience with failed hardware RAID devices I've decided to go the FreeNAS ZFS route. First of all, I'm a noob when it comes to FreeNAS and server-grade hardware. I've built many home PCs in my day but never really dealt with servers, HBA/RAID controllers, etc. So this will all be new to me.

My goal is to set up an ESXi server running FreeNAS in one VM and Windows 10 in another. FreeNAS will simply be my storage server, and Windows will run my Plex server. The reason I am going the Windows route for Plex, as opposed to the built-in Plex, is that I want to take advantage of hardware transcoding on current GPUs (Quadro P2000 or 1050 or similar).

So, here are some questions I have - my apologies if they seem dumb, but again, this is all new to me.

1. Can I drop a graphics card like an NVIDIA Quadro or similar into the FS12-TY? I understand there are 2x PCIe slots available. This is to take advantage of hardware transcoding in Plex when used within a Windows VM. I am assuming BSD will be able to use the NVIDIA card with the latest BSD NVIDIA 11x driver?

2. The particular FS12-TY I'm looking at includes the Dell C1RTV LSI SAS2008, which supports 8 SATA channels as I understand it, but this server supports 24 drives - how do I expand beyond 8 drives? Can I use the integrated onboard motherboard ports for >3gb drives? How many?

3. Can I create separate ZFS arrays with different combinations of drives? For example, if I have 4x 3TB drives, can I make a single ZFS array out of those, and then if I have 4x 4TB drives, make a separate array of those, all running from the same LSI card?

4. Does the LSI 2008 card take up a pci-e slot or does it have its own dedicated slot?

5. Am I correct in assuming Windows will be able to see these drive arrays? Anything special I need to do?

6. Can I disable one of the power supplies to reduce power consumption? Or is the 2nd supply not actually in use, just acting as a failover?
A single 750W supply already seems like overkill, but I understand this is a server rig and the 2nd supply provides redundancy.

I think that's it for now, but I'm sure I'll have more questions down the road. Thanks in advance!
 
Joined
Feb 2, 2016
Messages
574
So, here are some questions I have,

1. Plex doesn't really do GPU-assisted encoding. Whatever the feature list says, the footnotes take away. Check the Plex forums for details and disappointment.

2. The server's backplane allows you to have more drives than ports. Others will correct me if I'm wrong but with that one card you should be able to use all 24 drive bays.

3. Yes. In fact, you can mix and match drives inside the same pool, and there may be a good reason to do so (see the sketch after this list).

4. That part number leads me to a card that doesn't use a slot (but will use PCIe lanes).

5. For certain values of "see", yes. For other values of "see", no. If you share the drives from FreeNAS, Windows will be able to access the data on the drives.

6. In most cases, two power supplies share the load and then, when one dies, a single power supply carries the load. Typically, two supplies running at half capacity are quieter than one supply running at full capacity. The fans REALLY start cranking when there is only one supply in use.
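On the pools question, a minimal sketch of both layouts from the shell (the pool names and the da0-da7 device names are placeholders, and on FreeNAS you'd normally build this through the GUI rather than at the command line):

zpool create tank3tb raidz1 da0 da1 da2 da3    # one pool per drive size...
zpool create tank4tb raidz1 da4 da5 da6 da7

zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7    # ...or one pool with two RAIDZ1 vdevs

Either layout runs happily off a single SAS2008 HBA; zpool may ask you to add -f if it complains about the mismatched vdev sizes.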

As for ESXi, that's not the route I would take given your use case. I'd let FreeNAS run on bare metal - because that is how it runs best - and then I'd run the Windows instance under FreeNAS using its built-in VM tools. Actually, given that Plex runs at least as well and probably better under Linux, I've got my Plex running in a Linux VM. That way I still have all the functionality of Plex without paying for a Windows license.

Cheers,
Matt
 

TommyJohn

Cadet
Joined
Oct 26, 2018
Messages
3
1. Plex doesn't really do GPU-assisted encoding. Whatever the feature list says, the footnotes take away. Check the Plex forums for details and disappointment.

Thanks for your answers, Matt. I'm curious about the first one. I've read the Plex and Reddit forums regarding hardware transcoding, and GPU acceleration is significant and definitely supported - what makes you state otherwise? Are you referring to transcoding in a VM, which Plex doesn't officially support? I've read this is bypassed through PCI passthrough of the GPU, and that this is best run on ESXi. Perhaps because you're running your Windows VM through FreeNAS, that's why you're not able to take advantage of it?
 
Joined
Feb 2, 2016
Messages
574
Plex hardware-accelerated streaming is not ready for prime time, @TommyJohn:
  • The video quality may be lower, appearing more blurry or blocky. This is especially true and more noticeable when streaming at resolutions below 720p or lower bit rate source material. (Hardware-accelerated video encoders are faster, but lower quality than software encoders.)
  • Only files encoded with H.264 or HEVC video can take advantage of hardware-accelerated decoding.
  • On Linux, hardware-accelerated decoding is not supported on NVIDIA GPUs.
  • Intel Quick Sync is required for hardware-accelerated decoding.
  • Docker: unsupported.
  • Virtual machine: unsupported even when hardware exposed.
  • Hardware-accelerated HEVC 8-bit decoding on Windows and Linux requires a 6th-generation Intel Core (Skylake, 2015) or newer.
  • Hardware-accelerated HEVC 10-bit decoding on Windows and Linux requires a 7th-generation Intel Core (Kabylake, 2016) or newer.
  • Windows and Linux devices using NVIDIA GeForce graphic cards are limited to hardware-accelerated encoding of 2 videos at a time.
There are so many requirements that Plex hardware assistance might as well not exist. And even when it does exist, the resulting output is lower quality. What's the point?

Cheers,
Matt
 

TommyJohn

Cadet
Joined
Oct 26, 2018
Messages
3
Fair enough, I guess what you mean to say is that it is supported, but with limitations. That's fine; I think the 2x Xeon CPUs will probably do a fine job at transcoding anyway. I do intend to have direct play for all my in-home devices, but for family I will need some transcoding ability.

Now this may seem obvious to most, but again, for a server newbie like me: how do I set up the server before being able to remote in? Do the USB ports function for a mouse and keyboard?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175

crazyjpeters

Cadet
Joined
Feb 17, 2016
Messages
7
So the stock VMware 6.7 image seems to install with no PSOD. It does load up, but I can’t seem to get the vSphere client running. I can’t even connect directly to it on the IP address either. Something funny going on there. Driver issues?
 

crazyjpeters

Cadet
Joined
Feb 17, 2016
Messages
7
Success! It wasn’t 6.7 exactly. Apparently having my switch expect a trunked connection across both of my Intel 80275 dual-NIC ports makes ESXi unable to connect. All good now.
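For anyone who hits the same thing: a standard vSwitch doesn't speak LACP, so a switch port-channel expecting it will black-hole the management traffic until you break the trunk. A couple of stock esxcli commands that are handy for checking this sort of thing (nothing exotic):

esxcli network vswitch standard list       # shows which uplinks vSwitch0 is actually using
esxcli network ip interface ipv4 get       # confirms the management IP is configured as expected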
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
Interesting topic :) I'm thinking about buying the C2100 but have some questions.

1. Which RAID controller do you recommend - the PERC H200, the PERC H700, or something else? Should I flash the firmware?

2. How large a 3.5" SATA disk does it support?

In the spec I found: 3.5” SATA (7.2K): 500GB, 1TB, 2TB

Only 2TB???
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
Interesting topic :) I'm thinking about buying the C2100 but have some questions.

1. Which RAID controller do you recommend - the PERC H200, the PERC H700, or something else? Should I flash the firmware?

2. How large a 3.5" SATA disk does it support?

In the spec I found: 3.5” SATA (7.2K): 500GB, 1TB, 2TB

Only 2TB???

1) The PERC H200 will work, but you'll need to crossflash it to IT mode - there are tons of posts on here about how to do this. You could also use any other LSI SAS2008-based card (an IBM M1015, for example) that's been crossflashed to IT mode - these can be had for as low as $45 on eBay (less if you're willing to buy from China).
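The usual crossflash sequence looks roughly like this (run from a DOS or EFI shell; the file names below are the ones from the LSI 9211-8i IT package and may differ in whatever bundle you download - follow one of the step-by-step guides on this forum rather than treating this as gospel):

rem 1) wipe the existing OEM flash
sas2flsh -o -e 6
rem 2) write the IT-mode firmware
sas2flsh -o -f 2118it.bin
rem 3) optional boot BIOS - skip it if you won't boot from the HBA
sas2flsh -o -b mptsas2.rom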

Do not, under any circumstances, use a PERC H700 - there's no such thing as IT firmware for this card, and it doesn't pass the drives through correctly. If your server comes with one, sell it and get a SAS2008-based card.

I wouldn't waste money on a 12G SAS card as the backplane in these servers is limited to 6G...

2) I have successfully run 3TB HGST/Hitachi SATA drives, 5TB WD Red SATA drives, and 6TB Seagate SAS drives in these servers. I can't definitively say where the upper end of support is, but I've never had an issue with any drive I've tried... I think the specs you found are most likely limited by the RAID/HBA card that originally shipped with the server (which we already determined you're going to be replacing).
 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
Thank you for the answer. Does the IBM M1015 support 12 drives?

The M1015 (and every other LSI SAS2008 controller) is an 8-channel controller; however, the backplane in the C2100 is what's referred to as a SAS expander, which allows the connection of more drives...

So, yes! You can use all 12 drive bays in the C2100 chassis when using an LSI SAS2008-based HBA. You'll need two mini-SAS (SFF-8087) cables to connect the HBA to the backplane (these should be included with your server).
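Once it's cabled up, a quick sanity check from the FreeNAS shell (just a suggestion for confirming everything is visible, not something you strictly need):

camcontrol devlist             # every populated bay should show up as a daX device
dmesg | grep -Ei 'mps|ses'     # mps is the FreeBSD driver for the SAS2008; ses0 is the expander's enclosure device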
 