
Ode to the Dell C2100/FS12-TY

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,108
How do you want to drive it, anyway?
 
Joined
Oct 12, 2017
Messages
5
Is there not a header on the board for this? Look at the manual, pages 14 and 15.
 

Attachments

Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
16,108
I don't mean the physical connection as much as the software side of things.
 

Redcoat

FreeNAS Expert
Joined
Feb 18, 2014
Messages
1,302
So, here is a question for everyone. Has ANYONE seen the supposed LCD screen that goes in the rack ear on the right side of the server? The documentation says that little black piece is supposed to be an LCD screen. Also, if not, then what do you peeps think about popping a dual USB port in that spot (since the thing only has 2 ports on the back)?
I do not have one on my FS12-TY version of the C2100, nor do I know where it would connect to my MB, nor the cable route to it (no case penetrations). With respect to the use of additional USB ports, my MB does not have the additional internal USB headers indicated in the manual version you referenced.
 
Joined
Oct 12, 2017
Messages
5
Well, to answer your question: the BMC/BIOS controls the output of the screen, if it works anything like every other Dell server since the PE 1950. As for the second comment about USB headers: I just checked my FS12-TY and my true C2100, and BOTH have the same motherboard in them. Both have the USB headers soldered on. The true C2100 has more connectors than the FS12-TY, but the solder spots are there.

Also, as for the "there are no holes for the cables" point: this was also strange to me. I figured if I was going to do this, then I would just drill some.

You have to admit it looks like it's supposed to be USB ports...

UPDATE: Looked at my C1100, and it also has the same board in it. The C1100 has front USB ports, which are plugged into the board at the USB headers that I pointed out on the FS12-TY.

UPDATE: The USB port cable colors are backwards... For some unknown reason, Dell made the colors of the wires the exact opposite of what they should be. So, looking at the picture and converting from Dell colors to real colors, here is the correct wire config. It is essentially how every other motherboard in the world lays out its USB headers. Ignore the colors of the wires in the picture of the C1100.
---------------------
| R W G B B |
| R W G B   |
---------------------
R=Red | W=White | G=Green | B=Black
 

Attachments

Joined
Oct 18, 2017
Messages
4
So, I recently got one of these mysteriously lovely FS12-TY machines for a pretty good price: $100 off of Craigslist here in Portland. I think it will make a decent FreeNAS box, with 64GB of DDR3 ECC, two Intel Xeon X5570 CPUs running at 2.93GHz, dual Gigabit NICs, etc.

The issue is the drive controller and the backplane. It came with a "SAS 6/iR" card (which I am assuming is the same thing people are calling the PERC 6/iR?), connected via dual SFF-8484 connectors to a 12-bay, 3.5" hot-swap backplane. I'm using 4TB WD Red drives, so I know this card won't work for me; I'm looking at getting an H200/H310 or M1015 controller and flashing it with the correct IT firmware.

My question is about the connectors and the backplane. If I've done my research right, the H200/H310/M1015 cards don't have SFF-8484 connectors but SFF-8087 connectors instead. Is it a simple matter of buying 8087-to-8484 adapter cables (Amazon is currently selling some for about $35), or am I going to need a new backplane with SFF-8087 connectors on it? I want to be able to hot-swap drives larger than 2TB at 6 Gb/s, and I know practically nothing about these SFF connector types.

I think it will work, but I wanted to ask the experts just in case I've missed something. Thanks!
 
Joined
Oct 12, 2017
Messages
5
So, I recently got one of these mysteriously lovely FS12-TY machines for a pretty good price: $100 off of Craigslist here in Portland. I think it will make a decent FreeNAS box, with 64GB of DDR3 ECC, two Intel Xeon X5570 CPUs running at 2.93GHz, dual Gigabit NICs, etc.

The issue is the drive controller and the backplane. It came with a "SAS 6/iR" card (which I am assuming is the same thing people are calling the PERC 6/iR?), connected via dual SFF-8484 connectors to a 12-bay, 3.5" hot-swap backplane. I'm using 4TB WD Red drives, so I know this card won't work for me; I'm looking at getting an H200/H310 or M1015 controller and flashing it with the correct IT firmware.

My question is about the connectors and the backplane. If I've done my research right, the H200/H310/M1015 cards don't have SFF-8484 connectors but SFF-8087 connectors instead. Is it a simple matter of buying 8087-to-8484 adapter cables (Amazon is currently selling some for about $35), or am I going to need a new backplane with SFF-8087 connectors on it? I want to be able to hot-swap drives larger than 2TB at 6 Gb/s, and I know practically nothing about these SFF connector types.

I think it will work, but I wanted to ask the experts just in case I've missed something. Thanks!
Ok, from my experience, that backplane is a piece of crap. The only GOOD one is the expander backplane, regardless of what everyone says about it. It's capable of 6 Gb/s transfers and works with just about any RAID/HBA card that can support 12 drives. The one thing that is VERY important to know is that you MUST use the right cables: Dell part number 05X8NH. No other cables will work. The backplane I am talking about is part number 9NXC7.

As a side note, there is one other backplane that is good: a straight passthrough backplane with a SATA connection for every drive. That backplane, however, is rare, and I have only ever seen one for sale on eBay, EVER.
 

BetYourBottom

Newbie
Joined
Nov 26, 2016
Messages
79
Ok, from my experience, that backplane is a piece of crap. The only GOOD one is the expander backplane, regardless of what everyone says about it. It's capable of 6 Gb/s transfers and works with just about any RAID/HBA card that can support 12 drives. The one thing that is VERY important to know is that you MUST use the right cables: Dell part number 05X8NH. No other cables will work. The backplane I am talking about is part number 9NXC7.

As a side note, there is one other backplane that is good: a straight passthrough backplane with a SATA connection for every drive. That backplane, however, is rare, and I have only ever seen one for sale on eBay, EVER.
I have the SFF-8484 backplane and it works fine. The only limitation is that each drive will be forced to run at SAS1/SATA2 speeds of 3 Gb/s, which isn't much of a limitation for HDDs. Also, despite the speed drop, this backplane doesn't seem to have the 2TB capacity limit found on most SAS1 backplanes.

So, I recently got one of these mysteriously lovely FS12-TY machines for a pretty good price: $100 off of Craigslist here in Portland. I think it will make a decent FreeNAS box, with 64GB of DDR3 ECC, two Intel Xeon X5570 CPUs running at 2.93GHz, dual Gigabit NICs, etc.

The issue is the drive controller and the backplane. It came with a "SAS 6/iR" card (which I am assuming is the same thing people are calling the PERC 6/iR?), connected via dual SFF-8484 connectors to a 12-bay, 3.5" hot-swap backplane. I'm using 4TB WD Red drives, so I know this card won't work for me; I'm looking at getting an H200/H310 or M1015 controller and flashing it with the correct IT firmware.

My question is about the connectors and the backplane. If I've done my research right, the H200/H310/M1015 cards don't have SFF-8484 connectors but SFF-8087 connectors instead. Is it a simple matter of buying 8087-to-8484 adapter cables (Amazon is currently selling some for about $35), or am I going to need a new backplane with SFF-8087 connectors on it? I want to be able to hot-swap drives larger than 2TB at 6 Gb/s, and I know practically nothing about these SFF connector types.

I think it will work, but I wanted to ask the experts just in case I've missed something. Thanks!
This cable will work just fine for that backplane; I use it myself: https://www.monoprice.com/product?p_id=8191

Besides the cable, you will need jumpers (like the ones from old IDE drives, e.g. https://www.amazon.com/dp/B00NQB8TJE) to force your drives into SATA2 legacy mode. It's not going to hurt speeds at all, because you will still get 3 Gb/s per drive, which is more than enough for HDDs. Also, as I mentioned in my reply to the other user, this won't limit your available capacity. For where the pins go, see https://support.wdc.com/knowledgebase/answer.aspx?ID=981

You can try your drives without the jumpers to see if this backplane issue was limited to my unit (I haven't confirmed it with other owners of this backplane yet); however, be sure to watch for errors during testing. I found errors while running the standard badblocks testing for new drives, which caused the testing to take far longer than normal. If you run into the same issue, the jumpers cleared it up entirely for me. I think the backplane is SAS1 but doesn't actually tell the devices it connects to about its speed limitation, so running it at higher speeds causes it to wig out.
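If you want to automate that badblocks check, a minimal Python sketch follows. The device path, block size, and helper names are illustrative assumptions, not from this thread; badblocks itself prints one failing block number per line on stdout, which is what the parser counts.

```python
import subprocess

def parse_badblocks_output(stdout: str) -> int:
    """badblocks prints one failing block number per line; count them."""
    return sum(1 for line in stdout.splitlines() if line.strip())

def count_bad_blocks(device: str, block_size: int = 4096) -> int:
    """Run a read-only badblocks pass over `device` (e.g. '/dev/da1')."""
    result = subprocess.run(
        ["badblocks", "-b", str(block_size), device],
        capture_output=True, text=True, check=False,
    )
    return parse_badblocks_output(result.stdout)
```

A persistent non-zero count with the jumpers installed would point at a genuine drive problem rather than the backplane speed issue described above.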
 
Joined
Oct 18, 2017
Messages
4
I have the SFF-8484 backplane and it works fine. The only limitation is that each drive will be forced to run at SAS1/SATA2 speeds of 3 Gb/s, which isn't much of a limitation for HDDs. Also, despite the speed drop, this backplane doesn't seem to have the 2TB capacity limit found on most SAS1 backplanes.
I understand that feeding my NAS with consumer-grade 1 Gb NICs won't saturate a 3 Gb/s SATA pipe; however, won't running in SATA2 significantly lengthen maintenance tasks like scrubbing? (Essentially making the scrub take twice as long?) I'm going to start off with six 4TB drives in RAIDZ2, but I have no experience with how long it would take to scrub an array of that size.

I can't shake the feeling that if I have 6Gbps drives, I should try to find a way to utilize them as such.

Thank you both for taking the time to respond. I really appreciate it.
 

danb35

FreeNAS Wizard
Joined
Aug 16, 2011
Messages
10,801
won't running in SATA2 significantly lengthen maintenance tasks like scrubbing? (Essentially making the scrub take twice as long?)
Only if the drive itself were consistently able to transfer data at SATA3 rates, which spinning rust won't be. If you were dealing with an SSD pool, that'd be a different story.
 

Johnnie Black

FreeNAS Guru
Joined
May 10, 2017
Messages
774
however won't running in SATA2 significantly lengthen maintenance tasks like scrubbing?
Currently, no disk on the market can saturate a SATA2 connection, but we're getting very close: the 10/12TB HGST helium drives' max sequential speed on the outer cylinders is around 275 MB/s, which is about the "real world" max transfer rate of a SATA2 link.
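Those figures check out with quick arithmetic; a minimal sketch, assuming SATA2's 3 Gb/s line rate and its 8b/10b encoding (the 275 MB/s drive figure is the one quoted above):

```python
# SATA2 link budget vs. a fast spinning disk (rough numbers).
LINE_RATE_BPS = 3_000_000_000  # SATA2 signalling rate: 3 Gb/s on the wire
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b encoding: 8 payload bits per 10 line bits

# bits/s -> payload bits/s -> bytes/s -> MB/s
usable_mb_s = LINE_RATE_BPS * ENCODING_EFFICIENCY / 8 / 1e6
drive_mb_s = 275  # outer-cylinder sequential rate of a 10/12TB helium drive

print(f"SATA2 payload ceiling: {usable_mb_s:.0f} MB/s")
print(f"Headroom over the drive: {usable_mb_s - drive_mb_s:.0f} MB/s")
```

That leaves roughly 300 MB/s of payload bandwidth per link, so a single spinning drive still has a small amount of headroom, exactly as the post says.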
 
Joined
Oct 18, 2017
Messages
4
Okay, that settles it then. I'm going to keep my current backplane and run at 3Gbps.

Thanks again everybody!
 
Joined
Oct 12, 2017
Messages
5
I have my C2100 and FS12-TY with the expander backplane. Both have 12 6TB drives in them; they run just fine and quite fast. The expander backplane is quite finicky, though: you have to use the right cables or it will not detect anything but one drive.

If anyone needs one of those mini-SAS backplanes with the 3 SFF-8484 connectors, I have one, plus the cables and a PERC 6/i. Message me if you're interested.

It was mentioned that no spinning disk on the market can saturate the SATA2 protocol. This may be true, but you are leaving something out: what happens when you have 12 disks all on the same controller? SATA2 cannot provide the bandwidth; it WILL get saturated. Even with the expander backplane it will still get saturated with that many disks, but the margin is MUCH higher. This happened to me; it is the reason I went to the expander backplane in the first place. After I added my 6th drive to my array, I was only getting write speeds of about 20 MB/s and read speeds of 50 MB/s; granted, the read was about normal, but the write took a major hit. I even tried different configurations, including different RAID cards and HBA cards. It was not until I switched to a SATA3 backplane that I got read/write speeds that were not terrible; with the expander backplane I was getting about 100 MB/s write and 150-ish read.

I have had quite some experience with these servers, as I have 4 of them now in different configurations. I do not actually run FreeNAS; I only joined this forum to talk about these servers. I thought that, since this was an already well-established source of info on this server and its configurations, I could add some information I have discovered from running and servicing them.
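The saturation argument can be put into rough numbers; a minimal sketch, assuming all twelve drives share a single SAS1 x4 wide port (a worst case; a direct-attach backplane with three SFF-8484 connectors gives each drive its own lane):

```python
# Shared-link arithmetic: 12 drives behind one SAS1 (3 Gb/s) x4 wide port.
LANE_MB_S = 300   # one 3 Gb/s lane carries ~300 MB/s of payload after 8b/10b
LANES = 4         # an SFF-8484 (or SFF-8087) connector carries four lanes
DRIVES = 12

link_mb_s = LANE_MB_S * LANES        # total bandwidth through the wide port
per_drive_mb_s = link_mb_s / DRIVES  # each drive's share with all 12 streaming

print(f"Wide-port total: {link_mb_s} MB/s; per-drive share: {per_drive_mb_s:.0f} MB/s")
```

With all twelve drives streaming at once, each gets only about 100 MB/s, well under what a modern drive can deliver sequentially, which is consistent with the slowdown described above.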

Also, from personal experience: the PERC 6 and PERC 5 RAID cards are crap. They are slow and severely outdated. Even with the firmware mod to allow them to utilize larger-than-2TB drives, they are, at the best of times... buggy. Do yourself a favor and trash that POS; get an LSI 9260/9270 (for hardware RAID) or a 9211/H200 in IT/HBA mode (for software RAID).
 
Joined
Jan 2, 2018
Messages
1
Alup,

With respect to the backplane with 3 SFF-8484 connectors, is that a direct attach backplane? What are the limitations everyone seems to have with them?
Can I just use that with a new SAS controller and have direct access to each drive?
 
Joined
Oct 18, 2017
Messages
4
Alup,

With respect to the backplane with 3 SFF-8484 connectors, is that a direct attach backplane? What are the limitations everyone seems to have with them?
Can I just use that with a new SAS controller and have direct access to each drive?
That is what I have, and yes, the drives connect directly to SATA ports on the backplane. I have the 3.5-inch drive version, so I have 12 horizontally aligned drive bays that attach directly to the backplane. There is also a version with 24 vertically aligned 2.5-inch drive bays, but I don't have that one, so I know nothing about it.
 

BetYourBottom

Newbie
Joined
Nov 26, 2016
Messages
79
Alup,

With respect to the backplane with 3 SFF-8484 connectors, is that a direct attach backplane?

What are the limitations everyone seems to have with them?

Can I just use that with a new SAS controller and have direct access to each drive?
It seems to be direct attach.

The only possible limitation is that I believe it's limited to SATA2 (and you need to set it with jumpers on the drives); however, SATA2 is still faster than any HDD, so as long as you aren't doing an SSD array you'll be fine.

When I got it, I replaced my PERC 6/i with a Dell H200 mezzanine that I managed to find for $50 at the time; it works fine, and I have tested a 9211-8i that also worked, so other HBAs should work just fine. However, for most modern HBAs you'll need an SFF-8484-to-SFF-8087 cable to connect the backplane to the HBA; remember that SAS cables are directional, so you need to get one that specifies the SFF-8484 end goes to the backplane.
 

southwow

FreeNAS Experienced
Joined
Jan 18, 2018
Messages
114
Just wanted to make a note on this. Per Dell's spec sheet, it states "up to 38TB of disk capacity". Now, I have not had the chance to test this yet, since the most I have put in a single system is 36TB (12 x 3TB). I am, however, actively searching for some 4TB drives on eBay (cuz I am cheap) and may eventually be able to see if I can go beyond that. I think so, since I am using an HBA (H200 cross-flashed to LSI 9211-8i).

If anyone else has already tested this their feedback would be greatly appreciated.
I can confirm success with 5TB and 8TB drives on the H200, M5015, and H310 cross-flashed to 9211-8I.

I'll also add that I'm getting ready to do 10TB drives.
 
Joined
Jan 28, 2018
Messages
8
Has anyone figured out how to power on a C2100 without a motherboard? I'd like to use one as an expansion chassis next to my existing one. I'd rather not have a motherboard and riser in the new case, but the power button connects to the motherboard.

I'm thinking I can put an LSI SAS9201-16e in my existing C2100, a dual port SFF-8088 to SFF-8087 card in the new one, run a pair of SFF-8088 cables between them, and a pair of 05X8NH cables from there to a 9NXC7 (expander) backplane.
 

BetYourBottom

Newbie
Joined
Nov 26, 2016
Messages
79
Has anyone figured out how to power on a C2100 without a motherboard? I'd like to use one as an expansion chassis next to my existing one. I'd rather not have a motherboard and riser in the new case, but the power button connects to the motherboard.

I'm thinking I can put an LSI SAS9201-16e in my existing C2100, a dual port SFF-8088 to SFF-8087 card in the new one, run a pair of SFF-8088 cables between them, and a pair of 05X8NH cables from there to a 9NXC7 (expander) backplane.
I'm not sure this would make a good expansion chassis, because of all the unused space in the case, but I won't try to stop you or convince you not to do it.

On to your actual question: I believe power supplies usually turn on when the green wire in the connector is shorted to ground. If you don't care about using the unit for anything but running the drives, then you could probably just short that wire to ground, and the power supply will always be on when plugged in.

If you want something a little more complex that still lets you use the power button, then you'll probably need to probe the cable coming off the power button to find which wires connect to the button itself, then use an Arduino or something to sense when it's pressed and turn the PSU on/off.

Another option might be to splice a wire onto the green one coming off the PSU, run it alongside your SAS cables and up into the other C2100 you are using, and splice it into the other power supply's green sense wire. That should make the expander turn on whenever the main unit turns on. This one probably makes the most sense, since it's just going to power the drives for the main computer anyway.
 

cheath94

Newbie
Joined
Mar 19, 2018
Messages
11
Just received my C2100, and the first thing I noticed was how loud the fans were. In IPMI, the fans were spinning anywhere from 7,500 to 8,000 RPM. I flashed BMC version 1.70 and BIOS B16; this made no change. I tried BMC 1.66, still no change. Does anyone have any ideas for me to try to get these fan speeds down?
 