Ode to the Dell C2100/FS12-TY

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
I'm curious why it's suggested to avoid the older backplane (besides the older, no-longer-standard SFF-8484 connectors). It seems like there shouldn't be any problems even though it's SAS1: since the backplane is 1:1 (no expander), it shouldn't limit bandwidth (at least as long as SSDs aren't used).

I also heard mention of some sort of 2TB limit in this thread? What would cause that? I'm concerned because I was planning on grabbing a model with an older backplane and using 4TB drives.

Are there any other special things to watch out for that I haven't noticed in this thread? Does it need a special variety of RAM (other than registered ECC)?
 

bbddpp

Explorer
Joined
Dec 8, 2012
Messages
91
I hope I don't offend anyone in saying this, but after some research, given my current needs, I decided to give OpenMediaVault a try last night. I'm still having some hardware errors on the C2100, which leads me to believe it's not the software but the actual server hardware, with a faulty component somewhere in the backplane or cabling causing the unreliability. Can I share the log here and get opinions from anyone who sees the errors at the exact time the issue happens, to see if anything jumps out at you? This could be useful to everyone if I identify a bad component in the C2100 hardware. I'd like to be able to take advantage of the 30-day warranty and get a new part if that's the case.

If, given that I'm not running this on FreeNAS, you'd prefer I take the help request elsewhere, I fully understand.

Thanks all. BTW, BetYourBottom - I am using 5 TB drives in here, will let you know if they become suspect as the problem if we get a look at the logs in this thread or a new thread.
 

Linkman

Patron
Joined
Feb 19, 2015
Messages
219
I also heard mention of some sort of 2TB limit in this thread? What would cause that? I'm concerned because I was planning on grabbing a model with an older backplane and using 4TB drives.

AFAIK, SAS1 is hardware-limited to 2TB, and the expanders have the same limit. It's an addressing issue, though I'm sure (since SAS is new to me) that someone more knowledgeable will correct me if I'm wrong.
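For what it's worth, the 2TB figure lines up with a 32-bit LBA limit; that this is the actual cause in these expander chips is my assumption, not something I've confirmed:

    2^32 sectors × 512 bytes/sector = 2,199,023,255,552 bytes = 2 TiB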
 

BetYourBottom

Contributor
Joined
Nov 26, 2016
Messages
141
Thanks all. BTW, BetYourBottom - I am using 5 TB drives in here, will let you know if they become suspect as the problem if we get a look at the logs in this thread or a new thread.

I would think if the drives are detected at 5TB at all, you're good in that regard. What backplane do you have? Is it the one with 3 wide SAS connections (SFF-8484)? That's the one I'm concerned about.

AFAIK, SAS1 is hardware-limited to 2TB, and the expanders have the same limit. It's an addressing issue, though I'm sure (since SAS is new to me) that someone more knowledgeable will correct me if I'm wrong.

That's why I'm confused. The backplane would be 1:1, so I'd think there would be no hardware limit on the backplane itself, only on the RAID card it connects to. So if I swap the RAID card for a SAS2 one, I figure I wouldn't be limited. I don't know for sure either.
 

bbddpp

Explorer
Joined
Dec 8, 2012
Messages
91
I wanted to come back and give some impressions on the C2100 I bought from the deal posted earlier in this thread.

First off, thanks for recommending this machine. I was trying so hard to solve my problem with existing parts, and the right answer was to rebuild my infrastructure from scratch with this machine as a solid base. For a whole-home server running applications like Plex, NZBGet, and Sonarr, unpacking files, streaming, etc., the thing does not break a sweat. Plenty of RAM and horsepower.

Noise level is great. I have it in the basement, so even better. My old tower had drives in the mid-30s °C; drives are all steady under 30 °C in this thing. Very pleased. I was cooking drives in my ITX case and my external SATA JBOD boxes. This is great.

I will eventually want to extend this system with an additional JBOD shelf, so I would love recommendations on the parts I'd need to add bays to this thing down the road.

Also, since I'm new to this form factor and actual server gear: if there are any settings in the BIOS I should be setting or checking to make sure I have this thing configured for proper performance, I'd love any tips there. I really didn't change any settings and just assumed what was sent had everything set the way I needed it.

I am running the OS on a small 64 GB SSD inside, and so far I have 8 of the 12 bays filled, mostly with 5 TB drives. They are working very well and speeds seem great. I did do an rsync between two drives over ssh that took surprisingly long, but rsync (or the ssh encryption overhead) may have been the bottleneck there.
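In case anyone wants to reproduce the comparison, the commands would look roughly like this (paths are placeholders; since both drives are in the same box, the second form skips ssh, and its encryption overhead, entirely):

    # over ssh (what I did) - the cipher can bottleneck on older CPUs
    rsync -a --progress /mnt/disk1/data/ localhost:/mnt/disk2/data/
    # purely local - no ssh in the path
    rsync -a --progress /mnt/disk1/data/ /mnt/disk2/data/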

Anyway, just wanted to say thanks, very pleased to have this machine as the backbone of my home media serving.
 

Dotty

Contributor
Joined
Dec 10, 2016
Messages
125
I just came across this thread.
I installed FreeNAS on a number of these C2100s from eBay, plus a couple that I actually purchased back in 2011 directly from Dell.
Just so you know, these babies draw 3.6 amps when equipped with dual CPUs and fully populated with WD Red 6TB HDDs (if you use SAS drives, I'm sure it's more).
Where I live I pay 11 cents per kWh, so, if my calculations are correct, that's almost $32 a month in electricity.
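Spelling the math out, assuming ~110 V mains (which is a guess on my part):

    3.6 A × 110 V ≈ 396 W
    396 W × 720 h/month ≈ 285 kWh
    285 kWh × $0.11/kWh ≈ $31/month

The 0.7 A i7 build mentioned below works out the same way: ≈ 77 W ≈ 55 kWh ≈ $6/month.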

I moved my personal FreeNAS from a C2100 to a Kontron Mini ITX board with an i7 and 6 of the WD Red 6TB drives, and I went down to 0.7 amps (about $6 a month in electricity).
I run ownCloud, Plex, Transmission, iSCSI for ESXi, etc., and I can't tell the difference between the C2100s and the i7.
I benchmarked both with "dd" (very empirically) and the i7 was actually faster.
My file transfers do feel faster in real life too, but everything else is just the same.
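For context, by a "dd" benchmark I mean something along these lines; the path and sizes are placeholders, not the exact commands I ran:

    # write ~10 GB, then read it back (on ZFS, disable compression on the
    # test dataset first, or the stream of zeros will inflate the numbers)
    dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=10000
    dd if=/mnt/tank/ddtest of=/dev/null bs=1M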

Also, these C2100s run really hot; unless you live at the North Pole, where the heat coming off these servers would actually save you money, you should also factor in the energy cost of cooling them down.

Just in case electricity cost matters to you.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
I moved my personal FreeNAS from a C2100 to a Kontron Mini ITX board with an i7 ...

Having seen your other thread regarding FreeNAS branding, is this what you recommend for your clients?

i7's don't support ECC RAM. Read the long-running thread regarding ECC vs. non-ECC RAM.
 

Dotty

Contributor
Joined
Dec 10, 2016
Messages
125
Having seen your other thread regarding FreeNAS branding, is this what you recommend for your clients?

i7's don't support ECC RAM. Read the long-running thread regarding ECC vs. non-ECC RAM.

I use this board:
http://www.kontron.com/products/boa...actors/motherboards/mini-itx/ktqm87-mitx.html
The board has long-term support and availability; that's one of the reasons I picked it.
It supports ECC.

I picked the variant with the i7-4700EQ:
https://ark.intel.com/products/75469/Intel-Core-i7-4700EQ-Processor-6M-Cache-up-to-3_40-GHz
It also supports ECC.

I put in 16GB of ECC RAM.

FreeNAS boots from a 16GB stick plugged internally into a USB hub attached to one of the headers:
https://www.amazon.com/dp/B0031ESKJA/?tag=ozlp-20
(I checked and couldn't notice a difference booting from the USB 3.0 ports.)

1 Evo SSD, 256GB, on internal SATA for later use.
2 SSHDs, 1TB each, mirrored on internal SATA for Plex.
8 WD Red 6TB on a SAS9211-8i. I started with Areca cards in JBOD mode (beautiful, with out-of-band management via web and all), but the SMART implementation didn't work well with FreeNAS, so I switched to LSI with IT firmware instead.
The 6TB HDDs are configured as two volumes (each with 4 HDDs in RAIDZ): one pool for iSCSI extents and another for Windows shares, Time Machine, and Nextcloud.
The iSCSI extents are used by ESXi 6.0 servers (which, by the way, boot from USB and don't have any HDDs).
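A rough command-line sketch of that pool layout, with placeholder pool and device names (FreeNAS builds the same thing from the GUI):

    zpool create tank1 raidz da0 da1 da2 da3    # iSCSI extents
    zpool create tank2 raidz da4 da5 da6 da7    # shares / Time Machine / Nextcloud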

On the Kontron mobo, one onboard NIC handles Intel AMT and the file shares, Nextcloud, etc.; the other NIC is on a different VLAN serving iSCSI.
(The PCIe slot on this motherboard is x16 and the BIOS supports bifurcation, so you could actually split it into two x8s and install one HBA and one 10GbE NIC at the same time, but the cases I use don't have room for that.)
I have FreeNAS with 10GbE at another site; it's very fast, but when people hear the additional complications and cost, they don't want to go that route.

My only complaint with the board is that the video on the AMT KVM won't work if the system boots headless. I worked with Kontron engineers for a little while, but eventually they realized some other people outside Kontron would have to be involved, so they abandoned the effort. What I did instead was install dummy dongles behind a DP-to-DVI adapter. They stick out 2 inches behind the board, and there's a chance they get knocked out accidentally; I don't like it much, but whatever.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Dotty, I sincerely apologize for my earlier comment. 99% of the i7's, especially ones that other forum users have chosen, don't support ECC RAM. Looks like you found one that does.

Dotty

Contributor
Joined
Dec 10, 2016
Messages
125
Dotty, I sincerely apologize for my earlier comment. 99% of the i7's, especially ones that other forum users have chosen, don't support ECC RAM. Looks like you found one that does.
No problem!
I love those Kontron boards: pricey, but super reliable.
I have another box running VMware with that same board, inside a warehouse in Florida. No AC, very hot, two years, no problems; the out-of-band access has never failed, and the box has never crashed.

My dream is that they come out with a Mini ITX board based on Xeon D soon, with ECC, 10GbE SFP+, KVM, headless boot, M.2 support, a PCIe slot, and a low-profile design. Probably asking too much. :smile:

Back to the topic: those C2100s are hot, noisy, and power-hungry (especially that last part). I replaced the fans on one once and cut the noise in half, but the power draw stayed the same.
 

Fallon

Cadet
Joined
Nov 26, 2016
Messages
1
No problem!
I love those Kontron boards: pricey, but super reliable. ...

Back to the topic: those C2100s are hot, noisy, and power-hungry (especially that last part). ...

What chassis do you use for those Kontron systems? I keep looking at old servers for my first deployment, but the excessive heat, noise & power consumption keeps driving me to continue looking & planning.
 

Dotty

Contributor
Joined
Dec 10, 2016
Messages
125
What chassis do you use for those Kontron systems? I keep looking at old servers for my first deployment, but the excessive heat, noise & power consumption keeps driving me to continue looking & planning.
Heat = airflow, noise = fan choice, power = HDD + PSU choice.
Pick a case with enough space: the Silverstone DS380B is good; I've used it before.
Make sure you get fans that can be turned up with minimal noise. (Pick good Noiseblockers or Silent Wings; neither is well known, but I've used them for years on water-cooled 3D rendering workstations with great results, so I ended up using them for NAS builds too.)
Pick NAS HDDs, which use less power, or better yet, pick SSDs. Don't undersize or oversize the PSU: calculate your total power draw and pick the PSU accordingly, aiming for about 70% utilization at peak so the PSU fan doesn't overwork and start getting noisy.
(On the Kontrons I have a 400W PSU and it's plenty for all those HDDs, USB devices, etc.)
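(A worked example of the 70% rule, with made-up numbers: a build that peaks around 280 W wants roughly a 400 W PSU, since 280 / 400 = 70%.)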
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
So I got one of these for free the other week. The sound level compared to the Supermicro it replaced is incredible; the exhaust fan in my crawlspace actually makes more noise than my 3 servers do combined.

Once I got the IBM M1015 card and flashed it over to IT mode, everything went semi-smoothly. The only problem I had was that on the Supermicro, FreeNAS used the em driver for the NICs, while on this one it uses igb, so I had to use dhclient to force one of the NICs to grab an IP from a non-trunk port on my switch in order to dump the network configs and then rebuild my network config. No big deal there. I also went from 16GB of RAM up to 96GB (and I can always go bigger if I want).

Overall I am very happy with this server, and I am sure it will service my home NAS needs for a long time. The next upgrade I have planned is to start slowly swapping my drives from 2TB over to 4TB, which should give me plenty of storage. Out of my current 12 2TB drives, set up as 2 raidz2 pools connected together, I have 2TB of free space left, so I've still got some room to grow, but I feel that within the next year it will be time to start the upgrade. At least with the 2 pools connected together, I can get the increase in space once I complete the rebuild of one of them.
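For anyone hitting the same em-to-igb driver change, the console steps were roughly these (the interface name igb0 is an assumption; yours may differ):

    # NICs show up as igb0/igb1 on this box instead of em0/em1
    dhclient igb0      # grab a temporary address over DHCP
    ifconfig igb0      # confirm the address before rebuilding the config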
 

Dotty

Contributor
Joined
Dec 10, 2016
Messages
125
So I got one of these for free the other week. The sound level compared to the Supermicro it replaced is incredible. ...
A C2100 with acceptable noise levels? Wow. I have the eBay models plus the original Dell models, and they are super noisy by my standards. I used to have one at home, and the noise, heat, and power draw were killing me.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478

Comet Jo

Dabbler
Joined
Oct 20, 2014
Messages
21
+1 on that BMC v1.66 fix. I just got a "new" FS12-TY from eBay last week, and after filling it with 72GB of RAM and running MemTest to make sure the sticks were all good, I decided to quiet it down. The vendor had upgraded it to BMC 1.86(?) before they shipped it. It wasn't terribly loud at idle, but my server room is in the basement and I could faintly hear it running from upstairs, through the floorboards. With BMC 1.66 it's so incredibly quiet that I can barely hear it from the next basement room with the door open. I mean, it's seriously 25% of the noise or less now.

Booted it off a USB flash drive with FreeNAS 9.10, and it seems to like it just fine.

Now I just need to flash my PERC H310 to IT mode and find 12 big drives!
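In case it helps anyone doing the same crossflash, the usual outline from the guides looks roughly like this. The firmware file names are examples, and the H310 in particular usually needs extra preparation steps first, so double-check a current guide before running anything:

    sas2flash -listall                           # confirm the controller is visible
    sas2flash -o -e 6                            # erase the existing flash (risky!)
    sas2flash -o -f 2118it.bin -b mptsas2.rom    # write IT firmware + boot ROM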

Thanks for this thread, OP. I have been running FreeNAS on home-built hardware and "weaker" Dell T105/T110 type hardware for a while, and decided I wanted something that could really scale up my storage. This is looking to be what fills the need for now!
 

Jason Hamilton

Contributor
Joined
Jul 4, 2013
Messages
141
+1 on that BMC v1.66 fix. I just got a "new" FS12-TY from eBay last week ...

Now I just need to flash my PERC H310 to IT mode and find 12 big drives!
I have been so happy with my C2100; it's been running steady now for almost a month. Even the Mrs. couldn't believe how quiet this thing is compared to the old box. I truly think this server will tide me over for quite some time.

 

RickH

Explorer
Joined
Oct 31, 2014
Messages
61
I don't know how I missed this thread until just now; there's some great information on here!

I'm the IT director for a small document management company, and I actually have 8 of these C2100/FS12-TY servers running FreeNAS (4 in my main office, 2 in the colo, and another 2 that I built for one of our clients). We use them both for iSCSI ESXi datastores and for network SMB shares housing a couple hundred TB of scanned image data. All of my servers are in dedicated server rooms, so noise isn't really an issue, but they have been a great value solution for my company...

I actually started with an FS12-SC about 3 years ago, and I would definitely recommend against that older model with the older backplane, for several reasons:
  • I can confirm that the backplane has issues with drives larger than 2TB
  • There are some weird incompatibility issues with certain newer SATA 3.0 drives that caused them to connect at only 150 MB/sec (SATA 2.0 drives always seemed to connect at 300 MB/sec). Using the newer SATA 3.0 drives also seemed to cause random intermittent camcontrol bus resets; this never resulted in any data corruption, but it definitely filled the dmesg output and caused some personal anxiety...
  • In my experience, the backplane seemed to be significantly bottlenecking my array performance: moving my HBA (an LSI SAS2008) and drives to a newer FS12-TY chassis with the new backplane raised local benchmark results (measured using dd with a 128k block size to write and then read a 100GB file) from around 560 MiB/sec to 1,170 MiB/sec
  • The motherboard in the -SC models uses older DDR2 memory and has only 6 slots, which makes it fairly difficult to upgrade
For the reasons listed above, I recently sent the original FS12-SC server to the recycler and replaced it with the newer FS12-TY model.



Each of my servers is set up a little differently depending on exactly what it's being used for, but I can offer the following advice/observations:
  • The new backplane is definitely compatible with higher-capacity drives; 2 of my servers are populated with 12 5TB WD Reds, and we've never had any issues with them.
  • All of my servers use an HBA based on the LSI SAS2008 chip (IBM M1015, 9211-8i) with both ports connected via SFF-8087 cables to the 2 ports on the backplane. I haven't seen any compatibility issues with any drives I've used (WD Reds, HGST, WD Blacks, WD Se, and some junk 'white-label' drives I had laying around), and none of the bus reset issues I had with the older model have ever shown up.
  • The per-drive throughput with this combination seems to cap out between 200-207 MiB/sec. In almost all cases your spinning drives are still going to be the bottleneck, but if you're considering the combo for SSDs, I wouldn't recommend it.
  • If you're going to use your server for an ESXi datastore (or any other virtualization tasks), get as much RAM as you can; I purchased most of my servers with 128GB.
  • There are 6 SATA ports on the motherboard, but unfortunately they're only SATA 2.0 and seem to have a transfer limit between 265-280 MiB/sec. This still works out to be a little faster than the front drive bays, so if you're going to run SSD caches, the internal ports are still your best bet; just realize that you're going to be leaving some of your SSD performance on the table because of the interface.
    - You could possibly install a 2nd HBA and run an SFF-8087 breakout cable to the SSD caches, but remember there are only 2 PCIe slots in this chassis, so if you want to run 10GbE Ethernet you're going to have to find one of the HBA mezzanine cards. (I haven't personally tested this theory; I just run my SSD drives on the SATA ports and accept that they're not as fast as they could be.)
  • As previously mentioned in this thread, there are 2 internal 2.5" drive shelves. These are sized to hold the thicker-style 2.5" enterprise drives, so if you're using thinner 'consumer-style' SSDs, you can stack them and actually get 4 in there (using something fancy like double-stick tape to hold them together).
  • There are 2 SATA power connectors and 1 4-pin Molex power connector internally. I typically use a SATA Y-cable and run 3 SSDs: 2 for a mirrored SLOG and 1 for L2ARC (see the sketch after this list).
  • I typically use SATA DOMs for my boot drives. The way the SATA ports are arranged on the MB means that if you use 2 SATA DOMs in a mirrored boot config, you're going to slightly cover one of the other ports; the only way to use 2 SATA DOMs and all 4 of the remaining SATA ports is to use a short extension cable to mount the DOMs.
  • The processors in my servers range from 4-core E-series to 6-core X-series. In my testing with FreeNAS, the processors in use on a particular system seem to have the LEAST effect on overall performance. If you're just purchasing one of these to run FreeNAS, I would recommend the L-series Xeons; I haven't measured direct power usage, but they run 15-20 degrees cooler than the X-series do. In any case, save your money on the X-series processors and buy more RAM!
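A rough sketch of what that SLOG/L2ARC trio looks like when attached to an existing pool, with placeholder pool and device names (FreeNAS exposes the same thing through the GUI):

    zpool add tank log mirror ada1 ada2    # mirrored SLOG on the two DC S3700s
    zpool add tank cache ada3              # L2ARC on the 850 Evo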
Final Thoughts:
I tend to buy my servers on eBay from esiso, pre-populated with 3TB drives and 128GB RAM:
DELL-FS12-TY-C2100-2x-QUAD-CORE-L5630-2-13GHz-128GB-RAM-12x-3TB-SATA-H700 - $1,500
If you talk to them, they'll usually pull the H700 and knock $60-70 off the price. I then add 2 100GB Intel DC S3700s for SLOG, a Samsung 850 Evo SSD for L2ARC, a Chelsio 10GbE NIC, a SAS2008 HBA, and a couple of Chinese SATA DOMs for boot. If you take a little time and shop around, you can end up with a highly capable, enterprise-ready storage array for less than $2k!
 