Dell R510 Server Build: Advice appreciated

StoreMyBytes

Dabbler
Joined
Aug 1, 2017
Messages
13
Hi folks,

I've built a lot of custom PCs before, but I've never built anything on the server side of things, so I'd appreciate a little input.

I'm taking the leap into building my first ever FreeNAS box, and have settled on the Dell R510 as my choice, providing it meets my intended use case.

The Dell R510 comes in several variants, but I'm looking at refurbished ones on ebay that come with 12 drive bays. I already bought eight WD Red NAS drives, each of them 8TB, for a total of 64TB of raw storage. Additionally, I've already purchased a server rack and am going to build a special sound-proofed closet with cooling, to isolate the server from dust and keep the fan noise contained.

My Use Case:

  • Backup storage for my multiple Windows PCs, taking regular snapshots/clones of hard drive images in case the Windows machines suffer hardware failure or malware/ransomware attacks
  • Cloud file storage for when I'm away from home
  • Plex Media Server for storing my ripped DVDs
  • Network attached video editing storage for use with my Windows desktop (my Windows machine has m.2 PCIe SSD's for faster performance, but in case I need more storage space when working with an active video project, I'd like to know how FreeNAS performs as a scratch disk)

Questions:

  1. Has anyone here used an R510 with larger capacity drives like these 8TB models, and if so, with which controller? My WD Reds are model# WD80EFAX-68LHPN0 (8.0 TB, 256 MB cache)

  2. Western Digital says that Red drives are rated for arrays of up to eight disks. Spec sheet can be found here. Is there any inherent risk to using all 12 bays of an R510 with WD Reds if I buy four additional drives? I've read that some people say they aren't rated for more than eight because they aren't built to handle enterprise-level drive vibration. Has anyone here used the WD Reds in a FreeNAS box in an array greater than 8 disks? If so, how did it work out for you? I see that WD also offers WD Red Pros, which are supposed to be rated for arrays of up to 16 drives, but I also noticed that the regular WD Reds have anti-vibration features built in. Are the Red Pros just marketing hype, or are there any appreciable benefits over the regular WD Reds, besides the Pro's longer warranty?

  3. After reading the recommended FreeNAS hardware FAQs, it seems like 1 GB of RAM per 1 TB of drive space is the accepted safe standard. I'm looking for ebay servers that have 96 GB (or more) of RAM, in case I ever max out the 12 bays (12 bays x 8 TB drives = 96 TB, suggesting 96 GB of RAM; see the quick sketch after this list). Even if I kept the 8 x 8 TB set of drives I currently have, I still wouldn't mind extra RAM in case I ever repurposed the server to run a lot of VMs.

  4. There are a lot of Xeon 5500 and 5600 series CPUs that are compatible with the R510. Any particular advice as to which one to pick? The server most likely will not be kept running 24/7, but I'd still like to know if there's a sweet spot between performance and power usage.

  5. For those of you who have an R510 or similar Dell server, especially with larger disks like my 8TB units, which controllers have you found to work best? I understand that FreeNAS shouldn't be used with hardware RAID and needs an HBA instead. In case the server I end up buying doesn't have an HBA, I'd just like to know what you've found to work well with Dell servers.
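
Since question 3 is really just arithmetic, here's the quick sketch I mentioned of how I'm sizing the RAM (the 8 GB floor is the documented FreeNAS minimum; the 1 GB per TB figure is a community guideline, not a hard requirement):

Code:
# Rough sketch of the "1 GB of RAM per 1 TB of raw storage" rule of thumb.
# The 8 GB floor is the documented FreeNAS minimum; the per-TB figure is a
# community guideline rather than a hard requirement.

def suggested_ram_gb(num_drives: int, drive_size_tb: float) -> float:
    raw_tb = num_drives * drive_size_tb
    return max(8, raw_tb)  # never go below the 8 GB FreeNAS minimum

print(suggested_ram_gb(8, 8))   # my current 8 x 8 TB build  -> 64 GB
print(suggested_ram_gb(12, 8))  # fully populated 12 bays    -> 96 GB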
Thank you in advance for any education you can provide a FreeNAS student! :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Most likely the included controller with that server will only support drives up to 2TB. I am not sure about the backplane support. If it has a SAS expander in it, that could be a problem too. I don't personally like to go with the Dell chassis because of some of the proprietary things they do. If you are already dead set on that, you should do more research on the system. Other people on the board surely will have something to say about this.

We have four systems at work (16 drive rack mounts) that were delivered with the WD Red Pro 4TB drives. They have been running 24/7 for about two years and I have only had to replace two drives. The Red Pros are alright, not great, but they will do. The best drives (in my experience) are the HGST drives. I have a unit with those in it that is at the 5 year mark with not even a slight hiccup. The ones we have are obviously not the latest model but I wouldn't hesitate to buy them in a new system. I have another system that has 60 of the Red Pro 6TB drives that is only about six months in and I have had to replace 3 of those. Could be because of all that vibration.
The thing is, you said that you already bought the drives, so why ask now? The time to ask was before you bought it.

I worry about this closet you are talking about though. You need plenty of cool air to keep the drives and the CPUs in that chassis cool. The WD reds we have run on the hot side. I have seen them hit 113F in normal use. We keep the room at 65F and the heat out the back of the server runs around 120F. You have to exhaust that heat and keep cool air going in the front all the time. Constant air flow, not like a normal home AC unit where it goes on and off. Let those drives get too hot and they will die very early. They are rated to a max of 60C but they don't like to live there all the time.

The set of servers I had before I built the two I am using now had socket 1366 processors, which is what you are asking about. The best bang for the buck that I found was the Xeon X5670. It is a 2.93 GHz six-core processor and it doesn't run as hot as the 3.4 GHz unit or use as much power. These CPUs are a couple generations older and take more power, generate more heat and have just a bit less performance than newer CPUs. If you feel like Dell is what you want, or you have some vested interest in this model, go for it, but I think there are better options depending on your budget. Did you look at the recommended hardware list?

The Dell H310 controller can be cross-flashed with the IT firmware to make it into a plain HBA. I use those in my FreeNAS builds. Works great.
 

sfryman

Dabbler
Joined
Dec 11, 2016
Messages
13
I thought about getting one of those Dells when I set up my FreeNAS, but wound up with a Supermicro 8 bay (2U) case with X8DTI-F mainboard and 2x Xeon E5620. I put a best offer on ebay for $160 shipped, didn't expect to get it but the seller accepted. The server works great, though I did have to flash the firmware to get it to recognize drives over 2TB. Definitely watch out for that; get the wrong motherboard and you will wind up having to buy a new controller.

I installed six x 4TB "Red" drives and use it for backup, file storage and Plex. The server only came with 32GB RAM but I haven't run into any problems with that. Full disclosure: I never worked on a server before this one.

I don't know about video editing but dual Xeon E5620 should be able to do everything else you have listed, and is basically free (~$20 on ebay). The most processor-intensive is transcoding. When I had 2x Xeon E5620 it could transcode any of my 1080p content fine, even multiple streams, but struggled with some of the 4k movies. The downstairs TV refuses to direct play 4k content with H.265, so I upgraded the processors to dual Xeon X5680 for another $125 (ebay, "best offer" auction again). So now it is able to handle real-time transcoding of everything in my library, though it cannot handle two x 4k streams simultaneously (understandably).

People will point out that I'm using a lot of electricity with this beast, but you should check the math before springing for new components. I use 150-160 watts under normal loads. A newer server would use less electricity, but would cost at least a thousand dollars more. I spent under $300 for my server. I'd have to keep it running for over a decade before the energy cost exceeds the price of new equipment.
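
If you want to sanity-check that for your own situation, here's roughly how I figured it (the newer server's draw, its price premium, and my electric rate are all assumptions, so plug in your own numbers):

Code:
# Back-of-the-envelope break-even: keep the old power-hungry server or buy
# newer, more efficient gear. All figures below are assumptions.

old_watts, new_watts = 155, 80       # my typical draw vs. a guessed newer build
price_per_kwh = 0.12                 # assumed electric rate in USD
extra_hardware_cost = 1000           # assumed price premium of the newer server

kwh_saved_per_year = (old_watts - new_watts) / 1000 * 24 * 365
dollars_saved_per_year = kwh_saved_per_year * price_per_kwh
years_to_break_even = extra_hardware_cost / dollars_saved_per_year

print(f"~${dollars_saved_per_year:.0f}/year saved, break-even in ~{years_to_break_even:.1f} years")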

Agree with CM on the heat/noise issues. You need a cool, isolated place for the 5600 servers.
 

StoreMyBytes

Dabbler
Joined
Aug 1, 2017
Messages
13
Bear with me, folks, as I'm learning. This is going to be a fairly long reply because I'm still trying to soak it all up and hopefully summarize all the research I've done so far, so that if others are considering this configuration they can learn from my experience. :D

Most likely the included controller with that server will only support drives up to 2TB. I am not sure about the backplane support. If it has a SAS expander in it, that could be a problem too. I don't personally like to go with the Dell chassis because of some of the proprietary things they do. If you are already dead set on that, you should do more research on the system. Other people on the board surely will have something to say about this.

Thank you, Chris, for your assistance.

I ended up buying a Dell R510 on ebay for only $200. The price for everything included really couldn't be beaten by other competing options from what I could tell. This is my first time ever delving into FreeNAS. I'd like to try the R510 based upon the price and also I didn't see many SuperMicro servers with a lot of RAM, dual PSU, dual CPU, 12 x 3.5" HDD bays, etc. included without massively jumping the price. I have heard good things about SuperMicro, but I'd like to give the Dell a try first. Seeing that I didn't spend much on this box, I could always repurpose it for something else. That being said, I'd appreciate your input on the points below.

It has the following specs:

  • 12 x front 3.5" drive bays
  • Included activated license of Windows Server 2016 Standard (not sure what I might use it for, but I look at it as an extra bonus)
  • 2 x 750W Gold-rated Dell PSUs (nice for redundancy)
  • 96 GB of ECC RAM
  • Over 1.2 TB of hard drive space spread across 10 SAS 15K enterprise drives
  • Dual E5540 Xeon CPUs
  • QLogic 8Gb Fibre Channel card (I'm planning on removing this as I don't have any other Fibre gear in my setup at this time)
  • 4 extra gigabit NIC ports (2 PCI cards, each with 2 x 1 Gb ports) (I don't currently have a network switch that supports link aggregation, nor a client machine that has more than one gigabit NIC, so I'll probably remove these as well.)
  • PERC H700 controller w/512 MB cache (has to be removed; not recommended per my reading of the forum and FAQs)
  • Motherboard has 5 built-in SATA ports
Here's the Dell Service Tag lookup of my machine to see what's inside of it; you can find it here.

I read the Dell R510 user manual here (PDF link)

From what I can tell, the R510's included backplane is supposed to support up to 12 SATA/SAS drives. It is my understanding that the PERC H700 controller is a no-go for use with FreeNAS (I'm a newbie, but I did read the hardware FAQ where it clearly states to avoid the H700 due to it not having a true JBOD mode for drive pass-through), and it needs to be replaced with a card that has true IT mode.

So far, the list of contenders is:
I'm strongly leaning towards the IBM M1015, seeing how much support it seems to have from within the FreeNAS community. What's the old saying? If it isn't broken, don't fix it? Given that it is in so many FreeNAS machines, I'm going to make an assumption that the developers will keep this card in mind for future updates and compatibility testing. Given that I have a Dell system, could there be any benefit to using the Dell H200 or H310 as compared to the IBM M1015? If there's no clear-cut case for going with one of the Dell cards, I'm going to just get an IBM M1015 and connect its two SFF-8087 ports to my 12-bay Dell backplane. According to cyberjock, one of the moderators, in this thread, the card can support up to 32 physical drives, so I should be good, correct? I'd connect no more than 14 drives via the backplane (12 x 3.5" HDD bays in the front of the R510, plus up to 2 SSDs inside the detachable caddy; see this pic for reference):

[photo: the R510's internal 2.5" drive caddy]


I plan on running FreeNAS off two SSD's in that detachable caddy. I haven't purchased any for that purpose yet, but I've had pretty good results with the Samsung 850 and 950 series in my desktop PC's. Any thoughts as to which SSD makes and models are recommended?

We have four systems at work (16 drive rack mounts) that were delivered with the WD Red Pro 4TB drives. They have been running 24/7 for about two years and I have only had to replace two drives. The Red Pros are alright, not great, but they will do. The best drives (in my experience) are the HGST drives. I have a unit with those in it that is at the 5 year mark with not even a slight hiccup. The ones we have are obviously not the latest model but I wouldn't hesitate to buy them in a new system. I have another system that has 60 of the Red Pro 6TB drives that is only about six months in and I have had to replace 3 of those. Could be because of all that vibration.
The thing is, you said that you already bought the drives, so why ask now? The time to ask was before you bought it.

Regarding the WD Reds, I'm not so much looking for validation of my purchase as for what other people have experienced running them in larger arrays, such as the 60-disk one you mentioned. I'm all about compiling information for future reference. I bought them a while back when I was thinking of building a Synology Diskstation with them, but decided I could get a higher degree of customization with FreeNAS, plus I prefer open-source projects and find Synology's hardware quite underpowered, even if its OS is easy to use. I'm going to roll with the WD Reds in my Dell R510 for now, but I will certainly keep your HGST suggestion in mind should any of them fail.

I worry about this closet you are talking about though. You need plenty of cool air to keep the drives and the CPUs in that chassis cool. The WD reds we have run on the hot side. I have seen them hit 113F in normal use. We keep the room at 65F and the heat out the back of the server runs around 120F. You have to exhaust that heat and keep cool air going in the front all the time. Constant air flow, not like a normal home AC unit where it goes on and off. Let those drives get too hot and they will die very early. They are rated to a max of 60C but they don't like to live there all the time.

My goal is to build a closet door with a built-in fan that pulls air through a filtered mesh to keep dust intake down. Once the cool air is pulled in, I want a second fan to be ceiling mounted above the server rack pushing the rising hot air out to a roof vent. The closet is located inside of an insulated garage. The indoor air temps stay fairly low, even in the summer time. Combined with a push/pull fan combination with high-speed fans, I think it should be fine. If the temps rose higher than what the hardware is rated for, I could also mount an A/C unit inside of the closet. There's flexibility there, as this will be new construction and I'm consulting with my contractor as I build out the space.
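
To put rough numbers on the fan sizing, I've been working from the common electronics-cooling rule of thumb of roughly 3.2 CFM per watt per °F of allowed temperature rise; the heat load and allowed rise below are my own guesses, not measurements:

Code:
# Rough fan sizing for the closet using the rule of thumb
# CFM ≈ 3.2 * watts / allowed_rise_F. All inputs are guesses for now.

heat_load_watts = 400     # assumed: R510 plus a switch under load
allowed_rise_f = 15       # assumed: keep the closet within 15°F of garage air

cfm_needed = 3.2 * heat_load_watts / allowed_rise_f
print(f"Roughly {cfm_needed:.0f} CFM needs to move through the closet")  # ~85 CFM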

The set of servers I had before I built the two I am using now had socket 1366 processors, which is what you are asking about. The best bang for the buck that I found was the Xeon X5670. It is a 2.93 GHz six-core processor and it doesn't run as hot as the 3.4 GHz unit or use as much power. These CPUs are a couple generations older and take more power, generate more heat and have just a bit less performance than newer CPUs. If you feel like Dell is what you want, or you have some vested interest in this model, go for it, but I think there are better options depending on your budget. Did you look at the recommended hardware list?

As noted above, I just went ahead and purchased the Dual E5540's that came with the R510. Yes, you're right, the older CPU's tend to have higher TDP ratings and consume more juice. This was something I've given thought to, but given I have no intention to run the server in a 24/7 environment, it didn't seem to make sense to purchase a brand-new latest gen Xeon at hundreds of dollars more per processor. I'd have to use my less-efficient ones for a decade or more before the outlay of cash caught up to the electric bill. :)

I did take a look at the Hardware List before purchasing.

I ended up choosing the Dell R510 that I went with for several reasons:

  • Dell seems to have a very strong reputation for quality when it comes to reliability of their servers. I like things that generally work and have a reasonable expectation for them to last a long time.
  • The Dell R510 was one of the very few 12 x 3.5" drive bay servers I could find in a 2U format, plus it included 96 GB of ECC RAM for FreeNAS or if I ever decided to turn it into a VM box. I've read that you can't have too much ECC RAM with FreeNAS, especially with my multiple 8TB drives. Combined with dual CPUs, dual PSUs, included SAS drives, an activated copy of Windows Server 2016 and free shipping, I really couldn't say no to buying it at $200, even though I didn't have a chance to come back here and post this first. Admittedly, I might be putting the cart before the horse.
That being said, I'd love to hear what you think are some better alternatives. I could always repurpose the R510 as a VM machine. What do you feel is a better configuration (be it price, performance, compatibility, reliability, etc.)? It seems SuperMicro is pretty popular. Are there any models that you'd recommend that have 12 or more 3.5" bays? I really want my FreeNAS box to have plenty of expansion space. If not SuperMicro, how about some other makes and models? While I have the R510 now, I certainly don't mind getting another machine. I consider it all a learning experience. :)

The Dell H310 controller can be cross-flashed with the IT firmware to make it into a plain HBA. I use those in my FreeNAS builds. Works great.

Thanks for the heads up. Do you have any comparison using the H310 versus say the IBM M1015 or the Dell H200? Performance? Reliability? Compatibility? How many machines have you deployed the H310 to? Any particular reason why you went with it over the M1015 seeing it is on the hardware list and seems to be all the rage around here?

I thought about getting one of those Dells when I set up my FreeNAS, but wound up with a Supermicro 8 bay (2U) case with X8DTI-F mainboard and 2x Xeon E5620. I put a best offer on ebay for $160 shipped, didn't expect to get it but the seller accepted. The server works great, though I did have to flash the firmware to get it to recognize drives over 2TB. Definitely watch out for that; get the wrong motherboard and you will wind up having to buy a new controller.

sfryman,

Thanks for your advice. Congrats on getting a great deal. I know the feeling when you never expect a seller to accept and then when you wake up in the morning you get a Christmas present in your email from ebay. :D

Was there any particular reason why you leaned towards the SuperMicro over the Dell, or was it a coin toss and the seller just happened to accept your offer?

I installed six x 4TB "Red" drives and use it for backup, file storage and Plex. The server only came with 32GB RAM but I haven't run into any problems with that. Full disclosure: I never worked on a server before this one.

I've heard that for reliability and performance you can never have too much RAM with FreeNAS. Good to err on the side of caution and use ECC memory, and plenty of it. I've been reading a few too many horror stories to want to buy a box without much RAM or with non-ECC RAM. Not a risk I'm willing to take! :)

This is my first ever (maybe there will be a second one soon if someone convinces me to buy a SuperMicro or similar machine!) server build. I am completely new to server architecture and have spent dozens of hours reading the forums, google, youtube, etc. learning about it. Thankfully, there's never EVER been wrong information on the internet, so one never has to be too careful before you put your precious data on a new machine, right? ;)

I don't know about video editing but dual Xeon E5620 should be able to do everything else you have listed, and is basically free (~$20 on ebay). The most processor-intensive is transcoding. When I had 2x Xeon E5620 it could transcode any of my 1080p content fine, even multiple streams, but struggled with some of the 4k movies. The downstairs TV refuses to direct play 4k content with H.265, so I upgraded the processors to dual Xeon X5680 for another $125 (ebay, "best offer" auction again). So now it is able to handle real-time transcoding of everything in my library, though it cannot handle two x 4k streams simultaneously (understandably).

I don't plan on doing too much transcoding, as I'm going to rip my DVD collection and just cast it to the 1080p TVs. I'm not going to need a bunch of different formats for tablets and cell phones, as I only really watch on the TV. Combined with the dual Xeons in there, I'm pretty sure that transcoding shouldn't be an issue. My curiosity is more about whether I can offload some of my video editing encoding from my Windows client to this box, perhaps running a secondary OS on a separate drive? Who knows? Perhaps that is a question better asked in the Adobe forums, or I should get a separate server dedicated for that purpose altogether. I'll have to do more research to find out for sure.

People will point out that I'm using a lot of electricity with this beast, but you should check the math before springing for new components. I use 150-160 watts under normal loads. A newer server would use less electricity, but would cost at least a thousand dollars more. I spent under $300 for my server. I'd have to keep it running for over a decade before the energy cost exceeds the price of new equipment.

Agree with CM on the heat/noise issues. You need a cool, isolated place for the 5600 servers.

You pretty much came to the same conclusion I did regarding up-front cost versus long-term cost to run it. I'll probably replace it with something else before the break even point of electricity ever comes into question. My garage is cool, insulated from the outside and thankfully doesn't share a wall with any room that people sleep in. I powered up my R510 for the first time today and I can confirm those fans are LOUD! :D :D :D

Thanks once again guys for the continued input. I'm really enjoying learning, and it is always encouraging when you have an active community of vets keeping a newbie on track.

Much obliged!
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I plan on running FreeNAS off two SSD's in that detachable caddy.
Both the speed and capacity of an SSD are wasted on a boot drive. I use a pair of 40GB laptop hard drives that I picked up from eBay for $10 each because they were "new, old stock" and the seller was trying to get rid of them. I bought six all at the same time: two each for the NAS systems I was building and two as spares. I tested them all when I got them and they had 0 hours, meaning that they were actually new, and after over two years I have had ZERO problems of any kind from them. I can run smartctl tests against them to get useful data to tell me their health too. Works like a champ and they are the price of USB drives. I hate USB drives; they can't be depended on. I have had two USB boot drives fail on me, one after less than six months.
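
If anyone wants to automate that kind of check, here is a minimal sketch of what I mean (it assumes smartmontools is installed and that the boot drives show up as ada0/ada1; adjust the device names for your own system):

Code:
# Minimal sketch: poll the SMART health of the boot drives with smartctl.
# Assumes smartmontools is installed; device names are examples only.
import subprocess

BOOT_DRIVES = ["/dev/ada0", "/dev/ada1"]  # adjust for your system

for dev in BOOT_DRIVES:
    result = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    status = "PASSED" if "PASSED" in result.stdout else "CHECK OUTPUT"
    print(f"{dev}: {status}")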
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
QLogic 8Gb Fibre Channel card (I'm planning on removing this as I don't have any other Fibre gear in my setup at this time)
Sell it on eBay. Someone might want that. ;-)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Regarding the WD Reds, I'm not so much looking for validation of my purchase as for what other people have experienced running them in larger arrays, such as the 60-disk one you mentioned. I'm all about compiling information for future reference. I bought them a while back when I was thinking of building a Synology Diskstation with them, but decided I could get a higher degree of customization with FreeNAS, plus I prefer open-source projects and find Synology's hardware quite underpowered, even if its OS is easy to use. I'm going to roll with the WD Reds in my Dell R510 for now, but I will certainly keep your HGST suggestion in mind should any of them fail.
For my home systems, I use Seagate Desktop (Barracuda) drives. They are good enough. I had a whole batch that were about the same age that I ran to around 45000 hours before I decided to replace them preemptively.

My goal is to build a closet door with a built-in fan that pulls air through a filtered mesh to keep dust intake down. Once the cool air is pulled in, I want a second fan to be ceiling mounted above the server rack pushing the rising hot air out to a roof vent. The closet is located inside of an insulated garage. The indoor air temps stay fairly low, even in the summer time. Combined with a push/pull fan combination with high-speed fans, I think it should be fine. If the temps rose higher than what the hardware is rated for, I could also mount an A/C unit inside of the closet. There's flexibility there, as this will be new construction and I'm consulting with my contractor as I build out the space.
I am not sure how sensitive to heat the Dell will be. We have some servers here that are perfectly happy if the room temperature is 75F. You will need to monitor it and make adjustments to the intake air on the server based on the temps you see. The hard drives will most likely not be the problem, so see if you are able to monitor the CPU or other chassis temps.

It was a long post. I am not sure if I got everything. If you have a question, just post back.
 

sfryman

Dabbler
Joined
Dec 11, 2016
Messages
13
Was there any particular reason why you leaned towards the SuperMicro over the Dell, or was it a coin toss and the seller just happened to accept your offer?

Happened to get this one. I was looking for a deal on a server for a few weeks, and put in offers for similar Dells and Supermicro units without winning. The guy I bought from just happened to have a bunch of servers he wanted to unload fast, so I got a good deal.

I've heard that for reliability and performance you can never have too much RAM with FreeNAS. Good to err on the side of caution and use ECC memory, and plenty of it. I've been reading a few too many horror stories to want to buy a box without much RAM or with non-ECC RAM. Not a risk I'm willing to take! :)

I agree you can't have too much RAM, and was expecting to need to buy more. But I haven't noticed any problems so I just stuck with the RAM that came with my server. For my use case, 32GB seems to be sufficient.


I don't plan on doing too much transcoding, as I'm going to rip my DVD collection and just cast it to the 1080p TVs. I'm not going to need a bunch of different formats for tablets and cell phones, as I only really watch on the TV. Combined with the dual Xeons in there, I'm pretty sure that transcoding shouldn't be an issue. My curiosity is more about whether I can offload some of my video editing encoding from my Windows client to this box, perhaps running a secondary OS on a separate drive? Who knows? Perhaps that is a question better asked in the Adobe forums, or I should get a separate server dedicated for that purpose altogether. I'll have to do more research to find out for sure.

The advantage of Plex is that you don't need a bunch of different format files; just rip the highest resolution you need, and Plex handles conversion for all your devices: tablets, phones, TVs, gaming consoles, PCs, it doesn't matter. The Plex client is available on iOS, Android, Linux, Windows, FreeBSD and probably more. The Plex server will automatically convert to whatever format you need, and you can also store converted files on any device locally. I sometimes watch on my TV, sometimes on a tablet. I keep a few shows synced to the tablet so if I travel I can watch them without a connection. My son and in-laws have iPads, my wife and I both have Android phones and laptops. All of them work the same.

Your dual E5540s have a PassMark score of around 8000, which should be good for 3 to 4 simultaneous 1080p streams.
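
That estimate comes from the commonly quoted Plex guideline of roughly 2000 PassMark per 1080p transcode; a quick sketch of the math (both numbers are ballpark, not measurements):

Code:
# Quick sanity check of the transcode estimate using the commonly cited
# Plex guideline of ~2000 PassMark per 1080p transcode. Numbers are ballpark.

cpu_passmark = 8000         # rough combined score for the dual Xeons
per_1080p_transcode = 2000  # widely quoted Plex rule of thumb

print(f"~{cpu_passmark // per_1080p_transcode} simultaneous 1080p transcodes")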

I expect that you could install Windows as a secondary OS and do whatever you want with it. I'm not sure about performance, but I expect it would be in line with similar older hardware.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Thank you, Chris, for your assistance.
Sure, I like to help. What I was getting at in my last post: some servers are happy at 75 degrees in the room and stay cool enough, but I have one that needs a colder room because at 75 it gets so hot inside that the back row of drives is at 57C (about 135F), which is much hotter than I like my drives to run. In that same server, the CPU runs hot too, but still inside the operating range. It won't kill it instantly, but it is not good for the long life of the equipment. Cooler is better, as long as it isn't too cold.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
So far, the list of contenders is:

IBM M1115 is pretty much exactly the same as the M1015, and can sometimes be found cheaper, since everyone's looking for the M1015 ;)

All of those cards, except the H310 I believe, are pretty much OEM LSI cards. The H310 is a more custom design IIRC, and I've also seen posts on the forum about the H200 (at least earlier models) running hot... or maybe I'm confused.

I plan on running FreeNAS off two SSD's in that detachable caddy. I haven't purchased any for that purpose yet, but I've had pretty good results with the Samsung 850 and 950 series in my desktop PC's. Any thoughts as to which SSD makes and models are recommended?

Depending on your use case, mirrored boot ssds might be overkill. Up to you.

As noted above, I just went ahead and purchased the Dual E5540's that came with the R510.

...

if I ever decided to turn it into a VM box.

Worth pointing out that the Westmere upgrade to the Nehalem CPUs includes extra virtualization instructions. At least with bhyve (not sure about VMware), an upgrade to Westmere CPUs is pretty much required if you plan to use VMs. Good news is that Westmere CPUs are cheap, and only getting cheaper, so one day you might upgrade your dual quad-core to a dual hexacore for very little.
 

StoreMyBytes

Dabbler
Joined
Aug 1, 2017
Messages
13
Both the speed and capacity of an SSD are wasted on a boot drive. I use a pair of 40GB laptop hard drives that I picked up from eBay for $10 each because they were "new, old stock" and the seller was trying to get rid of them. I bought six all at the same time: two each for the NAS systems I was building and two as spares. I tested them all when I got them and they had 0 hours, meaning that they were actually new, and after over two years I have had ZERO problems of any kind from them. I can run smartctl tests against them to get useful data to tell me their health too. Works like a champ and they are the price of USB drives. I hate USB drives; they can't be depended on. I have had two USB boot drives fail on me, one after less than six months.

I've been doing a lot of reading here on the forums, and USB thumb-drives seem to have poor longevity. There seems to be no point in putting something in the machine with a suspect track record. Totally agree.

I have 3 different types of 2.5" drives laying around that could be used for the FreeNAS box:

  • 60 GB Toshiba taken out of an old laptop, has an IDE interface, but I do have an IDE to USB adapter. When it comes to the issues with USB thumb-drives, I take it the problems arise from the flash memory overheating or seeing more reads/writes than it was designed for. Do you foresee any problems with using an actual HDD like the 60 GB one, adapted from IDE to a USB port on the motherboard? Alternatively, I could try to find an IDE to SATA adapter. I like to repurpose old tech whenever possible, but if this is ill-advised, then I'll try something else.
  • 640 GB Seagate, with SATA interface, also taken out of an old laptop. Way larger than what FreeNAS needs to be installed, but I don't have any other use for it at the moment.
  • 256 GB Samsung 850 Pro SSD via SATA, never used before.

Sell it on eBay. Someone might want that. ;-)

That's the plan! If I ever upgrade the network in my home, it will be with RJ45 10GbE, not Fibre Channel. I'm already doing Cat 6a wire drops in my new construction, just waiting for the 10 gigabit switches to drop in price.

I have my eye on this switch from Ubiquiti: https://www.ubnt.com/edgemax/edgeswitch-16-xg/

Retails for about $600, which is the cheapest I've seen for a 10 gig switch with RJ45 ports. Netgear and Cisco still seem to be charging a premium for anything that has RJ45 ports and 10 gig. There aren't a whole lot of prosumer-grade options on the market, mostly very costly enterprise-grade stuff. Ideally, I'd like to put an RJ45 10 gig NIC in my Windows desktop, then upgrade to a 10 gig RJ45-compatible switch, then throw a 10 gig NIC into the R510. Right now, I don't see an affordable way to do all that unless there are some networking products you recommend that aren't crazy expensive. Any ideas? I'm open to hearing what you use for your own network needs, as I see in your signature that you're waiting for a 10 gig switch. What model did you go with and why?

I am not sure how sensitive to heat the Dell will be. We have some servers here that are perfectly happy if the room temperature is 75F. You will need to monitor it and make adjustments to the intake air on the server based on the temps you see. The hard drives will most likely not be the problem, so see if you are able to monitor the CPU or other chassis temps.

It was a long post. I am not sure if I got everything. If you have a question, just post back.

The R510 isn't currently hooked up to a mouse/keyboard/monitor, so I haven't played with the BIOS settings or Dell config utilities. I would imagine they have some type of thermal monitoring sensors that report back to the Dell utilities for server admins to keep an eye on. That being said, I did notice that there was an option to let the server fans run at full speed all the time. I'm also open to slightly underclocking the CPU, if it isn't locked and if there's an option to do so in the BIOS, to keep the temps down. If the CPU temps ran too high on a regular basis, I'm open to changing them out for a 3rd party heatsink or mini water cooler if the chassis will fit one. Without drilling out the backside of the case, I'd like to try and solve any potential overheating issues by making sure there's good airflow to and from the closet, plus configuring the server not to be maxed out on load at all times.
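
Once it's on the network, I'm thinking I can pull temperatures from the iDRAC remotely with something like this (a rough sketch; it assumes ipmitool is installed and IPMI over LAN is enabled on the iDRAC, and the host and credentials are just placeholders):

Code:
# Minimal sketch: read chassis temperatures from the R510's BMC over IPMI.
# Assumes ipmitool is installed and IPMI over LAN is enabled on the iDRAC;
# the host and credentials below are placeholders.
import subprocess

IDRAC_HOST = "192.168.1.50"  # placeholder
IDRAC_USER = "root"          # placeholder
IDRAC_PASS = "calvin"        # placeholder (Dell's default; change it!)

cmd = ["ipmitool", "-I", "lanplus", "-H", IDRAC_HOST,
       "-U", IDRAC_USER, "-P", IDRAC_PASS, "sdr", "type", "temperature"]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)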

Yes, that was a long post! You've covered a lot of questions that needed to be answered. Thank you. :D

I'm trying to learn things at a breakneck pace and am enjoying playing around with server architecture for the first time. Just a couple of follow-ups on my original questions here:

From what I can tell, the R510's included backplane is supposed to support up to 12 SATA/SAS drives. It is my understanding that the PERC H700 controller is a no-go for use with FreeNAS (I'm a newbie, but I did read the hardware FAQ where it clearly states to avoid the H700 due to it not having a true JBOD mode for drive pass-through), and it needs to be replaced with a card that has true IT mode.

So far, the list of contenders is:
I'm strongly leaning towards the IBM M1015, seeing how much support it seems to have from within the FreeNAS community. What's the old saying? If it isn't broken, don't fix it? Given that it is in so many FreeNAS machines, I'm going to make an assumption that the developers will keep this card in mind for future updates and compatibility testing. Given that I have a Dell system, could there be any benefit to using the Dell H200 or H310 as compared to the IBM M1015? If there's no clear-cut case for going with one of the Dell cards, I'm going to just get an IBM M1015 and connect its two SFF-8087 ports to my 12-bay Dell backplane. According to cyberjock, one of the moderators, in this thread, the card can support up to 32 physical drives, so I should be good, correct? I'd connect no more than 14 drives via the backplane (12 x 3.5" HDD bays in the front of the R510, plus up to 2 SSDs (or HDDs) inside the detachable caddy; see this pic for reference):

[photo: the R510's internal 2.5" drive caddy]

You mentioned using the H310 in your FreeNAS builds. Your signature suggests you're running them in Supermicro systems? Any particular reason why you didn't go with the IBM M1015, IBM M1115 or something else? I was talking with a server components seller on ebay today and he said that he thought the M1015 might run into clearance issues in the R510, or the cables might be too short, as he didn't think it would mount in the front PCI riser slot. If the M1015 had to move to the rear, he said the cables that plug into it wouldn't be long enough without buying an extender. Have you had any experience running the H310 or M1015 or similar in a Dell server, and if so, where did you have to mount it, and did you have to buy different cables?

Last but not least, your signature says you've built quite a few of these FreeNAS boxes. Have all of yours been the SuperMicro? I take it you're a fan? How much did you pay for yours? I have a friend of mine I've talked into building his own, but before I tell him to become a Dell R510 fanboy, I'd be interested to find out why you've used SuperMicro and what your experience has been after they've been setup. I'm all ears!
 

StoreMyBytes

Dabbler
Joined
Aug 1, 2017
Messages
13
IBM M1115 is pretty much exactly the same as the M1015, and can sometimes be found cheaper, since everyone's looking for the M1015 ;)

Thanks for the heads up, mate, on saving a few bucks. Always appreciated when your I.T. budget comes out of your own pocket. ;) The IBM M1115 is now on my short list.

Your signature says that you're using both the M1015 and M1115 for your FreeNAS build. Anything of note in terms of compatibility, ease of crossflashing or performance of the devices connected to them comparing these two models?

All of those cards, except the H310 I believe, are all pretty much OEM LSI cards. The H310 is a more custom design IIRC, and I've also seen posts on the forum about the H200 (at least earlier models) running hot... or maybe I'm confused.

Cyberjock mentioned in his hardware guide that he wasn't keen on Dell cards from what I could tell, especially the H700. Others, including Chris, seem to think at least the H310 (and I've seen other people mention the H200) is a fine option for FreeNAS. It's hard to sort through all the different hardware configurations. It would be wonderful if a future version of FreeNAS had an easy way to upload the hardware configuration to a central server, to see who is running what and what seems to have the highest degree of compatibility. I'd imagine there's a lot of systems out there that are pieced together Frankenstein style. :D

Depending on your use case, mirrored boot ssds might be overkill. Up to you.

Understood. I'm leaning towards Chris' suggestion of just repurposing some old 2.5" laptop HDD drives that are laying around. You can see the specs on those in my previous post just above this one. Any thoughts?

Worth pointing out that the Westmere upgrade to the Nehalem CPUs includes extra virtualization instructions. At least with bhyve (not sure about VMware), an upgrade to Westmere CPUs is pretty much required if you plan to use VMs. Good news is that Westmere CPUs are cheap, and only getting cheaper, so one day you might upgrade your dual quad-core to a dual hexacore for very little.

VMs are of secondary importance to me at the moment compared to getting the FreeNAS storage online. I just thought it might be fun to play with them and learn how they work, seeing that the machine has 96GB of RAM. Like you were mentioning, the older generations of Xeon chips can be had for very little money on ebay these days, so if I ever repurposed the machine to be primarily a VM box, an inexpensive upgrade could certainly be in order.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The IDE drive should be fine, but the additional moving parts make the native SATA drive a better choice.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I would prefer a couple of good USB flash drives to the Frankenstein ide/hd/USB abomination :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Others, including Chris, seem to think at least the H310 (and I've seen other people mention the H200) is a fine option for FreeNAS
I have five of the Dell H310 cards working in three different model Supermicro system boards. I can only imagine that a Dell card would work in a Dell server. Stands to reason, maybe...
I have also used the Dell H310 card in a couple of Dell workstations, like the T3500 and T3600 Precision workstations that I have and use both at home and at work. In fact, I used a Dell T3500 workstation when I flashed the cards I have to IT mode.
Then there is this:
Dell H310: http://www.ebay.com/itm/Dell-H310-6...-IT-Mode-for-ZFS-FreeNAS-unRAID-/162601544925
LSI 9211: http://www.ebay.com/itm/Brand-New-I...ATA-8-port-PCI-E-Card-Bulk-pack-/252996267204

The difference is $11, so pick the one you like best and go with it. Dell got the chip from LSI and had the card custom fabricated in some place with super cheap labor, I bet... You can look at them and tell they are not the exact same card, but once I flashed my cards with the LSI firmware, the system recognizes them as LSI cards, just like the two actual LSI cards I have.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
The LSI card is noticeably easier to work with, when using stock LSI firmware.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
60 GB Toshiba taken out of an old laptop, has an IDE interface, but I do have an IDE to USB adapter. When it comes to the issues with USB thumb-drives, I take it the problems arise from the flash memory overheating or seeing more reads/writes than it was designed for. Do you foresee any problems with using an actual HDD like the 60 GB one, adapted from IDE to a USB port on the motherboard? Alternatively, I could try to find an IDE to SATA adapter. I like to repurpose old tech whenever possible, but if this is ill-advised, then I'll try something else.
No, I would not take this route. When I built my NAS using spinning disks for the boot drive, I went and bought "new" drives for the boot drives. I wouldn't suggest using old drives that are laying around. First, they may already have some defect that you don't know about and even if they don't they could still have a shorter life than desired.
In addition to the above problem, using a USB adapter to connect them will likely prevent you from accessing the SMART data reported by the hard drive that allows you to monitor its health, and this is a big part of the reason for using a hard drive instead of USB flash media.
USB flash media does not report SMART statistics and you have no way of monitoring the health, so you have no way to know ahead of time that it is about to fail. After I had two different USB boot sticks fail on me, I decided it was time to make a change.
I searched for and found someone selling drives that had not been used previously. When I got them, they had zero power on hours.
This is the type I got: http://www.ebay.com/itm/Seagate-HDD...-8-MB-Internal-Hard-Disk-Drives-/172311835507
- edit: not exactly, my seller was US based and, looking more closely at the listing, this doesn't appear to actually be new.

The seller I purchased from had many available and I bought six all at once. They are all still working perfectly two years later. The idea is to have something reliable and be able to run SMART tests against it and pull reports to ensure it is still working.
The rationale behind the choice is that speed and capacity are NOT needed. If you can get acceptable performance from an 8GB USB 2.0 boot stick, the hard drive is only going to be better, but it is total overkill to use any sort of modern SSD. You could get some small capacity, slow, industrial SSD, but they are comparatively very expensive. This is not a "cheapo" solution, just spending money where it makes sense to spend it. All you need is a RELIABLE boot drive. I didn't do this to be cheap.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The LSI card is noticeably easier to work with, when using stock LSI firmware.
True. If you are going to flash the Dell card, you first have to use the Dell version of the firmware and flash back to revision 7 (if I recall correctly), then you can flash the Dell IT-mode firmware (changing from the stock RAID firmware), and then you can flash forward to the LSI firmware. It takes time and effort and multiple reboots. It is a pain in the behind. That is why, if I need another card, I will probably buy one like the ones I linked to that is already flashed.
 

StoreMyBytes

Dabbler
Joined
Aug 1, 2017
Messages
13
Chris,

Regarding the controller...

I ended up getting an IBM M1015 from a US-based seller that gave great pre-sales advice over the phone. He said that they sell a lot of these models to FreeNAS folks and have never had a complaint. He went out of his way to even double-check the fit and cable length in his own Dell R510 in the warehouse to ensure it would work before shipping it out. I saw a few Chinese sellers of M1015s, but honestly, I would rather have it from a US source given how many Chinese fakes are floating around. It seems like a fair amount of work doing the crossflash for the M1015, but given the excellent track record of protecting data and just plain ol' working, I went that route even if it is more of a hassle.

Still, I've also saved your link for the pre-flashed controllers you referenced. I didn't even know there were ebay sellers catering to people who want to run ZFS. It would have saved some time, and I might just buy one of them anyway and see if there's any difference in performance compared to my M1015 that is in the mail, or just keep one or the other as a backup in case one of them fails. Never hurts to have some parts redundancy laying around in case of hardware biting the dust. :D Let's say the M1015 controller ever died: is it much of an issue to swap in an H310 controller and have it take over the array that was on the M1015, or is it always best to find an identical part to replace it with, given how FreeNAS is configured?

Those new-old stock HDDs are very reasonably priced. I'm just going to bite the bullet and get a couple of them per your suggestion. Doing that will minimize failure points like IDE-to-USB adapters and the possible loss of SMART data. Sometimes it is just hard to recycle old tech, but back in the parts drawer it goes! From what I understand, FreeNAS just uses the HDDs as a loading device until the OS is stored in RAM, meaning that an SSD really won't affect performance once the initial boot sequence is complete. I'm reading some reviews now to see which 2.5" SATA HDDs seem to be the most reliable, and then I'm going to purchase at least four of them. That would give me two in active production inside of the R510 and two additional ones as on-hand spares.

Next question I have is regarding networking and link aggregation. The Dell R510 has two gigabit RJ45 jacks built into the motherboard. Additionally, there are two more gigabit NICs, each with two RJ45 ports on them. That means six gigabit ports altogether. If combined in lagg mode via FreeNAS, that gives 6 Gb/s of throughput, the same as the SATA III interface speed of the HDDs that are going inside the box.
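
Before anyone corrects me, here is my rough math, along with the caveat I keep reading that LACP balances traffic per connection, so a single client transfer usually tops out at one link's speed (the numbers are just illustrative):

Code:
# My rough aggregation math. Note the caveat: LACP typically hashes traffic
# per connection, so one client stream is usually capped at a single link.

link_speed_gbps = 1.0
num_links = 6

aggregate_gbps = link_speed_gbps * num_links  # total across many simultaneous clients
single_flow_gbps = link_speed_gbps            # typical ceiling for one transfer

print(f"Aggregate: {aggregate_gbps} Gb/s, single client transfer: ~{single_flow_gbps} Gb/s")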

I currently do not have a switch that supports link aggregation or 10GbE. I plan on putting a new switch (or switches) inside of my server rack right next to the FreeNAS box.

Here are three proposed options:

Option 1:

Buy a 10 gigabit switch. Running multiple RJ45 connections from the switch to the R510 to create a 6 Gb/s link would be easy to do, as they'll be in the same rack. Seeing that the bottleneck of the SATA III HDDs will be 6 Gb/s, buying a 10GbE NIC for the R510 shouldn't give any extra speed, should it? Unless there's a lot of overhead on a NIC? I'm not a network engineer, so someone correct me if I'm wrong. My plan to minimize costs was to use link aggregation and the multiple NIC ports in the R510 to get the link to the switch at 6 Gb/s, then take a single port of the switch and run a Cat 6 wire about 200 feet away to where my Windows desktop machine is. I would then buy a 10GbE NIC for that desktop. Does FreeNAS work well with linking up multiple NICs? Any issues with data integrity?

Option 2:

Buy a 10GbE NIC for the R510. Buy a 10GbE NIC for my Windows desktop. Skip buying an expensive 10GbE RJ45-based switch. Create a network bridge between the two devices and bypass a 10GbE switch altogether. Does FreeNAS support creating a direct link from the server NIC to a client NIC?

Option 3:

Same as option two, but just add a 10GbE switch to the mix in case FreeNAS doesn't support direct server NIC to client NIC connections.

Thoughts?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The question of a controller failure is one that I have had to go through personally. Once I replaced the controller and booted the system back to FreeNAS, the pool was imported and available. In my situation, no rebuild was needed.
I have also moved a pool from a SATA controller to a SAS controller and that just works too. Basically, FreeNAS doesn't care about how it gets to the disks, as long as it can find the disks it needs, it will make the storage ready.
It doesn't care about the order of the connection or which controller the drives are on. I have two SAS controllers in my NAS and, as a test, I took all the disks out and randomly put them back in. They were on different controllers, different ports, and still the pool was automatically mounted and ready when the NAS booted.
ZFS and FreeNAS are the best.
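
For anyone who wants to double-check things after a controller swap, a minimal sketch of what I would look at from the shell (wrapped in Python here; the pool name "tank" is only an example, and as I said, FreeNAS normally re-imports the pool on its own):

Code:
# Minimal sketch: sanity checks after swapping an HBA. FreeNAS normally
# re-imports the pool by itself; these commands just confirm it.
# The pool name "tank" is only an example.
import subprocess

def zpool(*args: str) -> str:
    return subprocess.run(["zpool", *args], capture_output=True, text=True).stdout

print(zpool("status", "tank"))  # every disk should show ONLINE, wherever it landed
print(zpool("import"))          # empty when nothing is left waiting to be imported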

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 