First Server


Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
I'm looking to build my first server to use as a backup or file sharing server... After looking around, I'm leaning towards finding a used Dell PowerEdge R710, R510 II, or R720...

I'll probably be using 4TB 7200RPM drives like the WD Re4 4TB, which means I can't use older controllers like the PERC6i, since they only recognize up to 2TB per drive.

My question isn't so much whether or not I have a solid build, but rather which controller I can/should use with these servers that will play nicely with FreeNAS and not limit me to 2TB drives.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I'm looking to build my first server to use as a backup or file sharing server... After looking around, I'm leaning towards finding a used Dell PowerEdge R710, R510 II, or R720...

I'll probably be using 4TB 7200RPM drives like the WD Re4 4TB, which means I can't use older controllers like the PERC6i, since they only recognize up to 2TB per drive.

My question isn't so much whether or not I have a solid build, but rather which controller I can/should use with these servers that will play nicely with FreeNAS and not limit me to 2TB drives.
Any of the LSI 9211 / IBM M1015 / Dell H200/H310 boards will serve you well; they're functionally equivalent.

In addition to the guide @Ericloewe suggested, check out this thread:

https://forums.freenas.org/index.php?threads/confused-about-that-lsi-card-join-the-crowd.11901/

Good luck!
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
I did read the hardware recommendations, but since I'm new to some of the terminology, it wasn't a very clear read, nor did it seem very specific.

The popular replacement for the PERC6i in the R710 seems to be the H700 controller. Am I correct in my understanding that this controller would not work well?

If so, why is this, and is there anything in a controller's specifications that I should be looking for as an indicator that it may or may not work well?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I did read the hardware recommendations, but since I'm new to some of the terminology, it wasn't a very clear read, nor did it seem very specific.

The popular replacement for the PERC6i in the R710 seems to be the H700 controller. Am I correct in my understanding that this controller would not work well?

If so, why is this, and is there anything in a controller's specifications that I should be looking for as an indicator that it may or may not work well?
Correct, the H700 won't work well because it can't be configured as a true JBOD ('Just A Bunch Of Disks') adapter; you can only use it as a RAID controller. FreeNAS needs direct access to the drives for best performance, which is why simple HBA (Host Bus Adapter) cards like the Dell H200 or H310 (or LSI 9211, or IBM M1015), when flashed to IT mode, are much better.
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
Aside from being a JBOD adapter/controller, is there anything else that determines whether it'd work with a particular server? For instance, if I get a Dell server, is anything about it going to be proprietary, or will any of these controllers/adapters work with the server? Basically, what I'm getting at is: are the interfaces standardized, or do they vary from manufacturer to manufacturer, or only work with certain backplanes, motherboards, etc.?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
Basically, what I'm getting at is: are the interfaces standardized, or do they vary from manufacturer to manufacturer, or only work with certain backplanes, motherboards, etc.?
Everything is standardized, but big OEMs love to artificially restrict their products. Specifics will vary, but I seem to recall Dell servers having no problem with non-Dell HBAs.
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
Ok, so with that information in mind, my goal is something cheap but decent enough to use for the next few years. I'm currently looking at...

(I'll probably buy most of this as "used" off ebay, so I don't have any direct links)
Dell PowerEdge R510 (12-bay model)
2x Intel Xeon X5675 six-core 3.06 GHz processors (I was told this was overkill, is it? What would you suggest instead?)
Dell PERC H310 RAID Controller
6x Western Digital Re4 4TB 7,200 RPM Hard Disk Drives (for starters, plan to expand later)
2x 16GB DDR3 1333 MHz ECC Buffered (again, for starters - plan to expand as I add more drives)

This system only supports up to 128GB RAM. Since I'd like to keep room for expansion in mind (possibly attaching a Dell MD1220 or equivalent when needed), should I look at something like the PowerEdge R710 instead, which allows up to 288GB RAM? I'd be sacrificing 4 drive bays on the unit, but if it's the difference between being able to attach more drives in the future, it may be worth it.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
What about 'That'? Could you elaborate, please?

Any opinions on my other questions?
I am a fan of Dell for a lot of things but I would recommend staying away from their server chassis because the shape of the chassis precludes using a standard ATX system board.

I use the H310 for my NAS builds and it works great, but you must cross-flash it to IT mode. There are instructions for doing that on the web that can be found with a few Google searches, which is what I did. It is not super hard to do, but it does take a few steps. If it is flashed properly, it shows up as an "Avago Technologies (LSI) SAS2008".
The H310 (as Dell sells it) is only able to do RAID-0 and RAID-1 anyhow, but once you flash it to IT mode, you can't even do that. It becomes a straight HBA that passes all the SMART data on the disks over to FreeNAS, which is what you want.
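If you want to double-check that FreeNAS is really seeing the raw disks and their SMART data through the flashed card, you can script smartctl from the shell. Here is a minimal sketch, assuming Python 3 and smartmontools are available and the disks show up as /dev/da0, /dev/da1, and so on (the device names are just an example -- use whatever your system actually reports):

import subprocess

# Example device names only -- substitute whatever `camcontrol devlist` shows on your box.
disks = ["/dev/da{}".format(i) for i in range(6)]

for disk in disks:
    # 'smartctl -H' prints the drive's overall SMART health assessment.
    result = subprocess.run(["smartctl", "-H", disk], capture_output=True, text=True)
    status = "unknown"
    for line in result.stdout.splitlines():
        if "overall-health" in line or "SMART Health Status" in line:
            status = line.split(":")[-1].strip()
    print("{}: {}".format(disk, status))

If the card were still in RAID mode, the drives would be hidden behind virtual disks and this kind of per-drive query generally would not work, which is the whole point of flashing to IT mode.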

If you are taking suggestions, I would get one of these chassis and use a SuperMicro system board: http://www.ebay.com/itm/172315518085

Take a look at the build in my signature for ideas. It was all built with eBay parts (except for the drives) and works like a champ.

A question that came to mind because of your discussion of RAM: what is your total target for storage, and what do you want to use the system for besides storage? Form follows function. If you describe your usage, it will make hardware recommendations more accurate to your use case.
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
I plan to use it for backing up the 5 computers in the house and for sharing files (pictures and other media). That said, I expect space requirements to grow over the next few years and don't really have a 'target'. For now, I plan to start with either 8 or 12 4TB drives.

The builds in your signature, @Chris Moore, only have 32GB of RAM... From what I've been reading, it's recommended to have at least 1GB of RAM per TB of physical disk space. Seeing as I'm going for high capacity on this build, I don't think that would be enough - unless the RAM requirements don't scale 1:1 as I've been led to believe. Should one stick to the 1:1 rule for best performance, aim above it, or is 1:1 typically a bit more than actually necessary at larger capacities?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I plan to use it for backing up the 5 computers in the house and for sharing files (pictures and other media). That said, I expect space requirements to grow over the next few years and don't really have a 'target'. For now, I plan to start with either 8 or 12 4TB drives.

The builds in your signature, @Chris Moore, only have 32GB of RAM... From what I've been reading, it's recommended to have at least 1GB of RAM per TB of physical disk space. Seeing as I'm going for high capacity on this build, I don't think that would be enough - unless the RAM requirements don't scale 1:1 as I've been led to believe. Should one stick to the 1:1 rule for best performance, aim above it, or is 1:1 typically a bit more than actually necessary at larger capacities?
I have one server with 16 GB and the other with 32 GB, and in both cases it is more than I need.
Let me go over a couple of things to make sure you know what I know. This is based on a lot of reading and my own personal observations from running several servers over the course of the past 6 years. I made some mistakes and learned some lessons the hard way, but I never lost my data.

The demand for memory is for two things:
1st, you need memory to cache your writes. By default, writes go to RAM first and are flushed to disk when either enough data has accumulated in the write cache to hit the flush threshold or enough time has passed to hit it; only then is the write actually committed to disk.
2nd, you need memory to cache your reads. This is usually the vast majority of the memory utilization, but all it does is speed access to data. The design of this is for a system with multiple simultaneous users. The data that is cached is based upon the available space in memory, the frequency with which the data is read, and how recently the data was read. If you are not reading the same data with some degree of frequency, your read cache is useless, because you are always having to go to the disks to get the data anyhow. This is the most likely situation in a home use environment, at least in my experience.
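If you want to see whether the read cache is actually earning its keep on your workload, FreeBSD exposes the ARC counters through sysctl. A rough sketch, assuming Python 3 and the usual kstat.zfs.misc.arcstats sysctls (present on FreeNAS):

import subprocess

def sysctl(name):
    # 'sysctl -n <oid>' prints only the value of the named OID.
    out = subprocess.run(["sysctl", "-n", name], capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

hits = sysctl("kstat.zfs.misc.arcstats.hits")
misses = sysctl("kstat.zfs.misc.arcstats.misses")
size_gib = sysctl("kstat.zfs.misc.arcstats.size") / 2**30

total = hits + misses
hit_pct = 100.0 * hits / total if total else 0.0
print("ARC size: {:.1f} GiB, hit ratio: {:.1f}%".format(size_gib, hit_pct))

A consistently low hit ratio on your normal workload suggests that extra RAM spent on read cache is not buying you much.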

That said, you do need a certain amount of memory; however, in the case of a home system with a small number of concurrent users, once you get beyond 16 GB of memory, I would say it is really not necessary to go for 1 GB of RAM for every 1 TB of storage. In my system with 32 GB of memory, I have all that memory because one of my jails is the Plex server that everyone in the house uses to watch movies and the other is a headless VirtualBox installation with four Linux virtual machines running. Because of all the virtual machines, I would like to go up to 64 GB of memory, but I can get by with what I have.

The amount of storage you are targeting (in the 8 drive system) is actually not that different from what I have. Here is what I am basing that on:
If you go with the 8 x 4TB solution and set it up as RAID-Z2, it would give you around 17 TB of usable storage.
If you go with the 12 x 4TB solution and set it up as RAID-Z2, it would give you around 31 TB of usable storage.
With the level of redundancy in my system, my storage space is almost 14 TB and I am only using 5.5 TB so I have room to grow and I am only using 12 drives. I can easily add more.
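For anyone who wants to check my math, here is the rough back-of-the-envelope I use (a sketch only; real usable space comes out lower because of ZFS metadata, padding, and the usual advice to keep a pool below roughly 80% full, which is how the raw numbers below end up closer to the 17 TB and 31 TB figures in practice):

def raidz2_data_tib(drives, size_tb):
    # Two drives' worth of space goes to parity. Drives are sold in decimal TB,
    # but the OS reports binary TiB, hence the 10**12 / 2**40 conversion.
    return (drives - 2) * size_tb * 1e12 / 2**40

for drives in (8, 12):
    data_tib = raidz2_data_tib(drives, 4)
    ram_gb = drives * 4  # the "1 GB of RAM per TB of raw disk" rule of thumb
    print("{:2d} x 4TB RAIDZ2: ~{:.1f} TiB of data space before overhead; "
          "the 1GB/TB rule of thumb would ask for ~{} GB of RAM".format(drives, data_tib, ram_gb))

As I said above, though, for a home workload the rule of thumb is more guideline than requirement once you are past 16 GB or so.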
Certainly you can build more storage upfront, but if I were in your place, I would get one of those 24-bay storage units, put six drives in it, and set up a single-vdev pool; then, when more space is needed, just add another vdev to the pool. One of the nice things about FreeNAS is that you can easily expand your existing pool. The rule is that all drives in a vdev should be the same size, but you can have a vdev of 6 drives that are 2TB each, another vdev of drives that are 4TB each, and another vdev of drives that are 6TB each, all in the same pool. I plan on staying with 2 TB drives for now because they are super inexpensive, but there are some advantages to 4 TB drives.

In the end, if you are not hammering your server with a constant workload, like a corporate data center, the RAM cache will probably not give you much performance benefit, and the amount of RAM is more accurately based on the amount of usable storage, not the raw disk space. Workload matters; for example, in most home scenarios it would be a total waste to put an SSD in the system to act as additional cache for the log file or any other purpose. You can throw more money at a system if you want to, but it really depends on how you will actually use it. I am super paranoid about losing my data, so I have everything on one NAS replicated on the other NAS. I could have a complete, catastrophic failure of one of these systems and not lose a single file.

I know that I am a bit long-winded, but I hope that I provided some insight.
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
That wasn't long winded at all, you gave just enough information to be informative without rambling on like a 10 page blog that never gets to the point. It was very helpful.

I think I will go with the SuperMicro chassis. I kinda hate its appearance, but it's the cheapest 24-bay there is. I'm thinking about going with the X8DTE motherboard instead of the X9SCM-F that you have.

I do have a question about the RAM though, how do you know if your board supports Low Voltage, Load Reduced, HyperCloud, and other types of RAM? I noticed HyperCloud tends to be cheaper in some instances but I can't find much information on what it even is or how to tell if a board supports it.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
That wasn't long winded at all, you gave just enough information to be informative without rambling on like a 10 page blog that never gets to the point. It was very helpful.

I think I will go with the SuperMicro chassis. I kinda hate its appearance, but it's the cheapest 24-bay there is. I'm thinking about going with the X8DTE motherboard instead of the X9SCM-F that you have.

I do have a question about the RAM though, how do you know if your board supports Low Voltage, Load Reduced, HyperCloud, and other types of RAM? I noticed HyperCloud tends to be cheaper in some instances but I can't find much information on what it even is or how to tell if a board supports it.
My board supports Low Voltage, which I am using on one of them (not sure which), because they do have different RAM quantities and brands. The E3 Xeon has fewer RAM options than I might like, and my generation is limited to 32 GB, but I couldn't spend the money (at the time) for the newer-generation chip and board and the 64 GB of RAM that I would have liked to have in the system I use for virtual machines. The other thing I plan to do, but have not done yet, which is why I have the 10 Gb network cards, is to set up another server to run VMware and have all the virtual disks hosted on my storage array. The VMware server will have to be a newer-generation board at the very least.
Sorry, I got a little off track. I have no experience with HyperCloud memory. The hardware changes from one generation to the next, and that is one of the nice things about the SuperMicro system boards: the model number actually tells you the generation. I have an X8-generation board that I use in a firewall, and my FreeNAS units are both on X9-generation boards. One day I would like to get a deal on a used X10-generation board, because I think it would be fine as a VMware host, but the latest generation is X11 and those came out relatively recently.

I like the ability that SuperMicro hardware gives me to mix and match parts which is something you can't do when you buy a pre-built system from Dell or HP.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,363
I have one server with 16 GB and the other with 32 GB, and in both cases it is more than I need.
Let me go over a couple of things to make sure you know what I know. This is based on a lot of reading and my own personal observations from running several servers over the course of the past 6 years. I made some mistakes and learned some lessons the hard way, but I never lost my data.

The demand for memory is for two things:
1st, you need memory to cache your writes. By default, writes go to RAM first and are flushed to disk when either enough data has accumulated in the write cache to hit the flush threshold or enough time has passed to hit it; only then is the write actually committed to disk.
2nd, you need memory to cache your reads. This is usually the vast majority of the memory utilization, but all it does is speed access to data. The design of this is for a system with multiple simultaneous users. The data that is cached is based upon the available space in memory, the frequency with which the data is read, and how recently the data was read. If you are not reading the same data with some degree of frequency, your read cache is useless, because you are always having to go to the disks to get the data anyhow. This is the most likely situation in a home use environment, at least in my experience.

That said, you do need a certain amount of memory; however, in the case of a home system with a small number of concurrent users, once you get beyond 16 GB of memory, I would say it is really not necessary to go for 1 GB of RAM for every 1 TB of storage. In my system with 32 GB of memory, I have all that memory because one of my jails is the Plex server that everyone in the house uses to watch movies and the other is a headless VirtualBox installation with four Linux virtual machines running. Because of all the virtual machines, I would like to go up to 64 GB of memory, but I can get by with what I have.

The amount of storage you are targeting (in the 8 drive system) is actually not that different from what I have. Here is what I am basing that on:
If you go with the 8 x 4TB solution and set it up as RAID-Z2, it would give you around 17 TB of usable storage.
If you go with the 12 x 4TB solution and set it up as RAID-Z2, it would give you around 31 TB of usable storage.
With the level of redundancy in my system, my storage space is almost 14 TB and I am only using 5.5 TB so I have room to grow and I am only using 12 drives. I can easily add more.
Certainly you can build more storage upfront, but if I were in your place, I would get one of those 24-bay storage units, put six drives in it, and set up a single-vdev pool; then, when more space is needed, just add another vdev to the pool. One of the nice things about FreeNAS is that you can easily expand your existing pool. The rule is that all drives in a vdev should be the same size, but you can have a vdev of 6 drives that are 2TB each, another vdev of drives that are 4TB each, and another vdev of drives that are 6TB each, all in the same pool. I plan on staying with 2 TB drives for now because they are super inexpensive, but there are some advantages to 4 TB drives.

In the end, if you are not hammering your server with a constant workload, like a corporate data center, the RAM cache will probably not give you much performance benefit, and the amount of RAM is more accurately based on the amount of usable storage, not the raw disk space. Workload matters; for example, in most home scenarios it would be a total waste to put an SSD in the system to act as additional cache for the log file or any other purpose. You can throw more money at a system if you want to, but it really depends on how you will actually use it. I am super paranoid about losing my data, so I have everything on one NAS replicated on the other NAS. I could have a complete, catastrophic failure of one of these systems and not lose a single file.

I know that I am a bit long-winded, but I hope that I provided some insight.

Agree, and if you're talking about expansion, thinking about a 24 bay case is a great idea. I would point out that 8 drive vdevs work great in a 24 bay case... since you can fit three of them ;)

And they reduce the redundancy 'loss' to 25% from 33% for 6 drive RAIDZ2 vdevs.
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
I was actually debating between 6- and 8-drive vdevs, but then I read this article and began considering mirroring too. So I'm not sure what I'm going to do yet...

Right now, my focus is getting the parts, assembling the box, and figuring out where I'm going to set up my server. lol. We don't have Ethernet running through the house, and the modem is just sitting next to the television in a spare bedroom, so I have a lot of decisions to make on that front too.

By the way, I like your Norco RPC-4224 a lot better than the SuperMicro case (visually speaking). By far, my favorite 24-bay chassis I've seen would have to be the Sans Digital EliteSTOR ES424X6+B, but both are so much more expensive than the SuperMicro. :(
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
then I read this article
Ugh, why do people keep digging that up?

Performance is simply not a concern for a properly-designed RAIDZ2 vdev. Rebuild times are perfectly manageable - and the pool is actually protected by some redundancy during the process, unlike what happens with simple mirrors.

RAIDZ2 is unequivocally safer than a set of simple mirrors.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,363
Thanks,

Mirrors are great... BUT there is an issue... Say you had 24 drives.

You can lose your entire pool with just 2 drive failures.

Alternatively, with 6-way Z2, you will not lose your entire pool with just 2 drive failures.

In fact, it would take 3 drive failures, and those drives would all have to be in the same vdev. At best, you could withstand up to 8 drive failures.

So, mirrors are not necessarily as robust as RAIDZ2.

Also, 4 6-way Z2 vdevs do give you a fair chunk of IOPS. Not as many as mirrors... since you get IOPS per vdev, and with mirrors you would have 12 vdevs...

So, it's not quite as simple.

But I would say, if you're even considering mirrors... then you're looking at 50% redundancy. So, decide if you want 50%, 33%, or 25% redundancy.

And 33% is the halfway point... which is 6-way RAIDZ2.
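If it helps to see those trade-offs side by side, here is a quick illustrative sketch (it assumes a 24-bay chassis, ignores hot spares, and only counts losing a whole vdev as losing the pool):

# Rough comparison of pool layouts for a 24-bay chassis.
# "worst" = fewest failures that CAN kill the pool (all landing in one vdev);
# "best"  = most failures the pool can survive if they spread out perfectly.
layouts = [
    ("12 x 2-way mirrors", 12, 2, 1),  # (name, vdevs, drives per vdev, parity drives per vdev)
    ("4 x 6-way RAIDZ2",    4, 6, 2),
    ("3 x 8-way RAIDZ2",    3, 8, 2),
]

for name, vdevs, width, parity in layouts:
    total = vdevs * width
    redundancy_pct = 100.0 * vdevs * parity / total
    worst = parity + 1            # one more failure than a single vdev can absorb
    best = vdevs * parity         # every vdev loses exactly its parity count
    print("{:20s} {:2d} drives, {:2.0f}% redundancy, pool can die after {} failures, "
          "survives at most {}".format(name, total, redundancy_pct, worst, best))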
 

Mheetu

Dabbler
Joined
Oct 28, 2016
Messages
17
Ugh, why do people keep digging that up?

Performance is simply not a concern for a properly-designed RAIDZ2 vdev. Rebuild times are perfectly manageable - and the pool is actually protected by some redundancy during the process, unlike what happens with simple mirrors.

RAIDZ2 is unequivocally safer than a set of simple mirrors.

Probably because it gets a high placement in search results and we noobs find the logic to be sound. If people bring it up, though, you should be happy, because it gives you the opportunity to argue against what is stated in the article and give individuals like myself insight on the matter instead of letting us walk down a potentially misleading path. :)

Anyway, one of the things that captivated me the most about that article is wanting to expand the size of your storage by swapping existing drives out for larger ones. He argued that in a Z2 of 6x 2TB drives, you could force one to fail, replace it with a larger one, and wait upwards of 12 hours (if I recall correctly) for it to rebuild. By that logic, fully rebuilding a single vdev onto larger drives could take a week if you did one drive a day. Granted, this isn't something you'd want to do often, but... humor me... is it accurate? To contrast, how long would it take in a mirrored setup? I understand the speed at which one can increase the size of their pool is irrelevant if the pool fails before you even need to; I'm not saying it alone is a reason to pick mirrors over Z2+, I'm merely trying to fish for all of the facts to base a decision on.
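For what it's worth, here is the naive math I'm working from. It assumes a resilver can stream at the drive's sustained write speed, which I gather real resilvers on a busy or fragmented pool rarely manage, so treat it as a lower bound and please correct me if the assumptions are off:

def resilver_hours(data_tb, write_mb_s=150):
    # Naive lower bound: data to rewrite divided by sustained write throughput.
    return data_tb * 1e12 / (write_mb_s * 1e6) / 3600

# Replacing one fairly full 2 TB disk:
print("~{:.1f} h per 2 TB disk at 150 MB/s".format(resilver_hours(2)))
# Swapping out all six disks in a 6-way vdev, one at a time:
print("~{:.0f} h of resilvering in total for six 2 TB disks".format(6 * resilver_hours(2)))

So the article's ~12 hours per disk would mean running at a fraction of that ideal rate, which I gather is not unusual once a pool has real data and real users on it.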

I plan to do a lot more research and even experimenting a bit when I get my NAS built before making my final decision. That said, please don't take my skepticism (for lack of a better word) personally; I just like to have all of the information before making a decision.
 