BUILD Server Choices - Which server should I use


sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17
Hi,

I am gearing up to move my FreeNAS box from a business-grade workstation to server-grade hardware. I have two servers I purchased in the past for other uses that have since been decommissioned and are now dust collectors. I just can't make up my mind on which one to use for FreeNAS, as they both have different pros and cons, which makes my already indecisive mind hurt.

At this point I'm looking for as much reasonable performance as I can get considering the hardware I'm using. I'm using SATA disks and dual gigabit NICs to start, so I'm not expecting blazing fast speeds, but I don't want to suffer. This will also serve as NFS storage for a single ESXi host and for general-use NFS (two separate RAID arrays). This is for home use, so I'd like the new server to be as quiet as my current one.

Server 1: Dell PowerEdge T110 II, 8GB RAM, Celeron G530, 2 cores w/ HT (4 threads), tower chassis, no warranty (can still be added)

PROs:
It's newer
Can be warrantied
It's very quiet
Still in production (as far as I can tell)
CPU and RAM can be upgraded if needed

CONs:
The chassis only supports a maximum of 7 full-size drives, so I'd have to mod the case to fit more disks
Not rack mountable, so it'll have to sit on a shelf in the rack (far from a deal breaker)
Drives are not hot-swappable
I have to buy a new SATA controller (not enough SATA ports)


Server 2: Supermicro X7DBU, 16GB RAM, dual quad-core Xeon L5420 (8 threads), 12MB cache, 2U rackmount chassis, no warranty (too old to add)

PROs:
Dual Xeon server with 16GB RAM
12 drive slots, all hot-swappable
It's older, but I think it may actually be faster than Server 1... I think

CONs:
It's seen more use and cannot be warrantied.
It is VERY loud. I will have to do a chassis/fan mod to quiet it down, and it will still be louder than Server 1 because I won't be able to mod the power supply fan.
I will also need a new RAID controller. The LSI in there is SATA II and only supports drives up to 2TB. I have a 4-disk RAID 10 with 3TB drives.
EOL (End of Life)

I'm sorry for the long post, but I just don't know which way to go. Any and all advice would be appreciated!
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Don't get X7 hardware. It has the slow FSB and the hot, energy-burning FB-DIMMs. Of the two, definitely the T110 II, but the Lenovo TS440 would give you 8x 3.5" hot-swap bays from the front.

Without encryption this one might be a good choice, just add RAM (3 or 6x16GB DDR3 per CPU, it'll run on a single CPU as well): http://www.ebay.com/itm/151608402237

If 12-14 bays are fine with you, the HP DL180 G6 in various configurations is also a good system, as long as you add an LSI 9211-4i4e or 9207-4i4e HBA instead of the HP RAID controller, plus enough RAM. The same rule as for the Supermicro X8 applies -> 3 or 6x 16GB RAM per CPU.
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17
Thanks marbus, that makes the decision very easy. T110 it is! I'm definitely locked into one of these two servers as I already own them. If I were to buy another computer, my wife would probably kill me.

I guess it's time to let that Supermicro go. I've been holding out hope that there would be some other use for it, but I can't seem to find one. It's a sad day...

Thanks again!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112

Here are my thoughts:

While you're right in going with the T110 for the much lower power consumption/noise/heat, you're definitely going to want more RAM.

8GB is the bare minimum, and trying to host VMs from that is going to be a pretty painful experience. On top of that, NFS from ESXi is a sync write by default. With no SLOG device your write speeds are going to be slow to the point of unusable unless you force sync=disabled. And if you're going that route, you might as well run iSCSI.
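For reference, the knob in question is the per-dataset sync property. From the FreeNAS shell it looks something like this (the dataset name here is just a placeholder for whatever backs your NFS export):

# show how the dataset currently handles sync write requests
zfs get sync tank/vmstore
# force everything async (fast, but in-flight writes are lost if the box dies mid-write)
zfs set sync=disabled tank/vmstore
# back to the default: honor whatever the client (ESXi over NFS) asks for
zfs set sync=standard tank/vmstore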
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17

Thanks for your thoughts, HoneyBadger. I figured I'd need more RAM, but I was hoping 8GB would be enough to start with if I used an SSD for cache. I happen to have a few SSDs lying around with no current purpose.

I decided to go with NFS because I read somewhere that VMware+FreeNAS+iSCSI is slower than VMware+FreeNAS+NFS. Did I get bad information? I originally wanted to use iSCSI anyway.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
With only 8GB of RAM, an SSD for cache - at least in the ZFS parlance of "cache," which is "read cache" or "L2ARC" - might actually hurt you, since indexing its contents will rob you of main memory. If your entire VM can fit into the size of your L2ARC then it would work ... but if that's the case, and you have a few SSDs lying around, I'd just add them as a separate pool and dedicate that to your VMs.

Re: NFS vs iSCSI speeds, in my experience you've got that in reverse. NFS is slower because ESXi requests all writes to be synchronous and FreeNAS honors that - it won't return a "write completed" back up the storage path until the data is actually on stable storage. Without an SLOG, that means it needs to write to your spinning disks. iSCSI, on the other hand, writes async by default unless forced otherwise. This does expose you to the risk of data loss if your power goes out, so you want a UPS or something similar if those VMs are critical.

If you have a bunch of SSDs, you could add one (or a small slice of one) as an SLOG - commonly misreferred to as a "write cache" - and that will absorb the sync writes from ESXi and make your writes much less slow.
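In practical terms that's just a log vdev on the pool (and the read cache above is a cache vdev). You'd normally do this from the FreeNAS volume manager in the GUI, but the underlying commands look roughly like this - pool and device names are placeholders:

# attach an SSD (or a slice of one) as the SLOG / log vdev
zpool add tank log gpt/slog0
# attach another SSD as L2ARC / cache vdev
zpool add tank cache gpt/l2arc0
# confirm the new vdevs show up
zpool status tank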

Let me know how many spare SSDs you have and we'll figure out the best way to make use of them. But you'll still want to price out going to 16GB. Maybe use the proceeds from the sale of that Supermicro ... ;)
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17
When I said cache I meant ZIL. I was reading that although a standard SSD as a ZIL device is not the best solution as far as data safety goes, it can increase write performance.

Regarding NFS vs. iSCSI, my only question is how much faster NFS with an SLOG is compared to iSCSI. I'm guessing that would depend on the SLOG device. I get just about 30 minutes from my UPS, and I have a second one just as powerful with no purpose, so I'm not overly worried about constant power to the devices. I have personally seen APCs drop loads before, but it was usually because there were issues with the units. Basically, I'm willing to take the risk, so I'm more concerned with which is faster.

SSDs as a pool... Okay, I'm about to totally thread-jack myself here.

I thought of doing that before, but I didn't think I'd have enough space with my available SSDs. I have 3x 128GB SSDs and 2x 64GB SSDs that are all not in use; if I downgrade my wife's laptop and the touchscreen in the kitchen, I'll gain 2 more. That'll bring the total to 4x 128GB and 3x 64GB SSDs. I could do a RAID 5 or RAID 10, which would yield 384GB or 256GB respectively. I guess I could use the 128GB drives in a RAID 5. That would give me almost 384GB for VMs that I can set up with thin provisioning, and then I can add drives as needed. I will probably have to do this almost immediately if I go with iSCSI, as I thought I read somewhere not to exceed 50% storage utilization with iSCSI.

My current direction is 4x 2TB drives in a RAID 10 for VMs, and for general storage (music, docs, movies, etc.) either 3x 3TB drives in a RAID 5 or 4x 3TB drives in a RAID 10. The problem with this setup is that 4TB for VM storage, for me, is ridiculously overkill; my host will never support enough VMs to justify this space. I also don't like the idea of 3TB drives in a RAID 5, but if I go with the RAID 10 I'm just over 60% capacity, and there's the whole 50% iSCSI thing I mentioned earlier.

My other thought was to buy another 3TB drive and do 6x 3TB drives in a RAID 10, using the 9TB to serve both VMs and general storage over one iSCSI or NFS connection to ESXi. I could also waste some space and put the 4x 2TB drives and the 4x 3TB drives in a single RAID 10. That would be 8TB of storage serving both VMs and general storage. I'd be wasting 3TB collectively, but I wouldn't have to buy any more disks, and the load would be spread across more spindles, which should mean better performance.

Any thoughts on the drive layout would be appreciated.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
A quick terminology note: in ZFS terms, "cache" is "read cache" - SLOG is the "write cache." Easy to confuse, I know.

NFS with an SLOG will still be slower than iSCSI, because iSCSI writes are async and stored in RAM. In terms of speed: iSCSI > NFS with SLOG > sync iSCSI or NFS without SLOG. If you've got a UPS, go with iSCSI, as you still get sync writes for the metadata updates, whereas NFS with sync=disabled doesn't even get that.

What model are the 3x128GB and 2x64GB drives? They may not even be suitable SLOG devices if they're old MLC with poor write performance.

And as far as getting more SSDs ... well, I'm no marriage counselor, but I know my wife wouldn't be happy if I took away her laptop's SSD and put her back on spinning rust. ;)

Sounds like right now you have the following. Fill in the blanks for me if you can.

3x 128GB SSD - unknown model
2x 64GB SSD - unknown model
4x 2TB SATA
3x 3TB SATA

I'd suggest the following (rough command-line equivalents are sketched at the end of this post):

3x 128GB SSD - 256GB RAIDZ1 pool. Share directly via iSCSI and format with VMFS. Store the OS/performance-demanding VMDKs for your VMs here, but nothing else. The pitfalls of RAIDZ1 are lessened with smaller disks, but if they're old SSDs you'll want to keep an eye on the SMART indicators for wear/block remapping.
2x 64GB SSD - No idea here. 128GB striped vdev pool for VMs you really don't care about?
4x 2TB SATA - 4TB mirrored vdev pool. Cut a smaller (500GB?) vdev out of this pool, share it over iSCSI, and put secondary VMDKs from your VMs here.
4x 3TB SATA - Buy another one and make a 4TB RAIDZ2 pool. Store general files, media, but not VMDKs.

And oh yeah,

16GB of RAM - Get this.
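To put the layout above in ZFS terms, the rough command-line equivalent would be something like this - the FreeNAS volume manager does the same job from the GUI, and the pool names and da* device numbers here are made up:

# 3x 128GB SSD -> RAIDZ1 pool for the OS/performance-demanding VMDKs
zpool create ssdvm raidz1 da0 da1 da2
# 4x 2TB SATA -> striped mirrors ("RAID10") for secondary VMDKs and zvols
zpool create tank mirror da3 da4 mirror da5 da6
# 4x 3TB SATA -> RAIDZ2 for general files and media
zpool create media raidz2 da7 da8 da9 da10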
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I don't even try to do VMs without 64GB of RAM. Even on my system with 32GB of RAM, performance was so bad the VMs were crashing.
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17

I've never really looked at the SSDs before, but it turns out they're the MLC devices you mentioned, made by Samsung. So SLOG is out. Do you think they'd still be acceptable for the VMs, or should I consider buying new ones (even though I'd rather not)? They don't have much use on them, and one is still in its sealed packaging; I just never used it. I have a problem...

BTW - your wife sounds more tech savvy than mine, so I think I'll be okay ;). I love her to death, but I bet she would believe me if I told her I had a RAM emergency. "But hun, your drive is 128GB and this one is 500GB. Which one's bigger/better?" The phrase "honesty is the foundation of a good marriage" was coined in a time before SSDs and RAID arrays; that'll help me sleep tonight... I'm a dick... :D

I love the layout you proposed, but I have a few questions.
3x 128GB SSD - 256GB RAIDZ1 pool - Do you still recommend this layout with more than 3 drives? For the cost of a 128GB SSD, and since I can't add disks later, I'd rather buy 3 more disks now and make this a 6-disk pool. That's 640GB, which would be sufficient for my use.
2x 64GB SSD - I'll probably whack these together.
4x 2TB SATA - Any reason why you picked 500GB? Is smaller a best practice or something? Can it be bigger?
4x 3TB SATA - I'm assuming you meant to say 9TB RAIDZ2. Just curious: why not go mirrored on this pool? I'm just thinking about my usage for this storage, and it has the potential of filling up much faster than all the others, so the flexibility to add disks/space may come in handy. Just curious as to your thought process.

The RAM is a given. My ESXi box currently has 16GB of RAM, and I am going to max out the memory in that box. I was thinking of moving that RAM into my FreeNAS box since it's compatible, faster, and obviously bigger than what's in there now.
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17
I don't even try to do VMs without 64GB of RAM. Even on my system with 32GB of RAM, performance was so bad the VMs were crashing.
64GB of RAM!! Really? How many VMs are we talking about here?

My VMs are for home use. It'll only be maybe 6-8 VMs... MAX.

The T110 II maxes out at 32GB of RAM, so if that won't cut it I might have to get a supported RAID controller and put all the VM storage local to the ESXi box. That'll probably still be cheaper than buying 64GB of RAM, if my server even supported that much.

How many VMs were you running, and what kind of VMs were they? What were they used for?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I run 2 VMs (Windows 7 and Linux Mint 17) with only 16GB on the host, and they've been running stable for a few days without any crashes.
Of course, they're also not really running any services (I haven't installed anything beyond the default install).
They're mostly just for testing things that I can't really do in FreeBSD jails, so they're probably idle 95% of the time.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@sb9t - I was trying to run 1 VM.. sometimes 2. They were unstable and slow as f*ck.

Just wait until you start using them. As soon as some big, nasty workload comes along (like Windows Updates for the .NET Framework!) you might just find out what it's like to have a VM crash. Then you'll boot it up only to find it restarting the update process... then it crashes again. Then you'll boot it up again, it will restart the update process... then crash. After a few rounds of that the OS won't even boot anymore and you'll be f*cked.

See how fast things go downhill? ;)
So yeah.. not even kidding. That .net framework thing was the last straw for me on my system. That's why I mentioned it.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Win 8.1/2012 R2 just outright sucked, even without updates. It just crashed after 40-50 hours. VirtualBox is the devil. Worship the mighty VMware with the gods Workstation and vSphere. Thou shalt not look for other virtualization solutions.
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17
You guys just destroyed my plans. I'm surprised the performance is so bad.

My server only supports 32GB of RAM, so I guess I have to rethink this. Would using the SSDs for the VMs make any difference?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112

MLC SSDs will be perfectly fine for general vdev use; they're just not recommended as SLOG devices because of the heavy write load that puts on them. It burns through drive endurance at an alarming rate, so unless you're using something like the Intel DC S3700, which has rated write endurance in the petabyte range, it's best not to put MLC there. If they're Samsung 840s you should be quite pleased with the performance.

3x 128GB SSD - 256GB RAIDZ1 pool - Actually, you could do a 3-drive RAIDZ1 setup and stripe in a second 3-drive RAIDZ1 as well. If you're willing to buy new devices, though, I'd say just pick up a pair of 512GB Crucial MX100s and set those up as a mirrored pair.
2x 64GB SSD - Might as well use these if they're just going to collect dust otherwise. They might also have value as L2ARC for the pool below if you end up at 32GB of RAM.
4x 2TB SATA - I only picked 500GB as an arbitrary size. The idea is that you can cut zvols out of this pool and present them via iSCSI as needed (or expand an existing one) without having to devote the entire pool directly to VM hosting - see the sketch after this list.
4x 3TB SATA - Whoops. No, I actually meant 6TB RAIDZ2, the idea being that in a four-drive scenario without heavy random I/O, RAIDZ2 is the better choice because of the higher resiliency ("any two drives can fail" as opposed to "can't have two in the same mirror fail"). But if you have a plan to expand, you might want to either go 6-drive RAIDZ2 out of the gate for the better space ratio, or go with mirrors for ease of expansion.
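As a sketch of what I mean by cutting zvols out of that pool (the size and name here are arbitrary examples):

# thin-provisioned 500GB zvol carved out of the 2TB-mirror pool
zfs create -s -V 500G tank/vm-secondary
# grow it later if you need more room
zfs set volsize=750G tank/vm-secondary
# then point an iSCSI device extent at it in the GUI and ESXi sees it as a LUN to format with VMFS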

Now, regarding the RAM and VM performance thing ... I have to say that @cyberjock's experiences do not mirror mine at all. Running on SSD will definitely help, because the penalty of a cache miss won't be anywhere near as severe, and you'll be able to absorb a much bigger transaction group before worrying about write stalls in async mode.

@sb9t What kind of VMs are you going to run? Hoping to run a full Active Directory/Exchange/SQL environment and beat the tar out of it is very different from just wanting to muck about with a couple of different OSes/nginx/LAMP for your own education.
 

sb9t

Dabbler
Joined
Nov 2, 2011
Messages
17

3x 128GB SSD - 256GB RAIDZ1 pool - Actually, you could do a 3-drive RAIDZ1 setup and stripe in a second 3-drive RAIDZ1 as well. If you're willing to buy new devices, though, I'd say just pick up a pair of 512GB Crucial MX100s and set those up as a mirrored pair. - Does the same "multiple spindles" rule of thumb for RAID apply to SSDs? As I understand it, more lower-capacity spindles are faster than fewer larger-capacity spindles. So with the options on the table, (2) 3-drive RAIDZ1s striped (512GB) would be faster and more redundant than (1) 5-drive RAIDZ1 (512GB), and (1) 2-drive mirror (512GB) would be the slowest option. I'm not arguing what you wrote, but I'm curious whether the same rules apply since there aren't really any spindles. Are SSDs so much faster that it doesn't really matter anymore? IDK...
2x 64GB SSD - I'm sure I could use them for something; time will tell, I guess. This is the same logic that led to some of the spare parts I still have no use for... HA!
4x 3TB SATA - 6TB, okay. I thought the minimum number of drives for a RAIDZ2 was 5 (3 striped + 2 parity); that's where I got 9TB.

I was shocked at the need for 64GB. The figure documented everywhere for FreeNAS and ZFS in general is roughly 1GB of RAM per 1TB of usable space, so at 16GB I should have more than enough RAM, and 32GB should be a "perfect world" scenario for my roughly 12TB of usable space. I'm not arguing this point; I don't know enough on the topic to argue it, this is just the information I've gathered. I think I'll just max out the memory at 32GB and see where that gets me. I could always use the RAM in another system if this doesn't work out.

As for VMs, out of the gate there will be a Server 2008 R2 DC and a Server 2012 R2 DC. I'll be migrating to 2012 R2 and then installing the Essentials role.

After the migration:
Server 2012 R2 Standard - DC with Essentials role
Server 2012 R2 Standard - misc apps
Windows 7 - home automation
Ubuntu - Subsonic - Subsonic is awesome!!
Windows 7 - Gallery Server Pro - personal use, I'm not a photographer
Server 2012 - Veeam - backing up to a NETGEAR ReadyNAS and online
I'm also testing the waters with virtualizing my media centers in the bedrooms and other locations like the laundry room. If it works, I'll have 4 bare-bones Windows 7 VMs running only XBMC and EventGhost, and 6 bare-bones Ubuntu VMs.

Currently all of this is on physical hardware, and my electric bill is crazy. I don't care about the single point of failure, and I would like to P2V all of these devices. If I can't do the media centers, well, that'll have to be an Atom Shuttle and Raspberry Pi conversation, but I'd rather they be virtual since they'd be useless without the servers anyway.
 