Help me put the finishing touches on my first FreeNAS build!


shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
long time listener, first time caller.

looking to get some input on existing choices and remaining components.

use case: homelab/storage
Raid type: raidz2
Newegg Shopping List

general idea is creating a raidz2 pool consisting of 12x3TB drives. if the calc is accurate, we're looking at ~24TB usable. thinking about splitting that as 12TB presented to vmware (currently only 1 physical host. expanding to 2-3 in the future) as iscsi and the other 10TB as probably NFS (or something more appropriate) for backup/storage/plex.
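
for anyone checking my math, here's a quick back-of-the-envelope sketch in python. the per-drive numbers and the slop assumption are mine, not from any official calculator; real calculators land a bit lower (closer to the ~24TB above) once raidz allocation/padding overhead and fill headroom are factored in:

```python
# rough usable-capacity estimate for a 12 x 3 TB RAIDZ2 vdev
# assumptions: drives are marketed in TB (10^12 bytes), ZFS reports TiB,
# and roughly 1/32 of the pool is held back as slop space

DRIVES = 12
DRIVE_TB = 3      # marketed terabytes per drive
PARITY = 2        # RAIDZ2 burns two drives' worth of parity per vdev

raw_tb = DRIVES * DRIVE_TB                    # 36 TB raw
data_tb = (DRIVES - PARITY) * DRIVE_TB        # 30 TB before overhead
data_tib = data_tb * 1e12 / 2**40             # TB -> TiB (~27.3 TiB)
usable_tib = data_tib * (1 - 1 / 32)          # minus slop (~26.4 TiB)

print(f"raw: {raw_tb} TB, after parity: {data_tib:.1f} TiB, after slop: {usable_tib:.1f} TiB")
```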

hardware notes:
  • FreeNAS install on mirrored 32GB USB 3.0 thumb drives
  • current config has 32GB ECC. could bump to 64GB ECC if performance improvement would justify the cost.
  • use LSI HBA for 8/12 drives. other 4/12 to be connected via motherboard.
questions:
  • is the processor sufficient for the intended workload?
  • PSU calculators recommend PSUs in the 650W range. does that sound right for 12x7200RPM drives + proc + ram? currently looking at 750W just in case.
  • should i add a SLOG/L2ARC m.2/SSD? if so, what sizes/formats?
  • do both L2ARC & SLOG require power loss protection capabilities? (documentation appears to indicate yes for SLOG, not sure about L2ARC)
  • ...how about this? and based on IO performance and capacity, can it be used for both SLOG & L2ARC, or is the golden rule to keep all roles separate?
  • documentation appears to indicate that if you have sufficient ram, L2ARC will not significantly improve performance. but what is a sufficient ram ratio based on storage/workload?
  • anything blatantly missing?
thanks in advance!
 
Joined
Feb 2, 2016
Messages
574
1. Plenty. Overkill. FreeNAS itself doesn't require a lot of power. The only time the CPU really becomes an issue is if you're running hosting services (Plex, etc.) on the server. But FreeNAS itself will be fine with just about any modern CPU that supports ECC RAM. (You can probably simultaneously transcode and stream four HD Plex streams with that processor and have power to spare.)

2. CPU 73 watts. Drives 144 watts. You're at 220. Toss in another 100 watts just to cover everything else. You're at 330 watts. Double that just to be safe. You're at 650 watts.
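
If you want to sanity-check that arithmetic, here's the same estimate as a quick Python sketch. The ~12 W per drive is my rule-of-thumb figure, not a measured number; spin-up draw is higher, which is part of why the total gets doubled:

```python
# Rough PSU sizing, following the numbers above.
cpu_w = 73                  # CPU figure quoted above
drives_w = 12 * 12          # 12 drives x ~12 W each (rule of thumb)
everything_else_w = 100     # board, RAM, HBA, fans, margin

subtotal = cpu_w + drives_w + everything_else_w   # ~317 W steady state
psu_target = subtotal * 2                         # headroom for spin-up and ageing

print(f"steady state: ~{subtotal} W, PSU target: ~{psu_target} W")
```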

3. No. More RAM first. But you probably don't need more RAM. Try the current configuration then find out where you're slow. But you probably won't be slow. That's a pretty beefy system.

4. SLOG, yes, it should. Do you have a UPS? If not, I'd buy that before I'd buy any SLOG.

5. Don't complicate matters yet. Check your performance without any SSD.

6. It's complicated. More RAM more better. FreeNAS mostly does the right thing with most workloads using just RAM. It's fast. Where I see most people have problems is when they attempt to engineer a complex, edge-case system. Start with simple. Then go from there only if you have problems.

creating a raidz2 pool consisting of 12x3TB drives. if the calc is accurate, we're looking at ~24TB usable. thinking about splitting that as 12TB presented to vmware (currently only 1 physical host. expanding to 2-3 in the future) as iscsi and the other 10TB as probably NFS (or something more appropriate) for backup/storage/plex.

Your pool configuration is MUCH more critical to performance than SLOG/L2ARC/RAM.

Dump the 12, 3TB drives and replace them with six, 6TB models so as to leave room for expansion. That's your slow, RAIDZ2 pool. Bulk data lives there. Media lives there. That pool - though huge - only has the IOPS of a single drive. It's not good for virtual machines: they crave IOPS. Instead of spending money on exotic SLOG/L2ARC devices, I'd get a pair or two pair of 1TB SSDs. Carve out those high-speed, high-IOPS SSDs to create your VM pool. You'll thank me later. You'll also still have spare slots available for expansion or live drive replacements.
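
To put rough numbers on "IOPS of a single drive," here's a quick Python sketch using common rules of thumb. These are assumptions, not benchmarks: a RAIDZ vdev does roughly the random IOPS of one member drive, a 7200 RPM drive does ~150 random IOPS, and a decent SATA SSD does tens of thousands:

```python
# Rule-of-thumb random IOPS for the two pool layouts discussed above.
HDD_IOPS = 150       # typical 7200 RPM drive
SSD_IOPS = 60_000    # typical SATA SSD

raidz2_vdev = 1 * HDD_IOPS          # one wide RAIDZ2 vdev ~= one drive
ssd_vm_pool = 2 * SSD_IOPS          # two mirrored SSD pairs, one vdev each

print(f"RAIDZ2 bulk pool: ~{raidz2_vdev} random IOPS")
print(f"2 x mirrored 1TB SSD pairs: ~{ssd_vm_pool} random IOPS")
```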

Cheers,
Matt
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
1. Plenty. Overkill. FreeNAS itself doesn't require a lot of power. The only time the CPU really becomes an issue is if you're running hosting services (Plex, etc.) on the server. But FreeNAS itself will be fine with just about any modern CPU that supports ECC RAM. (You can probably simultaneously transcode and stream four HD Plex streams with that processor and have power to spare.)

i'm not going to be running anything on the FreeNAS. that's what the esxi/vcenter hosts will be for. does that mean i can mess around with various compression options given the spare cpu headroom?

4. SLOG, yes, it should. Do you have a UPS? If not, I'd buy that before I'd buy any SLOG.

yeah. apc smart-ups 1500. i'm good for now. i'll grab another when i add more hosts. regardless, the SLOG will have PLP.

6. It's complicated. More RAM more better. FreeNAS mostly does the right thing with most workloads using just RAM. It's fast. Where I see most people have problems is when they attempt to engineer a complex, edge-case system. Start with simple. Then go from there only if you have problems.

easier said than done. 4 slots. 4 sticks right now totaling 32GB. if that's not enough, i don't have the option of adding 2, but instead replacing all 4. if i only start off with 2x32GB, i'm not utilizing maximum bandwidth by populating all 4. if bumping ram up 32GB to 64GB means i don't need to mess with SLOG/L2ARC, then it could be a solution with fewer points of failure.

Your pool configuration is MUCH more critical to performance than SLOG/L2ARC/RAM.

Dump the 12, 3TB drives and replace them with six, 6TB models so as to leave room for expansion. That's your slow, RAIDZ2 pool. Bulk data lives there. Media lives there. That pool - though huge - only has the IOPS of a single drive. It's not good for virtual machines: they crave IOPS. Instead of spending money on exotic SLOG/L2ARC devices, I'd get a pair or two pair of 1TB SSDs. Carve out those high-speed, high-IOPS SSDs to create your VM pool. You'll thank me later. You'll also still have spare slots available for expansion or live drive replacements.

not sure what you're considering slow exactly, but based on the below math, with ssd caching on each side (slog on freenas and ssd cache on the vmware host), 538MB/s looks pretty damn good to me. i'm definitely planning on running a decent amount of virtual machines from time to time, but i feel like my gigabit connections are going to be the limiting factor at that point, not FreeNAS throughput. not to mention that the cheapest 1TB SSDs right now are still ~$270, and i'd probably have to go with something a bit more expensive to feel comfortable with the vendor/performance. so you're talking about adding the cost of the larger 6TB drives plus 2-4 1TB SSDs at anywhere from $500-1k total. financially, i'd probably be better off with one pool, more RAM, and a fast SLOG.

[attached image: throughput calculation screenshot]
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
yeah. i think we got a winner right here. solves a lot of issues with just one device (a cheaper device considering the speed, capacity, and PLP). so as i suspected, the newer PCIe devices have so much bandwidth that they can be used for multiple purposes without any degradation of individual role performance. fewer points of failure. 2,000MB/s writes. PLP. done. now my drives really will be the bottleneck.
 
Joined
Feb 2, 2016
Messages
574
i'm not going to be running anything on the FreeNAS.

If you're not running anything on the server but FreeNAS itself, you're WAY overpowered. That's twice as much CPU power as we had for 110 users and a dozen XenServer VMs using the FreeNAS pools.

i can mess around with various compression options due to the overhead?

You'll be fine. It'll take whatever you can throw its way.

We run lz4 on our primary server (Xeon E5645) and gzip-9 on the replication target (Xeon E5430). The difference between no compression and lz4 in terms of CPU power is almost immeasurable. Meanwhile, gzip-9 swamps the CPU of the replication server and doesn't make much of a difference in terms of space utilization. I'm pretty sure that gzip-9 slows down snapshot replication and we might be better off with lz4 but, even with the gzip-9 CPU swamping, snapshots transfer in seconds. So, I'm leaving the pools set to gzip-9. But it's mostly a coin toss.

FreeNAS-Compression.png
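
If you want to get a feel for the fast-vs-thorough trade-off before committing a pool to it, here's a tiny stand-in benchmark. It uses zlib level 1 vs. level 9 (lz4 isn't in the Python standard library), so the absolute numbers won't match ZFS lz4/gzip-9; it just shows the shape of the trade-off on compressible data:

```python
import time
import zlib

# ~1 MB of highly compressible sample data
data = b"sample homelab data " * 50_000

for level in (1, 9):
    start = time.perf_counter()
    out = zlib.compress(data, level)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"zlib level {level}: {len(data) / len(out):.1f}x ratio in {elapsed_ms:.1f} ms")
```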

4 slots. 4 sticks right now totaling 32GB. if that's not enough, i don't have the option of adding 2, but instead replacing all 4. if i only start off with 2x32GB, i'm not utilizing maximum bandwidth by populating all 4.

Maximum memory bandwidth would be the least of my worries. I'd still buy two 16GB SIMMS and then add two more if needed. That's not your bottleneck. (In looking at page 2-11 of the motherboard manual, it seems that with two slots populated (A2 and B2), you get the full RAM bandwidth. Maybe I'm reading that wrong?)

if bumping ram up 32GB to 64GB means i don't need to mess with SLOG/L2ARC,

I wouldn't mess with SLOG/L2ARC. We had both configured for our production server but took it out after a reconfiguration. Benchmarks before, during and after show no performance improvement for our use case. (SLOG helps with write speed, not read speed. L2ARC helps with read - maybe - but not write. Extra RAM helps with reads.)

538MB/s looks pretty damn good to me.

That's only raw bandwidth and less important to VMs than IOPS.

You have 12, 7200 RPM drives and you're only pushing 538 MB/s and, best case, 320 IOPS. On the other hand, a mirror of 1TB SSDs will do 650 MB/s but it can also push 60,000 IOPS. You're not going to get that kind of IOPS performance even if you SLOG/L2ARC your RAIDZ2 array.

Chances are, for a home lab, you'll be just fine with 32GB RAM and 12 drives in a RAIDZ2 pool without any SLOG/L2ARC. So, before you go exotic, I'd try that.

On the other hand, if you really want a fun lab where everything happens lightning quick, put your bulk data on spinning rust and your VMs on a pair of mirrored 1TB SSDs. When all you keep on the SSDs is the VM OS drive and databases (and then mount the bulk data from the slower pool), you don't need a lot of space. We have a dozen VMs (with FreeNAS snapshots) using under 500GB. We can spin up all the VMs simultaneously in seconds. When they were on conventional drives, it took minutes and really lagged while everything was cranking up.

Cheers,
Matt
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
6 x WD Red Pro 6TB @ $230 = $1,380 + tax.
4 x Samsung Evo 1TB @ $299 = $1,196 + tax.

don't think i can get on board with spending $2,576 for less space and no SLOG/L2ARC. compare that to, let's call it, 12 x 4TB @ $119 ($1,428) + $389 for the Intel Optane 900P (500k 4KB random read/write IOPS, btw), for $1,817 total + tax. that gives you a larger pool, which is exactly what you need for higher read/write with platters given the parity overhead, with the added benefit of having SLOG/L2ARC/slop on a device capable of 500k r/w IOPS and ~2,000MB/s.
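
here's the cost/space comparison laid out in python, using the prices quoted in this thread (the usable-TB figures are the simple drives-minus-parity estimate and ignore slop/padding):

```python
# cost and usable-space comparison of the two proposed builds
options = {
    "6x6TB RAIDZ2 + 4x1TB SSD": (6 * 230 + 4 * 299, (6 - 2) * 6 + 2),   # $2,576 / ~26 TB
    "12x4TB RAIDZ2 + Optane 900P": (12 * 119 + 389, (12 - 2) * 4),      # $1,817 / ~40 TB
}

for name, (cost, usable_tb) in options.items():
    print(f"{name}: ${cost}, ~{usable_tb} TB usable, ${cost / usable_tb:.0f}/TB")
```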

the initial calculation for raidz2 performance was assuming 50% reads, which is probably on the conservative side of things. with homelab/storage, it'll probably be 75%+ read, which by the calculator would bump performance estimates to 837MB/s.

i'm also not sure how i feel about the mirrored SSDs. on the high side you're looking at 2TB datastore size? that's a lot less than the 10-12TB i was looking for (iso's, exchange, sql, servers, clients, clusters).
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
THIS article also has some rather interesting conclusions.

the first table didn't exactly inspire awe and wonder:
[attached image: throughput table from the article]


1GB/s read on 12x4TB in raidz2...not bad...

what's this you say? lz4 compression?
[attached image: throughput table with lz4 compression enabled]

that's better...
 

Zredwire

Explorer
Joined
Nov 7, 2017
Messages
85
I have read through this thread and I really don't understand why you keep showing synchronous read stats. I guess that helps for Plex, but for backup and ESXi it means close to nothing. For backup you would look at writes, and for ESXi you would look at IOPS. Both IOPS and writes suffer with RAIDZ2. Your SLOG can help to some degree, but if you do a good bit of writing, your system will still be waiting on your disks as they struggle to keep up; the SLOG does not keep caching endlessly. Also, if you plan to use iSCSI with ESXi, then by default it does not force sync writes and would not use the SLOG. If you want to protect your data you can force sync writes for iSCSI and it will then use the SLOG. Maybe as a compromise you could run mirrored pairs (RAID 10) instead of RAIDZ2. This would give you more IOPS and faster write speed without costing as much as SSDs (though SSDs would give you massively more IOPS).
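
To put rough numbers on that compromise, here's a quick Python sketch using the usual rules of thumb (assumptions, not benchmarks: each vdev contributes roughly one drive's worth of random write IOPS, mirror reads can be served from either side, ~150 IOPS per 7200 RPM drive):

```python
# 12 drives as one RAIDZ2 vdev vs. six mirrored pairs ("RAID 10" style)
DRIVES, DRIVE_IOPS, DRIVE_TB = 12, 150, 4

raidz2 = {
    "write_iops": 1 * DRIVE_IOPS,              # one wide vdev
    "read_iops": 1 * DRIVE_IOPS,
    "usable_tb": (DRIVES - 2) * DRIVE_TB,      # 40 TB
}
mirrors = {
    "write_iops": (DRIVES // 2) * DRIVE_IOPS,  # six vdevs
    "read_iops": DRIVES * DRIVE_IOPS,          # reads from both sides of each pair
    "usable_tb": (DRIVES // 2) * DRIVE_TB,     # 24 TB
}

for name, pool in (("RAIDZ2", raidz2), ("striped mirrors", mirrors)):
    print(name, pool)
```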
 
Joined
Feb 2, 2016
Messages
574
i'm also not sure how i feel about the mirrored SSDs. on the high side you're looking at 2TB datastore size? that's a lot less than the 10-12TB i was looking for (iso's, exchange, sql, servers, clients, clusters).

FreeNAS allows for multiple pools. You'll still have a bulk data pool made of high-capacity, inexpensive conventional drives. You'd never store your ISOs or Plex media on SSDs. Those would stay on conventional drives. But, the VM image with the operating system would live on SSDs.

For example, I have a Plex server running as a VM on my XenServer. The Plex VM lives on the SSD pool from FreeNAS. That takes up just 40GB. That covers the operating system and the /var/lib/plexmediaserver partition where the metadata lives. Tiny footprint. At the same time, I have 3TB of ripped content sitting on a conventional media FreeNAS pool. That is NFS mounted to the Plex server.

We get the high speed, snappy response time where we need it for the operating system and databases by keeping them on SSD and the cheap storage for the media by leaving it on conventional media.

You seem to be looking at this as an either-or situation which is not the case.

I'll second @Zredwire's suggestion of splitting the difference and going with a stripe of mirrors. That'll give you incrementally more IOPS with each mirrored pair, compared to RAIDZ2, which is pretty much static no matter the number of spindles.

Cheers,
Matt
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
alright. think i'm about ready to pull the trigger.

picked up 12x4TB HGST drives. got a great price on a NIB 9201-16i HBA.

and now for (almost) the rest:

CPU: Intel - Xeon E5-2603 V4 1.7GHz 6-Core Processor ($215.99 @ SuperBiiz)
Motherboard: ASRock - X99 Taichi ATX LGA2011-3 Motherboard ($201.98 @ Newegg)
Memory: Crucial - 32GB (4 x 8GB) Registered DDR4-2133 Memory ($432.54 @ Newegg Marketplace)
Storage: 14 x Hitachi - Ultrastar 7K4000 4TB 3.5" 7200RPM Internal Hard Drive (Purchased For $95.00) (two spare)
Video Card: Zotac - GeForce GT 710 1GB PCIE x1 Video Card ($42.99 @ Amazon)
Power Supply: EVGA - SuperNOVA P2 750W 80+ Platinum Certified Fully-Modular ATX Power Supply ($202.29 @ OutletPC)
Case: Rosewill RSV-L4412, 4U Rackmount Server Case / Server Chassis, 12 SATA / SAS Hot-swap Drives, 5 Included Cooling Fans ($246.98 @ Newegg)
Other: Rosewill Server Rack Rails / Server Slide Rails / Server Rails , 26" Three Section Ball-Bearing Sliding Server Rail Kit (RSV-R27LX) ($49.99 @ Newegg Marketplace)
+ This guy -> two of these guys
+ CPU cooler

initial build was looking at one of the v6 E3's on a supermicro. it had ipmi, which i really dug, but...it just felt a lot more expensive because it was genuinely more server grade. this mobo has two ethernet ports and i can admin by ssh/web. am i really missing out on anything by not going with a supermicro? E3's look like they only do UDIMMs, which are just painfully more expensive for the same amount of ram right now...
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
IPMI is very nice to have, especially when things go wrong. You can control power, reset the system, install FreeNAS, whatever, all remotely.

I'll put in yet another plug for used Supermicro gear. You're already doing a rackmount system... go Supermicro and you get a proven board, IPMI, a ton more drive bays (24 or 36 in 4U), redundant power supplies, a far more "mature"/well-built chassis than the Rosewill, etc. Just some examples:
https://unixsurplus.com/collections...-2x-e5-2680-2-8ghz-192gb-2-port-10gbe-sfp-nic
https://www.theserverstore.com/SuperMicro-846E16-R1200B-W-X9DRI-F-24-x-LFF-4U-SERVER_p_598.html

That first link gets you 16 cores at 2.7GHz, 4x the memory, 3x the drive bays. Add a small/cheap SSD (or two) into the internal drive bays for your boot device and you're golden.

By the way, I strongly recommend SSD boot devices, not thumb drives. Even a single SSD is orders of magnitude more reliable than two USB sticks.
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
ipmi is nice, but unless it's also coming in a low power and almost silent configuration, it's a non-starter.

i will admit that the rosewill case isn't exactly my dream come true, but it does what i need without having to spend another 1.2k to get it.

do i really need 16 cores and 4x the memory? no. not really. this is homelab+storage, not production for smb. there's nothing i could possibly ever do at home that'll require all that.

as far as boot...why SSD? these are usb3 micro drives which are going to be mirroring each other straight off the board with a Y connector. even the smallest ssd is going to be way larger than i need and more expensive. unless there's some sort of IO/bandwidth concern i'm missing...
 

loch_nas

Explorer
Joined
Jun 13, 2015
Messages
79
What does IPMI have to do with "Silence" and "Low Power"?
Silence is actually achieved by a combination of CPU, CPU cooler, other components, fans and chassis ... the mainboard is not the key for silence.

I don't think that it's a good idea to boot off USB 3.0. The reason for SSD as a boot drive is reliability, not speed.

IPMI is not only nice. IPMI is something that makes a server a server.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
ipmi is nice, but unless it's also coming in a low power and almost silent configuration, it's a non-starter.

I will admit that the rosewill case isn't exactly my dream come true, but it does what I need without having to spend another 1.2k to get it.

do I really need 16 cores and 4x the memory? no. not really. this is homelab+storage, not production for smb. there's nothing I could possibly ever do at home that'll require all that.

as far as boot...why SSD? these are usb3 micro drives which are going to be mirroring each other straight off the board with a Y connector. even the smallest ssd is going to be way larger than I need and more expensive. unless there's some sort of IO/bandwidth concern I'm missing...
I started a couple of years ago with FreeNAS running from USB sticks (at first a single one, later on mirrored). I lost 3 sticks due to corruption in that time (every time with a major FreeNAS update). And while it was never fatal, I found it damn annoying. And before you ask: they were all sticks from a good brand. For about 8 months now I have run FreeNAS from a small SSD (60 GB). I spent a whopping 30 euro on it. I have not had a single problem with my boot device since then. There is one thing I really don't understand: the total cost of a decent FreeNAS server is such that a small SSD (likely 120 GB these days) is just a small portion of the total budget. Why make things difficult over that? It will serve you well for a long time and the extra space is not wasted in the long run.
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
What does IPMI have to do with "Silence" and "Low Power"?

ok. i guess we're playing connect the dots.

if you bothered to read the post before, you'd have probably noticed the mention of supermicro gear, because it features IPMI. supermicro servers are neither quiet nor easy on power, as most of them have sizable redundant power supplies and are designed for enterprise workloads. but thank you for pointing out that the motherboard doesn't magically resonate and thereby emit noise. i'm glad we got that squared away.

on the usb/boot note, i'll just go ahead and refer you here. but maybe those guys don't know what they're talking about either.

ipmi does not a server make. it makes a headless device easier to manage.
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
I started a couple of years ago with FreeNAS running from USB sticks (at first a single one, later on mirrored). I lost 3 sticks due to corruption in that time (every time with a major FreeNAS update). And while it was never fatal, I found it damn annoying. And before you ask: they were all sticks from a good brand. For about 8 months now I have run FreeNAS from a small SSD (60 GB). I spent a whopping 30 euro on it. I have not had a single problem with my boot device since then. There is one thing I really don't understand: the total cost of a decent FreeNAS server is such that a small SSD (likely 120 GB these days) is just a small portion of the total budget. Why make things difficult over that? It will serve you well for a long time and the extra space is not wasted in the long run.

no doubt, brother. it's also one of the more inconsequential components of this build. i was honestly even looking at SATA DOMs (not on this board, but as alternative boot) as an option so as not to take up an M.2 or SATA port. what you said was valuable input based on experience which will make me further ponder my boot configuration. but saying that an SSD is more reliable than mirror'd USBs is just silly especially when you consider cost and wasted space (the other guy). god knows there's enough SATA ports of one type or another available if you're running your array off an HBA...
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
on the usb/boot note, I'll just go ahead and refer you here. but maybe those guys don't know what they're talking about either
And I'll just go ahead and refer you here. In short, the documentation is changing to reflect SSD as the preferred installation option. In the past, SSDs were expensive and USB drives weren't... but things have changed now. My current FN box is running on a mirrored pair of Intel 320 40GB SSDs that I bought used on eBay for $25/ea. After 2+ years, with some unknown amount of wear on them when I purchased them, they are showing 15% and 16% consumed (total writes divided by the design life expectancy). I expect them to substantially outlast anything else in the box.
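
If you want to see how that wear math works, here's a quick sketch (the figures are placeholders, not my actual SMART data):

```python
# Rough SSD boot-device wear estimate: writes so far vs. rated endurance.
host_writes_tb = 90          # hypothetical total data written
rated_endurance_tb = 600     # hypothetical rated write endurance
years_in_service = 2

consumed = host_writes_tb / rated_endurance_tb
print(f"{consumed:.0%} of rated endurance used; "
      f"~{years_in_service / consumed:.0f} years at this rate")
```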

You're going to dump a couple grand, easy, into a nice FreeNAS system. Why boot and operate it from a $10 stick? I disagree that it's inconsequential... if the boot device dies, you'll start seeing odd behavior, boot failures, etc. And part of the whole reason to build a FreeNAS box is to have a solid, reliable, trustworthy place to store your data.

If you need more convincing, browse the forums for a bit... how many threads do you find where people are having boot issues running USB sticks, versus real SSDs?

As far as your power and silence issues... nothing that keeps 12 drives properly cooled is going to be silent. And the power consumption of the rest of the chassis is pretty insignificant compared to the consumption of the drives themselves. If you are really concerned about every watt, you should be looking at 5400/5900RPM drives and running a lower-TDP processor (Avoton, E3, Xeon-D, etc.). Unfortunately, power-sipping and high-performance are pretty much orthogonal goals for a FreeNAS box.
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
And I'll just go ahead and refer you here.

touche. i laughed. but i don't know if you can get a jab in there with an uncommitted PR to the source doc that happened like 2 weeks ago. first place i look for documentation is the PR section. obviously. ;)

point taken. what i was driving at was the fact that of all the components in a FreeNAS build, the OS, assuming backed up config, is pretty much drag and drop. even if both crap out, you reinstall on a new device and import/rebuild config. data's not going anywhere.

the point that was being made was AN ssd over mirrored USBs as being more reliable, which is...a poorly framed argument.

as far as silence and power, i'm well aware that nothing short of earplugs is going to keep 12 platters silent, but saving where possible is the objective. and the fewer drives, the better. the 4U supermicros start at what...like 24 drives? that's twice as many heads as what i'm going for. not saying they all need to be filled, but it'll be tempting. that plus dual PSUs with high-performance fans both drawing current instead of capping out at what...maybe 430W/750W? and i was looking at an E3, but the damn mobos only support ECC UDIMMs which are expensive as fuck compared to RDIMMs. and 85W is pretty damn low TDP compared to mainline intel/amd offerings.

upload_2018-2-3_15-18-19.png
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
touche. I laughed. but I don't know if you can get a jab in there with an uncommitted PR to the source doc that happened like 2 weeks ago. first place I look for documentation is the PR section. obviously. ;)
True... I actually opened my own bug requesting the same change, not realizing it was a dupe. But, still, the winds they are a'changing.

point taken. what I was driving at was the fact that of all the components in a FreeNAS build, the OS, assuming backed up config, is pretty much drag and drop. even if both crap out, you reinstall on a new device and import/rebuild config. data's not going anywhere.
Right... but when I'm on the road and something craps out, and suddenly the wife can't watch Plex or get email, it's bad. I ensure reliability to also ensure a high WAF (Wife Acceptance Factor).

the point that was being made was AN ssd over mirrored USBs as being more reliable, which is...a poorly framed argument.
Seriously, it is. Not only do USB drives tend to have endurance issues as they're using cheaper media, they also don't typically wear-level as well, they have kludgey firmware, etc. And, when they fail, they fail in weird ways. They often won't simply die, leaving ZFS to say "oops, that one's dead, let's go degraded and call it a day". They'll get slow as well, they'll refuse to write, etc.

as far as silence and power, I'm well aware that nothing short of earplugs is going to keep 12 platters silent, but saving where possible is the objective. and the fewer drives, the better. the 4U supermicros start at what...like 24 drives? that's twice as many heads as what I'm going for. not saying they all need to be filled, but it'll be tempting. that plus dual PSUs with high-performance fans both drawing current instead of capping out at what...maybe 430W/750W? and I was looking at an E3, but the damn mobos only support ECC UDIMMs which are expensive as **** compared to RDIMMs. and 85W is pretty damn low TDP compared to mainline intel/amd offerings.
As far as filling the bays... your inability to control yourself is a totally separate issue. :) You can get 24 or 36 drive bays in a 4U Supermicro platform. For what it's worth, my system (see sig) is showing 332 watts average over the past 7 days. That's 18 drives plus 4 SSDs, more memory, dual processors, etc. compared to what you're building.
 

shr00mie

Dabbler
Joined
Dec 27, 2017
Messages
13
@tvsjr: luckily my girlfriend's too busy to be plexing enough to care if it's not working. i'm all with you on remote administration. point was if i have to sacrifice IPMI to get the rest for substantially less, then so be it.

again. the arguments you're making are supported with specific points. the former were not and were more of a blanket statement, which i don't really accept as a counterpoint. i'm pickin' up what you're putting down and it's being noted. trust.

humm. that's not bad. but...WHAT THE HELL ARE YOU DOING THAT YOU NEED ALL THAT?!? dual E5s with fucking 192GB ram? did you rob a colo? i mean i guess for dedupe, but even then that's fuckin' overkill. and dual procs? for what. there's no way you got enough encryption/compression going on to necessitate that...i mean...i GUESS if that's also your plex server and you're encoding multiple concurrent streams from a pretty damn raw source...

p.s. ...am i seeing what i think i'm seeing? does that case have more hot swap in the rear?!?
 