Am I missing anything big? – costs are almost identical

AndroGen

Dabbler
Joined
Jan 19, 2019
Messages
47
I need your fresh look at this list of components, and to know whether I am missing anything.

The objective is to build a rather large (for me) FreeNAS instance, consolidate data that is currently dispersed across computers, and add an extra level of data protection.
The landscape includes several Linux and Windows machines; a few run VMware-like instances for some experiments.
The storage requirement is ~30-40TB, which in terms of disks means 7x 8TB WD RED drives;
in RAIDZ2 that gives roughly (7-2) * 8TB * 0.8 = ~32TB of usable space - special thanks here to @cyberjock for the very nice ZFS storage design documentation.
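To make the capacity assumption explicit, here is the same back-of-the-envelope estimate as a tiny Python sketch; the 0.8 factor is only the usual "keep ~20% free" rule of thumb, not exact ZFS overhead accounting:

```python
# Rough usable-capacity estimate for a single RAIDZ2 vdev.
# The fill_factor of 0.8 is a rule of thumb, not exact ZFS accounting.
def raidz2_usable_tb(disks, disk_tb, fill_factor=0.8):
    data_disks = disks - 2            # RAIDZ2 uses two disks' worth of parity
    return data_disks * disk_tb * fill_factor

print(raidz2_usable_tb(7, 8))         # 7x 8TB drives -> 32.0 TB usable
```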

Boot device and few other components are the same for all possible options:
Boot: Samsung SSD 860 EVO 250 GB
PSU: be quiet! -- DARK POWER PRO 11 -- 650W (Atom) or 750W (Xeon)
UPS: APC Smart-UPS 750VA LCD (Atom) or APC Smart-UPS 1500VA LCD (Xeon)
M.2 for cache (not decided): Samsung SSD 970 Evo M.2 500 GB
SLOG - left for later...
This part of the build is a constant; what could differ is the MB + CPU + RAM combination.

And here I come to rather confusing results when evaluating three different options (prices are local, Germany). One option is based on an Atom, the others on Xeons.

Option 1 – Atom – 1.610 €
A2SDi-8C-HLN4F (8 Cores, Benchmark: 4.852 / 771) – 470€
RAM: 4 x 32GB ECC 2400 (Samsung M393A4K40CB1-CRC) = 285x4 = 1.140 €
Cables, heatsink – included in the box

Option 2 – Xeon-D – 2.350 €
X11SDV-8C-TP8F (8 Cores, Benchmark: 16.222 / 1.673) – 1.210 €
RAM: 4 x 32GB ECC 2400 (Samsung M393A4K40CB1-CRC) = 285x4 = 1.140 €
Cables – included in the box
Heatsink – might need to be replaced by one with a fan – 25 €

Option 3 – Xeon Silver - 2.330 €
X11SPM-F – 375 €
Intel Xeon Silver 4110 (8 cores; Benchmark: 12.020 / 1.593) – 530 €
RAM: 4 x 32GB ECC 2666 (Samsung M393A4K40BB2-CTD) = 335x4 = 1.340 €
Heatsink – might need to be replaced by one with a fan – 35 €
HDD cables not included in the box – 50 €

Option 4 – Xeon E5-1650 v4 – almost the same cost as the Xeon-D or Xeon Silver, but the product line is close to EoL; a slightly higher benchmark (14.234 / 2.182), but a significantly higher TDP, jumping from 80-85W to 140W.

Overall, the system cost difference between the Atom and Xeon based solutions is about 17-18%.
But the Xeon based machine has 3-4 times higher overall performance, and at least 2 times more performance per core.
I initially assumed the savings on the power bill (electricity is rather expensive here in Germany) would be substantial, but looking at the real difference it does not appear so, as there are many other components contributing to the bill and shrinking the gap.
Taking the Atom as a baseline, the Xeon-D consumes about +50W and the Xeon Silver about +80W.
In other words, there are really only two economical options: the Atom or a Xeon, and among the Xeons the Silver wins significantly.
With an 18% difference in cost and +80W on the power bill (~250 Euro/year with the server running 24x7, which is not going to be the case), the saving looks rather theoretical compared to the big performance loss with the Atom.
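For transparency, this is roughly how the yearly figure above can be reproduced; the electricity tariff is an assumption, so plug in your own rate:

```python
# Yearly cost of the extra power draw of the Xeon options vs. the Atom baseline,
# assuming 24x7 operation; the EUR/kWh tariff is an assumption, adjust as needed.
HOURS_PER_YEAR = 24 * 365

def annual_cost_eur(extra_watts, eur_per_kwh):
    kwh_per_year = extra_watts / 1000 * HOURS_PER_YEAR
    return kwh_per_year * eur_per_kwh

print(round(annual_cost_eur(50, 0.30)))   # Xeon-D, +50W       -> ~131 EUR/year
print(round(annual_cost_eur(80, 0.30)))   # Xeon Silver, +80W  -> ~210 EUR/year
print(round(annual_cost_eur(80, 0.35)))   # at a higher tariff -> ~245 EUR/year
```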

So here is the question: am I missing anything big or important?

And another question: does Option 3 make sense, and are these components compatible with each other?
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
IMO I would stay away from the Atoms. A chassis is about all I see missing. Have you looked at the 24/36-bay chassis, for upgrade space down the road? Are you planning on rack or tower? I would look at used Supermicro chassis; they make them in all sizes, like 8, 16, 24, and 36-bay units. I also like the idea of having dual PSUs with ZFS for redundancy. I was worried about the same thing (power), so I did a test: a Supermicro 847 24-bay unit with 2x 1100W 80 Plus PSUs, 13 Red/IronWolf drives, and an X9 board runs at about 300 watts. I am sure with the X11 it would be less. Option 3 does work, looking at the Intel and Supermicro sites. Also, unless you keep your CPU under heavy load you will not see that 80W mark. IMO I like 3 or 4, but that is my opinion.
 
Last edited:

AndroGen

Dabbler
Joined
Jan 19, 2019
Messages
47
IMO I would stay away from the Atoms.
I have one small openSUSE-powered server - it has run like clockwork for almost 10 years, not under heavy load, but still. For me the question was more: does it make sense economically? And it looks like for this setup it does not, really. Thanks for confirming my doubts.

A chassis is about all I see missing. Have you looked at the 24/36-bay chassis, for upgrade space down the road?
For now I am planning a rack-mountable case, but will use it more like a desktop. It has space for 10x 3.5" + 4x 2.5" drives and is only 400mm deep.
Currently there is no proper space for a full-size rack case; I guess the next step will not be an upgrade, but the next machine...

Option 3 does work, looking at the Intel and Supermicro sites.
Thanks for checking.

Also, unless you keep your CPU under heavy load you will not see that 80W mark. IMO I like 3 or 4, but that is my opinion.
That's what I am looking for.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Taking the Atom as a baseline, the Xeon-D consumes about +50W and the Xeon Silver about +80W.

These are “up to” numbers. When idling they will draw approximately the same power. Perhaps the Atom will be a bit lower, but the motherboard and HD controllers will use more power.

If you’re concerned about cost, scale back from 128GB of RAM to, say, 64GB or 32GB. You can always add more later.
 

AndroGen

Dabbler
Joined
Jan 19, 2019
Messages
47
These are “up to” numbers. When idling they will draw approximately the same power. Perhaps the Atom will be a bit lower, but the motherboard and HD controllers will use more power.
Thanks, appreciate your feedback, this confirms my understanding.

If you’re concerned about cost, scale back from 128GB of RAM to, say, 64GB or 32GB. You can always add more later.
Reading further through the forum, I am also slowly coming to the conclusion that 64GB or even 32GB would be more than sufficient.

Is there anything else that could be optimized?
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Yes, I started with 16GB back at version 7. I now run 128GB in my main system and 50GB in my backup. You can always add more. Trust me, if you add it FreeNAS will use it; FreeNAS loves memory.
 

AndroGen

Dabbler
Joined
Jan 19, 2019
Messages
47
@Snow, just to get some perspective, what pushed you to increase the memory size?
What workload on the system requires such a big capacity?
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
I plan on having all 24 disks filled by year end; for now 64GB would probably be fine here. I also have 8-10 users on Plex a night, so it uses around 20-24GB, and if it's 4K I have seen up to 32GB of RAM used by Plex. Swap I have set to 16GB, and ARC is capped at 70GB. Yes, it is probably overkill, so I use around 118GB. I found 16GB DIMMs on sale used for very cheap, so I grabbed 8 sticks and plan on filling the other 8 slots. Also, if I roll out any VMs I can pretty much make them any size I want. I also use FreeNAS as a lab, so I load all sorts of stuff into VMs.
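If you want to cap the ARC the way I do, the value goes into the vfs.zfs.arc_max loader tunable (System > Tunables in FreeNAS) and it expects bytes; a quick conversion, with 70 GiB just being my own number:

```python
# Convert a GiB ARC cap into the byte value expected by vfs.zfs.arc_max.
GIB = 1024 ** 3

arc_max_gib = 70                  # my cap; size it to your own RAM
print(arc_max_gib * GIB)          # 75161927680 -> value for vfs.zfs.arc_max
```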
 

AndroGen

Dabbler
Joined
Jan 19, 2019
Messages
47
@Snow, thanks, this gives me more confidence that I will be more than fine with 64GB.
You have mentioned VMs. That's another area where I am still on the learning curve. Would my understanding be correct?
The server has ESXi installed as the core/main system, and everything else, including FreeNAS, is installed as a separate virtual machine, while Plex (still not sure I would need it at all) runs in a jail on FreeNAS - some sort of double virtualization. Even when virtualized, FreeNAS should use the disks directly via an HBA.
Or is there something wrong in this understanding?
Does FreeNAS offer virtualization as part of it, so that e.g. another Linux system could be run within FreeNAS?
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Yes, it does. You can run FreeNAS as a VM, but you need very specific hardware. Just starting out, I would stay away from VM'ing FN; I would start off with just normal FN, jails, and the built-in VMs in FN. As you learn, work your way up from that. I like to run a FreeBSD server, a Win2008 server, or just test out any bootable software. 32GB is a good start for FN. 64GB would allow you to make a VM with 32, 24, or 16GB, or two VMs at 16GB. That is nice, as most Windows servers require 8GB minimum and 16GB is just a good idea for them.
 

AndroGen

Dabbler
Joined
Jan 19, 2019
Messages
47
OK, thanks. Then the configuration is more or less clear...
 