Will AMD Epyc 7351P & Supermicro H11SSL-i do FreeNAS 11?

RobKamp

Dabbler
Joined
Dec 29, 2011
Messages
26
Hi community,

Will the items on the following list run FreeNAS 11?
  • AMD Epyc 7351P
  • Supermicro H11SSL-i
  • Fractal Design Define R6 Black
  • Kingston HyperX Fury black HX421C14FBK4/64
  • Samsung 960 EVO 250GB
I'm especially interested in using this for VM and Docker purposes.

Any thoughts or comments?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Fractal Design Define R6 Black
Totally compatible with FreeNAS. No problems with that item.
Samsung 960 EVO 250GB
Also, completely compatible. Many have used these without difficulty.
I'm especially interested in using this for VM and Docker purposes.

Any thoughts or comments?
As for the rest, you need to look at the hardware guide.


FreeNAS® Quick Hardware Guide
https://forums.freenas.org/index.php?resources/freenas®-quick-hardware-guide.7/

Hardware Recommendations Guide (Rev 1e, 2017-05-06)
https://forums.freenas.org/index.php?resources/hardware-recommendations-guide.12/

FreeNAS always lags well behind the cutting edge, and the hardware you are contemplating is not only too new to be well supported; it is also AMD, which is not as well supported as Intel.
Your money would be better spent buying Intel components that are a couple of generations old. I recently picked up a 10-core Xeon E5 for less than $200.
How many VMs are you thinking of running?
If you share your intent, suggestions can be made.
 

RobKamp

Dabbler
Joined
Dec 29, 2011
Messages
26
If this is for business (work) you will want to go with new gear, but if this is for home use, this would be compatible and give you more resources than you would have with the AMD alternative:
It's a bit of both. I'll have a look through the resources you sent!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The only thing that would concern me is that network interface: dual LAN with 10GBase-T, Intel® X722 + X557.
The X557 driver for BSD was available in 2016, but I am not sure about the X722.
I wouldn't hesitate to buy it.
What else do you plan to use with it?

 

RobKamp

Dabbler
Joined
Dec 29, 2011
Messages
26
The X557 driver for BSD was available in 2016, but I am not sure about the X722.
I found a driver over at Intel (follow link). I think it might work. If not, for the early stages I have a spare dual-gigabit Ethernet card, or I can use the X557.
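Once the board is here, the generic FreeBSD check from the FreeNAS shell should tell me whether a driver attached (nothing X722-specific here, just the usual commands):

    # show PCI network controllers and whichever driver (if any) claimed them
    pciconf -lv | grep -B3 -i network
    # list the interfaces that actually came up
    ifconfig -a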
 

RobKamp

Dabbler
Joined
Dec 29, 2011
Messages
26
What else do you plan to use with it?
My intention is to run some development stuff, an OpenHAB server, a Grafana server, a database server, a RoonLabs audio server, and a Unifi controller. I will also store my music there to listen to via the RoonLabs audio server. I cannot choose between VMs and Docker at the moment.
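If I go the jail route instead of Docker, I understand FreeNAS 11.1 manages jails with iocage, so one service would look roughly like this (jail name, release, and address are made up):

    # create and start a jail for the Unifi controller
    iocage create -n unifi -r 11.1-RELEASE vnet="on" ip4_addr="vnet0|192.168.1.50/24"
    iocage start unifi
    iocage console unifi   # then install the controller inside the jail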
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
Hi, I have just prepared the following hardware for FreeNAS 11.1:
1. AMD Epyc 7261 (8 cores)
2. Supermicro H11SSL-i
3. 64 GB Crucial DDR4 registered ECC (4 x 16 GB)
4. Intel X540-T2
It works like a charm, and with 16 SATA ports there is no more need for an HBA. It runs very cool: the core temperature never exceeds 30 °C at 20 °C ambient. Very low power consumption. The X540 card also runs quite cool; I don't know why it was hot in the previous hardware setup. I have another production FreeNAS on an Intel Xeon, far more expensive, plus two IBM HBAs with reflashed firmware. Epyc is another dimension so far, thanks to its 128 PCIe lanes.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Hi, I have just prepared the following hardware for FreeNAS 11.1:
1. AMD Epyc 7261 (8 cores)
2. Supermicro H11SSL-i
3. 64 GB Crucial DDR4 registered ECC (4 x 16 GB)
4. Intel X540-T2
It works like a charm, and with 16 SATA ports there is no more need for an HBA. It runs very cool: the core temperature never exceeds 30 °C at 20 °C ambient. Very low power consumption. The X540 card also runs quite cool; I don't know why it was hot in the previous hardware setup. I have another production FreeNAS on an Intel Xeon, far more expensive, plus two IBM HBAs with reflashed firmware. Epyc is another dimension so far, thanks to its 128 PCIe lanes.
Good to know. Have you configured any Jails or Virtual Machines?
Please give as many details of your build as you are willing to share; it may help others plan their builds.
You can even start your own build thread and share pictures as well.

Here is an example:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
An update on the build: upgraded to FreeNAS 11.2 RC1, because this will be the production system after release.
The machine runs like a charm on 11.2. It is now under burn-in for a month or so, working 24/7 moving files in and out against a test partner. Performance looks very good because there is no bottleneck on the PCIe lanes: the bandwidth with the test partner is flat, exceeding 300 MByte/s on the 10 Gbit network. The bottleneck is the test partner, which is a Windows Server machine. CPU temperature is 28 °C, and power consumption with 8 disks is around 120 W at maximum load, around 100 W under an average workload. CPU load is under 5% maximum.
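To separate raw network speed from disk speed, we also check the link with iperf3 (assuming it is available on both ends; the hostname is invented):

    # on the FreeNAS box: start a server
    iperf3 -s
    # on the Windows test partner: four parallel streams for 30 seconds
    iperf3 -c freenas-epyc -P 4 -t 30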
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
Another update: the NAS has been attached to our production 10 Gb switched network for some performance tests. Performance is crazy: sustained 6 Gbit/s writing to a RAID 10 array. Our Xeon-based production NAS looks like scrap by comparison. I will post more updates on system stability in the coming weeks.
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
The machine is still perfectly stable after three days of intense workload. The console is clean, UPS management is on, and the mail subsystem is fine. So far no issues; just perfect. We are crazy Italians, and I spend my spare time measuring the performance of this system under an extreme flood of write requests. I am not sure, but it appears we are the first worldwide doing these tests, and something very interesting is emerging. I am using one of our production machines as a baseline.
Baseline machine: Xeon E5-2620 v3 (6 cores, cost more than 500 Euro two years ago), 64 GB RAM, two IBM ServeRAID HBAs with reflashed firmware, X540-T2, 10 disks in RAID 10 plus dual 100 GB L2ARC front-end SSDs; 20 TB of space (40 TB raw).
Epyc machine: Epyc 7261 (8 cores, cost 500+ Euro), 64 GB RAM, X540-T2, 8 disks in RAID 10, no L2ARC so far because the machine is in test.
Both machines use Supermicro boards.
Temperatures at idle (values are approximate):
X540: 70 °C on Xeon, 40 °C on Epyc. This is a mystery so far.
CPU: 45 °C on Xeon, 28 °C on Epyc.
Temperatures at maximum write flooding (2 TB from three different data sources, including large files exceeding 50 GB):
X540: 80+ °C on Xeon, 50+ °C on Epyc. Still a mystery.
CPU: 60 °C on Xeon, 30 °C on Epyc.
CPU load during write floods:
Xeon: 20% peak.
Epyc: 45% peak (far more than expected).
Power: we did not compare power consumption because it would not be fair; the Xeon has two more disks and two HBAs running hot, so the two machines are too different to compare.
Performance (measured from the NIC bandwidth statistics in FreeNAS):
Xeon: sustained around 4 Gbit/s (peaking at 4.5 or so, averaging in the low 4 Gbit/s range).
Epyc: sustained around 8.5 Gbit/s (peaking at 9, averaging at 8.4 or so) with only 8 disks, where the Xeon has 10.
So the performance gain seems to be 2x+ (unexpected).
Consideration: if you want to squeeze performance out of FreeNAS, you must use modern technology, not, as the FreeNAS guides suggest, old-fashioned processors and HBAs and so on. So please, FreeNAS people, do the world a favor and run some tests on new technology!
In Italy we say "you need to eat your own dog food if you want to be trusted". This post will be the last one unless we have some stability issue, which we will report immediately.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
In Italy we say "you need to eat your own dog food if you want to be trusted".
I don't know what that is supposed to mean.
This post will be the last one unless we have some stability issue, which we will report immediately.
It would be better if you could post more information about what hardware you used and what testing you did. Not every community member has the resources to purchase latest-generation hardware. Having good test results from someone who does have the resources can be a help to us all.
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
My friend, the setup is a less-than-a-thousand-dollar one, memory excluded. If you do not have the resources for this setup, you are simply not interested in my notes. The point is that I (we) risked 1000 dollars to do, first in the world, a setup that the FreeNAS people not only do not take into consideration but actively discourage. I had already tried that on a dual-socket Asus motherboard that failed to boot FreeNAS; I gave up, no complaints (a very expensive failure).
Even so, the Epyc setup so far is stunning, far beyond the suggested Atom ones, which, by the way, cost the same or more. Of course you can spend 200 dollars on eBay for a processor, maybe used or fake. But science is science: you have to improve. You know what, every minute we save in NAS operation is a huge gain at the end of the month. If a machine just works, what do I have to post on this site? We don't want any responsibility for people copying us. We use VMware for virtualization; we don't take jails and VMs on FreeNAS into consideration, sorry for that.
Oh yeah, I do have to post that with the new interface I have not yet figured out how to replace a failed disk, but that is not an issue: you can use the command line. Wake up, man: FreeNAS is just a user interface on FreeBSD. The configuration of the machine has been posted. Want to know our L2ARC disks? OK, browse the Kingston site and look for the tantalum-capacitor enterprise SSD stuff. What else?
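For the record, the command-line way to swap a failed disk is roughly this (pool name and device ids are placeholders; check yours with zpool status):

    zpool status -v tank                      # identify the failed device
    zpool offline tank gptid/OLD-DISK-ID      # take it offline, then swap the disk
    zpool replace tank gptid/OLD-DISK-ID gptid/NEW-DISK-ID
    zpool status tank                         # watch the resilver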
Regards; we don't want anything, just to help people be brave and foolish and improve.
 

Brian Stretch

Dabbler
Joined
May 2, 2017
Messages
16
I've been running FreeNAS 11.1-U6 virtualized on an H11SSL-NC and Epyc 7351P under ESXi 6.7, with four 2 TB Micron 1100s in RAIDZ1 on the built-in LSI 3008 in pass-through mode. I gave FreeNAS 16 GB RAM and 4 vCPUs initially and just bumped it up to 32 GB after 80 days of uptime. The pool is less than half full. I thought I was having problems with large file writes; they would stall for a minute or so several times while copying from macOS over mere gigabit Ethernet, but that now looks like a macOS problem, as ye olde Win7 box has no such issues. Mind you, this pool hasn't been heavily used yet (my grand plans keep getting disrupted), but still. I'm thinking about moving the pool back to dedicated hardware, but we'll see.

unzip -t of a 27.7GB zip file from the FreeNAS shell took 42 seconds.
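(For anyone who wants to compare, that was roughly:

    time unzip -t /path/to/archive.zip > /dev/null

with the path pointing at whatever test file you have.)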

Anyhow, the H11SSL series and AMD Epyc CPUs rock.
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
Hi, today we will promote the new machine to the core2 backup resource. All our FreeNAS pods are set up as RAID 10, which is only stochastically reliable but very fast in the day-by-day workload and also in resilvering a failed disk. On RAID 10 you can lose more than one disk, but you must be lucky: if you lose two disks in the same RAID 1 sub-array, you are done. For this reason we have a two-stage backup strategy: every night a dedicated server copies all files changed during the day to a core2 backup resource, and after that, if the operation is successful, it performs the same operation on a core1 backup resource. So we are totally sure at least one copy of the data is made. Core2 is a RAID 0 (yes, RAID 0) resource, very fast at writing data, because it is just a backup of a backup. Core2 duty will today be given to the Epyc NAS, which is almost as fast as the RAID 0 stuff (yeah, the network is the bottleneck), and it will remain operational for a month or so. I will let you know the result of the operation.
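Conceptually, the nightly job is as simple as this sketch (using rsync; hostnames and paths are invented):

    #!/bin/sh
    # Stage 1: push the day's changes to the core2 backup resource
    rsync -a --delete /mnt/tank/data/ backup@core2:/backup/data/ || exit 1
    # Stage 2: only if stage 1 succeeded, repeat against core1
    rsync -a --delete /mnt/tank/data/ backup@core1:/backup/data/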
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
I've been running FreeNAS 11.1-U6 virtualized on an H11SSL-NC and Epyc 7351P under ESXi 6.7, with four 2 TB Micron 1100s in RAIDZ1 on the built-in LSI 3008 in pass-through mode. I gave FreeNAS 16 GB RAM and 4 vCPUs initially and just bumped it up to 32 GB after 80 days of uptime. The pool is less than half full. I thought I was having problems with large file writes; they would stall for a minute or so several times while copying from macOS over mere gigabit Ethernet, but that now looks like a macOS problem, as ye olde Win7 box has no such issues. Mind you, this pool hasn't been heavily used yet (my grand plans keep getting disrupted), but still. I'm thinking about moving the pool back to dedicated hardware, but we'll see.

unzip -t of a 27.7GB zip file from the FreeNAS shell took 42 seconds.

Anyhow, the H11SSL series and AMD Epyc CPUs rock.

Hi Brian, 80 days is not bad at all. We are testing our new stuff with 11.2 RC1, and our 10 pods are very stable. I am just now taking a look at memory profiles using Samba; I don't know, there is something strange in the memory dynamics. It seems the machine is paging a lot. Paging with 64 GB of RAM while moving files over Samba sounds strange to me, but it is not insane: around 600 MB so far. The size of the files is not an issue; we are moving 80 GB files without problems these days. I am sure that if it is an issue, it will disappear in the official release. Have you tested NFS on macOS? Maybe it will fix your stalls. Anyway, if you are paging too much, you may well have stalls of seconds (I don't know about minutes) under the workload.
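You can watch this from the shell with, for example:

    swapinfo -h    # swap devices and how much of each is in use
    top -o res     # processes ordered by resident memory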
Think positive!
 

Paolo Randi

Dabbler
Joined
Feb 20, 2015
Messages
16
Sorry, you are dead right: you never said "use Atom"; you said, quote: "Your money would be better spent buying Intel components that are a couple of generations old". That is fine, conservative, and respectful. I am not upset with you; why should I be? I am just making observations. In my opinion the point is "dog-fooding" (https://en.wikipedia.org/wiki/Eating_your_own_dog_food).
Everybody can draw their own conclusions about that. So again, sorry; I never intended to offend anybody. And you are also right, I'm not "friend" of anyone!! And I never ever use bad words. The rest is a communication issue. I don't use Google Translate, and my English may sound off. Regards, Paolo.
 