
A good Ryzen server motherboard at last?

Joined
May 17, 2019
Messages
5
Thanks
1
#21
I've been running a Ryzen 7 2700X for about 6 months and everything works fine. I recently noticed some wrong temp readings in the FreeNAS UI and sysctl, but that's about it. I also run overclocked RAM with no issues now; initially the sticks didn't play nicely together and I had to bump the voltage a bit. I ran memory tests for about a week and they passed too.

@Apollo Have you had wrong temp readings with your 1900X?
I'm new to FreeNAS and set it up on a few-year-old Dell Latitude laptop I had lying around. It worked great, so I decided to build a PC, without doing my research on AMD processors first. I went with AMD because of the significant cost difference over its Intel counterpart. I am currently running FreeNAS 11.2-U4 with no issue other than the OS reading a high CPU temperature as well. It threw me off when the Dashboard showed the CPU idling at 80C. I immediately shut it down, hooked up a monitor, and booted into the BIOS, where it stabilized at 35-41C idle. I contacted support, created a ticket (NAS-101802), and received cursory feedback stating the below.

"I suspect this problem is already be fixed in FreeNAS 11.3 nightly builds. You may backup your configuration and try to update to check it. There was 49C shift introduced at some point, so your 82C should be really 33C."

I noticed the nightly builds are labeled as "Very Unstable". From how I read your post, the wrong temp reading is nothing to be concerned about.

I've held off purchasing HDD storage to complete my build until I get this resolved either in the OS or my mind.
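For anyone wanting to sanity-check the reading themselves, here's a small sketch. The sysctl name `dev.cpu.0.temperature` is the standard one exposed by FreeBSD's amdtemp/coretemp drivers, but the 49C offset is only what the support ticket above claims, not a documented constant - verify it against your own BIOS reading:

```shell
#!/bin/sh
# OFFSET=49 is taken from the support ticket quoted above
# (an assumption, not a documented value).
OFFSET=49

corrected_temp() {
    # $1 is the reported temperature in whole degrees C
    echo $(( $1 - OFFSET ))
}

# On the NAS itself you would feed in the live value, e.g.:
#   RAW=$(sysctl -n dev.cpu.0.temperature)   # prints e.g. "82.0C"
#   corrected_temp "${RAW%%.*}"
# Here we use the 82C figure from the ticket, which should print 33:
corrected_temp 82
```

If the corrected number roughly matches what the BIOS shows at idle, the reading is cosmetic rather than a real thermal problem.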
 

Apollo

FreeNAS Guru
Joined
Jun 13, 2013
Messages
915
Thanks
138
#22
If you are new to FreeNAS and are just waiting for the OS to stabilize before buying the HDDs, that's fine, but why not buy the HDDs now and play around with FreeNAS to get a hold of it and become familiar with its inner workings? That would be a good way to start, assuming you do not want to put all your data there from day one. Of course you can.
At least you can start with one drive.

My Threadripper is running, without much load, in the high 60s to low 70s degrees C.
 
Joined
May 17, 2019
Messages
5
Thanks
1
#23
I have been running the Dell laptop for that very same purpose. It's been a slow start with a few of the plugins because the install how-tos are limited to older versions - such as CouchPotato, Transmission, ClamAV...
 
Joined
Oct 14, 2018
Messages
20
Thanks
3
#24
I have been running the Dell laptop for that very same purpose. It's been a slow start with a few of the plugins because the install how-tos are limited to older versions - such as CouchPotato, Transmission, ClamAV...
Playing around with your internal HDD on your laptop is not the same as having HDDs and setting up a RAID config, running scrubs and resilvers. It's a whole lot more involved, but that depends on your usage. If you really need your own NAS, as in actually keeping your data safe and protecting it from corruption, you need to get your hands dirty with ZFS. But I guess as a server solution/hosting, what you're doing works fine.
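For reference, the pool-maintenance operations mentioned here are just a couple of commands once a pool exists (FreeNAS creates pools and schedules scrubs from the UI; the pool name `tank` below is only a placeholder):

```shell
# Kick off a scrub by hand - ZFS walks every block and verifies
# checksums, repairing from redundancy where it can.
zpool scrub tank

# Watch scrub or resilver progress, error counts, and pool health.
zpool status tank
```

Seeing a scrub or a resilver actually run against a redundant vdev is what a single laptop disk can't teach you.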
 
Joined
May 17, 2019
Messages
5
Thanks
1
#25
Playing around with your internal HDD on your laptop is not the same as having HDDs and setting up a RAID config, running scrubs and resilvers. It's a whole lot more involved, but that depends on your usage. If you really need your own NAS, as in actually keeping your data safe and protecting it from corruption, you need to get your hands dirty with ZFS. But I guess as a server solution/hosting, what you're doing works fine.
I couldn't agree more, but you have to start somewhere to know what you like before you commit the dollars to a NAS build. Now that I've stuck with FreeNAS on the laptop for a few months and have enjoyed the experience, I have built a machine and am looking to get my hands dirty under the hood of the OS. A reference manual would be fantastic for understanding the value of each feature and how to set up, diagnose, and maintain it. The current documentation is very limited.

It would be great to find how-to setup tutorials for v11.2+ for the available plugins. Plex is the only well-covered plugin that I have come across. I have set up CouchPotato and Transmission, but I must have folder permissions set up incorrectly, because there is no download activity. Also, ClamAV throws an error when starting the service. I seem to be missing an element to really break through in this respect. I won't give up until I get the knowledge, but I hate wasting time trying to figure it out.
 
Joined
Oct 19, 2017
Messages
28
Thanks
1
#26
I have been running a 1700x with SMT disabled in a stable state for 6 months. I had to disable SMT because of C6 power problems that Ryzen was known for. My system would lock up once a day.

I have not been bold enough to do a firmware update and re-enable since the kernel bug is still open https://bugzilla.kernel.org/show_bug.cgi?id=196683

I would honestly steer away from Ryzen for this reason.
 
Joined
Oct 14, 2018
Messages
20
Thanks
3
#27
I couldn't agree more, but you have to start somewhere to know what you like before you commit the dollars to a NAS build. Now that I've stuck with FreeNAS on the laptop for a few months and have enjoyed the experience, I have built a machine and am looking to get my hands dirty under the hood of the OS. A reference manual would be fantastic for understanding the value of each feature and how to set up, diagnose, and maintain it. The current documentation is very limited.

It would be great to find how-to setup tutorials for v11.2+ for the available plugins. Plex is the only well-covered plugin that I have come across. I have set up CouchPotato and Transmission, but I must have folder permissions set up incorrectly, because there is no download activity. Also, ClamAV throws an error when starting the service. I seem to be missing an element to really break through in this respect. I won't give up until I get the knowledge, but I hate wasting time trying to figure it out.
Yeah, the plugin/jail setup for a lot of things is copy-pasting from here and there. In the end you get used to it. More importantly, though, if you want to learn how not to rely on plugins, I'd suggest reading up on how rc scripts work (syntax, daemons, etc.). That way you can make more sense of things. A lot of the rc files out there for the services you're looking for should be future-proof, meaning you can probably use them with minor tweaks. Remember that a jail is just a FreeBSD instance, so rc.d works the same as on a full-blown FreeBSD system. Official docs here: https://www.freebsd.org/doc/en_US.ISO8859-1/articles/rc-scripting/

Yes, permissions are a pain at first. My advice: make a user for every service if there isn't one already, and run the rc.d service as that user. Once that is done, make a user in every jail for shared data access. Example: user plex should exist in the couchpotato jail, and the couchpotato rc.d service runs as user couchpotato. This makes your jails more secure and creates fewer headaches. You also learn what each service needs to run, so you indirectly have to learn how rc.d scripts work. I've spent hours on this stuff, and sometimes I still have to go and look at old rc.d scripts I've written for reference.

P.S.
Some plugins I've installed run services as root, which is bad. I've had to uninstall those plugins and create the jail / install the service manually. Get into the habit of doing that and I promise you'll face fewer headaches, since plugin maintainers don't always abide by standards.
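To make the rc.d advice concrete, here is a minimal skeleton of the kind of script described above. The service name `mydaemon`, its binary path, and its flags are all hypothetical placeholders - substitute your actual daemon:

```shell
#!/bin/sh
# Hypothetical rc.d script; install as /usr/local/etc/rc.d/mydaemon
# inside the jail and enable with: sysrc mydaemon_enable=YES

# PROVIDE: mydaemon
# REQUIRE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name=mydaemon
rcvar=mydaemon_enable

load_rc_config $name

# Defaults, overridable from rc.conf. Running as a dedicated
# unprivileged user is the point made in the post above.
: ${mydaemon_enable:=NO}
: ${mydaemon_user:=mydaemon}

command=/usr/local/bin/mydaemon
command_args="--daemonize"

run_rc_command "$1"
```

rc.subr honors the `${name}_user` variable, so the daemon is started as that user without any extra su logic. The dedicated user itself can be created in the jail with something like `pw useradd mydaemon -s /usr/sbin/nologin -d /nonexistent`.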
 
Joined
Oct 14, 2018
Messages
20
Thanks
3
#28
I have been running a 1700x with SMT disabled in a stable state for 6 months. I had to disable SMT because of C6 power problems that Ryzen was known for. My system would lock up once a day.

I have not been bold enough to do a firmware update and re-enable since the kernel bug is still open https://bugzilla.kernel.org/show_bug.cgi?id=196683

I would honestly steer away from Ryzen for this reason.
I don't know how that would be relevant to FreeNAS. The FreeBSD kernel is not the same as the Linux kernel; an issue affecting Linux distros does not necessarily affect FreeBSD or FreeNAS. Frankly, I do not know much about this issue or what's causing it exactly, so take my words with a grain of salt. If you or @Junglist724 know more, I'd like to hear it in detail, particularly the power problems.
 
Joined
Oct 19, 2017
Messages
28
Thanks
1
#29
I don't know how that would be relevant to FreeNAS. The FreeBSD kernel is not the same as the Linux kernel; an issue affecting Linux distros does not necessarily affect FreeBSD or FreeNAS. Frankly, I do not know much about this issue or what's causing it exactly, so take my words with a grain of salt. If you or @Junglist724 know more, I'd like to hear it in detail, particularly the power problems.
Sure, they are different kernels. If you look at that thread you will see that there is a hardware bug in the first-generation Ryzens (not sure about 2nd gen).

You can read about all the people who have attempted to run FreeNAS on Ryzen here: https://www.ixsystems.com/community/threads/ryzen-stability-on-11-0-u4.59017/
 

averyfreeman

FreeNAS Experienced
Joined
Feb 8, 2015
Messages
129
Thanks
8
#30
Keep in mind AM4 processors can only have 24 PCIe lanes. It's Epyc that has the big lane counts. So an AM4 board that exposes 8+8+4+4+2 lanes (26 in total, more than the socket's 24) is over-subscribed and is using the chipset's PCIe bus expander.

@Constantin, it does appear that you could use both M.2 slots for symmetrical-speed, mirrored SLOGs. One slot is PCIe 3.0 x2 and the other is PCIe 2.0 x4 - thus, about the same speed.
Yeah, I was looking up this board when I came across this thread. A Ryzen server sounded like a cool, cost-effective alternative to Epyc until I saw the paltry PCIe lane comparison - the main reason I thought Epycs sounded so cool to begin with.

Where do you see 24 PCIe lanes - is that from the motherboard? If so, 8 of those lanes may be from the chipset, because I was just looking at the Ryzen 2700X spec and it said 16 lanes. That might explain the discrepancy between PCIe 2.0 (chipset) and 3.0 (processor).

The Epyc, on the other hand, is a beast with 128 lanes straight from the CPU. Absolutely stunning throughput capabilities. I wonder how well it would run OmniOS for NVMe storage.

I also wonder how their memory and cache latency compare to Intel's line, though, for applications like high-speed packet processing (a la pfSense, etc.). This is of course wholly unimportant on desktops and most home servers, but may make a big difference for switching/routing speed (I virtualize OPNsense, so I pay attention to this stuff).

I've been concerned about memory latency since I read this thread where one of the original pfSense devs got into it here: https://www.reddit.com/r/networking/comments/6upchy/can_a_bsd_system_replicate_the_performance_of/

As an aside, Ryzen's main memory latency (access speed from processor to RAM) is horrid compared to the competing Intel processor (6900K), and also horrid compared to the FX-8350. Ryzen sits at 98ns, compared to around 70ns for the Intel and the FX-8350. Looking at the latency to the three levels of cache, the L1 and L2 caches of Ryzen and the 6900K are generally comparable. The 6900K has higher L1 and L3 bandwidth, and Ryzen wins out in L2. However, Ryzen's L3 latency is 46.6ns, whereas the 6900K's is 17.3ns. The reason for this is that Ryzen's L3 cache is not a true general-purpose cache. It's a victim cache.
A victim cache generally works as a normal cache, until data needs to be pulled from it. Then, the data in the lower level cache and the data in the victim cache are swapped. The 8c/16t chips have 2 CCXs on them. Each CCX contains 8MB of the L3 cache, for a total of 16MB. Ryzen's architecture is such that if a thread on one CCX needs to access the cache in the other CCX, it needs to talk through a bus system that goes through the memory controller. The bandwidth of this interconnection is only 22GB/s, about the speed of DDR3-1600.
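The DDR3-1600 comparison at the end of that passage can be checked back-of-envelope. A DDR channel is 64 bits (8 bytes) wide, so per-channel bandwidth is transfer rate times 8 bytes; whether the quoted 22 GB/s is closer to single- or dual-channel DDR3-1600 depends on which configuration the author meant:

```shell
#!/bin/sh
# DDR3-1600 = 1600 MT/s on a 64-bit (8-byte) channel.
PER_CHANNEL=$((1600 * 8))          # 12800 MB/s per channel
DUAL_CHANNEL=$((PER_CHANNEL * 2))  # 25600 MB/s dual-channel

echo "${PER_CHANNEL} MB/s single-channel, ${DUAL_CHANNEL} MB/s dual-channel"
```

So the 22 GB/s CCX interconnect figure lands between single-channel (12.8 GB/s) and dual-channel (25.6 GB/s) DDR3-1600 - "about the speed of DDR3-1600" is fair for a dual-channel comparison.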
 

Arwen

FreeNAS Expert
Joined
May 17, 2014
Messages
1,098
Thanks
536
#31
Yeah, I was looking up this board when I came across this thread. A Ryzen server sounded like a cool, cost-effective alternative to Epyc until I saw the paltry PCIe lane comparison - the main reason I thought Epycs sounded so cool to begin with.

Where do you see 24 PCIe lanes - is that from the motherboard? If so, 8 of those lanes may be from the chipset, because I was just looking at the Ryzen 2700X spec and it said 16 lanes. That might explain the discrepancy between PCIe 2.0 (chipset) and 3.0 (processor).

The Epyc, on the other hand, is a beast with 128 lanes straight from the CPU. Absolutely stunning throughput capabilities. I wonder how well it would run OmniOS for NVMe storage.
...
It's 24 PCIe 3.0 lanes from the AM4 socket;
https://en.wikipedia.org/wiki/Socket_AM4
If using one of AMD's chipsets, it takes up 4 of those lanes and gives you back 4 to 8 lanes of PCIe 2.0.

According to my little DeskMini, if I had used an Athlon APU, it would have two fewer PCIe lanes for one of my NVMe slots. That indicates to me that they supply at least two fewer PCIe lanes than the Ryzen processors.
...
I also wonder how their memory and cache latency compare to Intel's line, though, for applications like high-speed packet processing (a la pfSense, etc.). This is of course wholly unimportant on desktops and most home servers, but may make a big difference for switching/routing speed (I virtualize OPNsense, so I pay attention to this stuff).

I've been concerned about memory latency since I read this thread where one of the original pfSense devs got into it here: https://www.reddit.com/r/networking/comments/6upchy/can_a_bsd_system_replicate_the_performance_of/
Due to the way AMD created the 32-core version, inter-processor communication is slower when leaving its core complex. (I think it's an 8-core complex, 4 of them per 32-core Epyc.) But still, more choices than just what Intel decides we want.
 

averyfreeman

FreeNAS Experienced
Joined
Feb 8, 2015
Messages
129
Thanks
8
#32
It's 24 PCIe 3.0 lanes from the AM4 socket;
https://en.wikipedia.org/wiki/Socket_AM4
If using one of AMD's chipsets, it takes up 4 of those lanes and gives you back 4 to 8 lanes of PCIe 2.0.

According to my little DeskMini, if I had used an Athlon APU, it would have two fewer PCIe lanes for one of my NVMe slots. That indicates to me that they supply at least two fewer PCIe lanes than the Ryzen processors.

Due to the way AMD created the 32-core version, inter-processor communication is slower when leaving its core complex. (I think it's an 8-core complex, 4 of them per 32-core Epyc.) But still, more choices than just what Intel decides we want.
I agree, competition is definitely helpful. It's an interesting tug-of-war between the benefits of economies of scale. I wonder what China will start developing in response to this new trade situation we're in re: Huawei/Qualcomm. If Intel were thrown into the mix, there might be some very interesting diversity that could result in the x86 space (disclaimer: statement purely reflective, not meant to advocate or oppose trade war in any way).
 
Joined
May 6, 2019
Messages
5
Thanks
1
#33
I don't know how that would be relevant to FreeNAS. The FreeBSD kernel is not the same as the Linux kernel; an issue affecting Linux distros does not necessarily affect FreeBSD or FreeNAS. Frankly, I do not know much about this issue or what's causing it exactly, so take my words with a grain of salt. If you or @Junglist724 know more, I'd like to hear it in detail, particularly the power problems.
My 2700X has been up running FreeNAS for about 15 days, since I last rebooted for updates, with zero issues. My 2950X is running Proxmox and was actually getting soft lockups, but I think I figured out that it was just unstable running 4x dual-rank DIMMs @ 3200, so I dropped to 2933 and it's stable now.
It's 24 PCIe 3.0 lanes from the AM4 socket;
Due to the way AMD created the 32-core version, inter-processor communication is slower when leaving its core complex. (I think it's an 8-core complex, 4 of them per 32-core Epyc.) But still, more choices than just what Intel decides we want.
It's 4 cores per CCX, 2 CCXs per die, and 4 dies per chip on Epyc. 2nd-gen Epyc is coming soon though, and that's 8 x 8-core dies, all communicating through a central IO die, so minimum latency is higher but average latency is lower and consistent. Still no word on whether 2nd gen uses 2 CCXs per die anymore, but there's no more NUMA for 2nd gen.

Also, this board looks great: https://www.asrockrack.com/general/productdetail.asp?Model=X399D8A-2T#Specifications
It just seems like it came out a bit late.
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,128
Thanks
2,876
#34
For example, one cost-optimized baseline Supermicro Epyc 7000 board (H11SSL-i) is available for USD 120 more than the Ryzen board mentioned above, can handle 1TB of RAM (vs. 256GB), features three PCIe 3.0 x16 slots, another three PCIe 3.0 x8, an M.2 slot at x4, 16 SATA ports (of which 2 can be used as SATADOMs), etc. Plus, you get SuperMicro tech support vs. AsRock. :D

As usual, SuperMicro offers far more iterations of this board with various options built in if you want them. But with six PCIe x8+ slots, who cares? @Chris Moore's house heaters (aka 24+-bay file servers) would likely be very happy with this board - simply add an HBA or two, perhaps a 10GbE NIC, and still have 3 slots left over. For the long run, this seems like a much better server platform than the AsRock board above.
As much as I like new hardware, I would have a difficult time justifying the new memory and CPU to go with that new system board. The cost of those components is sure to be more than the $250 I paid for the system board, processor, and RAM that I am using now, and I really don't need any greater performance. I have actually been seriously contemplating going to a less capable system board, processor, and RAM in an effort to consume less power / generate less heat, even though the majority of the heat and power consumption of my system is in the drives.
 

Constantin

FreeNAS Guru
Joined
May 19, 2017
Messages
490
Thanks
147
#35
I have actually been seriously contemplating going to a less capable system board, processor, and RAM in an effort to consume less power / generate less heat, even though the majority of the heat and power consumption of my system is in the drives.
But then the house might get c....c...c... cold! :)

... and you may start missing out on those sumptuous wine and cheese platters the local electric company sends you annually for being such a good customer! :p

But seriously, low-power boards do make sense for those of us who live in high-cost areas like the Northeast, CA, etc. and who don't need to transcode 8K entertainment. The -2C- version of my motherboard is only $450 and offers two x8 slots, a 16-channel LSI2116 SAS chip, onboard SFP+, etc., all in a 25W TDP package. Granted, adding RAM will add cost, but for SOHO use it would likely perform as well as the unit I ended up buying. Electrical cost considerations are what also drove me to reduce my pool disk quantity in favor of higher-capacity, helium-filled drives.
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,128
Thanks
2,876
#36
But then the house might get c....c...c... cold! :)
It is summer and the problem now is the heat... ;)
... and you may start missing out on those sumptuous wine and cheese platters the local electric company sends you annually for being such a good customer! :p
I do own stock, but not enough to get invited to the party.
But seriously, low-power boards do make sense for those of us who live in high-cost areas like the Northeast, CA, etc.
I am quite happy that I don't pay for electricity, but I do have trouble keeping the house cool this summer. It is just too hot outside - 86°F at 7:45 PM. The central air conditioning unit in my house died, so I am relying on the two supplemental units to keep the temp under control. The heat in my home computer room hit a high of 84°F today. How many people have redundant cooling in their home? I know, I am a little nuts. I was going to get an auto-transfer generator system powered by natural gas, but I have not quite gotten to that point yet. Baby steps.
Electrical cost considerations are what also drove me to reduce my pool disk quantity in favor of higher-capacity, helium-filled drives.
I am seriously considering this reduction in drive count, but for the heat reduction, because electricity is free (for me) at the moment. I just can't quite manage the price bite of a full set of new drives; I only bought the ones I am using around the beginning of 2017, and I need to get more hours out of them to amortize the investment. Then there is the cost of the new AC system. The first quote has come in at $6,000, and that doesn't include the cost of the carpentry (drywall, paint, etc.) or new ductwork. I am estimating about $10k for the project right now. I may need to sell a server to help cover the cost.
 

Constantin

FreeNAS Guru
Joined
May 19, 2017
Messages
490
Thanks
147
#37
Central AC (CAC) systems are not cheap, especially if you want them installed right. Many extant CAC systems are installed with oversized capacity (which hurts latent heat removal) as well as undersized ducts (which affect effective heat transfer, may increase power consumption, etc.).

I would ALWAYS run a heat loss / heat gain calculation on your home to "right-size" the system (aka Manual J). That may result in a lower-capacity system for which the extant ducts are right-sized, and it will perform better. A Manual D review of the ductwork will identify issues that you can address as part of the replacement job.

Many utilities offer free access to consultants who will do a home review, a blower-door test, perhaps even an IR camera survey. Take advantage of these programs... I would also check what incentives the local utility is offering to see if you can replace your extant unit with an even more efficient one (rebates).

It's certainly common in the US for homes to have multiple CAC outdoor units, but they usually each "feed" only one individual indoor unit. Redundancy in the cooling system may get really expensive. Given how rare CAC failures usually are and how relatively gracefully our systems respond to such failures, I'd focus on something different... perhaps even have a script gracefully shut the system down if it's getting too hot.

I recall graceful shutdowns being built into FreeNAS for loss-of-power events (i.e. the UPS says help!) but does FreeNAS offer similar auto-shutdown events in the UI for temperature excursions and the like?
 

Chris Moore

Super Moderator
Moderator
Joined
May 2, 2015
Messages
9,128
Thanks
2,876
#38
but does FreeNAS offer similar auto-shutdown events in the UI for temperature excursions and the like?
In 11.1-U7 there is an alert for drive temperature, but I don't recall there being a feature to initiate a shutdown. Should be easy to script though.
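A sketch of such a script, assuming smartctl from smartmontools (which FreeNAS ships) is available. The 55C threshold and the `/dev/ada*` device list are assumptions - adjust both for your hardware, and test with the `shutdown` line commented out first:

```shell
#!/bin/sh
# Sketch: power the box off if any drive reports a temperature
# above THRESHOLD. THRESHOLD and the drive list are assumptions.
THRESHOLD=55

drive_temp() {
    # Pull the raw value (field 10) from smartctl's
    # "194 Temperature_Celsius ..." attribute line.
    smartctl -A "$1" | awk '/Temperature_Celsius/ {print $10; exit}'
}

for disk in /dev/ada0 /dev/ada1; do
    t=$(drive_temp "$disk")
    [ -z "$t" ] && continue   # skip drives we couldn't read
    if [ "$t" -gt "$THRESHOLD" ]; then
        logger "Drive $disk at ${t}C exceeds ${THRESHOLD}C - shutting down"
        shutdown -p now
    fi
done
```

Run it from cron every few minutes as root. Some drives report temperature under attribute 190 (Airflow_Temperature_Cel) instead of 194, so check your drives' `smartctl -A` output before trusting the pattern.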
 