A good Ryzen server motherboard at last?

abnexus

Dabbler
Joined
May 17, 2019
Messages
13
I've been using a Ryzen 7 2700X for about 6 months and everything works fine. I recently noticed some wrong temp readings in the FreeNAS UI and sysctl, but that's about it. I also have overclocked RAM with no real issues; I initially had problems with the sticks working nicely together (had to up the voltage a bit), but I ran memory tests for about a week and that too passed.

@Apollo Have you had wrong temp readings with your 1900X?

I'm new to FreeNAS and set it up on a few-year-old Dell Latitude laptop I had lying around. It worked great, so I decided to build a PC, without doing my research on AMD processors. I went with AMD because of the significant cost difference over its Intel counterpart. I am currently running FreeNAS 11.2-U4 with no issue other than the OS reading a high CPU temperature as well. It threw me off when the Dashboard showed the CPU idling at 80C. I immediately shut it down, hooked a monitor into it, and booted into the BIOS, where it stabilized at 35-41C idle. I contacted support and created a ticket (NAS-101802), and received cursory feedback stating the below.

"I suspect this problem is already be fixed in FreeNAS 11.3 nightly builds. You may backup your configuration and try to update to check it. There was 49C shift introduced at some point, so your 82C should be really 33C."

I noticed the nightly builds are labeled as "Very Unstable". From how I read your post, the wrong temp reading is nothing to be concerned about.
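For anyone else hitting this, here is roughly how I have been comparing the raw readings from the shell. The exact sysctl OIDs depend on which temperature driver is loaded, so treat the names below as examples rather than gospel:

```sh
# dump whatever temperature OIDs the loaded driver exposes (names vary by driver/CPU)
sysctl -a | grep -i temperature

# the per-core value the UI appears to use on my box (OID index is just an example)
sysctl dev.cpu.0.temperature
```

Per the ticket response, whatever 11.2-U4 reports should be read as roughly 49C higher than the real value until the fix lands.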

I've held off purchasing HDD storage to complete my build until I get this resolved either in the OS or my mind.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,449
If you are new to FreeNAS and are just waiting for the OS to stabilize before buying the HDDs, that's fine, but why not buy the HDDs and play around with FreeNAS to get a hold of it and become familiar with its inner workings? That would be a good way to start, assuming you do not want to put all your data there right away. Of course, you can.
At least you can start with one drive.
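Even a single-disk pool is enough to get a feel for the basics. In FreeNAS you would normally do this through the UI, but under the hood it is just ZFS; something like the following, where the pool name and device are only examples and a single disk obviously gives you no redundancy:

```sh
# create a throwaway single-disk pool and a dataset to play with
zpool create tank /dev/ada1
zfs create tank/scratch
zfs set compression=lz4 tank/scratch

# see what you made
zpool status tank
zfs list -r tank
```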

My Threadripper is running, without much load, in the high 60s to low 70s degrees C.
 

abnexus

Dabbler
Joined
May 17, 2019
Messages
13
I have been running the Dell laptop for that very same purpose. It's been a slow start with a few of the plugins (CouchPotato, Transmission, ClamAV...) because the install how-tos only cover older versions.
 

arn0z

Dabbler
Joined
Oct 14, 2018
Messages
21
I have been running the Dell laptop for that very same purpose. It's been a slow start with a few of the plugins (CouchPotato, Transmission, ClamAV...) because the install how-tos only cover older versions.

Playing around with the internal HDD on your laptop is not the same as having HDDs and setting up a RAID config, running scrubs and resilvers. It's a whole lot more involved, but that depends on your usage. If you really need your own NAS, as in actually keeping your data safe and protecting it from corruption, you need to get your hands dirty with ZFS. But I guess as a server/hosting solution, what you're doing works fine.
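To give you an idea, the routine maintenance side looks something like this from the shell (pool and device names are just examples; FreeNAS schedules scrubs for you, but it is the same commands underneath):

```sh
# kick off a scrub and watch its progress / error counts
zpool scrub tank
zpool status -v tank

# after swapping out a dead disk, the resilver shows up in the same status output
zpool replace tank ada3 ada5
```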
 

abnexus

Dabbler
Joined
May 17, 2019
Messages
13
Playing around with the internal HDD on your laptop is not the same as having HDDs and setting up a RAID config, running scrubs and resilvers. It's a whole lot more involved, but that depends on your usage. If you really need your own NAS, as in actually keeping your data safe and protecting it from corruption, you need to get your hands dirty with ZFS. But I guess as a server/hosting solution, what you're doing works fine.

I couldn't agree more, but you have to start somewhere to know what you like before you commit the dollars to a NAS build. Now that I've stuck with FreeNAS on the laptop for a few months and have enjoyed the experience, I have built a machine and am looking to get my hands dirty under the hood of the OS. A reference manual would be fantastic for understanding the value of each feature and how to set it up, diagnose it, and maintain it. The current documentation is very limited in information.

It would be great to find how-to setup tutorials for v11.2+ for the available plugins. Plex is the only well-covered plugin that I have come across. I have set up CouchPotato and Transmission, but I must have folder permissions set up incorrectly, because there is no download activity. Also, ClamAV throws an error when I start the service. I seem to be missing an element to really break through in this respect. I won't give up until I get the knowledge, but I hate wasting time trying to figure it out.
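In case it helps anyone spot my mistake, this is roughly what I have been poking at from the FreeNAS shell. The dataset path, jail name, and user are just my layout, not anything standard:

```sh
# who owns the download dataset on the host, and what the jail sees
ls -ld /mnt/tank/downloads
iocage exec transmission ls -ld /mnt/downloads

# hand the dataset to the user the daemon runs as
# (the UID/GID has to match what that user maps to inside the jail)
chown -R media:media /mnt/tank/downloads
chmod -R 775 /mnt/tank/downloads
```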
 

ykhodo

Explorer
Joined
Oct 19, 2017
Messages
52
I have been running a 1700X with SMT disabled, in a stable state, for 6 months. I had to disable SMT because of the C6 power-state problems that Ryzen was known for; my system would lock up once a day.

I have not been bold enough to do a firmware update and re-enable it, since the kernel bug is still open: https://bugzilla.kernel.org/show_bug.cgi?id=196683

I would honestly steer away from Ryzen for this reason.
 

arn0z

Dabbler
Joined
Oct 14, 2018
Messages
21
I couldn't agree more, but you have to start somewhere to know what you like before you commit the dollars to a NAS build. Now that I've stuck with FreeNAS on the laptop for a few months and have enjoyed the experience, I have built a machine and am looking to get my hands dirty under the hood of the OS. A reference manual would be fantastic for understanding the value of each feature and how to set it up, diagnose it, and maintain it. The current documentation is very limited in information.

It would be great to find how-to setup tutorials for v11.2+ for the available plugins. Plex is the only well-covered plugin that I have come across. I have set up CouchPotato and Transmission, but I must have folder permissions set up incorrectly, because there is no download activity. Also, ClamAV throws an error when I start the service. I seem to be missing an element to really break through in this respect. I won't give up until I get the knowledge, but I hate wasting time trying to figure it out.

Yeah, the plugin/jail setup for a lot of things is copy-pasting things from here and there. In the end you get used to it. More importantly though, if you want to learn how not to rely on plugins, I'd suggest reading up on how the rc scripts work (syntax, daemon handling, etc.). That way you can make sense of things more. A lot of the service rc files out there for the services you're looking for should be future-proof, meaning you can probably use them with minor tweaks. Remember that a jail is just a FreeBSD instance, so rc.d is going to work the same as on a full-blown FreeBSD system. Official docs here: https://www.freebsd.org/doc/en_US.ISO8859-1/articles/rc-scripting/
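As a rough example, a bare-bones rc.d script looks something like this. The daemon name and binary path are made up; the real scripts under /usr/local/etc/rc.d in any jail are good references:

```sh
#!/bin/sh
# Bare-bones rc.d sketch for a hypothetical daemon called "mydaemon".
# The PROVIDE/REQUIRE/KEYWORD comments are read by rcorder to decide start order.

# PROVIDE: mydaemon
# REQUIRE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name="mydaemon"
rcvar="mydaemon_enable"
command="/usr/local/bin/mydaemon"   # hypothetical binary path
pidfile="/var/run/${name}.pid"
mydaemon_user="mydaemon"            # rc.subr runs the command as this user

load_rc_config $name
: ${mydaemon_enable:="NO"}

run_rc_command "$1"
```

Drop it into /usr/local/etc/rc.d/ inside the jail, chmod +x it, then sysrc mydaemon_enable=YES and service mydaemon start.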

Yes, permissions are a bitch at first. My advice on it is: make a user for every service if there isn't one already, and run the rc.d service as that user. Once that is done, make a user in every jail for shared data access. Example: user plex should exist in the couchpotato jail, and the couchpotato rc.d service runs as user couchpotato. This makes your jails more secure and creates fewer headaches. You also learn what each service needs to run, so you indirectly have to learn how rc.d scripts work. I've spent hours on this stuff, and sometimes I still have to go look at old rc.d scripts I've written for reference.
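Concretely, the user juggling I mean looks something like this (jail names, the UID, and the shared "media" user are all just examples, and it assumes the service's rc.d script honors a ${name}_user variable like the sketch above):

```sh
# create the same shared user, with the same UID, in every jail that touches the shared data
iocage exec couchpotato pw useradd -n media -u 8675 -d /nonexistent -s /usr/sbin/nologin
iocage exec transmission pw useradd -n media -u 8675 -d /nonexistent -s /usr/sbin/nologin

# run each daemon as its own dedicated user and enable it at boot
iocage exec couchpotato sysrc couchpotato_user=couchpotato
iocage exec couchpotato sysrc couchpotato_enable=YES
```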

P.S.
Some plugins I've installed run services as root, which is bad. I've had to uninstall those plugins and create the jail / install the service manually. Get into the habit of doing that and I promise you'll face fewer headaches, since plugin maintainers don't abide by standards.
 

arn0z

Dabbler
Joined
Oct 14, 2018
Messages
21
I have been running a 1700X with SMT disabled, in a stable state, for 6 months. I had to disable SMT because of the C6 power-state problems that Ryzen was known for; my system would lock up once a day.

I have not been bold enough to do a firmware update and re-enable it, since the kernel bug is still open: https://bugzilla.kernel.org/show_bug.cgi?id=196683

I would honestly steer away from Ryzen for this reason.

I don't know how that would be relevant to FreeNAS. The FreeBSD kernel is not the same as the Linux kernel, and the fact that something affects Linux distros does not necessarily mean it will affect FreeBSD or FreeNAS. Frankly, I do not know much about this issue or what's causing it exactly, so take my words with a grain of salt. If you or @Junglist724 know more, I'd like to hear it in detail, particularly the power problems.
 

ykhodo

Explorer
Joined
Oct 19, 2017
Messages
52
I don't know how that would be relevant to FreeNAS. The FreeBSD kernel is not the same as the Linux kernel, and the fact that something affects Linux distros does not necessarily mean it will affect FreeBSD or FreeNAS. Frankly, I do not know much about this issue or what's causing it exactly, so take my words with a grain of salt. If you or @Junglist724 know more, I'd like to hear it in detail, particularly the power problems.

Sure, they are different kernels. If you look at that thread you will see that there is a hardware bug with the first-generation Ryzens (not sure about 2nd gen).

You can read about all the people that have attempted to run FreeNAS on Ryzen here: https://www.ixsystems.com/community/threads/ryzen-stability-on-11-0-u4.59017/
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
Keep in mind AM4 processors can only have 24 PCIe lanes. It's Epyc that has 128 PCIe lanes. So an AM4 board that exposes 8+8+4+4+2 lanes is over-subscribed and is using the chipset's PCIe bus expander.

@Constantin, it does appear that you could use both M.2 slots for symmetrical-speed, mirrored SLOGs. One slot is PCIe 3.0 x2 and the other slot is PCIe 2.0 x4, so roughly the same bandwidth either way (about 985 MB/s per PCIe 3.0 lane times 2, versus about 500 MB/s per PCIe 2.0 lane times 4, i.e. roughly 2 GB/s each).

Yeah, I was looking up this board when I came across this thread. A Ryzen server sounded like a cool, cost-effective alternative to Epyc until I saw how paltry its PCIe lane count is in comparison - all those lanes being the main reason I thought Epyc sounded so cool to begin with.

Where do you see 24 PCIe lanes? Is that from the motherboard? If so, 8 of those lanes may be from the chipset, because I was just looking at the Ryzen 2700X spec and it said 16 lanes. That might explain the discrepancy between PCIe 2.0 (chipset) and 3.0 (processor).
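(As an aside, on FreeBSD you can at least see what each device actually negotiated, which helps sort out what is hanging off the CPU versus the chipset. The output format varies a bit by release, but roughly:)

```sh
# list PCI devices with capabilities; the PCI-Express cap line reports the
# negotiated link width, e.g. "link x4(x8)" = running x4 in an x8-capable slot
pciconf -lvc | less
```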

The Epyc, on the other hand, is a beast with 128 lanes straight from the CPU. Absolutely stunning throughput capabilities. I wonder how well it would run OmniOS for NVMe storage.

I also wonder how their memory and cache latency compare to Intel's line, though, for applications like high-speed packet processing (a la pfSense, etc.). This is of course wholly unimportant on desktops and most home servers, but it may make a big difference for switching/routing speed (I virtualize OPNsense, so I pay attention to this stuff).

I've been concerned about memory latency since I read this thread, where one of the original pfSense devs got into it: https://www.reddit.com/r/networking/comments/6upchy/can_a_bsd_system_replicate_the_performance_of/

As an aside, Ryzen's main memory latency (access time from processor to RAM) is horrid compared to the competing Intel processor (6900K), and also horrid compared to the FX-8350. Ryzen sits at 98ns, compared to around 70ns for the Intel chip and the FX-8350. Looking at the latency to the three levels of cache, the L1 and L2 caches of Ryzen and the 6900K are generally comparable. The 6900K has higher L1 and L3 bandwidth, and Ryzen wins out in L2. However, Ryzen's L3 latency is 46.6ns, whereas the 6900K's is 17.3ns. The reason for this is that Ryzen's L3 cache is not a true general-purpose cache; it's a victim cache.
A victim cache generally works like a normal cache until data needs to be pulled from it; then the data in the lower-level cache and the data in the victim cache are swapped. The 8c/16t chips have 2 CCXs on them. Each CCX contains 8MB of the L3 cache, for a total of 16MB. Ryzen's architecture is such that if a thread on one CCX needs to access the cache in the other CCX, it has to talk through a bus that goes through the memory controller. The bandwidth of this interconnect is only 22GB/s, about the speed of dual-channel DDR3-1600.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
Yeah, I was looking up this board when I came across this thread. A Ryzen server sounded like a cool, cost-effective alternative to Epyc until I saw how paltry its PCIe lane count is in comparison - all those lanes being the main reason I thought Epyc sounded so cool to begin with.

Where do you see 24 PCIe lanes? Is that from the motherboard? If so, 8 of those lanes may be from the chipset, because I was just looking at the Ryzen 2700X spec and it said 16 lanes. That might explain the discrepancy between PCIe 2.0 (chipset) and 3.0 (processor).

The Epyc, on the other hand, is a beast with 128 lanes straight from the CPU. Absolutely stunning throughput capabilities. I wonder how well it would run OmniOS for NVMe storage.
...
It's 24 PCIe 3.0 lanes from the AM4 socket;
https://en.wikipedia.org/wiki/Socket_AM4
If you use one of AMD's chipsets, it takes up 4 of those lanes and gives you back 4 to 8 lanes of PCIe 2.0.

According to my little DeskMini, if I had used an Athlon APU, it would have had 2 fewer PCIe lanes for one of my NVMe slots. That indicates to me that the Athlon APUs supply at least 2 fewer PCIe lanes than the Ryzen processors.
...
I also wonder how their memory and cache latency are compared to Intel line, though, for applications like high-speed packet processing (ala pfsense, etc.) This is of course wholly unimportant on desktops and most home servers, but may make a big difference for switching/routing speed (I virtualize OPNsense so I pay attention to this stuff).

I've been concerned about memory latency since I read this thread where one of the original pfSense devs got into it here: https://www.reddit.com/r/networking/comments/6upchy/can_a_bsd_system_replicate_the_performance_of/
Due to the way AMD created the 32-core version, inter-processor communication is slower when leaving its core complex. (I think it's an 8-core complex, 4 of them per 32-core Epyc.) But it still gives us more choices than just what Intel decides we want.
 

averyfreeman

Contributor
Joined
Feb 8, 2015
Messages
164
It's 24 PCIe 3.0 lanes from the AM4 socket;
https://en.wikipedia.org/wiki/Socket_AM4
If you use one of AMD's chipsets, it takes up 4 of those lanes and gives you back 4 to 8 lanes of PCIe 2.0.

According to my little DeskMini, if I had used an Athlon APU, it would have had 2 fewer PCIe lanes for one of my NVMe slots. That indicates to me that the Athlon APUs supply at least 2 fewer PCIe lanes than the Ryzen processors.

Due to the way AMD created the 32-core version, inter-processor communication is slower when leaving its core complex. (I think it's an 8-core complex, 4 of them per 32-core Epyc.) But it still gives us more choices than just what Intel decides we want.

I agree, competition is definitely helpful. It's an interesting tug-of-war between the benefits of economies of scale. I wonder what China will start developing in response to this new trade situation we're in re: Huawei/Qualcomm. If Intel were thrown into the mix, there might be some very interesting diversity that could result in the x86 space (disclaimer: statement purely reflective, not meant to advocate or oppose trade war in any way).
 

Junglist724

Cadet
Joined
May 6, 2019
Messages
5
I don't know how that would be relevant to FreeNAS. The FreeBSD kernel is not the same as the Linux kernel, and the fact that something affects Linux distros does not necessarily mean it will affect FreeBSD or FreeNAS. Frankly, I do not know much about this issue or what's causing it exactly, so take my words with a grain of salt. If you or @Junglist724 know more, I'd like to hear it in detail, particularly the power problems.
My 2700X has been up running FreeNAS for about 15 days, since I last rebooted for updates, with zero issues. My 2950X is running Proxmox and it was actually getting soft lockups, but I think I figured out that it was just unstable running 4 x dual-rank DIMMs @ 3200, so I dropped to 2933 and it's stable now.
It's 24 PCIe 3.0 lanes from the AM4 socket;
Due to the way AMD created the 32-core version, inter-processor communication is slower when leaving its core complex. (I think it's an 8-core complex, 4 of them per 32-core Epyc.) But it still gives us more choices than just what Intel decides we want.

It's 4 cores per CCX, 2 CCXs per die, and 4 dies per chip on Epyc. 2nd-gen Epyc is coming soon though, and that's 8 x 8-core dies, all of which communicate through a central I/O die, so minimum latency is higher but average latency is lower and consistent. Still no word on whether 2nd gen uses 2 CCXs per die anymore, but there's no more NUMA for 2nd gen.

Also, this board looks great: https://www.asrockrack.com/general/productdetail.asp?Model=X399D8A-2T#Specifications
It just seems like it came out a bit late.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
For example, one cost-optimized baseline Supermicro Epyc 7000 board (H11SSL-i) is available for USD 120 more than the Ryzen board mentioned above, can handle 1TB of RAM (vs. 256GB), features three PCI-E 3.0 x16 slots, another three PCI-E 3.0 x8 slots, an M.2 slot at x4, 16 SATA ports (of which 2 can be used as SATADOMs), etc. Plus, you get Supermicro tech support vs. ASRock. :D

As usual, Supermicro offers far more iterations of this board with various options built in, if you want them. But with six PCIe x8+ slots, who cares? @Chris Moore's house heaters (aka 24+ bay file servers) would likely be very happy using this board: simply add an HBA or two, perhaps a 10GbE NIC, and still have 3 slots left over. For the long run, this seems like a much better server platform than the ASRock board above.
As much as I like new hardware, I would have a difficult time justifying the new memory and CPU to go with that new system board. The cost of those components is sure to be more than the $250 I paid for the system board, processor, and RAM that I am using now, and I really don't need any greater performance. I have actually been seriously contemplating going to a less capable system board, processor, and RAM in an effort to consume less power and generate less heat, even though the majority of the heat and power consumption of my system is in the drives.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
I have actually been seriously contemplating going to a less capable system board, processor, and RAM in an effort to consume less power and generate less heat, even though the majority of the heat and power consumption of my system is in the drives.
But then the house might get c....c...c... cold! :)

... and you may start missing out on those sumptuous wine and cheese platters the local electric company sends you annually for being such a good customer! :p

But seriously, low-power boards do make sense for those of us who live in high-cost areas like the Northeast, CA, etc., and who don't need to transcode 8K entertainment. The -2C- version of my motherboard is only $450 and offers two x8 slots, a 16-channel LSI2116 SAS chip, onboard SFP+, etc., all in a 25W TDP package. Granted, adding RAM will add cost, but for SOHO use it likely would perform as well as the unit I ended up buying. Electrical cost considerations are also what drove me to reduce my pool disk quantity in favor of higher-capacity, helium-filled drives.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
But then the house might get c....c...c... cold! :)
It is summer and the problem now is the heat... ;)
... and you may start missing out on those sumptuous wine and cheese platters the local electric company sends you annually for being such a good customer! :p
I do own stock, but not enough to get invited to the party.
But seriously, low-power boards do make sense for those of us who live in high-cost areas like the Northeast, CA, etc.
I am quite happy that I don't pay for electricity, but I do have trouble keeping the house cool this summer. It is just too hot outside: 86°F at 7:45 PM. The central air conditioning unit in my house died, so I am relying on the two supplemental units to keep the temp under control. The heat in my home computer room hit a high of 84°F today. How many people have redundant cooling in their home? I know, I am a little nuts. I was going to get an auto-transfer generator system powered by natural gas, but I have not quite gotten to that point yet. Baby steps.
Electrical cost considerations are also what drove me to reduce my pool disk quantity in favor of higher-capacity, helium-filled drives.
I am seriously considering this reduction in drive count, but for the heat reduction, because electricity is free (for me) at the moment. I just can't quite manage the price bite of a full set of new drives; I bought the ones I am using around the beginning of 2017, and I need to get more hours out of them to amortize the investment. Then there is the cost of the new AC system. The first quote has come in at $6,000, and that doesn't include the cost of the carpentry (drywall, paint, etc.) or new duct work. I am estimating about $10k for the project right now. I may need to sell a server to help cover the cost.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Central AC (CAC) systems are not cheap, especially if you want them installed right. Many extant CAC systems are installed with oversized capacity (which impedes latent heat removal) as well as undersized ducts (which reduces effective heat transfer, may increase power consumption, etc.).

I would ALWAYS run a heat loss / heat gain calculation on your home to "right-size" the system (aka a Manual J calculation). That may result in a lower-capacity system for which the extant ducts are right-sized, and it will perform better. A Manual D review of the duct work will identify issues that you can address as part of the replacement job.

Many utilities offer free access to consultants who will do a home review, a blower door test, and perhaps even an IR camera survey. Take advantage of these programs. I would also check what incentives the local utility is offering to see if you can replace your extant unit with an even more efficient one (rebates).

It's certainly common in the US for homes to have multiple CAC outdoor units but they usually only "feed" one individual indoor unit each. Redundancy in the cooling system may get really expensive. Given how rare CAC failures usually are and how relatively gracefully our systems respond to such failures, I'd focus on something different... perhaps even have a script gracefully shut the system down if it's getting too hot.

I recall graceful shutdowns being built into FreeNAS for loss-of-power events (i.e. the UPS says help!) but does FreeNAS offer similar auto-shutdown events in the UI for temperature excursions and the like?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
but does FreeNAS offer similar auto-shutdown events in the UI for temperature excursions and the like?
In 11.1-U7 there is an alert for drive temperature, but I don't recall there being a feature to initiate a shutdown. It should be easy to script, though.
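Something along these lines in a cron job would probably do it. This is a completely untested sketch; the device list, the temperature threshold, and the attribute parsing are all just examples you would want to adapt:

```sh
#!/bin/sh
# Untested sketch: power the box off if any listed drive reports a temperature over LIMIT.

LIMIT=50
DISKS="ada0 ada1 ada2 ada3"

for d in $DISKS; do
    # attribute 194 is Temperature_Celsius on most drives; column 10 is the raw value
    t=$(smartctl -A /dev/$d | awk '$1 == 194 {print $10}')
    if [ -n "$t" ] && [ "$t" -gt "$LIMIT" ]; then
        logger "drive $d is at ${t}C, shutting down"
        shutdown -p now "drive $d over temperature"
    fi
done
```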
 

arn0z

Dabbler
Joined
Oct 14, 2018
Messages
21
The new update (FreeNAS-11.2-U5) fixes the temp reading issues on my Ryzen 7 2700X.
 
Joined
Jul 20, 2019
Messages
4
I have been working this motherboard up to try to determine if it would be a nice solution to replace one of our old Opteron servers (one I bought from iX years ago, in fact, I think). But I have to say there are a lot of issues with it that not even low-end regular motherboards have. It might not be suitable for our use.

The biggest problem is power delivery. This motherboard, even though it's like $300, has a sub-standard power delivery subsystem on it. It's worse than even the lowest-end regular AM4 motherboard costing a third as much. With 'reasonable' airflow the mobo can only deliver 85W to the CPU socket reliably (the cTDP setting in the CBS/NBIO sub-menu of the BIOS setup). That's a big problem. If I set the cap to 100W (which any other motherboard in the world can do), the VRMs overheat and the system feathers the frequency all the way down to 500MHz for a second or two every 20 seconds to compensate. I literally have to take a fan and angle it directly onto the VRM heatsink with the case open to prevent it from overheating. The VRM overheating is because ASRock really cheaped out on the power delivery circuitry... and I mean more than usual. I don't understand why. Insofar as I can tell, it's just a weak 3-phase design. If you really want to push 100W into the socket you need a ton of direct airflow going over those VRM heatsinks. And I do mean a ton. And even then you are pushing it.

The second biggest problem is that the DDR sockets are a few millimeters (literally!) too close to the CPU socket. This prevents the socket from accommodating most low-profile AM4 coolers. I was finally able to find one that fit, but its heat pipes still smack up against the first DDR4 stick in a way that is not ideal. The only AM4 low-profile cooler I have tested that even remotely fits is the Noctua NH-L9x65 SE-AM4.

The third problem is that the IPMI has serious issues. The HTML interface works but cannot be disabled. The VGA mirroring feature via the HTML interface works for a while, but sometimes stops accepting typing and the page has to be reloaded. The ipmitool SOL interface does not work very well: you can receive console data from it in sporadic bursts, but you can't type (it only goes one way), and sometimes it gets stuck and requires you to issue a BMC warm reset to get it unstuck. In addition, I've managed to lock up the motherboard several times just doing normal reboots (probably an artifact of the early BIOS for Zen 2), and this unfortunately also locks up the IPMI, so there is no way to recover the machine short of putting a physical power cycler on the plug. Finally, while the IPMI implements a dedicated ethernet port, it also has automatic fail-over to the regular ethernet ports, and this cannot be disabled, which represents a security risk.
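For reference, the sort of commands I have been poking it with look like this; the address and credentials are placeholders:

```sh
# serial-over-LAN console through the BMC (this is the part that only works one-way for me)
ipmitool -I lanplus -H 192.0.2.10 -U admin -P changeme sol activate
ipmitool -I lanplus -H 192.0.2.10 -U admin -P changeme sol deactivate

# warm-reset the BMC when SOL wedges itself
ipmitool -I lanplus -H 192.0.2.10 -U admin -P changeme mc reset warm
```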

Other, more minor problems include a lack of 5-pin USB2 headers for the front panels. It has a USB3 header, but most 2U cases don't have USB3 front panels, so YMMV.

--

So those are the problems. Now what about the good things?

Well, the board does take ECC memory just fine. I was able to stick 64GB of 2133 ECC UDIMMs into it. I dunno about overclocking the memory... it's a server, so that's not really my cup of tea, but I expect you can push the 4 slots to 2666 @ 1.35V if you really want to. I would probably not go above that, not if you need reliability, because the motherboard is clearly low-spec. (The memory is going to be capped at 64GB with Ryzen, though it's possible that 128GB might work with 32GB DIMMs. I don't have 32GB DIMMs, so I can't test that. If it does work, it will be at 2133 due to the quad loading of the traces.)
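If you want to sanity-check that the modules at least present themselves as ECC parts, dmidecode (sysutils/dmidecode from pkg) will show the width; 72 bits total versus 64 bits of data means the extra ECC byte lane is there. Whether corrections are actually being reported is a separate question:

```sh
# ECC UDIMMs report "Total Width: 72 bits" / "Data Width: 64 bits"
dmidecode -t memory | grep -E "Width|Size|Speed"
```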

The board DOES TAKE ZEN 2 CPUs. With a BIOS update you can stick any of the Ryzen Zen 2 CPUs into it and it will work. So far I've messed around with a 3600X (I've been using a 2700X for power-envelope testing). I do have a 3900X, but I don't want to rip it out of the mobo it's currently in, so I'm waiting for a second one to stick into the board. In terms of power delivery, the skinny is that if you want to run a 3900X in this motherboard, you have to realize that the power/performance sweet spot is at 100W socket power (around 150W at the wall, sans PCIe devices). You can set cTDP lower and still run a 3900X, but it will have sub-optimal performance. Needless to say, this also means that the upcoming 3950X will have the same problem. That said, if you want to run a 3600[X] or a 3700X, it should work fairly well and even be close to the sweet spot in terms of power/performance.

The board has two easily accessible M.2 slots and eight SATA ports. Storage isn't a problem.

The board does run a COM port to the backplate, so if you decide you need one, it is there.

The board idles at around 25-30W (very good).

Generally speaking, I give this motherboard (the X470D4U) a C- grade. You can make it work at reasonable power envelopes if you really put in tons of airflow (a minimum of 4 x 40mm fans), possibly even with ducting, but it isn't fun, and you definitely need the incoming power to be on a switched outlet so you can hard-power-cycle it when the IPMI stops working. Make sure your case has the necessary airflow.

On the flip side, I intend to replace our quad-socket Opteron with a 3900X, and our bulk package-building tests have the 3900X (with a 100W cTDP setting in the BIOS... 150W at the wall) beating the 48-core Opteron by over 12%. That's with the Opteron burning 1000W at the wall. Is that nuts or what?

-Matt
 