Gigabyte GA-D525TUD Dual Core D525 Atom - 8GB RAM?

Status
Not open for further replies.

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Hello Forum,

I have been using FreeNAS 8 for some time now and am happy with it from a "home" NAS perspective, i.e. backups for laptops and a music and movie server for MythTV and XBMC around the house. What I am beginning to struggle with is the third use I wanted it for: an iSCSI target for ESXi 5.

I have the target set up and connected to my ESXi 5 server, with around 15 VMs (various OSes for various work I have to do); that's not the issue. What I am seeing is that I can only have 3 VMs running at one time; after that, performance drops like a stone in a pond.

I have seen a number of threads where it appears that folks have 8GB of RAM running with the D525 Atom CPU. The advertised "limit" for my mobo is 4GB; however, has anyone else tried installing 8GB on the Gigabyte GA-D525TUD? Did it recognize the 8GB?

Thanks in advance

Leenux_tux
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
I have a Supermicro D510 Atom board which also had an advertised max of 4GB of RAM. I took a chance and tried 8GB (~$180) and it worked, so give it a try; you can almost always return it if it doesn't work. I'm not sure it will help with your VM problem, though: that's a lot of VMs for an Atom. I think there's something about the architecture of an Atom that will really limit what you can do, but I can't recall what it is off the top of my head.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Atom processors lack VT (Intel's virtualization technology).

That is almost certainly the problem. And Atoms aren't exactly high performance; just the background operations from 3 different OSes probably max out what an Atom can process effectively.
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Hello all,

Many thanks for the prompt responses.

Reading a couple of the comments, I think I may not have been clear about my setup with regard to VMware...

I have the Gigabyte GA-D525TUD system set up as the NAS/iSCSI target for ESXi 5 (as well as the other stuff I mentioned). However, the system that is actually running VMware is NOT the Atom unit, but a separate Asus-based server running a quad-core Phenom "Black Edition" CPU with 8GB of RAM and no hard drives. That system boots from USB and uses a single iSCSI block device, presented by the Atom-based FreeNAS box, to store and run the virtual machines. I also have an NFS share on FreeNAS where I store ISO images; this is also available to VMware via an NFS mount.

One other thing I have done (not sure if this is important) is to enable a second 1Gb NIC (onboard) on the Asus (VMware) box and dedicate it to the iSCSI storage path.

I think I may just purchase some more RAM for the Atom unit and see if/how it works :smile:

I hope that's clearer :smile:
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That makes more sense. I was hoping someone who uses ESXi 5 and seems to have a clue wouldn't be running a bunch of VMware machines directly on an Intel Atom computer.

I know that Atoms can't max out a 1Gb NIC (at least from what I have read here). Personally, my FreeNAS box has 2 NICs, as does my primary desktop. A single cable runs from one to the other, and I tweaked the NIC settings to optimize performance for traffic between those 2 devices (jumbo frames, etc.). Thanks to about 2 hours of reading and tweaking, I consistently get 120MB/sec reading and writing from a CIFS share on FreeNAS! The rest of the home LAN uses the standard settings (usually 60-80MB/sec) and works great too. You may want to consider the same for your iSCSI setup.

As another possible cause, it could be that the Atom just can't dish out the data fast enough. When you experience the slowdown, does the FreeNAS box show a lot of hard disk usage?
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Hello noobsauce80,

Must admit, I have not checked the disk I/O on the FreeNAS box; it's something I should monitor when starting up the VMs.

I was originally thinking it may be more to do with RAM, as I have set jumbo frames on FreeNAS via "mtu 9000 up" in the "Interfaces" section of "Network". With it set to 9000, my FreeNAS box stops all network activity after around 2 days of operation. I tried stopping and restarting networking from the console and from a PuTTY login, but I kept getting error messages stating there was not enough RAM available to start networking. I have since lowered the MTU to 4500 ("mtu 4500 up") to see if that helps. This was done yesterday, so I still have a day or so to see if networking dies on the system.

One thing I did think about was buying two USB3 thumbdrives and adding them as a mirrored cache for ZFS. USB3 is so much faster than USB2 and is much cheaper/smaller than an SSD. I'm a little short on space in my Chenbro case!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm not sure it would help. If the USB 3.0 ports hang off a PCI bus (stupid, but a friend wanted to buy such a board) then you'll be capped by that bus. Also, not all USB3 drives are created equal, and some have very poor write performance. I'm not sure there's any scenario where using USB3 drives as a mirrored cache would be beneficial at all. AFAIK the whole purpose of the cache is to hold data somewhere else when the zpool can't keep up. In your case I don't think the problem is the hard drives but CPU processing power. I set up a cache on my FreeNAS server when I was first experimenting with it, and the cache never used any space; I presumed from reading the manual that because my zpool could keep up with the necessary writes, there was no reason to use it. As it is, you can buy 40GB SSDs for less than $100, and one of those would be perfect for a cache (assuming a cache would help your performance at all).

I do know that jumbo frames aren't always defined the same way. Intel uses 9014 bytes for the 9k jumbo frame; other manufacturers require the setting to be 9000 bytes because they exclude the 14-byte Ethernet header. I've tried to use jumbo frames on my home LAN in the past, but with the lack of consistency I always had devices that didn't work "quite right". That's why I have 2 separate NICs with 2 different IPs and subnets, so I can run jumbo frames from one machine to the other. Both have the same Intel NIC cards, so I know they'll work together.
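To make the discrepancy concrete, here is a trivial sketch of the arithmetic behind the two vendor conventions (the variable names are just illustrative):

```shell
payload=9000   # jumbo frame payload, as most vendors count it
header=14      # Ethernet header bytes that Intel includes in its figure
echo $(( payload + header ))   # Intel-style setting: 9014
```

So a "9000" on one NIC and a "9014" on another can describe the same frame on the wire, which is why mixed-vendor jumbo setups so often misbehave.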

Perhaps you should try disabling jumbo frames and see if that fixes anything? I know from my own problems with jumbo frames that when both ends don't agree on packet size, performance goes way down the drain. I'm talking less than 10MB/sec on gigabit.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
While I don't have an ESXi/FreeNAS configuration like yours (yet), I'd definitely consider a second NIC for iSCSI traffic. Can you add another NIC to your FreeNAS server and dedicate all your iSCSI traffic to a separate network? VMware suggests separating iSCSI traffic from LAN traffic.

I manage a small ESXi 4 cluster at work that connects to an iSCSI SAN. We've always partitioned the iSCSI traffic onto a separate network, using multiple 1Gb NICs. I don't have all the particulars memorized; it's in my documentation at work.

OOC, what OSes are you running on your VMs, and how many are you trying to run concurrently? Is 8GB enough in your ESXi 5 host?

Quoting your earlier post: "One other thing I have done (not sure if this is important) is to enable a second 1GB NIC (motherboard based) on the Asus (VMWare) box and dedicated that to the iSCSI storage path."
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi leenux_tux,

What do "top", "gstat -f p2" and "zpool iostat" report when your performance drops? Are you running out of CPU? Are the disks running at capacity?

You are running your filer pretty far below spec on memory; the system requirements say at least 6GB, and really 8GB minimum. If you can, I would go out and score a pair of Samsung MV-3V4G3D 4GB sticks and see if they work. I see a number of memory dealers who sell 8GB kits they say are supported on that Atom board. Boot a Memtest86+ disc and let it do a couple of passes on the 8GB, and you should know with a pretty high degree of certainty whether it works.

What happens if you get rid of the jumbo packets and run a standard MTU? I'm not a big fan of jumbo packets. They had their place a decade ago, when it took a whole 1GHz proc to drive the NIC and deal with the interrupts, but today a $30 Intel CT card can run full-bore and barely touch the processor. That said, your board uses a Realtek chip, and Realtek chips are, in a word, crap; not that you can do much about it, since all you have is a PCI slot for expansion. Don't take my word for it though; here's what the guy who wrote the driver had to say:

http://fxr.watson.org/fxr/source/dev/re/if_re.c?v=FREEBSD62

beginning at line 45

http://fxr.watson.org/fxr/source/pci/if_rl.c

beginning at line 48

Adding in an Intel PRO/1000 GT is probably the best solution if the NIC is found to be the problem.

I see you are running a 3-disk raidz with a hot spare. Is there any way you can move the data elsewhere and reconfigure the pool as a pair of mirrors instead? That would double the number of IOPS you can turn and lower the load on the proc at the same time. You wouldn't lose any capacity compared to how you have it configured now, and you would even gain a little reliability in the deal (you could survive losing 2 drives, as long as they aren't part of the same mirror).
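For illustration only, the re-layout suggested above would look something like this; the pool name "tank" and device names ada0-ada3 are placeholders for your actual pool and disks, and these commands destroy whatever is on them, so everything must be backed up off the pool first:

```
# Hypothetical command sketch -- pool and device names are placeholders.
# WARNING: destroys all data on the listed disks; copy your data off first.
zpool destroy tank
zpool create tank mirror ada0 ada1 mirror ada2 ada3
zpool status tank
```

With 4 disks in 2 mirrored pairs you get the same usable capacity as the 3-disk raidz plus spare, but ZFS can service reads from both mirrors in parallel.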

All this might just be academic of course....

My filer runs pretty well, I think: it will "dd" out a file locally at over 400MB/s, and CIFS writes will turn over 100MB/s from a Windows 7 box, but I just now "dd"ed out a file in a FreeBSD VM on my ESXi box to the filer over iSCSI and got a paltry 15MB/s. I know there are a bunch of advanced settings for iSCSI on both the FreeNAS and ESXi sides, but I have no idea if they are set anywhere close to optimally, or which side is causing my poor performance. I've mucked about with them and can certainly make my VM performance much worse, but lacking any official guidance on what to do, I have not had any success in making it perform better. This is one of the main reasons keeping me from putting my filer into proper production.

-Will
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
folks,

Just wanted to thank everyone for the replies/suggestions. I'm working away for a week or so and can't run any of the suggested tests right now; however, this is something I believe could be of use to the general FreeNAS community, so I will look at the tests when I get back.

Leenux_tux
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Folks,

Just as an FYI....

I purchased a couple of 4GB sticks from a website in the UK and, et voilà, it works!!! Even though the documentation states that the maximum supported RAM is 4GB. This now means I might have a go at upgrading the RAM in my ESXi server from 8GB to 16GB :smile:

All the checks I have done indicate that the server is using the installed 8GB: the FreeNAS GUI (Reporting, physical memory utilization) and System Information both show 8167MB, and "sysctl hw.physmem" from an SSH prompt reports "hw.physmem: 8564690944".
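As a quick sanity check that those two numbers agree (just shell arithmetic, not FreeNAS output): dividing the hw.physmem byte count by 1024 twice gives the MB figure the GUI shows.

```shell
physmem=8564690944                 # byte count reported by "sysctl hw.physmem"
echo $(( physmem / 1024 / 1024 ))  # integer MiB; matches the GUI's 8167MB
```

The value is a little under a full 8GiB (8589934592 bytes) because the BIOS reserves some of the address space, which is normal.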

Looks like I now need to update my signature !!
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Just as an FYI...

I have been doing some testing now that I have 8GB of RAM installed in my FreeNAS system instead of the previous 4GB. The increase in performance has been quite an eye opener. I can now run 5 VMs, a mixture of Windows 2000/NT/2003/2008 and CentOS 5, with ease. I have not tried any more, as I don't think I would have a need to; this is a home office system, after all, and not a production box.

The VMs are, in general, just snappier than before.

The command "gstat -f p2" provides some interesting results while getting all 5 VMs to "do stuff", e.g. getting CentOS to run "yum update -y" (168 updates to download and install), firing some random queries at Oracle on the Win2k3 box, and running repeated "dir /s /b *.*" on the Win2008 box.

Take a look at this as well...

"iozone -M -e -+u -T -t 32 -r 128k -s 40960 -i 0 -i 1 -i 2 -i 8 -+p 70 -C"

iozone appears to be quite a comprehensive disk analysis tool; trouble is, you get so much data back it's difficult to figure it all out!! I'm hoping "http://www.cyberciti.biz/tips/linux-filesystem-benchmarking-with-iozone.html" might have some good pointers.
 