Please guide me in narrowing down a performance issue with my build


SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
Firstly, thank you in advance to anyone who can offer some assistance.

I had been lurking around here for a while, reading and learning mostly (or trying to), and posted a few times. I still consider myself new to FreeNAS and FreeBSD, because I am. I don't really consider myself an "IT" expert, but I have done quite a bit of IT work over the years across most of the fields, and both my jobs are IT-centric but limited to one field. I tend to think I can do 5th grade math on par with a 3rd grader.

My goal has been to migrate my Debian CIFS file server over to FreeNAS. I finally put together my FreeNAS server and have been testing it and burning it in as much as possible, and so far I am mostly very happy with the journey.

FreeNAS Server:
Build FreeNAS-9.2.1.6-RELEASE-x64 (ddd1e39)
CPU - Xeon 1230v2
Motherboard - Supermicro X9SCM-F-O (turned HT off)
Memory - 32GB Crucial 1600MHz ECC
HDDs - 12x 4TB Seagate NAS drives (2x 6-disk RAIDZ2 vdevs)
SATA Controllers - 2x Highpoint 2720SGL (not configured for RAID, latest firmware; I know these aren't ideal, but I already had a bunch of them new)
autotune - off
compression - lz4
atime - off
dedup - off
Datasets used for CIFS shares are left to inherit from the parent
CIFS service and shares are at defaults, no smb.conf changes

HTPC:
Linux Mint 17-x64 Mate Edition
CPU - AMD A8-6500
Motherboard - some ASUS mobo (Realtek NIC)
Memory - 8GB Corsair 1866MHz
HDD - 500GB 2.5" WD Blue

Network:
Cisco SG300-20 (SMB Switch)
Info - it is set to layer 3 mode, and the HTPC and FreeNAS are on two different VLAN subnets. The routing works, or seems to be fine; I am still trying to get a full grasp of this switch, as it's mostly GUI-based.

All of this is in a standalone test network prior to going "in home production".


I seem to have a performance problem and not exactly sure what the cause is.
I have attempted several things to narrow it down, but haven't done any real tweaking of configs or settings, either in hardware or in the OSes and software. I wanted to keep everything clean until I could figure out where the bottleneck is.

When I try to copy files over to the CIFS shares I set up, the most I have seen is 47 MB/sec, but usually it bounces between 43 and 44.5 MB/sec. I am transferring a 1.5GB file.
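For anyone who wants to reproduce the measurement, the copies are done from the Mint box to a mounted share; something roughly like the following works (the share name, mount point, username, and file name are placeholders):

sudo mount -t cifs //192.168.5.5/share1 /mnt/cifstest -o username=myuser
rsync -ah --progress /home/test/bigfile.bin /mnt/cifstest/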

What I have tried so far:

- I originally had 16GB of memory and upgraded to 32GB; I had the other pair sitting as spares while I verified they were good (3 passes of Memtest, then another 5).
- Installed an Intel NIC (see below).
- Put both systems on the same subnet, and also tried a direct connection; no change.
- Checked zpool status, which seems OK (not sure what to look for exactly).


DD tests - Compression is turned off:

dd if=/dev/zero of=/mnt/vdev1/folder1/testfile bs=4M count=10000
41943040000 bytes transferred in 137.288874 secs (305509387 bytes/sec)

dd if=/dev/zero of=/mnt/vdev1/folder1/testfile bs=2048k count=50k
107374182400 bytes transferred in 365.267786 secs (293960175 bytes/sec)
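A matching read-back test would be something along the lines of the command below (note that it reads through ARC, so a file bigger than the 32GB of RAM, like the 40GB one above, keeps the number honest):

dd if=/mnt/vdev1/folder1/testfile of=/dev/null bs=4M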

IPERF tests w/Realtek NIC:

iperf -c 192.168.5.5 -t 30
0.0-30.0 sec 2.45 GBytes 703 Mbits/sec

iperf -c 192.168.5.5 -t 30 -P 10
0.0-30.0 sec 3.26 GBytes 931 Mbits/sec

IPERF tests w/Intel NIC:

iperf -c 192.168.5.5 -t 30
0.0-30.0 sec 3.29 GBytes 942 Mbits/sec

iperf -c 192.168.5.5 -t 30 -P 10
0.0-30.0 sec 3.26 GBytes 944 Mbits/sec
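For completeness, the server side of those tests is just the listener, run on the FreeNAS box (over SSH, for example):

iperf -s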

So the Intel NIC improved the first test drastically, but copying to the CIFS shares still will not go over 47 MB/sec in either direction.

I attempted to rule out the Highpoint controllers as the bottleneck by placing one drive on one of the onboard SATA3 6Gb/s ports, then creating a striped volume, a dataset, and a share. The transfer to it was only 41.9 MB/sec.

I took the FreeNAS server and client off of the Cisco switch and connected them to a new dumb TrendNet gigabit switch I have, to see if the Cisco was the problem. It didn't change anything.

Client (Realtek NIC) > FreeNAS (onboard Intel NIC - striped volume) transfer was 43.4 MB/sec
Client (Intel NIC) > FreeNAS (onboard Intel NIC - striped volume) transfer was 44.8 MB/sec

Client (Intel NIC) > FreeNAS (onboard Intel NIC - RAIDZ2 volume) transfer was 46.4 MB/sec


At this point I believe the only major hardware change I could make to the FreeNAS server is replacing the Highpoint cards, but from the tests it doesn't seem like they are causing much trouble. I could be wrong, though.

I could also try installing a client OS on one of the SSDs I have lying around to see if the WD Blue is limiting the transfer.

I am not sure whether I should enable autotune. I saw a few posts stating that it has been broken lately and wouldn't really help my situation much.


Does it seem like I am working through this properly?
Or is there something standing out that I may have missed or am failing to test?

I see a lot of threads talking about tunables and other tweaks, and I know Samba sometimes needed adjustments prior to v4.
But I don't want to go blindly changing things without some advice from people who know FreeNAS well.

Please let me know if you need any other information.

Also, please move this thread if it fits better somewhere else; I just wasn't sure where to put it. Thanks.
 

willnx

Dabbler
Joined
Aug 11, 2013
Messages
49
Any reason you're using CIFS for FreeNAS to Linux file transfer?
You could rule out an issue with the protocol by mounting an NFS export and seeing what speed you get with dd from the Linux box.
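Something along these lines would do it from the Linux box (the server IP and dataset path are from the first post; the local mount point and test-file size are placeholders, and for the read test you'd want to remount or use a file larger than the client's RAM so the local page cache doesn't flatter the number):

sudo mount -t nfs 192.168.5.5:/mnt/vdev1/folder1 /mnt/nfstest
dd if=/dev/zero of=/mnt/nfstest/testfile bs=1M count=4096
dd if=/mnt/nfstest/testfile of=/dev/null bs=1M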

What speeds were you expecting / what is your performance goal?

Are you doing a straight copy from FreeNAS to the Linux file system?
Or were you doing a copy from one folder to another within the file system on FreeNAS from the Linux client? I.e., a copy from /vol/media/movies/new to /vol/media/movies/old when you are mounting the share at /vol/media.
While it might sound like a small difference, it's quite major. When you do a move like that, you effectively cut your bandwidth in half; the move will transfer the file to your Linux client, then back to the FreeNAS box.

If you are doing a move within the FreeNAS file system from your Linux client, then ~45MB/s is pretty dang good; the only thing that will speed it up is moving to a 10GbE network.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Thanks for doing your own homework first. It's refreshing to see someone try what I call "the common sense stuff" and see it not work.

Don't enable autotune. It's not worth it and it doesn't do what it was designed to do. It will definitely hurt performance for everyone, but for some the bottleneck is the LAN speed so they never notice.

I'm strongly against the Highpoint cards for various reasons and to be honest I'd sell what you have and get something more appropriate with the money. But, that's probably not your problem here.

Can you post a debug file from your server so I can look at your configuration more closely?
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
Gentlemen, thank you for your quick replies. Sorry for the delay; the day job is keeping me busy.

@ willnx

I agree CIFS isn't ideal here; most of my boxes at home are Linux, with a few BSD-based ones like pfSense. Sadly, I have two Windows boxes for work-related items, and I would like family and friends (Windows and Mac only) to be able to access it via my VPN or locally to store family photos and other things that would suck to lose.

I will work tonight on testing NFS and FTP to rule out CIFS as the culprit.
I expected at least 60 MB/sec; ideally I would like to get as close to 110 MB/sec as I can.

I am copying straight from the Linux Mint box (/home/test) to FreeNAS (/mnt/vdev1/folder1), so not between two locations on FreeNAS "through" the HTPC which would cause that sort of loop.


@ cyberjock

Thank you, I did my best to read and learn as much as I could before jumping in too much.

I concur; I don't want to use them, but since I had a few of them new, I figured it was worth trying them to test FreeNAS out a bit. I do plan on picking up some M1015s and going that route before I transfer stuff off my Linux server. I don't have tons of time to dedicate to it and I'm in no rush. I want to build it right, have it working right, and learn how and why.

I will work on getting the debug file tonight, not sure how to do it in FreeNAS, but I'll figure it out.


A few extra things that I thought of today:

- Is there anything on this motherboard that I should have ensured was or was not enabled, other than HT?

- I assumed I didn't have to flash the onboard SATA controller on the motherboard; is this correct?

Are there any other tests/checks I could perform as well while I am at it?
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
This evening I was able to set up an NFS share. I first ran into some problems because nfs-common wasn't installed, which took me a bit to figure out when I couldn't mount the export, and then some permissions issues, but I Band-Aid fixed those by just mounting it as root to test things out quickly.
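For anyone on Mint/Ubuntu hitting the same wall, the commands involved are roughly these (the mount point is a placeholder; the export path assumes the dataset from the first post):

sudo apt-get install nfs-common
showmount -e 192.168.5.5
sudo mount -t nfs 192.168.5.5:/mnt/vdev1/folder1 /mnt/nfstest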

I was able to write at roughly 94 MB/sec to the FreeNAS NFS share from a different Linux Mint HTPC (using an AMD E350 APU and Realtek NIC at the moment).

Reading from the NFS share was about 75 MB/sec.

I will try the original HTPC from the first post, along with the Intel NIC, again tomorrow and see the difference.

I couldn't do more tonight due to lack of time.

@cyberjock - I did hunt down the "Save Debug" option... is that what you are wanting?
I will work on getting it tomorrow if I have some time in the evening.

Thanks again
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It is. That file is all the good info on how you did your pool.

But at those speeds you kind of ruled out the network and the server as the bottleneck. So it's almost certainly something with that particular HTPC's hardware or software that isn't "quite right".
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
Roger that, thanks.

I'll work on setting up a Windows 7 box tomorrow to see what happens.
I've been wanting to try out PC-BSD, so that's an option too; it's a good chance to see how well the two iXsystems-backed systems tie together.
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
My apologies, I wasn't able to get the debug file this evening; it will be a few days, but I know how to get it now, so that's a start.
I plan on switching over to 9.2.1.7 over the weekend anyway; it may be good to start fresh and run through everything again.

I did load PC-BSD 10 onto a box, which will hopefully help narrow down whether Linux Mint 17, MATE, or something related was the issue.
I will also get a Windows box up soon to have more options.

Always good to learn something new as well. :)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
3 demerits for not having the debug file!

Drop and give me 20!
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
After struggling all weekend to do those 20 pushups, I was able to get back to testing again. ;)

cyberjock has the debug file now, but I'll keep updating this thread anyway, so that when the problem shows itself or a solution is found,
at least it's here if someone else needs it.


I started over with 9.2.1.7; the install went fine.
Created two vdevs, a dataset, and a CIFS share. Made a user and a group. Same as before.

Tested with the PC-BSD 10 box I set up on one of the HTPCs (AMD E-350, 4GB RAM, Intel Pro NIC),
connected through the basic 8-port TrendNet gigabit switch I tested with earlier; I didn't feel like messing with the Cisco switch tonight.

Transferring a 1.5GB file wrote between 35-38MB/s, read it back at 35-41MB/s
Transferring a 3.4GB file wrote between 32-37MB/s, read it back at 38-43MB/s

So no real change or improvement using PC-BSD 10.
But I have to say it was very straightforward, with KDE and all.
I set it to remember the username and password when I browsed to the share, so I didn't need to type them each time.
One note: despite it looking like firewall pass exceptions were already in place for Samba, I had to turn the firewall off. Not sure why.

The HTPCs I have are using laptop HDDs, so 5400 RPM, and I think both have 8MB of cache.
If these are part of the limiting factor, which is possible, at least I know roughly the speeds I will get when transferring my data off the
Debian server, which is using WD Greens.

I have a bunch of WD Raptor drives lying around that I plan on testing with next, in a beefier box.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Yeah, when doing throughput tests, 5400RPM laptop drives aren't the best choice at all. Those E-350s also SUCK for performance, and I wouldn't be surprised if they were suppressing your throughput even more. Those E-350s were slow 2+ years ago when they came out, and they aren't getting any better. IMO they should never have been released.

Just as a sidenote you have two pools, but named them vdev1 and vdev2. Mildly entertaining since each pool is only a single vdev.

So you should do some local dd tests to prove the pool can provide the required throughput. Once that is done you should do iperf tests again to validate your network between a desktop and your server. You should use the most powerful desktop you have because a slow desktop means slow performance to/from the server. Once that's done you'll need to then attempt file transfers using the fastest storage media you have. Ideally this would be an SSD or RAM disk so you don't have a throughput bottleneck with your desktop's disk.
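A rough sketch of that sequence, reusing the commands from earlier in the thread (the tmpfs line is just one way to get a RAM disk on a Linux client; paths, IP, and sizes are placeholders):

dd if=/dev/zero of=/mnt/vdev1/folder1/ddtest bs=2M count=10000   # on FreeNAS, with compression off on the dataset
iperf -s                                                         # on FreeNAS
iperf -c 192.168.5.5 -t 30 -P 10                                 # on the desktop
sudo mount -t tmpfs -o size=4G tmpfs /mnt/ramdisk                # RAM disk on the desktop; copy a large file from here to the share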
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
I agree the E-350s sorta do suck. I figured that since my HTPCs will be interfacing with FreeNAS the most, I would use them to test. I have two HTPCs with E-350s and a third with an AMD A8-6500 APU, which I tested with in the initial post. The E-350s do a fine job when used just as tiny HTPCs running XBMC: low power, with HW acceleration. Since they do that well, I never found the need to replace them just yet.

But I'll set up another box with an SSD and see what happens. It'll take me tonight to get one ready.
I ripped apart my "production" home network and started replacing workstations, servers, network gear, firewalls, etc.
I'm taking a whole different approach this time, and I want FreeNAS to be a big part of it.

From the dd tests before, you didn't seem to think the network or the pool was causing the bottleneck.
I'll still run a bunch more, just to make sure nothing changed after the upgrade and reconfiguration.


I am sort of confused with the whole vdev/pool thing now.
Your guide states that a vdev is one or more HDDs grouped together in RAIDZ, and a zpool is one or more vdevs.
I clicked on "create vdev" when I created both of them, figured that was creating a VDev and not a pool.
I thought I was creating what you have for Example 2 in your guide.

But you're saying I have two pools with one VDev each. :confused:

Could this be causing a problem?
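For reference (purely an illustration, with made-up device names): a single pool spanning two RAIDZ2 vdevs would be created in one shot, e.g.

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11

whereas creating two separate volumes, each from six disks, gives two pools of one RAIDZ2 vdev each. The single pool stripes across both vdevs, but with gigabit LAN as the ceiling, that difference shouldn't explain a ~45MB/sec cap.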
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Your iperf tests were 900Mb/sec+ which is what I'd expect.

Your dd tests were about what I'd expect; maybe 100MB/sec higher would have been ideal. You got about 300MB/sec, which is still better than 1Gb LAN can do, though. Part of me wants to blame the Highpoint for the poor performance. Highpoint is just a total f'in nightmare and I'd never, ever recommend them to anyone, even someone I hate. They just aren't fit for ZFS servers in my opinion.

But, based on the server hardware, your iperf and your dd test there appears to be no bottleneck on the FreeNAS side of things. So the question turns to "where is the bottleneck" and I'm betting it's your low powered clients.
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
Ok, thanks for going into that more.

I still don't disagree about not using the Highpoint controllers; after doing my initial reading here, I debated even trying them.
But since I had a bunch of them new, I figured why not give it a go.

Some M1015s are at the top of my list; I'm trying to find a good deal on ones with the high-profile bracket.

The low-powered HTPCs are probably where the speed issue is coming from, especially the laptop HDDs.
The one I tested with initially has an AMD A8-6500 3.5GHz quad-core APU, which I wouldn't call beefy or weak.
But since it got similar numbers to the other HTPC, even with the Intel Pro NIC, the similar 5400rpm HDDs seem to be
the thing to eliminate next.

I'll hop right on it after the day job and update.
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
OK, this evening I was able to finish building a system that a friend was waiting on anyway, and just dropped an SSD in it to test.

PC:
Linux Mint 17-x64 Mate Edition
CPU - AMD FX 4130 3.8GHz Quad-core
Motherboard - some ASUS mobo
Memory - 8GB GSkill 1600mhz
SSD - some 60GB OCZ SSD (new) I had collecting dust
NIC - Intel Pro NIC - same one, don't know the model

Still using the basic TrendNet 8-port switch, with only the PC and FreeNAS on it, no VLANs or anything. It seems to be performing as well as the Cisco SG300-20 anyway.

I transferred the same two 1.5GB and 3.4GB files.
Write: 1.5GB file 44-47MB/sec 3.4GB file 45-47MB/sec
Read: 1.5GB file 48-50MB/sec 3.4GB file 49-51MB/sec

Something isn't working right.

Next I plan on testing by:
- Removing the Highpoint controllers completely
- Installing something else other than FreeNAS

I think those are the next logical steps. The dd and iperf tests show the server is capable, and I have now taken the lower-powered HTPCs out of the equation, so I am not sure what other obvious things I can do.
I would be tempted to say it is just the Highpoint controllers, but I got the same results when I connected a drive right to the motherboard before.
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
I have removed the Highpoint controllers and have been testing with just a 6-HDD RAIDZ2 connected directly to the motherboard.
I went back to using the Cisco SG300-20 switch.

iperf tests are a steady 944 Mbits/sec in both directions after several runs.

dd write tests bounce around between 389 and 498MB/sec using bs=2M count=10000
dd read tests are in the 415-653MB/sec range using bs=2M count=10000

Copying from the Linux Mint 17 box with the SSD I get:
writes at a steady 48MB/sec
reads between 51-54MB/sec

So only a modest improvement with the Highpoint controllers removed; either setup should be able to saturate the connection.

At this point I can only be led to believe it is a CIFS limitation or an issue on one of the ends.
I have tested with Linux Mint and PC-BSD 10 and there is no real change.

I will test with NFS this afternoon sometime, and maybe, just for the heck of it, mess with some CIFS aux parameters.
I only see old threads talking about adding/changing CIFS aux parameters and autotune, which leads me to believe the defaults
are good, so I haven't messed with aux parameters at all. I know I always had to tweak them on my Debian file server.
If tweaking those increases the speed, I'll be a little upset, because I've been spending tons of time chasing down other things
and no one has chimed in with that being an option.
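For the record, the sort of aux parameters those older threads mention look something like the lines below (illustration only, entered in the CIFS service's auxiliary parameters box; the general advice on these forums is that stock settings should already saturate gigabit, so treat these as a last resort rather than a fix):

socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
read raw = yes
write raw = yes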
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
You need to verify that your clients can read and write faster than the ~50 MBps max you are seeing. You have lots of hardware on the FreeNAS side; it can saturate 1GbE in its sleep. All the server power in the world won't help a client that can't write the transfer fast enough. Bone-stock CIFS on half the hardware you have will max out 1Gb Ethernet, and many people maxing out 10GbE have less hardware on the server side than you do, also with bone-stock CIFS. It is unlikely that tweaking parameters is your problem.

It looks like your AMD box with the SSD should be OK, but it takes a decent hard drive to write at 125MBps and max out your Ethernet. The old OCZ drives were slow at writing. So you need a verified FAST client.
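A quick way to sanity-check a client's own sequential write speed while bypassing the page cache (the file path is just a placeholder):

dd if=/dev/zero of=/home/test/ddtest bs=1M count=4096 conv=fdatasync

If that lands well under ~110 MBps, the client disk, not the network or the server, is the ceiling.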

I can't help you much with Mint; to me that is just another variable. But a Win7 box on 3- or 4-year-old hardware, with 2GB of RAM and a new spinny drive, will read at >105 MBps all day long and write large sequential files at saturated speeds. So, being a Windows guy, I would grab one of your drives and throw a quick and dirty Win7 install on the SERVER. Just share out some data for testing on a regular share. I would do the same on a Win7 client that I thought had a shot at saturating 1GbE. Move a big (~1GB) file and just visually watch the transfer speed. If it can't do it under these conditions... you have a HARDWARE limit on the slowest box. But you are working towards a config that you KNOW can saturate your line. Swap the OS to bone-stock FreeNAS and it will maintain the speed easily on your good hardware. Set up a single stripe or mirrored pool to test, but frankly your dd test on your pool looks fast and fine.

Obviously, if you are a Linux geek, the same can be done under those conditions. I use a Windows example only because I KNOW it isn't getting in the way. BUT the key point is: can your slowest HARDWARE do it? Physically small spinny drives can be slow, and I haven't seen a fast SSD on your client side (you listed no model number, but it was small and old, so I am assuming not fast). But your new spinners can easily max out your network, which has already been verified to run near optimal.

Get two boxes maxing out your gigabit Ethernet on a file transfer. Look for the slowest link in the system. In all likelihood the fast new FreeNAS box is fine; out-of-the-box settings on far less hardware can easily max out your Ethernet.

Good luck,
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
For perspective: my old ASUS P5NE-SLI with a Core Duo 6400 at 2.13 GHz and 2GB of RAM (circa 2008) maxes out 1GbE under both Win7 and FreeNAS. 105 MBps all day long on the stock Realtek NIC, with a roughly 3-year-old 1TB WD HDD.

FreeNAS may want all that RAM and hardware, but it can still max out 1Gb Ethernet on hardware that should be collecting dust.

Mike
 

SwampRabbit

Explorer
Joined
Apr 25, 2014
Messages
61
Thanks for the feedback mjws00

The SSD is an OCZ Agility 3, so not exactly really old; I bought a bunch of them new for testing sometime last year.
In theory they can do up to 525MB/sec writes and 475MB/sec reads.

I had already tested that "AMD box with the SSD" under Linux Mint to make sure it was decent:
a dd test with bs=2M count=10000 wrote at 496MB/sec.

I'm pretty sure 95% of my hardware, boxes, and network gear can do this, even the lower-powered, slower stuff.
Prior to this I could push and pull around 89MB/sec over CIFS to my Debian server, which is far, far weaker than this FreeNAS box.
That is why it is so darn puzzling. :mad:

I rebuilt everything: stock but updated BIOSes, fresh OS installs, and hardware verified to be working great. But CIFS is still a turd.

I want to use ZFS for the added benefits it provides and I'd like to use FreeNAS because it is easier and simpler to deal with than Debian on a regular basis.
I've kept FreeNAS stock (installed several times to several USB drives), only configured the pool, and created a user and group each time.

I'm not strictly a Linux, Windows, or UNIX guy; I always say to each their own, for their own purpose, and I usually have one of each running in the house.
But you have a point about installing Windows on something for testing; I planned on it, just haven't gotten around to it.

I don't know whether Linux Mint is the problem or not, but PC-BSD 10 was about the same, and iXsystems makes that too.
I would expect they made sure it played nicely and performed well with their own NAS system. :confused:

I'll load Windows 7 up on a client and test it with FreeNAS as a next step.
Maybe after that, depending on the results, I'll try CIFS from Linux Mint to Windows 7.
I'll be traveling for work this week, so it'll have to wait until next weekend.
 