SOLVED Slow transfer speeds on first freenas server, need cache drive?

Status
Not open for further replies.

Samfb24

Cadet
Joined
May 26, 2019
Messages
7
Hey guys, I just built my first FreeNAS server using donated hardware from an old rig. I used these parts simply because I had them around the house, for no other particular reason. The specs are:
-OS: FreeNAS 11.2
-Motherboard: Asus M5A97
-CPU: AMD FX-8350 8-core
-RAM: 16GB DDR3
-Storage: 6x 2TB (3Gb/s SATA) mechanical drives in a raidz2 pool

While testing from a Windows desktop on the same gigabit network I'm noticing poor speeds when transferring files. Smaller files (roughly 5GB) are fine for a while at about 100MB/s, but the rate eventually drops to roughly 30MB/s. Larger files hit very poor speeds almost immediately. How can I fix this? I have other smaller drives around the house; perhaps if I throw one of those in as a cache drive, that will do the trick?
 

Joe55

Dabbler
Joined
Sep 27, 2017
Messages
10
The thing with RAID arrays is that they're only as fast as the slowest drive. You said this was donated hardware - did you check smartctl to see whether any of the drives are in bad shape, and did you run any tests on them? Have you verified that all of the SATA ports on the motherboard work correctly and that the cables aren't damaged?

Are all drives in AHCI mode? Is the motherboard firmware/BIOS up to date?
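
Something like this makes a quick first pass (a sketch - the device names da0..da5 are assumptions, typical for FreeBSD/FreeNAS; `camcontrol devlist` shows what yours actually are):

```shell
# Assumed device names (da0..da5); adjust to what
# `camcontrol devlist` reports on your box.
for d in da0 da1 da2 da3 da4 da5; do
  dev="/dev/$d"
  if [ -e "$dev" ]; then
    echo "=== $dev ==="
    # -H: overall health verdict, -A: SMART attribute table
    smartctl -H -A "$dev"
  else
    echo "skip: $dev not present"
  fi
done
```

Pay particular attention to the reallocated, pending, and offline-uncorrectable sector counts.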
 

Samfb24

Cadet
Joined
May 26, 2019
Messages
7
Your point is fair, but the issue I'm having is that speeds start out fine and then deteriorate as the transfer goes on. That tells me the hardware can handle the load, but something bogs it down along the way. I honestly suspect a cache is the problem - the drop seems to line up with a cache filling. I'm only speculating, though.
 

Joe55

Dabbler
Joined
Sep 27, 2017
Messages
10
Your point is fair, but the issue I'm having is that speeds start out fine and then deteriorate as the transfer goes on. That tells me the hardware can handle the load, but something bogs it down along the way. I honestly suspect a cache is the problem - the drop seems to line up with a cache filling. I'm only speculating, though.

Destroy the existing RAID array and create three striped 2x2TB arrays. If it happens on each array, it's not a problem with the disks.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,829
I'd consider looking at the RAM - see how much is spare; at 16GB you may be low for an 8TB pool.

Also, how full is this pool? IIRC, the best performance is below 50% fill, and performance really takes a hit once the pool is 80%+ full.

You can certainly try a SLOG drive - a good SSD will do the trick, though one with power-loss protection is a better choice even if you have a UPS. I used to use Intel's S3710 series; now I use the P4801X.
 

Samfb24

Cadet
Joined
May 26, 2019
Messages
7
Destroy the existing RAID array and create three striped 2x2TB arrays. If it happens on each array, it's not a problem with the disks.

I will keep this in mind and perform the test if I cannot resolve this without needing to wipe out the array.

I'd consider looking at the RAM - see how much is spare; at 16GB you may be low for an 8TB pool.

Also, how full is this pool? IIRC, the best performance is below 50% fill, and performance really takes a hit once the pool is 80%+ full.

You can certainly try a SLOG drive - a good SSD will do the trick, though one with power-loss protection is a better choice even if you have a UPS. I used to use Intel's S3710 series; now I use the P4801X.

The pool is completely empty; no data lives on it at this point. As for the RAM, I've watched it during transfers and it never goes above 10-12GB used, although it does seem to rise steadily during the transfer.

This is quite discouraging as I never had these issues on my other server running a Windows storage pool.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
I don’t have anything particularly clever to say other than “I don’t think you need a cache”. I have a simple setup with 5 mechanical drives in a raidz2, and a gbit interface, and I get a solid 100MB/s write from Windows.

Can you test gbit speed over the network? iperf is the usual tool for that.
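
For instance (a sketch - iperf3 needs to be installed on both ends, and the server address below is a placeholder, not your actual NAS IP):

```shell
# Run "iperf3 -s" on the FreeNAS box first, then from the client:
SERVER="${SERVER:-192.168.1.50}"   # placeholder - use your NAS's IP
if command -v iperf3 >/dev/null 2>&1; then
  # 10-second TCP throughput test; a healthy gigabit link
  # should report somewhere around 940 Mbit/s
  iperf3 -c "$SERVER" -t 10 || echo "could not reach $SERVER"
else
  echo "iperf3 is not installed"
fi
```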

How’s CPU doing during those transfers?

Should we maybe be suspicious of your onboard SATA ports?

I don’t think it’s the concept that’s at fault here - writing to a single vdev shared via SMB over a gbit interface - I think it’s the consumer hardware somehow.

Only because I am doing exactly what you are doing, conceptually, but on a supermicro board. And my performance is fine.
 

Joe55

Dabbler
Joined
Sep 27, 2017
Messages
10
Only because I am doing exactly what you are doing, conceptually, but on a supermicro board. And my performance is fine.

Using a dd test with compression disabled on my 5x1TB striped array I get 9.2Gbps write on a 50GB test file.

So now I have to get a 10Gbps card hah. Maybe I'll try USB 3.0 or something instead.

OP, have you run a dd test yet on a dataset with compression disabled? That would help narrow things down. You can create new datasets without reformatting anything.
 

Samfb24

Cadet
Joined
May 26, 2019
Messages
7
I don’t have anything particularly clever to say other than “I don’t think you need a cache”. I have a simple setup with 5 mechanical drives in a raidz2, and a gbit interface, and I get a solid 100MB/s write from Windows.

Can you test gbit speed over the network? Someone will be along with a test program momentarily I don’t doubt.

How’s CPU doing during those transfers?

Should we maybe be suspicious of your onboard SATA ports?

I don’t think it’s the concept that’s at fault here - writing to a single vdev shared via SMB over a gbit interface - I think it’s the consumer hardware somehow.

Only because I am doing exactly what you are doing, conceptually, but on a supermicro board. And my performance is fine.
I get gigabit transfer speeds to other devices on the network without issues.

The CPU on the freenas server doesn't climb to high usage during transfers either.

Another thing that comes to mind is the RAID controller I'm using. Of the 6 drives, 4 are connected via an Adaptec 3405 RAID controller in JBOD mode; the other 2 are connected to onboard SATA.

Using a dd test with compression disabled on my 5x1TB striped array I get 9.2Gbps write on a 50GB test file.

So now I have to get a 10Gbps card hah. Maybe I'll try USB 3.0 or something instead.

OP, have you run a dd test yet on a dataset with compression disabled? That would help narrow things down. You can create new datasets without reformatting anything.

I have not run a dd test. Looking this up now, as I'm not familiar with how to do that.
 

Joe55

Dabbler
Joined
Sep 27, 2017
Messages
10
I get gigabit transfer speeds to other devices on the network without issues.

The CPU on the freenas server doesn't climb to high usage during transfers either.

Another thing that comes to mind is the RAID controller I'm using. Of the 6 drives, 4 are connected via an Adaptec 3405 RAID controller in JBOD mode; the other 2 are connected to onboard SATA.



I have not run a dd test. Looking this up now, as I'm not familiar with how to do that.

cd /mnt/your-pool/dataset-without-compression, then run the commands there.

Some dd commands heavily use CPU. You want to avoid those.
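
Concretely, something like this (a sketch - the pool name "tank" and dataset name are made up; on the real test, write into the dataset itself and use a size well past your RAM so ARC caching doesn't inflate the numbers):

```shell
# On the NAS, first make a scratch dataset with compression off so the
# all-zero test data isn't compressed away ("tank" is a placeholder):
#   zfs create -o compression=off tank/ddtest
#   cd /mnt/tank/ddtest
# The target defaults to /tmp here only so the sketch runs anywhere;
# point TARGET at the dataset for the real test, and bump the size
# (e.g. count=51200 for 50GiB). bs=1M keeps dd itself light on CPU.
TARGET="${TARGET:-/tmp}/ddtest.bin"
dd if=/dev/zero of="$TARGET" bs=1M count=256   # sequential write
dd if="$TARGET" of=/dev/null bs=1M             # sequential read-back
rm "$TARGET"
```

For the read-back to mean anything, export/import the pool (or reboot) first, or the data mostly comes back out of the ARC in RAM.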
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
Realtek NIC...

Might be time for an upgrade

Have Fun
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Hey guys, I just built my first FreeNAS server using donated hardware from an old rig. I used these parts simply because I had them around the house, for no other particular reason. The specs are:
-OS: FreeNAS 11.2
-Motherboard: Asus M5A97
-CPU: AMD FX-8350 8-core
-RAM: 16GB DDR3
-Storage: 6x 2TB (3Gb/s SATA) mechanical drives in a raidz2 pool

While testing from a Windows desktop on the same gigabit network I'm noticing poor speeds when transferring files. Smaller files (roughly 5GB) are fine for a while at about 100MB/s, but the rate eventually drops to roughly 30MB/s. Larger files hit very poor speeds almost immediately. How can I fix this? I have other smaller drives around the house; perhaps if I throw one of those in as a cache drive, that will do the trick?

I'd expect transfers of a large number of small files to come in under 100MB/s, but transfers of large files should sit around 100MB/s. The FreeBSD Realtek NIC driver is not a great performer, so that could be part of the story. But your use of an Adaptec 3405 RAID controller in JBOD mode is a concern. Why not just use the six onboard SATA ports? That RAID controller may not be acting as the pure HBA that ZFS requires.

As a point of reference, I recently had this combo in use with FreeNAS - M5A78L-M USB3 + AMD 960T + 16GB ECC + Intel NIC - with a 4x2TB pool at 80% capacity. That setup achieved 100MB/s writes over an SMB share for large files on an otherwise idle pool.
 

Tigersharke

BOfH in User's clothing
Administrator
Moderator
Joined
May 18, 2016
Messages
893
I get gigabit transfer speeds to other devices on the network without issues.

The CPU on the freenas server doesn't climb to high usage during transfers either.

Another thing that comes to mind is the RAID controller I'm using. Of the 6 drives, 4 are connected via an Adaptec 3405 RAID controller in JBOD mode; the other 2 are connected to onboard SATA.

I'm sure using a RAID card is not ideal, since it may have its own caching or other modes of operation that inhibit or reduce efficiency with ZFS. I can't say how everything would behave with some drives on a RAID card which *may* have its own RAID enabled and other drives on direct SATA connections, but it's something I would definitely check. The scenario could be this: the transfer goes fine until it gets rolling and begins to use the RAID card's cache; then you have two competing caches that get out of sync or conflict in some way, causing ZFS to do retransmissions or corrections.

I don't claim to be an expert, but it seems like a reasonable guess. The RAID card people would know more than me and could say whether it's probable.

As for Ethernet cards, if you need a replacement I'd look into an Intel 82540EM PRO/1000 MT gigabit PCI adapter, which can still be found fairly cheaply and is perfectly supported by FreeBSD.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
It's the RAID card you failed to mention at the start, plus the Realtek NIC. Fix those, then start testing with dd and iperf. You also never mention whether these are read tests or write tests. I can say with 100% certainty that you don't need any kind of SLOG or cache device - they don't help the way people think they do.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
4 are connected via an Adaptec 3405 RAID controller in JBOD mode.

Not a good choice. The Adaptecs are terrible controllers on FreeBSD. Replace this with something like an LSI 9211-8i. May or may not be related to your existing problem. Note that replacing the controller might require you to rebuild your pool depending on Adaptec's definition of "JBOD".

Realtek NIC...

And the Realtek parts are extremely well known for performing poorly. The Realtek interfaces are powered by either one or two hamsters running on an exercise wheel to move your bits around. We find that often one of them gets sick or maybe dies. Realteks are not real tech.

How can I fix this? I have other smaller drives around the house, perhaps if I throw one of these in as a cache drive that will do the trick?

No. We don't recommend adding L2ARC ("read cache") until you get out to 64GB of RAM (possibly get away with it at 32GB) because doing L2ARC on small-memory systems forces the ARC to flush data more quickly and the system doesn't get a chance to quantify the best candidates for SSD caching. As for SLOG, many people incorrectly think that this is "write cache" because of YouTubers etc. who have no clue. You get fastest write performance in ZFS simply by turning off sync writes. Everything else - especially including adding SLOG - causes writes to go slower.
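
For reference, sync behavior is just a per-dataset property (a sketch - "tank/share" is a made-up dataset name, and disabling sync risks losing in-flight writes on power loss, so treat it as a benchmarking knob, not a production setting):

```shell
zfs get sync tank/share           # show the current setting
zfs set sync=disabled tank/share  # benchmark only: never force sync writes
zfs set sync=standard tank/share  # restore the default afterwards
```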
 

Samfb24

Cadet
Joined
May 26, 2019
Messages
7
Success!

You guys were correct - it was in fact the RAID controller causing the problem. I removed the controller, and all 6 drives are now plugged into the motherboard. I rebuilt the array, and now I'm getting consistent gigabit read/write speeds with zero hiccups. I can live with this.

The Realtek NIC doesn't seem to be causing a bottleneck at the moment, so I'll leave it alone for the time being. I also don't intend to add any more drives any time soon, but when I do I'll buy a proper HBA to use instead of a RAID controller.

I appreciate all you guys for helping me move in the right direction!

Just for reference, the problem I was experiencing was only affecting write speeds. Read speeds were always fine during my testing.
 
Status
Not open for further replies.
Top