Yet another 10Gb slow speed thread...

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
Greetings!

I've done a ton of reading on this subject and I think my bottleneck is my drives/pool, but I just wanted to see if you all agree. I have a FreeNAS host direct-connected (DAC) to an ESXi 6.7 host, with a handful of virtual machines connecting to FreeNAS over SMB and NFS. I've noticed that I can't achieve speeds greater than 200MB/s when copying files over NFS and SMB. iPerf, as usual, shows that there are no issues with the connections from the virtual machines to FreeNAS; I always see near-10Gb speeds.
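
For reference, the iPerf runs looked roughly like the sketch below (a sketch only — the IP address and stream count are placeholders, and I'm assuming iperf3 is available on both ends):

Code:
# sketch — placeholders for the FreeNAS address; assumes iperf3 on both ends
iperf3 -s                              # on the FreeNAS host
iperf3 -c 192.168.10.10 -P 4 -t 30     # from a VM: 4 parallel streams for 30 seconds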

FreeNAS host:
FreeNAS-11.1-U7
Intel(R) Xeon(R) CPU E5-2403 0 @ 1.80GHz
16GB RAM
Mirrored pool with 4 x 2TB Seagate drives

The 10Gb NICs are Dell Broadcom 57810S dual-port 10GbE SFP+ with the latest drivers installed. I have tried enabling jumbo frames with no luck. The ESXi host is running RAID10 with 15K RPM SAS drives.

Are my drives on the FreeNAS host the issue? I will gladly provide more information if needed.

Thank you.

 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Junior,

In that ton of reading you said you did, did you ever come across anything about sync writes? Did you try disabling them to see the effect on your measurements? Here I use iSCSI, so the client host (my ESXi) is the one that decides what gets written synchronously and what doesn't.

Good luck with your diagnosis. Objective performance measurement is incredibly hard to achieve...
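
If you want to test it, sync behaviour can be toggled per dataset — a minimal sketch, with "tank/share" as a placeholder dataset name:

Code:
# sketch — check and temporarily disable sync writes ("tank/share" is a placeholder)
zfs get sync tank/share
zfs set sync=disabled tank/share    # for testing only
zfs set sync=standard tank/share    # put it back once the test is done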
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
Hey Junior,

In that ton of reading you said you did, did you ever come across anything about sync writes? Did you try disabling them to see the effect on your measurements? Here I use iSCSI, so the client host (my ESXi) is the one that decides what gets written synchronously and what doesn't.

Good luck with your diagnosis. Objective performance measurement is incredibly hard to achieve...

I forgot to mention that I did so. No improvements.
Thanks for your input.
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
@Heracles So after further testing with sync disabled, I was able to nearly max out the transfer speed, but only for a few seconds before it quickly slowed back down to the mid-200MB/s range. In every test I only saw a quick five-second burst at the very beginning of the transfer.

A little closer to a solution? What would cause this?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again,

That sounds like saturating the caching on the host. As long as you have free RAM, FreeNAS can take it all and buffer everything in RAM. Once the RAM is full, it has to wait for the drives.

Are you running any jails / plugins / VMs that would consume RAM?

Did you check whether your drives are SMR? If they are, there is no need to look anywhere else: you will have to take them out and replace them with non-SMR drives.
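
Rather than relying on reviews, you can pull the exact model numbers and check them against the manufacturer's CMR/SMR lists — a quick sketch, with placeholder device names:

Code:
# sketch — list drive model numbers (device names are placeholders)
for d in /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3; do
  smartctl -i $d | grep -E "Device Model|Model Family"
done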
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
Hi again,

That sounds like saturating the caching on the host. As long as you have free RAM, FreeNAS can take it all and buffer everything in RAM. Once the RAM is full, it has to wait for the drives.

Are you running any jails / plugins / VMs that would consume RAM?

Did you check whether your drives are SMR? If they are, there is no need to look anywhere else: you will have to take them out and replace them with non-SMR drives.

Hi @Heracles.

There are no jails, plugins, or VMs running, and none are installed.

My research suggests that my drives may indeed be SMR (according to Amazon reviews). If I replace them, would speeds improve?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi Junior,

My research suggests that my drives may indeed be SMR (according to Amazon reviews). If I replace them, would speeds improve?

Well, that depends on how you define "improvement". If you consider speeding a motorcycle up from 5 km/h to 500 km/h an improvement, then yes, it is. You could also argue that it goes well beyond improvement and deserves a stronger term!

SMR drives are to be avoided at all costs. They are beyond terribly bad for FreeNAS. I would say it is almost mandatory to purge all of them from your setup. If they are indeed SMR, you will have to redesign your storage (number of drives, size, architecture, ...) and plan how to migrate your data from these horrors to your new drives.

Too bad, but there is no way around it: SMR drives must either be replaced or endured.

 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
I will get rid of them as soon as possible and will bump up the RAM as recommended.

My server is limited to 4 HDDs, so what is the correct procedure for replacing the drives in my mirror pool? Buy two at a time and resilver? Should I consider another RAID type to get the most out of 10Gb? And 5400 or 7200 RPM?

Sorry for all the questions, but it appears I have two FreeNAS experts at hand, so I might as well ask. ;)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I misread your original message. If you're not actually storing VMs on the FreeNAS, the memory isn't such an issue.

But there's still a lot of good ZFS wisdom in the linked post. Especially with respect to 5400/7200... if your choice were between a 4TB 7200RPM disk and an 8TB 5400RPM disk (which might have a similar cost), in most cases you ought to go for the 8TB and treat it as if it were a 4TB, capacity-wise. What you're trying to do is give ZFS lots of space for contiguous allocation. Your drive can read at maybe 150-200MBytes/sec if it can read sequentially, but only 1-10MBytes/sec or so if it has to constantly seek.
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
I misread your original message. If you're not actually storing VMs on the FreeNAS, the memory isn't such an issue.
No problem @jgreco. I will keep that in mind. I am taking my time reading that thread right now. Thanks for sharing as it will answer most of my questions.

On the replacement part: is the procedure for replacing those drives as I mentioned above? Buy them in pairs?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
One at a time is probably better. If you make a mistake doing things two at a time, ... well, badness.
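
In practice that swap is usually done through the GUI's Replace function, but from the shell it looks roughly like this sketch (pool and device names are placeholders):

Code:
# sketch — swap one mirror member at a time (pool/device names are placeholders)
zpool status tank          # identify the disk to replace
zpool offline tank ada1    # take the old disk out of the mirror
# physically swap the disk, then tell ZFS to resilver onto the new one:
zpool replace tank ada1
zpool status tank          # wait for the resilver to finish before touching the next disk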
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
@jgreco and @Heracles For budget reasons under current events, I replaced those drives with 4 WD Red 4TB drives (5400 RPM) for the time being and hope to bump up to a total of 8 in the near future. Speeds are slightly better, but I think I am now simply being held back by the number of drives?

NFS transfer speeds are 280 MB/s (steady).
SMB transfer speeds are 250 MB/s (steady). However, I can write from FreeNAS to a VM (the host has an SSD) at around 325 MB/s.

Both tests were done with SYNC disabled.

dd results in FreeNAS using the commands below yield 200 MB/s write and 225 MB/s read.

Code:
# Write test: 100 GiB of zeros (bs=2048k x count=50k)
dd if=/dev/zero of=/mnt/pool/test/tmp.dat bs=2048k count=50k

# Read test: read the same 100 GiB file back
dd if=/mnt/pool/test/tmp.dat of=/dev/null bs=2048k count=50k


What do you think? Should I consider bumping up the RAM? Would it help in this situation?

By the way, iSCSI performance is horrible. It starts at around 425 MB/s but quickly drops and fluctuates all over the place, staying mostly around 100 MB/s. Pool usage is at 30%. Again, more RAM?
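
If it helps with the diagnosis, here is roughly how I plan to check whether the ARC is already pinned at its 16GB ceiling (a sketch using the stock FreeBSD sysctl names, which I am assuming apply to FreeNAS 11.1):

Code:
# sketch — check ARC size against its ceiling and the hit/miss counters
sysctl kstat.zfs.misc.arcstats.size     # current ARC size in bytes
sysctl vfs.zfs.arc_max                  # configured maximum ARC size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses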

I'm not expecting a miracle here, as I am satisfied with the performance and it serves my needs perfectly, but I wouldn't mind squeezing out a bit more if only minor adjustments or upgrades are needed.

Thank you.
 

junior466

Explorer
Joined
Mar 26, 2018
Messages
79
So is this the same 16GB system and you're now trying to do iSCSI (block storage)? I suggest circling back up to my post #8 and the link to

https://www.ixsystems.com/community/threads/the-path-to-success-for-block-storage.81165/

Block storage is extremely stressful and is unlikely to be pleasant if the system is starved for resources.

So much useful information in that post! I will start reading it now (I'll admit I glanced at it before, but I wasn't doing much besides serving a couple of SMB shares). I will be upgrading the RAM as soon as possible.

Thanks for your help!
 

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I'm in the same boat: I have the same CPU (E5-2403 0 @ 1.80GHz) and 8x HGST Ultrastar He10 drives... yet I get about 100MB/s - 200MB/s... I just don't get it. I've had other 8-drive RAID arrays with spinning drives (granted, it was RAID-5 and not RAID-6) which produced 800MB/s... but here it's just crap no matter what I do.

An enterprising tech could dedicate himself to solving this problem, advertise that he'd solve it for $125 remotely... and make bank.

I think most people don't even post threads but probably just read them... and even among those that are posted, there are MANY.
 

Mark Levitt

Explorer
Joined
May 21, 2017
Messages
56
@jgreco and @Heracles For budget reasons under current events, I replaced those drives with 4 WD Red 4TB drives (5400 RPM) for the time being and hope to bump up to a total of 8 in the near future. Speeds are slightly better, but I think I am now simply being held back by the number of drives?

Are they the newer 4TB WD Reds with model number WD40EFAX? If so, I'm sorry to tell you that those are SMR drives as well.

If they are the older model WD40EFRX, then they are OK.
 