iSCSI Performance Problems

Status: Not open for further replies.

Member · joined May 10, 2017 · 838 messages
> Is that on the HDD pool or the SSD pool?

That is to either pool, but just for the first few GB while it's caching to RAM; then it decreases to the speeds in the screenshots I posted above.

And for comparison, same transfer but without jumbo frames:

[screenshot: upload_2018-10-13_7-24-44.png — same transfer without jumbo frames]
 


Windows7ge · Contributor · joined Sep 26, 2017 · 124 messages
> You don't need to zero-fill them, just check the "mark drives as new (destroy data)" box when you export the pool in the FreeNAS GUI.

Will do.

> even if you're a ZFS God.

I couldn't be further from that. I'll test mirrors.

> Perhaps tuning the SMB settings would be a better option than dealing with the overhead of a block filesystem?

I tried optimizing SMB with the following tweaks:

server multi channel support = Yes
strict allocate = No
read raw = Yes
write raw = Yes
server signing = No
strict locking = No
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
min receivefile size = 16384
use sendfile = Yes
aio read size = 16384
aio write size = 16384


That only improved overall performance by 100-150 MB/s. I've never seen 1 GB/s writes with SMB.

> Definitely possible to get 1 GB/s+ with SMB, at least while FreeNAS is caching to RAM:

Reading cached data from the server isn't an issue at all; I'll saturate 10Gbit. However, I use the server on a primarily write basis, so this is where the issues start (on both pools).

> And if the OP wants to try (besides enabling jumbo frames), these are the only tunables I'm using (found somewhere on this forum):

I can try tweaking with these once I set up the array again.

Right now I'm backing up the pool to my cold storage before I take down the HDD array.
 
Member · joined May 10, 2017 · 838 messages
> Reading cached data from the server isn't an issue at all; I'll saturate 10Gbit. However, I use the server on a primarily write basis, so this is where the issues start (on both pools).

I was talking about writes. The first few GB are cached to RAM by FreeNAS; IIRC, about 1/8 of your installed RAM is used as write cache. So, for example, I can get 1.1 GB/s when I start writing to any pool, or for the whole transfer if it isn't larger than around 4 GB. After that, the speed drops to what the pool can sustain: around 900 MB/s for the SSD pool and 400 MB/s for the HDD pool.
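The burst-then-drop profile described above can be sketched with simple arithmetic. A minimal sketch, assuming (as the post suggests) roughly 4 GB absorbed by the RAM write cache, a 1.1 GB/s link-limited burst, and a 400 MB/s sustained HDD-pool speed; the actual cache figure depends on installed RAM and the ZFS dirty-data settings:

```python
# Back-of-the-envelope model of the "fast at first, then slower" write profile.
# Assumed figures (from the post, not measured here): ~4 GB of RAM write cache,
# 1.1 GB/s burst at link speed, 0.4 GB/s sustained on the HDD pool.

def transfer_time(total_gb, cache_gb=4.0, link_gbps=1.1, pool_gbps=0.4):
    """Seconds to complete a transfer, split into burst and sustained phases."""
    burst = min(total_gb, cache_gb)       # absorbed at link speed
    rest = max(0.0, total_gb - cache_gb)  # drained at pool speed
    return burst / link_gbps + rest / pool_gbps

# A 19.2 GB file to the HDD pool: ~4 GB fast, the remaining ~15.2 GB at pool speed.
t = transfer_time(19.2)
print(f"{t:.0f} s total, average {19.2 / t * 1000:.0f} MB/s")
```

This is why the average speed a file-copy dialog reports sits well below the initial burst once the transfer outgrows the cache.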
 

Windows7ge · Contributor · joined Sep 26, 2017 · 124 messages
@jgreco I wanted to ask if this looks configured correctly for what you mean by mirrors. (it's not a configuration I've used before)
[screenshot: Screenshot_1.png — mirror pool configuration]
 

HoneyBadger · actually does care · Administrator · Moderator · iXsystems · joined Feb 6, 2014 · 5,112 messages

Windows7ge · Contributor · joined Sep 26, 2017 · 124 messages
Well, testing this with a series of configurations, only two gave any meaningful insight into the performance.

Unencrypted mirrors volume, everything else at default settings. The test file is a 19.2 GB .mp4 file.

Both tests used a 1500 MTU, since testing 9000 showed no meaningful benefit. I'm assuming jumbo frames were useful in FreeNAS 9.3, but perhaps there have been improvements to the network driver since then.
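The small difference between 1500 and 9000 MTU is consistent with the framing overhead itself: even at 1500, headers cost only about 5% of throughput. A rough sketch, assuming plain TCP over Ethernet with no header options and counting the 20-byte preamble/inter-frame gap:

```python
# Rough wire-efficiency estimate: how much of each Ethernet frame is TCP payload.
# Assumed overheads: 18 B Ethernet header+FCS, 20 B preamble + inter-frame gap,
# 40 B IPv4 + TCP headers (no options).

def tcp_efficiency(mtu):
    payload = mtu - 40        # IPv4 (20 B) + TCP (20 B) headers
    on_wire = mtu + 18 + 20   # frame overhead + preamble/inter-frame gap
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {tcp_efficiency(mtu):.1%} payload")
```

So on a 10GbE link, jumbo frames can recover only about 4% of wire time; any gain beyond that has to come from reduced per-packet CPU work, which a good NIC driver largely handles anyway.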

Using the following auxiliary parameters, the result was as follows:
server multi channel support = Yes
strict allocate = No
read raw = Yes
write raw = Yes
server signing = No
strict locking = No
socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
min receivefile size = 16384
use sendfile = Yes
aio read size = 16384
aio write size = 16384

[screenshot: Screenshot_2.png — transfer with all auxiliary parameters]


The same test, with every auxiliary parameter removed except server multi channel support = Yes, yielded:
[screenshot: Screenshot_3.png — transfer with only multi channel support]


That is the first time I've seen it touch 1 GB/s writes, but the performance is inconsistent and all over the place, only peaking at the very beginning and end of the transfer.
 
Member · joined May 10, 2017 · 838 messages
It's not clear to me if you tried the TCP tunables I posted before; the default values aren't appropriate for 10GbE.
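The exact tunables referenced aren't reproduced in this thread. For illustration only, FreeBSD-era FreeNAS socket-buffer tunables of this kind typically look like the following; these names are real FreeBSD sysctls, but the values are example figures, not the set the poster used:

```
# Illustrative FreeBSD sysctl-type tunables for 10GbE socket buffers.
# NOT the exact set referenced in the thread; values are examples only.
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvspace=4194304
net.inet.tcp.sendspace=4194304
```

In FreeNAS these would be entered under System → Tunables with type "sysctl".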

[screenshot: upload_2018-10-18_19-30-18.png]
 

Windows7ge · Contributor · joined Sep 26, 2017 · 124 messages
> It's not clear to me if you tried the TCP tunables I posted before; the default values aren't appropriate for 10GbE.

I did not, because my memory is crap at best. I'll get back to you after I try it, but today my schedule is pretty full, so likely tomorrow afternoon.
 

Chris Moore · Hall of Famer · joined May 2, 2015 · 10,080 messages
This could be because of the base performance of the disks in your underlying pool. What is your hardware configuration here?

 
Member · joined May 10, 2017 · 838 messages
> Not what I was hoping to see. The tunables look as if they made absolutely no difference at all.

Bummer. The problem seems network-related to me, since the beginning of the transfer, while FreeNAS is caching the writes to RAM, should max out the link, assuming the source drive in your desktop is fast enough, like an NVMe device or an SSD RAID.
 

Windows7ge · Contributor · joined Sep 26, 2017 · 124 messages
> Bummer. The problem seems network-related to me, since the beginning of the transfer, while FreeNAS is caching the writes to RAM, should max out the link, assuming the source drive in your desktop is fast enough, like an NVMe device or an SSD RAID.

Yeah, I'm booting from an NVMe M.2 and I can sustain well over 1 GB/s reads. What doesn't make sense to me is why the performance is so sporadic for one large file transfer. I tried the tunables in combination with the auxiliary parameters mentioned before, and tried jumbo frames in conjunction with that. The best peak I saw was 750 MB/s, but no transfer has sustained much above 500 MB/s.
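One way to see the burst-versus-sustained split more precisely than a file-copy dialog is to log per-chunk write speed from the client. A sketch, assuming a Python-capable client; the destination path is hypothetical and should point at the mounted share:

```python
# Simple client-side write-throughput probe: stream data to the mapped share
# in fixed-size chunks and record per-chunk speed, to separate the initial
# RAM-cache burst from the sustained pool speed.
import os
import time

def probe_write(dest, total_mib=2048, chunk_mib=64):
    """Write total_mib of random data to dest; return per-chunk MiB/s."""
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    speeds = []
    with open(dest, "wb") as f:
        for _ in range(total_mib // chunk_mib):
            t0 = time.perf_counter()
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())           # push data past the client's cache
            dt = time.perf_counter() - t0
            speeds.append(chunk_mib / dt)  # MiB/s for this chunk
    return speeds

# speeds = probe_write(r"Z:\probe.bin")    # hypothetical mapped-drive path
# print(f"peak {max(speeds):.0f} MiB/s, floor {min(speeds):.0f} MiB/s")
```

Plotting or eyeballing the per-chunk numbers shows whether the drops line up with the cache filling (a clean step down) or are erratic throughout, which would point more at the network.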
 