Slow transfer performance

Solway

Dabbler
Joined
Aug 14, 2019
Messages
25
Hi guys

I'm trying to replace an old NAS box (SMB1, 2x mirrored 1TB 7200rpm drives). FYI, I get copy speeds around 20-25 MB/s (1x gigabit NIC).


I've just built a FreeNAS VM on ESXi on a Dell R720 server, using an LSI 9211 HBA card passed through for the drives.

Im running:

FreeNAS-11.2-U6
Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz (4 cores)
8 GiB RAM
6x 2TB Seagate BarraCuda 2.5" 5400rpm 128MB-cache drives, in RAID-Z2
1x gigabit NIC (currently configured)
FreeNAS authenticates with AD on a Windows Server 2019 VM (same box)
SMB3 share
SMB service has the additional "nfs4:mode = simple" parameter set so permissions work for AD users.


However, testing the transfer speed between an NVMe-equipped client on the network and FreeNAS, I'm getting very slow speeds with big spikes: anywhere from 500 KB/s to 100 MB/s, but mostly slow.
Browsing through folders is also slow to respond.

Almost all settings are default. What can I do to speed it up? What have I done wrong?

PS: the HDDs are not ideal :( but I can't afford enterprise 2.5" drives.
 

Solway

Dabbler
Joined
Aug 14, 2019
Messages
25
Some more info:

I've run iperf and it maxes out the gigabit NIC.
I've run "iozone -a" and it seems to indicate high speeds.

I've run "LAN Speed Test" using an 8GB random-file transfer: I get about 4.8 Mbps on write, then 900 Mbps on read (maxing out the NIC).

It doesn't seem to be network related. What is wrong with my write speeds? :/

8GB RAM, with 4GB not being used.
CPU barely hits 20%.
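For reference, the tests were roughly along these lines (exact flags from memory; the FreeNAS IP is the one in the ifconfig output in this post):

```shell
# On FreeNAS (server side):
iperf -s

# On the Windows client, pointing at the FreeNAS box:
iperf -c 10.1.1.4 -t 30

# Local disk benchmark on the pool, full automatic mode:
iozone -a
```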


zpool status -v
root@FREENAS[~]# zpool status -v
  pool: Storage
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:06:47 with 0 errors on Sun Sep 15 00:07:22 2019
config:

	NAME                                            STATE     READ WRITE CKSUM
	Storage                                         ONLINE       0     0     0
	  raidz2-0                                      ONLINE       0     0     0
	    gptid/7bf18b34-b91d-11e9-a7c5-000c293c0daf  ONLINE       0     0     0
	    gptid/8502a178-b91d-11e9-a7c5-000c293c0daf  ONLINE       0     0     0
	    gptid/8e0aa9bc-b91d-11e9-a7c5-000c293c0daf  ONLINE       0     0     0
	    gptid/9709186d-b91d-11e9-a7c5-000c293c0daf  ONLINE       0     0     0
	    gptid/a0251831-b91d-11e9-a7c5-000c293c0daf  ONLINE       0     0     0
	    gptid/a9396ede-b91d-11e9-a7c5-000c293c0daf  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:38 with 0 errors on Fri Oct 18 03:45:38 2019
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  da0p2       ONLINE       0     0     0

errors: No known data errors

camcontrol devlist
root@FREENAS[~]# camcontrol devlist
<VMware Virtual disk 2.0> at scbus2 target 0 lun 0 (pass0,da0)
<NECVMWar VMware SATA CD00 1.00> at scbus3 target 0 lun 0 (pass1,cd0)
<ATA ST2000LM015-2E81 SDM1> at scbus33 target 8 lun 0 (pass2,da1)
<ATA ST2000LM015-2E81 SDM1> at scbus33 target 9 lun 0 (pass3,da2)
<ATA ST2000LM015-2E81 SDM1> at scbus33 target 10 lun 0 (pass4,da3)
<ATA ST2000LM015-2E81 SDM1> at scbus33 target 11 lun 0 (pass5,da4)
<ATA ST2000LM015-2E81 SDM1> at scbus33 target 12 lun 0 (pass6,da5)
<ATA ST2000LM015-2E81 SDM1> at scbus33 target 13 lun 0 (pass7,da6)
<ATA WDC WD2500BEVT-2 1A01> at scbus33 target 14 lun 0 (pass8,da7)

LSPCI
root@FREENAS[~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 08)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 08)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01)
00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
02:00.0 USB controller: VMware USB1.1 UHCI Controller
02:01.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
02:02.0 USB controller: VMware USB2 EHCI Controller
02:04.0 SATA controller: VMware SATA AHCI controller
03:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

ifconfig
root@FREENAS[~]# ifconfig
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
	options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
	ether 00:0c:29:3c:0d:af
	hwaddr 00:0c:29:3c:0d:af
	inet 10.1.1.4 netmask 0xffffff00 broadcast 10.1.1.255
	nd6 options=9<PERFORMNUD,IFDISABLED>
	media: Ethernet autoselect (1000baseT <full-duplex>)
	status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
	options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
	inet 127.0.0.1 netmask 0xff000000
	nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
	groups: lo
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
You could check your sync setting with "zfs get sync". It should be set to standard for most use cases.
Also, are those drives SMR?
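For example (pool name taken from the zpool status output above; property syntax per zfs(8)):

```shell
# Check the current sync setting on the pool and its datasets:
zfs get sync Storage

# To rule sync writes in or out while testing (revert afterwards!):
zfs set sync=disabled Storage   # unsafe for real data; testing only
zfs set sync=standard Storage
```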
 

Solway

Dabbler
Joined
Aug 14, 2019
Messages
25
ZFS sync is set to standard.

I disabled it and ran a "LAN Speed Test"; it got 900 Mbps, so I guess that's caching in RAM at 1-gigabit NIC speed.
I tried the same test in Windows 10 copying a 4GB file: it ran at 110 MB/s (1-gigabit NIC speed) but then dropped to the <50 MB/s range.

What's causing this?

Also, just generally navigating through folders is a slow and painful process.

SMR, I'm not sure. The specs are here, page 7:
https://www.seagate.com/www-content...op-fam/barracuda_25/en-us/docs/100807728a.pdf
ST2000LM015 2TB = 4 read/write heads, 2 disks?
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Based on some looking around, it does look like those drives are SMR, and that would explain your issue. As soon as you exhaust the drive's RAM cache, it has to write directly to the platters and transfer rates fall off a cliff. SMR drives are meant for WORM scenarios (archive drives); they are about the worst thing for general NAS use.

This link is about the drive in your system. The graph at the top of this link seems to show what you are describing, dropping to 2MB/s transfer rates.
https://www.reddit.com/r/DataHoarder/comments/9m124c/the_one_drive_in_my_server_that_probably_shouldnt/

Folder navigation is more likely a metadata-caching issue, which I've seen on 11.1-U6 and later (11.1-U5 was fine and fast).
 

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
Hi Solway.
I struggled with painfully slow Windows Explorer directory browsing, especially in directories with many files.

Adding these

ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no

to the "Auxiliary Parameters" of the FreeNAS SMB service fixed this issue for me.
In my opinion a sluggish browsing experience is the absolute pits...
 

Solway

Dabbler
Joined
Aug 14, 2019
Messages
25
Wish I had a 3.5" Dell R720 now.

@Frikkie thanks, I'll test.

I mainly use this ZFS box for <50MB files, but across multiple clients, as well as AD folder redirection.

Would a SLOG device help? A 2x mirror of 2.5" SSDs?
 

Solway

Dabbler
Joined
Aug 14, 2019
Messages
25
I've just tested with a 120GB SSD used as a SLOG (ZIL device).

In LAN Speed Test
I get 400 Mbps writing
and 800-900 Mbps reading.

However, using Windows to transfer, say, a 5GB file, it maxes out at 115 MB/s, then about two-thirds of the way through it drops to <50 MB/s with random spikes.

Copying the file back to the PC, it'll be slow for a few seconds, then speed up to a max of 115 MB/s.

I have 8GB RAM. Do I need an L2ARC drive too, or should I add more RAM?

Is there a way to speed up reading/navigating folders?

I found 2x Samsung PM863a 240GB drives that were going cheap. Do I need to mirror the ZIL, given I'm on a UPS as well?
 

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
@Solway Did you set the aux parameters I suggested and restart the FreeNAS SMB service?

What is "atime" and "record size" set to on the dataset you are using to test your speeds?
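You can read both straight off the dataset, e.g. (the dataset name here is just an example, substitute the one you share over SMB):

```shell
zfs get atime,recordsize Storage/share
```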
 

Solway

Dabbler
Joined
Aug 14, 2019
Messages
25
@Solway Did you set the aux parameters I suggested and restart the FreeNAS SMB service?

What is "atime" and "record size" set to on the dataset you are using to test your speeds?

Yes, I've done the aux parameters.
Record size: 128K
atime = on

I also set sync to always (when adding the ZIL drive).

Note: I also have "nfs4:mode = simple" set, as the correct way to use AD and permissions for folder redirection.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
would a slog device help? 2xmirror 2.5 sdd?
Unlikely. A SLOG isn't a cache; simplified, it's a log of in-RAM transaction groups kept before they have been fully committed to disk.

If you do want to work with a SLOG you want the lowest latency NVMe/PLP device you can get. Intel P3700, Optane, etc.
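If you do experiment with one anyway, adding a mirrored log vdev is a one-liner (the device names below are examples, not your actual disks):

```shell
# Attach two SSDs to the existing pool as a mirrored SLOG:
zpool add Storage log mirror /dev/da8 /dev/da9

# Confirm the "logs" vdev appears:
zpool status Storage
```

A log vdev can also be removed again with "zpool remove" if it doesn't help.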
 