FreeNAS 11.2-U2.1 abysmal GUI / system performance

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
Hello,
I've been running FreeNAS for more than half a year now, in preparation for migrating away from Unraid.
Specs:
Intel(R) Xeon(R) CPU E3-1231 v3
Supermicro X10SLM-F
32GB ECC

I have 2 pools:

Mass Storage 14x4TB Raidz2
SSD Pool 3x240GB Raidz1

Boot drive: 64GB SATA SSD (upgraded yesterday from a 16GB USB 3.0 stick, fresh install)
The boot drive and the SSDs are connected to the onboard SATA ports.
The HDD pool is on an LSI HBA in IT mode.

Now that I've migrated all my stuff over, the performance is not what I was expecting.
The GUI is barely usable; the dashboard does not even fully load.
Reporting does not load any graphs, and accessing SMB shares takes a long time.

Overall the system is painfully slow. Migrating back is currently not an option, so I want to try and fix this, but I'm out of ideas.

Currently I have ~15 VMs running, some on the SSD storage and some on the HDDs; performance is not great.

If I run top, I can see the system is basically idle:

last pid: 46988; load averages: 0.42, 0.54, 0.54 up 0+04:47:25 17:05:15
68 processes: 1 running, 67 sleeping
CPU: 0.2% user, 0.0% nice, 2.8% system, 0.0% interrupt, 96.9% idle
Mem: 824M Active, 10G Inact, 19G Wired, 805M Free
ARC: 16G Total, 8516M MFU, 7780M MRU, 3455K Anon, 164M Header, 391M Other
15G Compressed, 22G Uncompressed, 1.50:1 Ratio
Swap: 10G Total, 10G Free

PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND
243 root 27 52 0 285M 237M kqread 7 4:26 4.13% python3.6
46987 root 1 33 0 17668K 10716K sbwait 0 0:00 0.69% rrdtool
46981 root 1 32 0 17668K 10600K vlruwk 0 0:00 0.68% rrdtool
46985 root 1 32 0 17668K 10600K vlruwk 6 0:00 0.62% rrdtool
46986 root 1 32 0 17668K 10600K vlruwk 5 0:00 0.61% rrdtool
2403 root 4 20 0 6264K 2136K zcw->z 3 0:22 0.29% nfsd
4405 root 11 20 0 1308M 1261M nanslp 0 1:31 0.23% collectd
4096 root 14 52 0 88384K 21688K select 7 0:08 0.21% rrdcached
46833 root 1 20 0 8196K 3940K CPU2 2 0:00 0.06% top
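Side note on reading that top output: several of the rrdtool processes are in the "vlruwk" wait state, which on FreeBSD typically means they are blocked waiting for the vnode recycler (vnlru) to free vnodes, rather than waiting on CPU or disk. Below is a hedged sketch of how to check vnode pressure; the sysctl names are standard FreeBSD, but the 90% example figure and the helper function are illustrative assumptions, not values from this system.

```shell
# On the live system, compare current vs. maximum vnodes:
#   sysctl vfs.numvnodes vfs.maxvnodes
# If numvnodes sits near maxvnodes, processes can stall in "vlruwk".

vnode_pressure() {
    # args: current vnodes, max vnodes; prints integer percent in use
    echo $(( $1 * 100 / $2 ))
}

# Demo with illustrative numbers (not taken from this system):
vnode_pressure 450000 500000   # prints 90
```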

I looked at similar threads and found the solnet-array-test-v2.sh script; I ran it against my boot drive to see whether that might be the issue:

Enter grep match pattern (e.g. ST150176): SDSSDP064G

Selected disks: ada3
<SanDisk SDSSDP064G 3.1.0> at scbus6 target 0 lun 0 (pass18,ada3)
Is this correct? (y/N): y
Performing initial serial array read (baseline speeds)
Mon Mar 18 17:07:10 CET 2019
Mon Mar 18 17:09:35 CET 2019
Completed: initial serial array read (baseline speeds)

Array's average speed is 240.44 MB/sec per disk

Disk Disk Size MB/sec %ofAvg
------- ---------- ------ ------
ada3 60103MB 240 100

Performing initial parallel array read
Mon Mar 18 17:09:37 CET 2019
The disk ada3 appears to be 60103 MB.
Disk is reading at about 240 MB/sec
This suggests that this pass may take around 4 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
------- ---------- ------ ------ ------
ada3 60103MB 240 240 100

Awaiting completion: initial parallel array read
Mon Mar 18 17:13:49 CET 2019
Completed: initial parallel array read

Disk's average time is 250 seconds per disk

Disk Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
ada3 63023063040 250 100

Performing initial parallel seek-stress array read
Mon Mar 18 17:13:49 CET 2019
The disk ada3 appears to be 60103 MB.
Disk is reading at about 92 MB/sec
This suggests that this pass may take around 11 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
------- ---------- ------ ------ ------
ada3 60103MB 240 95 40

Awaiting completion: initial parallel seek-stress array read
^C
NAS3#
I don't know if that is normal behaviour, but I canceled the test after 50 minutes and it still had not completed.
The speeds look OK in my opinion, though.

I'm out of ideas; I don't think another reinstall would make a difference.
Does anyone have similar issues, or another idea where to look for clues? I'm normally more of a Windows guy when it comes to troubleshooting.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Mass Storage 14x4TB Raidz2
Pool layout? Do you have all 14 drives in a single vdev? If so, that would be the problem.
Overall the system is painfully slow. Migrating back is currently not an option, so I want to try and fix this, but I'm out of ideas.
If you have a bad pool layout, you will need to get the data out of the pool to reconfigure it.
Also, how full is your pool? ZFS performance tanks as the pool approaches capacity.

Please share the output of zpool list and zpool status.
LSI HBA IT Mode
We should also look at the firmware version. Please share the output of sas2flash -list
 

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
Oh, I did not know that. Somehow that was not covered in the tutorials I read, or I missed it.

Is the GUI performance linked to pool performance?

So split into 2 vdevs, essentially; that will be tough to manage. Can I remove empty drives from a vdev? Then it could be possible, since I'm only at 60% capacity and probably have half the files mirrored offsite. Edit: just remembered that's not possible :/

NAS3# zpool status
pool: ESX
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0 in 0 days 00:07:09 with 0 errors on Sun Feb 10 09:07:09 2019
config:

NAME STATE READ WRITE CKSUM
ESX ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
gptid/6f7bec82-d6f7-11e8-a20b-0cc47a7794c5 ONLINE 0 0 0
gptid/6fd674d4-d6f7-11e8-a20b-0cc47a7794c5 ONLINE 0 0 0
gptid/70357d79-d6f7-11e8-a20b-0cc47a7794c5 ONLINE 0 0 0

errors: No known data errors

pool: NAS3
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0 in 0 days 12:15:27 with 0 errors on Sun Feb 10 21:15:31 2019
config:

NAME STATE READ WRITE CKSUM
NAS3 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
gptid/0e7c5de4-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/0f1f5694-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
da13 ONLINE 0 0 0
gptid/1094950c-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/114ce502-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/1201beda-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/12b00346-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/b709b960-8e7b-11e8-bc88-0cc47a7794c5 ONLINE 0 0 0
gptid/f6523200-22d5-11e8-9419-0cc47a7794c5 ONLINE 0 0 0
gptid/14d88e43-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/15aea15a-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/165cbf13-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/18c33847-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0
gptid/1bc9dc82-7556-11e7-96a5-0cc47a7794c5 ONLINE 0 0 0

errors: No known data errors

pool: freenas-boot
state: ONLINE
scan: scrub repaired 0 in 0 days 00:01:46 with 0 errors on Sat Mar 16 03:46:46 2019
config:

NAME STATE READ WRITE CKSUM
freenas-boot ONLINE 0 0 0
ada3p2 ONLINE 0 0 0

errors: No known data errors

NAS3# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
ESX 664G 171G 493G - - 49% 25% 1.00x ONLINE /mnt
NAS3 50.5T 31.7T 18.8T - - 31% 62% 1.00x ONLINE /mnt
freenas-boot 58.5G 3.05G 55.5G - - - 5% 1.00x ONLINE -

NAS3# sas2flash -list
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

Adapter Selected is a LSI SAS: SAS2008(B2)

Controller Number : 0
Controller : SAS2008(B2)
PCI Address : 00:01:00:00
SAS Address : 500605b-0-03e2-fdb0
NVDATA Version (Default) : 14.01.00.08
NVDATA Version (Persistent) : 14.01.00.08
Firmware Product ID : 0x2213 (IT)
Firmware Version : 20.00.04.00
NVDATA Vendor : LSI
NVDATA Product ID : SAS9211-8i
BIOS Version : N/A
UEFI BSD Version : N/A
FCODE Version : N/A
Board Name : SAS9211-8i
Board Assembly : N/A
Board Tracer Number : N/A

Finished Processing Commands Successfully.
Exiting SAS2Flash.
NAS3#

Thanks for your advice :)
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Oh, I did not know that. Somehow that was not covered in the tutorials I read, or I missed it.

Is the GUI performance linked to pool performance?
I am not sure exactly where it is stated, but the general guidance is no more than 10 drives in a single RAIDz2 vdev, and no more than 11 in a RAIDz3 vdev. According to your zpool status output, you have a RAIDz3 with 14 drives in it. It also looks like you must have replaced one of them at the command line instead of through the GUI, because it is listed as da13 instead of by gptid:
Code:
  pool: NAS3
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 12:15:27 with 0 errors on Sun Feb 10 21:15:31 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS3                                            ONLINE       0     0     0
          raidz3-0                                      ONLINE       0     0     0
            gptid/0e7c5de4-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/0f1f5694-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            da13                                        ONLINE       0     0     0
            gptid/1094950c-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/114ce502-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/1201beda-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/12b00346-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/b709b960-8e7b-11e8-bc88-0cc47a7794c5  ONLINE       0     0     0
            gptid/f6523200-22d5-11e8-9419-0cc47a7794c5  ONLINE       0     0     0
            gptid/14d88e43-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/15aea15a-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/165cbf13-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/18c33847-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0
            gptid/1bc9dc82-7556-11e7-96a5-0cc47a7794c5  ONLINE       0     0     0

errors: No known data errors

That is going to limit pool performance, and the system dataset is stored in the pool by default, which might impact GUI performance. The reason for the guidance is to preserve performance. I don't think there is a hard limit, and I have actually seen a pool with 45 drives in a vdev, but it is detrimental in many ways. Smaller vdevs perform better and give a smaller error domain. If you are not able to move your data out of the system to reconfigure the pool, this will not cause a catastrophic failure, but having two vdevs instead of one would make the system perform better. I can't say with certainty how that would impact the new GUI that was introduced in version 11.2.
So split into 2 vdevs, essentially; that will be tough to manage. Can I remove empty drives from a vdev?
There are no empty drives in a vdev. All drives in all vdevs of a pool are always in use unless they are hot spares. Since you are in a RAIDz3 configuration, you could remove two drives and still have one drive of redundancy, but I am not advocating that, and it wouldn't likely help you much.
Based on the zpool list output
Code:
NAS3#    zpool list
NAME           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ESX            664G   171G   493G        -         -    49%    25%  1.00x  ONLINE  /mnt
NAS3          50.5T  31.7T  18.8T        -         -    31%    62%  1.00x  ONLINE  /mnt
freenas-boot  58.5G  3.05G  55.5G        -         -      -     5%  1.00x  ONLINE  -
You already have almost 32TB of data, and the pool is at 62% capacity. Just keep an eye on that; between 80 and 90% capacity is where it really starts to slow down. How many drive bays do you have? It might be that you can shuffle some things around.
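A quick way to keep an eye on that is to read the capacity column straight out of zpool list. This is a sketch, not part of the original advice: the 80% threshold is the rule of thumb from this thread, not a hard ZFS limit, and the check_cap helper is a name I made up for illustration.

```shell
# Flag any pool at or above 80% capacity, the range where ZFS write
# performance starts to degrade. On a live system you would pipe in
# real data:
#   zpool list -H -o name,capacity | check_cap
check_cap() {
    awk -F'\t' '{ cap = $2; sub(/%/, "", cap);
        if (cap + 0 >= 80) print $1 " is at " cap "% - performance will degrade";
        else               print $1 " is at " cap "% - ok" }'
}

# Demo with the zpool list figures from this thread:
printf 'ESX\t25%%\nNAS3\t62%%\nfreenas-boot\t5%%\n' | check_cap
```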
NAS3# sas2flash -list
This is almost as big a problem as the pool layout. Your SAS controller is running old firmware:
Code:
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

        Adapter Selected is a LSI SAS: SAS2008(B2)

        Controller Number              : 0
        Controller                     : SAS2008(B2)
        PCI Address                    : 00:01:00:00
        SAS Address                    : 500605b-0-03e2-fdb0
        NVDATA Version (Default)       : 14.01.00.08
        NVDATA Version (Persistent)    : 14.01.00.08
        Firmware Product ID            : 0x2213 (IT)
        Firmware Version               : 20.00.04.00
        NVDATA Vendor                  : LSI
        NVDATA Product ID              : SAS9211-8i
        BIOS Version                   : N/A
        UEFI BSD Version               : N/A
        FCODE Version                  : N/A
        Board Name                     : SAS9211-8i
        Board Assembly                 : N/A
        Board Tracer Number            : N/A

        Finished Processing Commands Successfully.
        Exiting SAS2Flash.
On the line that says Firmware Version : 20.00.04.00, it should be 20.00.07.00; a lot of corrections were made between sub-version 04 and sub-version 07. That definitely needs to be updated, but the sas2flash utility is built into FreeNAS, so it is just a matter of downloading the new firmware.
You don't need to crossflash the card, but this document should help you get the latest version of the firmware loaded:

Detailed newcomers' guide to crossflashing LSI 9211/9300/9311 HBA and variants
https://www.ixsystems.com/community...shing-lsi-9211-9300-9311-hba-and-variants.54/
 

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
OK, updating the firmware should not be an issue. I've been running this card for a couple of years on that firmware without problems, but you are right that it could help. I'll do that next.

I have now stopped all VMs stored on the system, and there should not be anything else accessing it, but now I can't even reach the GUI or SSH anymore; it's almost completely locked up.
SSH is giving me "Network error: Connection refused", so it is still somewhat alive.
iSCSI is still fine and performs nicely for some reason, but all the other services seem to have crashed :/

I don't have any bays left. I could free up all the drives, but that would take some time, and I'm not sure it would ultimately fix my issue.
 

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
I went back to my old install on USB, since the new one did not make a difference.
The LSI is now flashed to the P20 firmware.
While on the console I saw a lot of errors containing File "middlewared/client/client.py".
I found these in /var/log/messages:

Mar 18 16:40:11 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:40:11 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:40:23 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:40:23 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:40:41 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:40:41 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:41:37 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:41:37 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:41:40 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:41:40 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:41:45 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:41:45 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 316, in __init__']
Mar 18 16:41:45 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:41:45 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 316, in __init__']
Mar 18 16:42:13 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:42:13 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 316, in __init__']
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:23 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:25 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:25 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 316, in __init__']
Mar 18 16:43:25 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:25 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:25 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:25 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 316, in __init__']
Mar 18 16:43:26 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:26 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 316, in __init__']
Mar 18 16:43:35 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:35 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Mar 18 16:43:39 NAS3.it.local uwsgi: [sentry.errors.uncaught:702] ['timeout: timed out', ' File "django/core/handlers/exception.py", line 42, in inner', ' File "django/core/handlers/base.py", line 244, in _legacy_get_response', ' File "freenasUI/freeadmin/middleware.py", line 296, in process_request', ' File "freenasUI/middleware/auth.py", line 8, in authenticate', ' File "freenasUI/middleware/client.py", line 20, in __enter__', ' File "middlewared/client/client.py", line 320, in __init__', ' File "middlewared/client/client.py", line 313, in __init__', ' File "middlewared/client/client.py", line 170, in connect', ' File "ws4py/client/__init__.py", line 215, in connect']
I checked /var/log/middlewared.log too, but there are only warnings.
The system is fine for about 3-5 minutes after a reboot; after that, it starts slowing down fast.

I think I have to give up on FreeNAS and try to migrate back to Unraid; I have no clue why the system seems to be killing itself.
Or are there any other ideas out there?
 

Attachments

  • middlewared.log.txt (9 MB)

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
Try turning off AD monitoring and setting 'aio write size = 0', 'aio read size = 0' as auxiliary parameters on your shares.
I made the changes and rebooted; same issue. Performance is OK for about 5 minutes. Now, 3 hours later, I'm back to unusable; logging in via SSH takes about 30 seconds.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
I think i have to give up on FreeNAS and try to migrate back to Unraid i have no clue why the system seems to be killing itself.

I've never used unraid, but looking at the comparison chart: File System: FreeNAS - OpenZFS; unraid - XFS (default), BTRFS. How did you move your HDDs from one to the other?
 

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
I've never used unraid, but looking at the comparison chart: File System: FreeNAS - OpenZFS; unraid - XFS (default), BTRFS. How did you move your HDDs from one to the other?

No, two old Unraid systems were consolidated onto this FreeNAS system with new HDDs; the files were all copied over.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
No, two old Unraid systems were consolidated onto this FreeNAS system with new HDDs; the files were all copied over.

Interesting. All comments above regarding pool layout/HBA firmware could impact pool performance, but shouldn't affect how fast your GUI loads.

i'm running Freenas now for more than half a year in preparation to make the migration away from Unraid.

Is your server the same one you were using before, or did you do a fresh install? What version are you running?

32GB ECC

I have 2 pools:

Mass Storage 14x4TB Raidz2
SSD Pool 3x240GB Raidz1

Currently i have ~15 VMs running

Maybe the performance issue is related to the amount of memory available. With FreeNAS + a 56TB volume + 15 VMs, you might be swapping heavily. Have you checked that?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
When you say 15 VMs, do you mean VMs running on FreeNAS, or FreeNAS just hosting the virtual disks over NFS/SMB?
 

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
Is your server the same one you were using before, or did you do a fresh install? What version are you running?
New server, fresh install with 11.2; now I'm running the latest, 11.2 U2.1.

Maybe the performance issue is related to the amount of memory available. With FreeNAS + a 56TB volume + 15 VMs, you might be swapping heavily. Have you checked that?

Memory is fine, and the VMs are running on an HP server; there is a spoiler in my first post showing top statistics.

When you say 15 VMs, do you mean VMs running on FreeNAS, or FreeNAS just hosting the virtual disks over NFS/SMB?

There is only one Docker container running on FreeNAS; the rest is on an HP server. The storage is connected over iSCSI for the SSDs and over NFS for the HDD storage.
 

seanm

Guru
Joined
Jun 11, 2018
Messages
570
'Using more than 12 disks per vdev is not recommended. The recommended number of disks per vdev is between 3 and 9. With more disks, use multiple vdevs.'

It's probably a failure of the FreeNAS UI if making this mistake is so easy. Is there no big screaming warning?
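To put rough numbers on the recommendation the post above quotes: a single 14-wide RAIDZ2 gives more raw space, while two 7-wide RAIDZ2 vdevs cost one extra pair of parity disks but roughly double random IOPS, since each vdev performs like a single disk for random I/O. A quick back-of-the-envelope sketch with this thread's 4TB drives (raw data capacity, before ZFS overhead):

```shell
# raidz2 spends 2 disks per vdev on parity
disk_tb=4
wide=$(( (14 - 2) * disk_tb ))        # one 14-wide raidz2
split=$(( 2 * (7 - 2) * disk_tb ))    # two 7-wide raidz2 vdevs
echo "14-wide raidz2: ${wide} TB; 2x7-wide raidz2: ${split} TB"
# → 14-wide raidz2: 48 TB; 2x7-wide raidz2: 40 TB
```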
 

Manini

Dabbler
Joined
Jul 30, 2017
Messages
10
It's probably a failure of the FreeNAS UI if making this mistake is so easy. Is there no big screaming warning?
The vdev was already created in the old GUI, and I can't remember being warned (otherwise I would probably not have done this).
But again, the question is: is a big vdev such an issue that the system becomes completely unusable, to the point where the sshd service stops responding? Because that would have to be a design flaw then.

I'm trying now to get the data off, but that is nearly impossible; I have to reboot every 30 minutes and still have 15TB left to transfer :(
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
The vdev was already created in the old GUI, and I can't remember being warned (otherwise I would probably not have done this).
But again, the question is: is a big vdev such an issue that the system becomes completely unusable, to the point where the sshd service stops responding? Because that would have to be a design flaw then.

I'm trying now to get the data off, but that is nearly impossible; I have to reboot every 30 minutes and still have 15TB left to transfer :(
Your problem is not the size of your vdev. The performance difference is not huge; mostly it's a problem during rebuilds. You've got something else going on.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Install 11.1 on a different boot drive and import the pool? 11.2 isn't really stable yet.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Install 11.1 on a different boot drive and import the pool? 11.2 isn't really stable yet.
It's super stable for me: zero unscheduled downtime. But lots of people seem to be finding issues with different setups.
 