Problem: memory usage full and FreeNAS crashes

Status: Not open for further replies.
Hello,

I want to use FreeNAS as an iSCSI target for Veeam backup copies, but memory usage is completely full after 3 backups. When the memory fills up, FreeNAS crashes; my disks are now in degraded mode :( and the repair fills the memory again...
I have not installed the driver for the RAID card; could this be the problem?

To install the driver, is it enough to copy mrsas.ko into /boot/kernel and reboot FreeNAS?

FreeNAS 9.10 stable
SuperMicro hardware
16 GB RAM
RAID card: MegaRAID SAS 9341-8i
6 x 3 TB disks in JBOD
RAIDZ1 with 5 disks + 1 spare


The choice of machine was not mine, sorry...

(Attachment: memory.PNG)



Thanks for your replies!
 

Sakuru

Guru
It's normal for FreeNAS to use all available memory, that's the ARC.

DrKK

FreeNAS Generalissimo
Sakuru said:
It's normal for FreeNAS to use all available memory, that's the ARC.

That's not exactly correct. It's normal for FreeNAS to use all available memory for ARC, while keeping a buffer of about 1GB free. Homeboy here was down to 150MB of free RAM, which is bad. That should not happen. He's got iSCSI running, and a 6x3TB pool. I don't think 16GB is nearly enough memory under those conditions.
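For reference, the ARC size and free memory can be read from the FreeNAS shell. The sketch below uses example readings in place of live sysctl output (the OIDs named in the comments are the standard FreeBSD ones) so the unit arithmetic is visible:

```shell
# Sketch of comparing ARC size against free memory. On a live FreeNAS box you
# would read these with `sysctl -n kstat.zfs.misc.arcstats.size` (ARC, bytes)
# and `sysctl -n vm.stats.vm.v_free_count` (free memory, pages).
# The values below are example readings, not real output from this system.
arc_bytes=14500000000   # example ARC reading, roughly 13.5 GiB
free_pages=38400        # example free-page count
page_size=4096          # hw.pagesize on amd64
arc_mib=$(( arc_bytes / 1024 / 1024 ))
free_mib=$(( free_pages * page_size / 1024 / 1024 ))
echo "ARC: ${arc_mib} MiB, free: ${free_mib} MiB"
# prints: ARC: 13828 MiB, free: 150 MiB
```

A free figure around 150 MiB, as in this thread, is exactly the warning sign described above.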

Also, a RAID card is a super, super, super bad idea.

Also, original poster: there are plenty of people here who speak French. You can write a few sentences in French whenever you think it's necessary. ;)
 

I know the RAID card is a bad idea... I'm looking for a way to stabilize the server; I can't replace the RAID card with an HBA. I know that's not good for FreeNAS. How can I solve the problem? On FreeNAS 9.10, can we only use ZFS?

I think I'll recreate the pool to clear the degraded state; the data isn't critical at the moment.

Code:
  pool: STORAGE
state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 227G in 2h19m with 1447 errors on Thu Jun 16 11:22:37 2016
config:

        NAME                                              STATE     READ WRITE CKSUM
        STORAGE                                           DEGRADED     0     0 2.88K
          raidz1-0                                        DEGRADED     0     0 6.43K
            spare-0                                       DEGRADED     0     0   392
              gptid/00d5ce9a-2f08-11e6-8227-0cc47a774d84  DEGRADED     0     0 9.33K  too many errors
              gptid/0322b910-2f08-11e6-8227-0cc47a774d84  ONLINE       0     0     0
            gptid/0144dbca-2f08-11e6-8227-0cc47a774d84    DEGRADED     0     0 9.95K  too many errors
            gptid/01b8edcf-2f08-11e6-8227-0cc47a774d84    DEGRADED     0     0 9.93K  too many errors
            gptid/02289eba-2f08-11e6-8227-0cc47a774d84    DEGRADED     0     0   457  too many errors
            gptid/029f3ce5-2f08-11e6-8227-0cc47a774d84    DEGRADED     0     0 9.71K  too many errors
        spares
          557876966861069494                              INUSE     was /dev/gptid/0322b910-2f08-11e6-8227-0cc47a774d84

errors: 1447 data errors, use '-v' for a list

  pool: freenas-boot
state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        freenas-boot   ONLINE       0     0     0
          mfisyspd6p2  ONLINE       0     0     0

errors: 4 data errors, use '-v' for a list
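As a sketch (not run here), these are the standard `zpool` subcommands that would typically follow the status output above, using the pool name it shows:

```shell
# After restoring or deleting the damaged files, the usual next steps are:
zpool status -v STORAGE   # list the individual files hit by the data errors
zpool clear STORAGE       # reset the pool's error counters
zpool scrub STORAGE       # re-read everything to confirm the pool is clean
```

If a scrub after `zpool clear` comes back with new errors, the underlying hardware problem is still present.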
 

jgreco

Resident Grinch
There's no "stabilize the server" that's guaranteed to work; ZFS thinks you should restore the entire pool from backup, which is not a good sign.

The best process to move forward is:

1) Build a new server with a better design:

1A) RAIDZ1 is bad. RAIDZ1 with a *spare* is crazy; that should just have been RAIDZ2 to start with.

1B) 16GB for a Veeam target is kinda tight.

1C) So, I'd say something with a proper HBA, and then some larger drives if you can (will be faster), and use RAIDZ2.

2) Copy data from old server to new server
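A sketch of the pool layout step 1C describes; the device names (da0..da5) and pool name are placeholders:

```shell
# Six-disk RAIDZ2, no hot spare: any two disks can fail without data loss.
# Usable space is roughly (6 - 2) x 3 TB = 12 TB raw, before ZFS overhead.
zpool create STORAGE raidz2 da0 da1 da2 da3 da4 da5
zpool status STORAGE   # should show one raidz2-0 vdev with six members
```

(On FreeNAS you would normally build this through the Volume Manager in the web UI rather than from the shell, so the pool gets the gptid labels seen earlier in the thread.)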
 
I rebuilt my pool with 6 disks in RAIDZ2. I created a 2 TB zvol for my iSCSI LUN. When I start my copy, free memory drops to 100 MB :/

(Attachment: memory2.PNG)
 

Mirfster

Doesn't know what he's talking about
I rebuilt my pool with 6 disks in RAIDZ2. I created a 2 TB zvol for my iSCSI LUN. When I start my copy, free memory drops to 100 MB :/

Agreeing with this:

DrKK said:
He's got iSCSI running, and a 6x3TB pool. I don't think 16GB is nearly enough memory under those conditions.
 

Sakuru

Guru
DrKK said:
That's not exactly correct. It's normal for FreeNAS to use all available memory for ARC, while keeping a buffer of about 1GB free.
Ah, good to know. Thank you!
 

DrKK

FreeNAS Generalissimo
Sakuru said:
Ah, good to know. Thank you!
Yeah, it's really nice how it works. ZFS will consume all available RAM as an "ARC", but, if other processes demand RAM, ZFS will give up the least useful portions that it has commandeered automatically. If everything is working right, and you have enough RAM, usually you will see about 1GB (sometimes a bit less) is always maintained free and clear.
 

jgreco

Resident Grinch
DrKK said:
Yeah, it's really nice how it works. ZFS will consume all available RAM as an "ARC", but, if other processes demand RAM, ZFS will give up the least useful portions that it has commandeered automatically. If everything is working right, and you have enough RAM, usually you will see about 1GB (sometimes a bit less) is always maintained free and clear.

That may be less-true in more recent versions as I believe some work was done to integrate ZFS more closely with the system memory management. Haven't had time to look...
 
Hello,


I'm still stuck on my crash problem. Here are my proposals; I'd like your opinion. I can't change or upgrade the hardware right now.

- Use 2 disks in a mirror for my iSCSI LUN, as a test.
- Limit the bandwidth of my backup so the copy can reach the disks without filling the memory.
- If I upgrade my memory to 32 GB, will that be enough?
- Use NFS instead of iSCSI.
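One more knob worth knowing about while experimenting: the ARC can be capped with the `vfs.zfs.arc_max` loader tunable (set under System -> Tunables in the FreeNAS 9.x UI, reboot required). The 8 GiB value below is only an illustration for a 16 GB machine, not a recommendation:

```shell
# /boot/loader.conf fragment (hypothetical value): cap the ARC at 8 GiB
vfs.zfs.arc_max="8589934592"
```

Capping the ARC trades read-cache performance for headroom, so it treats the symptom rather than the cause.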
 

jgreco

Resident Grinch
Hello,

I'm still stuck on my crash problem. Here are my proposals; I'd like your opinion. I can't change or upgrade the hardware right now.

- Use 2 disks in a mirror for my iSCSI LUN, as a test.
- Limit the bandwidth of my backup so the copy can reach the disks without filling the memory.
- If I upgrade my memory to 32 GB, will that be enough?
- Use NFS instead of iSCSI.

None of these. Your problem is that you are using the MFI driver because your card is a RAID card.

You may be able to crossflash the 9341 into being a 9300, that's supposed to be possible, and this is the thing that would probably fix your issue. Crossflashing changes your card from a RAID card to a simple HBA, which is what FreeNAS needs to function properly.
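For anyone attempting it, the crossflash is usually done with Broadcom's `sas3flash` utility from a DOS or UEFI boot environment. This is only a hedged outline: the firmware image name is a placeholder, the exact steps vary by card, and a failed flash can brick the controller, so follow Broadcom's official instructions and use firmware images from their site.

```shell
# Hypothetical outline of a 9341-8i -> 9300-8i IT-mode crossflash.
sas3flash -listall                  # identify the controller and current firmware
sas3flash -o -e 6                   # advanced mode: erase the existing flash region
sas3flash -o -f SAS9300_8i_IT.bin   # write the IT-mode firmware (placeholder name)
sas3flash -listall                  # confirm the card now reports IT-mode firmware
```

With IT-mode firmware the disks attach via the `mpr` driver as plain HBA devices instead of `mfisyspd` pass-through disks behind the RAID stack.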
 