FN11 -- swap_pager_getswapspace(#): failed.

Status
Not open for further replies.

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
I logged into the FN11 webui to create a jail today, and noticed a *lot* of these entries.

I haven't seen them before, and have made no hardware changes in a couple of years.

I've attached a picture of the graph of my swap space as well, but it doesn't look like it was full unless I'm misreading.

Does anyone have any ideas?

Code:
Jul 10 00:00:00 freenas newsyslog[91109]: logfile turned over due to size>200K
Jul 10 00:00:00 freenas syslog-ng[1613]: Configuration reload request received, reloading configuration;
Jul 10 00:20:27 freenas swap_pager_getswapspace(15): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(12): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(7): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(15): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(15): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(12): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(10): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(8): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(7): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(7): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(13): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(4): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(6): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(3): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(10): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(9): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(2): failed
Jul 10 00:20:27 freenas swap_pager_getswapspace(5): failed
Jul 10 00:20:30 freenas swap_pager_getswapspace(16): failed
Jul 10 00:20:33 freenas swap_pager: out of swap space
Jul 10 00:20:33 freenas swap_pager_getswapspace(12): failed
Jul 10 00:20:34 freenas swap_pager_getswapspace(16): failed
Jul 10 00:21:02 freenas swap_pager: out of swap space
Jul 10 00:21:02 freenas swap_pager_getswapspace(3): failed
Jul 10 00:21:03 freenas swap_pager_getswapspace(16): failed
Jul 11 00:00:01 freenas syslog-ng[1613]: Configuration reload request received, reloading configuration;
Jul 12 00:00:01 freenas syslog-ng[1613]: Configuration reload request received, reloading configuration;
Jul 12 21:56:47 freenas collectd[3442]: aggregation plugin: Unable to read the current rate of "freenas.local/cpu-7/cpu-interrupt".
Jul 12 21:56:47 freenas collectd[3442]: utils_vl_lookup: The user object callback failed with status 2.
Jul 13 00:00:00 freenas syslog-ng[1613]: Configuration reload request received, reloading configuration;
Jul 13 09:01:42 freenas /autosnap.py: [tools.autosnap:607] Failed to destroy snapshot 'tank/jails/owncloud@auto-20170413.0900-3m': could not find any snapshots to destroy; check snapshot names.
Jul 13 09:01:42 freenas /autosnap.py: [tools.autosnap:607] Failed to destroy snapshot 'tank/jails/.warden-template-VirtualBox-4.3.12@auto-20170413.0900-3m': could not find any snapshots to destroy; check snapshot names.



Bonus question: should I be worried about the last two lines in the output showing failures to destroy snapshots? They're both from April 13, back when I was on FN 9.3.
swap.JPG
swap-zoomout.JPG
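For reference, the swap state can also be checked directly from the FreeNAS shell rather than from the graph; a small sketch, assuming the standard FreeBSD swapinfo(8) and top(1) behave as documented:

Code:
# per-device swap usage with human-readable sizes
swapinfo -h

# one batch-mode snapshot of top; the header lines show the Mem, ARC and Swap totals
top -b -d 1 | head -n 12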
 


SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
You have something wrong with your system. Post your full hardware specs: what jails or VMs do you have, and what workflows are you running?

Sent from my Nexus 5X using Tapatalk
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
System:

MOBO/CPU: Asrock C2750D4I
HDD: 6x 2TB WD20EFRX RED RAIDZ2
RAM: 2x 8GB Kingston ECC DDR3
CASE: Fractal Node 304
PSU: 300W SILVERSTONE ST30SF

Jails: Multiple from FN9.3
Transmission (plugin), Plex, SickRage (plugin), CouchPotato (plugin), ownCloud (standard), and a jumpbox (standard: endpoint for SSH access, SOCKS5 proxy, and sslh for ownCloud/SSH multiplexing; I used to be in a corporate environment where I had to wrap SSH and forward it to port 443 in order to tunnel through the ISA firewall).

Jail: FN11
seafile (created after I posted this thread)

VM:
1x Ubuntu 16.04 LTS headless: 7 vCPUs, 15360 MiB memory (mostly idle; I haven't used this VM much since migrating the ownCloud data to it about a week ago).
I'm not sure how to check its current resource usage if that may be an issue, but I can shut down the VM if needed.

Workflow:
Mostly a seedbox/transcoding platform for Plex.
ownCloud (to become Seafile) for sync between multiple platforms (Windows 7 PCs and Android access).

Warning light in UI is flashing yellow:
OK: July 9, 2017, 2:47 p.m. - There is a new update available! Apply it in System -> Update tab.
WARNING: July 9, 2017, 2:47 p.m. - New feature flags are available for volume tank. Refer to the "Upgrading a ZFS Pool" section of the User Guide for instructions.


As you know from the other thread https://forums.freenas.org/index.php?posts/393074/ I'm working to see if I can get rid of all of my plugins and 9.3 jails and migrate to FN11 std jails.
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
htop reports the following on the Ubuntu VM:

CPU: ~1% of 1 cpu
mem: 1.3G of 14.7G
swap: 22M of 15G
Load avg: 0.32 0.24 0.14
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
[quote of eldo's full specs/jails/VM/workflow post above]
OMG, what are you doing! Stop, stop, stop. You cannot run that many jails plus a VM that has 16GB of memory allocated to it.
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
OMG, what are you doing! Stop, stop, stop. You cannot run that many jails plus a VM that has 16GB of memory allocated to it.
Huh, I guess this is my first Danger Will Robinson moment.

I'd always understood that jail and VM memory was dynamically allocated based on load, and since the VM was just sitting idle, I assumed the memory wasn't actually being committed to it.

What would be a good alternative? I've dropped the VM down to 4 vCPUs and 4GB of RAM.
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Just to give you some perspective, this is what your memory utilization and swap should look like. This is my system; notice that swap is zero and has never been more than zero. If you are ever using swap, something is probably wrong. The alternative is to not run as many jails and VMs, or to get better hardware if you want to run this much stuff. You have a simple overuse problem.
Screenshot from 2017-07-13 11-52-39.png
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
Just to give you some perspective, this is what your memory utilization and swap should look like. This is my system; notice that swap is zero and has never been more than zero. If you are ever using swap, something is probably wrong. The alternative is to not run as many jails and VMs, or to get better hardware if you want to run this much stuff. You have a simple overuse problem.

Ah, gotcha.
Unfortunately my reporting doesn't go back very far, but swap usage was clearly next to nothing prior to the VM.

After cutting the RAM in the VM, I'm sitting at < 600M of used swap. Ideally I'll be able to finagle Seafile into a jail and then I can get rid of the VM completely, ending up with just two jails: Seafile and everything else.
swap.JPG

Thanks for the help; I've clearly got a *lot* more to learn about FreeBSD.

ETA:
How would I go about determining what is currently using the swap since it's supposed to be unused?
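(A sketch of what might show it, using the standard FreeBSD tools; I'm not certain FreeNAS 11's top has the per-process swap column, so treat that part as a guess:)

Code:
# totals per swap device
swapinfo -h

# sort processes by resident size; on newer FreeBSD top builds the 'w' key
# (or the -w flag) adds a per-process SWAP column, but I'm not sure this
# build has it
top -o res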

Looks like it was exactly how it should have been after the upgrade to 11, and before I started playing with the first bhyve VM:
swap1.JPG
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
It's probably still the VM. I bet it will stay in swap until you reboot, or maybe until you reboot the VM? I don't know enough about bhyve to say for sure.
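(If a full host reboot is too disruptive, something like the following might clear it, assuming there's enough free RAM to absorb the swapped-out pages; I haven't tried it on FreeNAS, so treat it as a sketch rather than a recommendation:)

Code:
# see how much is currently swapped out
swapinfo -h

# swapoff forces a device's pages back into RAM, swapon re-enables it;
# note that -a only covers /etc/fstab entries, and FreeNAS manages its
# swap devices itself, so you may need to name the device(s) that
# swapinfo lists instead of using -a
swapoff -a && swapon -a

swapinfo -h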
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99
The dropoff today covers the time the VM was powered off, as well as the period after it was powered back on with the new resource allocation.

Thanks again for your help. I was beginning to think I had some bad hardware juju coming my way, and I didn't want to deal with anything beyond the pending doom of my Avoton system, considering both bugs that plague my board.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
If you are ever using swap something is probably wrong.
I'd tend to agree, but my FN11-U1 box (dual Xeon X5650, 48 GB RAM, one small VM to play with RancherOS, one plugin jail with Plex, which is disabled) is using significant amounts of swap (i.e., it's been close to maxing out the 4 GB of swap it has, and is currently using just under 2 GB), and being on the receiving end of a replication is bringing the machine to its knees. There's a lot wrong with what @eldo was/is doing, to be sure, but the fact that his system is using swap doesn't necessarily prove it.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I think bhyve is messing with stuff and should be investigated more.

Sent from my Nexus 5X using Tapatalk
 

eldo

Explorer
Joined
Dec 18, 2014
Messages
99

Out of curiosity, what changes did you make around July 7?
danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Out of curiosity, what changes did you make around July 7?
I'm not certain, but I think that's when I started using that box as a replication target for my main box.
 

pomah

Explorer
Joined
Jul 30, 2012
Messages
55
Hi, any suggestions as to what might be wrong?

I am running 3 services: smb, ssh and ups.

2 plugins: transmission and syncthing

That is it: 2 users, used for streaming media and storing files in a home environment.

jNh8Onk.png
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
2 plugins: transmission and syncthing

Doubling the RAM would be a good idea. The 8GB RAM minimum applies to situations where no plugins are used.
 

sam09

Cadet
Joined
Mar 28, 2017
Messages
8
I'm having the same problem with swap utilization. I'm running the latest stable FreeNAS. My system specs (roughly, I'm out of town right now):

HPE ProLiant ML10 Gen9
2-core Pentium G4400
16GB memory (ECC)
3x 3TB HDD (mirror), 120GB SSD, 1TB USB external drive

No jails, no plugins. The system is used mostly as a network drive with very low utilization, using the SMB and NFS services. There is also a Rancher VM running a basic web server and Nextcloud, again under very low usage. I used to run the VM with 6GB of allocated memory continuously for a couple of months without a problem, until I started getting these failed swap space allocation messages. I disabled some non-critical containers and lowered the VM's allocated memory to 4GB about a week ago, which dropped swap utilization to about 2 out of 4GB (almost exactly a 2GB drop) and solved the problem. However, swap utilization then started to creep up again ever so slowly, and this morning I got a message about failed swap space allocations. As I'm out of town I can't check the exact trends right now.

I was going to order some more memory this weekend (an upgrade from 16GB to 40GB) to get some machine learning models running on a VM, but this problem has me concerned. I read about a bug in SMB that can lead to increasing memory usage under certain conditions, but being far from a systems engineer, I don't know whether the two are related or how I could investigate further. The SMB memory bug should at least be fixed in the next release, but I'm a bit hesitant to make a significant investment in extra memory before I have an idea of what's going on. I thought I would check the memory usage of the running processes once I get back and have access to my system again, but I'm not exactly sure what it should look like, as I've had no compelling reason to familiarize myself with the system's internals until now (probably a mistake). So does anyone have any other suggestions of things I should check once I get back?
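For what it's worth, a rough checklist for the FreeNAS shell (assuming the standard FreeBSD tools and ZFS sysctls; corrections welcome):

Code:
# how full each swap device is
swapinfo -h

# the 15 biggest processes by resident memory, one batch snapshot
top -b -d 1 -o res 15

# current ZFS ARC size in bytes (the ARC normally grows to use most free RAM)
sysctl kstat.zfs.misc.arcstats.size

# paging / swapping counters since boot
vmstat -s | head -n 25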
 

pomah

Explorer
Joined
Jul 30, 2012
Messages
55
Doubling the RAM would be a good idea. The 8GB RAM minimum applies to situations where no plugins are used.

Well, let's see how this goes. So far it has not stabilized; it still keeps going up... Can someone explain why the memory usage does that? Shouldn't it just load everything and be happy? Why does it ramp up, down, and up again?
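(My guess is that the ups and downs are the ZFS ARC cache growing into free memory and shrinking again when something else needs it, rather than anything being loaded once and kept; a quick check, assuming I have the sysctl names right, would be whether the graph tracks the ARC size:)

Code:
# current ARC size and its configured ceiling, in bytes
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max

# swap should stay near zero even while the ARC grows
swapinfo -h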

vn7Uh4U.png
 