System suddenly cannot ping its own IPv4 address

Cheese_Echidna

Dabbler
Joined
Jan 16, 2022
Messages
13
Hi,
Sometime in the last 24 to 72 hours my server suddenly stopped being able to ping itself; I haven't changed anything in that time.
I noticed because a notification that should have come through never arrived, and after some investigating I found that the bash scripts I had written to send notifications had also stopped working.
From the shell I tried to curl my ntfy instance, but the request timed out. I don't believe this is a problem with ntfy itself.

Here's what I know:
- If I ping the local IP of the server while on the same network, it succeeds.
- If I ping the local IP of another device on the network, it succeeds.
- If, from any device, I do an nslookup/dig of the server's domain name (hosted by Cloudflare and dynamically updated), it returns the correct IPv4 address of my server.
- If, from a remote device (not on the same network), I curl the ntfy instance or ping any of the associated domains, it succeeds.
- If, from the server itself, I ping its domain name, it fails.
- If, from the server itself, I ping its public IP address, it fails.
- If I run a traceroute from the server, it shows no hops at all.
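
Roughly the commands I was running, in case it helps (the hostnames and addresses below are placeholders, not my real ones):

Code:
# from another device on the LAN
ping 192.168.0.10                       # server's local IP - succeeds
# from any device
dig myserver.example.com                # returns the correct public IPv4
# from a device on another network
curl https://ntfy.myserver.example.com  # succeeds
# from the server itself
ping myserver.example.com               # fails
ping 203.0.113.45                       # the public IPv4 - fails
traceroute 203.0.113.45                 # shows no hops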

Because of this, pretty much everything has stopped working: all of my apps point at each other by domain name in their API fields instead of by local IP, and now none of those connections are going through.
Does anyone know what might be happening?

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Check the routing table with the netstat -r command. Does it look OK?
Are you running out of space? df -h
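
Something along these lines; the -n flag just skips reverse DNS lookups so the table prints numeric addresses:

Code:
netstat -rn   # kernel routing table, numeric addresses
df -h         # per-filesystem usage, human-readable sizes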

Cheese_Echidna

Dabbler
Joined
Jan 16, 2022
Messages
13
Code:
root@NAS[~]# netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         192.168.0.1     0.0.0.0         UG        0 0          0 enp3s0
172.16.0.0      0.0.0.0         255.255.0.0     U         0 0          0 kube-bridge
192.168.0.0     0.0.0.0         255.255.255.0   U         0 0          0 enp3s0
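
For completeness, I believe the iproute2 equivalent on a Linux box would be something like this (the public address is a placeholder):

Code:
ip route show               # same routing table via iproute2
ip route get 203.0.113.45   # which route/interface would be used for that destination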


For df -h, the highest Use% was my main pool at 72%.
I should mention that I recently got a notification saying my drive was 85% full, so I went through and deleted a bunch of files, which brought it down to 72%. Hitting 85% did happen before the problem started, but the problem also started before I deleted the files.
I can't imagine how that could have caused it, though.

Thanks for your help.

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
I think you posted in the wrong forum. enp3s0 and kube-bridge indicate that you're running the Linux-based SCALE. You're in the legacy FreeNAS forum, which covers the FreeBSD-based system.

Can you run uname -a to confirm?
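
The first word of the output tells you which side you're on; it should look roughly like one of these (hostname and version strings are just illustrative):

Code:
uname -a
# Linux-based SCALE starts with "Linux", e.g.:
#   Linux NAS 5.x.y ... x86_64 GNU/Linux
# FreeBSD-based CORE/FreeNAS starts with "FreeBSD", e.g.:
#   FreeBSD NAS 13.x-RELEASE ... amd64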

Cheese_Echidna

Dabbler
Joined
Jan 16, 2022
Messages
13
I have reposted on the correct forum: