Can't connect to webserver on Librem mini

I did some updates on my Librem mini and now I can no longer connect to my Nextcloud instance on it. I only noticed much later, because I had been testing locally on the mini: on localhost, everything works.

So… first I thought it’s nextcloud. Then I thought it’s apache. Then I blamed DNS.
(Errors are “not reachable” or “no route to host”, depending on client)
However…
I can ping mini
I can ssh user@mini
I can telnet mini 22
But I can’t telnet mini 80
Then I ran nc -l -p 81 and connected to it with telnet. Again, that was only possible from localhost.
So, that sounds very much like some firewall, but iptables is empty.

Any ideas what else I could check?

Scattergun ideas …

Confirm whether it is being blocked outbound on the source computer or inbound on the destination computer. (Unlikely to be the former but just eliminating it.)

Use netstat -antp to confirm that something is listening on port 80 on the destination computer on the LAN IP address. (It may be listening separately on each IP address or it may be listening on the ‘any’ IP address. If it were listening only on localhost then you would get similar behavior to what you are observing.) I am making the assumption that you are experiencing problems when on the local LAN, rather than trying to access from the internet, which would introduce a whole range of additional points of failure.

If the expected program is listening on port 80, then shut that service down and put something else (e.g. nc) on port 80, to isolate whether connecting to port 80 always fails or only fails with the expected program.
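A rough sketch of that experiment (assuming the web server runs as the apache2 service; adjust the service name if needed):

sudo systemctl stop apache2      # free up port 80 temporarily
sudo nc -l -p 80                 # listen on port 80 instead (root needed for a privileged port)
# then, from another machine on the LAN:  telnet mini 80
sudo systemctl start apache2     # restore the web server afterwards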

Some other firewall? e.g. nftables

Is this IPv4 only? i.e. we can ignore IPv6?

tcpdump may help if you are really stuck.

You can eliminate DNS by connecting via IP address. That won’t necessarily work fully with HTTP (depending on web server configuration) but it will at least confirm that you can connect and communicate on port 80.
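For example, something like this (192.168.0.10 stands in for the mini's LAN address; the Host header is only needed if the web server relies on name-based virtual hosts):

curl -v http://192.168.0.10/
curl -v -H "Host: mini" http://192.168.0.10/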

PS You can’t listen on port 80 (or any privileged port, i.e. any port number below 1024) unless you are root or you have the cap_net_bind_service capability (or who knows, maybe other ways).
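For completeness, granting that capability to a binary looks roughly like this (the path is purely illustrative):

sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/my-server    # hypothetical binary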

As @kieran mentioned, you could use tcpdump or tshark or some similar tool to monitor the incoming network traffic. In this case you should see the TCP SYN packet coming in when you try to connect using telnet. If you see that, then you know the packet arrived, and the next question is what happened to it.

Example of how to use tshark:

sudo tshark -i wlan0

The above command should give one line of output for each packet received or sent on the given interface (wlan0 in my example). In your case you would specify the ethernet interface on the mini; you can use e.g. ip link to see what the interfaces are called.
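To cut down the noise you can also add a capture filter, for example (enp1s0 is just a guess at the interface name; check ip link):

sudo tshark -i enp1s0 -f "tcp port 80 or icmp"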

The following is one way to check which ports have some process listening on them:

ss -tln

For example, if an ssh server is running then that will show something is listening on port 22, and if a web server is running it will show something listening on port 80 and/or port 443.
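If you only care about one port, ss also accepts a filter expression, and -p additionally shows which process owns the socket (run with sudo to see other users' processes), e.g.:

sudo ss -tlnp 'sport = :80'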

@kieran, @Skalman thanks guys, you rock :slight_smile:

Luckily, I’m root though :wink:

Ok, some more data points: nmap seemed to contradict netstat, as the former suggested an IPv4-only Apache, while the latter showed Apache listening on IPv6 only (truncated output):

nmap 192.168.0.10
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http

netstat -antp
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
[…]
tcp6 0 0 :::80 :::* LISTEN 901/apache2

I wasn’t aware of ss, but I interpret its output (the asterisk) to mean it is listening on both IPv4 and IPv6.

ss -tln
State      Recv-Q    Send-Q    Local Address:Port    Peer Address:Port    Process
[...]                                             
LISTEN     0         511       *:80                  *:*

Anyway, locally on the machine, I can connect with both the IPv4 and the IPv6 address.

Basically, as you hinted, I needed to confirm that the connection is blocked inbound.
And indeed (tshark) :

4 3.521678144 192.168.0.16 → 192.168.0.10 TCP 74 39910 → 80 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=2754146134 TSecr=0 WS=128
5 3.521739439 192.168.0.10 → 192.168.0.16 ICMP 102 Destination unreachable (Communication administratively filtered)

Same for IPv6, and same if I create a test server (nc -l -p 81) and telnet to it:

17 5.798550353 192.168.0.16 → 192.168.0.10 TCP 74 42350 → 81 [SYN] Seq=0 Win=64240 Len=0 MSS=1460 SACK_PERM=1 TSval=2755290359 TSecr=0 WS=128
18 5.798709605 192.168.0.10 → 192.168.0.16 ICMP 102 Destination unreachable (Communication administratively filtered)

So, I guess this tells me that incoming connections are indeed blocked, but I don’t know by what :frowning_face:
iptables -L is empty, and nft is not installed (and I didn’t knowingly do anything in that direction). I did upgrade to Byzantium a while back.

I have a feeling it will be something very simple, or embarrassing. Probably both :wink:

OK, but was that tshark run on the server (the mini) or on the client side?

Both can be interesting, so that you see all four of the following data points:

  1. SYN packet sent from the client
  2. SYN packet arriving at the server
  3. response sent from server
  4. response arriving at client

If you only observed points 1 and 4 above, then you do not yet have any evidence that the server was ever involved.

Assuming for a moment that you were running tshark only on the client, one possibility is that the SYN packet never reached the server and the "ICMP 102 Destination unreachable (Communication administratively filtered)" message comes from a switch or router that sits in between. Then, maybe, even though you thought things stopped working due to updates of the mini, the cause could be something else in your network; maybe something in between decided to start blocking port 80. All this is just speculation, of course. :slight_smile:

Anyway, my main point is: if tshark shows a packet sent from the client, don’t assume that it arrives at the server. Instead, verify that by running tshark on the server.

Ah, sorry for being unclear. All that output is from the mini (server).

What exactly does the iptables command show? Does it look like this?

purism@pureos:~$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

I think the above is the default way it looks when nothing is blocked.

yes, that’s what I have :frowning_face:

OK. I wonder who is generating the ICMP “Destination unreachable (Communication administratively filtered)” packets.

Another thing you can try is to look at kernel networking statistics like this:

sudo cat /proc/net/snmp

EDIT: probably better to use the nstat command, simply like this:

sudo nstat

If you do that before and after a port 80 connection attempt, to see which counters changed, then that tells you something about what the kernel did. For example, the “Ip” counter “InReceives” should go up, and probably also some “Icmp” counter corresponding to the ICMP “Destination unreachable (Communication administratively filtered)” message being sent.
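A sketch of that before/after comparison (nstat remembers its previous run and by default reports only the counters that changed since then):

sudo nstat > /dev/null     # on the mini: establish a baseline
# on the client: make the failing connection attempt, e.g.  telnet mini 80
sudo nstat                 # on the mini: show which counters changed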

About the statistics shown by nstat, see: https://www.kernel.org/doc/html/latest/networking/snmp_counter.html

You can also add some iptables rules to get logging of packets you are interested in, for example like this:

Log incoming TCP packets to port 80:

sudo iptables -I INPUT -p tcp --dport 80 -j LOG

Log outgoing ICMP packets:

sudo iptables -I OUTPUT -p icmp -j LOG

Then, the log from sudo journalctl should show those packets, corresponding to what you saw using tshark.
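The LOG target writes to the kernel log, so something like this should show the matching entries (the DPT=80 pattern is just an example of what iptables log lines contain):

sudo journalctl -k --since "10 minutes ago" | grep "DPT=80"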

The returned ICMP “Destination unreachable (Communication administratively filtered)” packets you saw indicate that traffic is blocked, which is a different behavior compared to what happens if there is simply no process listening on the given port. An experiment to check that could be to turn off the ssh service so that no one is listening on port 22, then try to connect to port 22 and look at the corresponding traffic with tshark. Then you should see a SYN packet coming in, but instead of that ICMP “Destination unreachable” packet there should be a TCP RST packet returned.
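A sketch of that experiment (the service may be called ssh or sshd depending on the system):

sudo systemctl stop ssh     # now nothing listens on port 22
# from the client:  telnet mini 22
# expected in tshark on the mini: SYN in, TCP RST back (instead of ICMP unreachable)
sudo systemctl start ssh    # restore the service afterwards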

If you want/need to go deeper, this might be useful:

Well, all of that is quite interesting, but the results I get seem to be not… :upside_down_face:
nstat reports pretty much as expected (IpInReceives 2, IpOutRequests 1, IcmpOutMsgs 1, IcmpOutDestUnreachs 1, plus probably one unrelated Ip in/out).
iptables logging confirms what we had so far.
Shutting down ssh and apache yields the expected TCP RST for 22, but still the unreachable for 80.

While at it (with sshd still disabled), I configured apache to also listen on 22, 81 and 8000.
Well… 80, 81 and 8000 all behave the same. However, apache on 22 works (connecting with curl, as browsers refuse to connect to that port).

So… the picture stays the same for me: it looks like something at the kernel level decides that connections from other machines are OK on 22 but not on any other port.

I tried the set_ftrace_filter approach above and captured 3s (11MB) of kernel traces while connecting. However, I guess the function names from the example are outdated, and searching for keywords (tcp, icmp, ip, net, …) didn’t lead to interesting results. “recvmsg” is the most promising, but… not really.
Also, the trace doesn’t look like it could provide hints on why something happens.

I was wondering, even though unlikely, whether some broken/missing apparmor rules could cause something like this. I found “network inet stream” and inet6 in /etc/apparmor.d/abstractions/apache2-common, which looks good. The only thing odd is that apache is not in the output of apparmor_status. I have no clue if it should be, but I’d expect it if there are rules :man_shrugging:

For nmap you may also need nmap -6 xxxx:... where xxxx:... is the full literal IPv6 address. Otherwise I think nmap might do IPv4 only.

For netstat, yes, it does give confusing and misleading output. It makes you think that it is IPv6 only, but I assure you that it will accept connections for either IPv4 or IPv6 (or not at all! in your scenario).

If nc won’t work on port 80 (with Apache shut down) then wouldn’t we conclude that it’s not an Apache issue?

You’ve looked at ip6tables output?

Checked startup logs for any unusual situations during boot?

Are you able to live boot and then use nc to listen on port 80 to confirm that at least in a live boot configuration port 80 is usable?
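In the live environment, something along these lines would do (192.168.0.10 stands in for whatever address the live system gets):

sudo nc -l -p 80            # on the live-booted mini
# from the client:  telnet 192.168.0.10 80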

Really at this stage I am drawing a blank.

One more thing to check: try the netfilter nft command instead of iptables. The package name is nftables.

https://wiki.nftables.org/wiki-nftables/index.php/Main_differences_with_iptables

I think netfilter/nftables is supposed to be “the new iptables”, providing the same functionality in a better/new way but also providing more functionality; there could be rules that nft knows about that are not visible via iptables.

So, you could try commands like this:

sudo nft list tables

The above shows which tables exist, probably including “table ip filter”, which is likely the most interesting one now. Then, to look at the contents of that table:

sudo nft list table filter

Maybe that will show something that is not visible via iptables.
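There is also a single command that dumps every table, chain and rule in one go, which avoids having to guess table names:

sudo nft list ruleset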

How did it go with this, did you ever solve it?

(I have a similar problem now on a Librem 14, where there seems to be some kind of firewall in place that I cannot figure out. I would like to open some ports but don’t know how to do it.)

Unfortunately, I haven’t gotten back to it so far. Life got in the way, in many ways.
However, still interested in the solution :slight_smile:

I had a similar issue. The problem was that my system uses firewalld to interface with iptables, so to open the port I needed, I had to do something like this:
sudo firewall-cmd --add-port=80/tcp --permanent
This video here was helpful in understanding the issue: https://www.youtube.com/watch?v=a6EUoDNMnHA

I think, though, that firewall-cmd updates the live system; you do not need to restart the firewalld service.
Also, for your test you don’t want to add a permanent rule to your firewall.
So maybe just

firewall-cmd --add-port=8000/tcp

is better.

One last comment:
firewalld communicates over dbus, so if you invoke firewall-cmd without sudo it will ask you for your authentication and update the firewall (nft) immediately.
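If firewalld turns out to be involved, it can also report what it is currently enforcing, which answers the earlier “blocked, but by what?” question:

firewall-cmd --state
firewall-cmd --list-all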

In my case I want the rule to persist between reboots, so that is why I included --permanent. Yes, I see that it is not necessary to restart the service.

Thanks a lot!
A few days before you posted your solution, I noticed firewalld on my system for the first time. It seemed like something worth investigating. Now I finally found the time to try your solution and it worked quite nicely. However, I’d like to add (for others and future me) that --permanent does not take immediate effect, as the man page also mentions. Thus, for me it was:
firewall-cmd --add-port=80/tcp --permanent
firewall-cmd --reload
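To confirm the port is now open in the running configuration, one can check afterwards with:

firewall-cmd --list-ports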
