Backdoor in FireFox in Kali and Parrot OSes (and others)

That’s probably exactly what is happening. And if I were the Azure cloud, why wouldn’t I harvest data from all the client applications in my cloud? Data is money! :stuck_out_tongue:


According to a thread on Codeberg, it’s Mozilla’s push notification service. That service is hosted on Google, hence that Google IP address.

They were able to get rid of that connection by changing the dom.push.connection.enabled setting to false.
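
For anyone who prefers a persistent setting over flipping it in about:config, the same change can be expressed in a user.js file in the Firefox profile directory (this is the standard Firefox pref-file convention; the pref name is the one from the thread):

```javascript
// user.js sketch: disable Firefox's persistent connection to the push
// service at startup, equivalent to changing the pref in about:config.
// Note: this also disables Web Push notifications for all sites.
user_pref("dom.push.connection.enabled", false);
```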


Fascinating thread, I had a good read through it and have a more informed perspective on the LibreWolf team. Thank you for sharing.


Is it rude to ask for a summary of what you learned, for those of us in a hurry?


One aspect not brought up in the Codeberg thread, unless I missed it: hosting on Google helps to perpetuate Google’s power and to increase their revenue. But I’m not sure that, even if millions of end-users disabled the connection, it would have any impact on Google, as Mozilla has already paid for the hosting.


No, but my perspective of them is still primarily limited to just this one Codeberg and the associated GitLab thread, so it may not necessarily reflect them as a whole.

Basically, the main point is that in order to use push notifications in Firefox, they must be served from a server hosted by Google and operated by Mozilla. Here is the relevant article about the feature itself.

The most important quote is this one.

So assuming you want to know my perspective on the LibreWolf team’s stance on keeping this feature enabled, I think it is fine when viewed in the context of their mission to reduce browser fingerprinting. Their argument is that if you do not want your IP address to be stored by Mozilla for 90 days, you should use a VPN.

But, going back to this thread, the issue is that when Firefox initializes for the first time, it connects to these Google-hosted Mozilla services without informing the user. That should at least be changed so that the feature is opt-in, with the connection made only when Web Push is enabled and actually used during normal operation.


I am more worried about Google’s server being involved.

Thinking about the following scenario: someone is using Firefox to connect to something, take this Purism forum as an example. The user logs in on that forum and posts some messages there. That user then reasonably thinks that there is a connection between Firefox on the local computer and Purism’s server where the forum is hosted. However, the push notification service that is apparently on by default means that each time someone replies to that user, a message is sent via Google’s server. So if Google wants to know the IP address of that user, Google could post some replies on the forum to trigger such notifications, monitor them on their server, and use the timing to find out which notifications go to that specific user. Google can do that even though the user chose Firefox and Purism; at no point did the user choose to involve Google.
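
To make that timing-correlation idea concrete, here is a toy sketch (all data and names hypothetical, and this is a deliberately simplified model, not a claim about what any push operator actually does): if the operator logs delivery times per connection, posting probe replies at chosen moments and checking which connection received a message right afterwards singles out the target.

```javascript
// Toy model of timing correlation: which connection "lights up"
// shortly after each probe post?
function correlate(postTimes, deliveries, toleranceMs = 500) {
  // deliveries: { connId: [delivery timestamps...] } as logged server-side
  const scores = {};
  for (const [connId, times] of Object.entries(deliveries)) {
    // count probe posts followed by a delivery within the tolerance window
    scores[connId] = postTimes.filter((t) =>
      times.some((d) => d >= t && d - t <= toleranceMs)
    ).length;
  }
  return scores;
}

// The connection that reacts to every probe stands out:
console.log(correlate([1000, 5000], { a: [1100, 5200], b: [3000] }));
// prints { a: 2, b: 0 }
```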

Anyway, I agree that the push notification feature should be off by default; it is very inappropriate for Firefox to connect to a Google-hosted server by default like that.


Well, everyone’s threat model is different, so if yours involves auditing every (sub)domain and blacklisting specific hosting solutions, that is not a responsibility a privacy-focused browser can reasonably take on without affecting other users. You control what you browse, even if that means you will encounter many unexpected third parties along the way. This line of thinking is similar to choosing which operating system to install and use on your Librem devices.

I do not necessarily know if Discourse works as you have described, as my Firefox ESR configuration on both of my devices is specifically hardened against any calls to action. However, I will state that Google already has plenty of resources and tools to identify users automatically without needing to resort to manual intervention using Discourse accounts. The way I see it at the moment, they may as well be a digital continent.


Are they? So, as far as I know, I’m nobody. There are no 3 letter agencies after me, and I am not engaged in any illegal activities as far as I know. I am just an internet user who has fun with technology.

With all of that in mind, years ago I used to believe that some of this internet privacy and security stuff was less applicable to me and I should just use whatever technology worked well, because I didn’t need to worry. Even if the government builds an infrastructure to record all messages like Snowden said, why is that a problem if I do nothing wrong?

But after some years of approaching the problem that way, I began to find myself in situations where I saw myself being used to worsen the lives of people around me, or as an agent of change where the change wasn’t good, and I wasn’t informed of what was happening unless I spent a lot of time self-analyzing. My end conclusion is that whatever government entities created those surveillance systems to try to keep people safe failed abhorrently at their jobs. They allowed the same corporations that were probably contracted to build their systems to then use similar principles to build similar data collection tools for the purpose of increasing profits and powering machine learning on the people. As a result, we now live in this horrible world spiraling towards a future where nobody can agree on anything and we are all crazy about this or that, even though none of it is what really matters. And after society falls apart because of all that, the circle will be complete: whoever originally started doing data collection under the guise of national security will actually have led to the end of the nation, because of the limits of their intuition.

And that’s just my country, who knows if it’s the same in other parts of the world. But as a result, don’t we all have the same threat model? Even someone who is no one should probably still try to be as offline or as anonymous or as private as the worst criminal, because we don’t want to become AI-used instruments of destruction for our society and our family and our friends!

So, isn’t the difference just what people know? Let’s say you are doing some things that keep you more secure than what I am doing. That doesn’t happen because your threats are different, does it? I would imagine it happens because you know more about how to secure your well-being and the integrity of your mind.

And, when it comes to using people as instruments/weapons against their loved ones, one of the primary companies with sufficient AI tech to use people like that for the last few years, as far as I could tell, was Google itself. It seems like if Microsoft can get enough money to get enough development to become the next Google in that way, they probably will, so obviously Google is not the only offender of this kind. But they are certainly one of the biggest.

And as such, isn’t the simple idea that “Firefox opens a connection to XYZ when its process starts, keeps the connection live for the entire duration of the use of Firefox, and closes it when the process ends” something ludicrously important that anyone using Firefox would want to know? Why talk about hosting solutions and whether Google is bad, or about whether or which user IP address is leaked, instead of this simple reality: Firefox is giving someone online timestamp-able metadata amounting to a log of the entire duration of its use. The fact that it is Google is bad, but if it were not Google, it would still be bad. And if we link to “the standards” and say that “the standards” say it should be like this, and therefore it shall be, what if the standards are dumb?

If somebody told me that “the standards” say that mobile handsets should not allow the user to have root access, nor to install applications other than from Google Play, and that therefore PureOS Phosh should not be allowed to exist because it didn’t follow “the standards,” then we would just know that “the standards” were obviously compromised. I would still want to use my Librem 5 in such a case.

Likewise, there is a gap in my understanding regarding the implementation of these push notifications. Obviously, per the LibreWolf thread(s) linked above, one option to kill this constant connection to the push service from Firefox is to go into the browser settings and turn off the push connection setting. However, I actually have a use case for getting Firefox notifications on my Librem 5, so on a high level I can understand why users might want a push notifications feature.

But what I do not understand yet – and again, this might be ignorance speaking – is why a push notification would use a separate server at all. Suppose I was writing a browser: why wouldn’t I make a JavaScript function such as displayNotification("You got pinged"); such that whenever the JavaScript code executing in the browser called this function, the browser would submit this text string (along with browser name/process name/icon) to the local window manager stack to display a “notification”? At no point, when I imagine how I would construct this feature, would I feel the need to involve a third party independent of the communication between the local browser and the specific site being visited that is itself providing the notifications.

Now that I have said this, I suppose the complexity arises from a browser policy that JavaScript is not allowed to run while a site’s page is not in the foreground, but I still think that introducing a third-party (Mozilla?) server into the equation seems like a contrived solution rather than an ideal one.
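
For what it’s worth, the “direct” model imagined above can be sketched as a toy (the function and its behavior here are hypothetical stand-ins; in a real browser, the standard Notifications API, i.e. `new Notification("...")`, plays this role and indeed involves no push server at all):

```javascript
// Hypothetical sketch of a purely local notification path: the page's own
// JavaScript hands a string to a local display function; no third party.
function displayNotification(title, body, show = console.log) {
  const message = `[notification] ${title}: ${body}`;
  show(message); // stand-in for handing the text to the window manager stack
  return message;
}

displayNotification("Purism forum", "You got pinged");
// prints "[notification] Purism forum: You got pinged"
```

The catch is the one the post itself identifies: this path only fires while the site’s code is allowed to run. Web Push (and hence the shared push-service connection) exists so an app server can reach the browser even when the tab is not open.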

Maybe this is a call to action that someone should build or invent an ideal solution? As an example, if I want to use my Librem 5 as a libre solution for as much as I can control, but then connect to Slack for my job, it’s useful for me to enable the Notifications feature from Slack so that when my work pings me, my phone buzzes. But why does this interaction need to involve a third-party server that isn’t me and isn’t Slack?

So, getting back to my original point, I don’t really feel like this is a “threat model” thing. It seems like we should all want our devices to leak as little additional information to third parties as possible, unilaterally, and there is an information leak to a third party here that as of yet I do not understand the rationale for. And I would think this would apply to all users.


I cannot answer all of your questions and concerns without spending an hour or two breaking each of them down, so I will only focus on what I perceive as important.

First off, threat modelling is about determining what assets you want to protect, what adversaries are interested in them, and what resources and skills those adversaries can dedicate to acquiring them. Distilled to its rawest form, it is about time, and the more time you want to buy yourself, generally, the more expensive it becomes to protect your assets.

So depending on your threat model, you may end up using different tools even though our adversaries may share similar skillsets. For example, most people are comfortable trusting government-regulated financial institutions with their money, while others may prefer to carry cash, use cryptocurrencies such as Monero, install a floor safe in their basement, and/or build a (modular) vault. The degree of time and resources required to defeat these security mechanisms vary, which is why threat models are different for everyone.

Building on this argument while also addressing the push notification server as previously mentioned, only some users have an issue with the LibreWolf team keeping Web Push enabled. Those who want a solution can access about:config and change the value of dom.push.connection.enabled to false, or use a VPN.

You accept it or make your own “smart” standards.

If you want to read more about how Web Push messages are encrypted, I highly suggest reading the RFC it is based upon.

Read the Introduction.

What is considered ideal is different for everyone, no different from threat models. That is why many proposed “solutions” exist. If they are not suitable for your needs, you either wait for someone to invent it or you build it yourself.


Thanks! This was really helpful. They explained their rationale that Push Notifications on a browser should be sent through a third party to consolidate them and save processing.
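
As a toy model of that consolidation rationale (all names and URLs below are hypothetical): each site subscription gets its own endpoint URL at the push service, but every delivery, whatever its origin, rides the browser’s single shared connection instead of each site keeping its own open socket.

```javascript
// Toy model of push consolidation: many sites, many endpoints,
// one shared delivery channel per browser.
class PushService {
  constructor() {
    this.subscriptions = new Map(); // endpoint -> subscribing site
    this.channel = [];              // stand-in for the one browser connection
  }
  subscribe(site) {
    // the browser requests an endpoint on behalf of the site
    const endpoint = `https://push.example.org/wpush/${this.subscriptions.size}`;
    this.subscriptions.set(endpoint, site);
    return endpoint;
  }
  push(endpoint, payload) {
    // any site's app server posts here; delivery rides the one channel
    this.channel.push({ site: this.subscriptions.get(endpoint), payload });
  }
}

const svc = new PushService();
const e1 = svc.subscribe("forums.puri.sm");
const e2 = svc.subscribe("slack.example.com");
svc.push(e1, "You got pinged");
svc.push(e2, "Standup in 5");
console.log(svc.channel.length); // prints 2 — both deliveries, one channel
```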

With that in mind, I looked up the Mozilla autopush code on GitHub and created my own autopush endpoint using their docker image, on my own server, and modified the Firefox setting dom.push.serverURL from the default to instead point to some custom URL where my test server was located.
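
For reference, that change can also be persisted in user.js rather than made live in about:config (the URL below is a hypothetical placeholder for wherever your test server actually lives):

```javascript
// user.js sketch: point Firefox's push connection at a self-hosted
// Autopush endpoint instead of the default Mozilla-operated one.
// "wss://push.example.org/" is a placeholder, not a real service.
user_pref("dom.push.serverURL", "wss://push.example.org/");
```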

At this point, I encountered something strange. The automatic connection to the push notification service was gone, and during startup netstat clearly showed that my Librem 5 handset, with the Firefox application running, was connecting to my custom server instead of the default one. But my custom server didn’t do the wss security on top of the ws WebSocket protocol correctly (I didn’t bother getting a certificate to do https/wss instead of ws), because I was just naively running the Mozilla docker image. So rather than sticking around as an open connection like the one Firefox keeps on default settings, the connection to my server quickly went away, probably due to some failure, although I wasn’t sure where to look for Firefox logs regarding the cause of the error. Again, I’m pretty sure the cause was probably just some misconfiguration on my part.

But then, I had the weirdest thing happen. I tried doing an activity that would give me a Firefox notification for the use case where I need notifications, and I still got the notification anyway, despite netstat showing no sign of communication with the default US Intelligence Air Force Mozilla version of the push notifications, nor any sign of communication with my botched attempt at running the docker image for autopush.

Is that a known thing, that if these systems are not available then notifications still work anyway? I’m just… a little confused about the point of it all. Maybe I’m misunderstanding.


Use the Mozilla Autopush documentation, specifically this page.

Yeah, that’s what I was using. So now, for me, the question becomes: why, when I do it wrong and don’t have this autopush server properly running, does my device still receive Firefox notifications?

I guess I don’t know who to ask about that. Maybe the Firefox source itself.


Use this quote on Firefox, then retest the configuration against your Autopush server.


Sorry, I was putting off doing this test for a while. Currently I’m running with the autopush URL in Firefox set to some garbage, so that it doesn’t connect to the questionable default URL but my push notifications still work (I simply don’t know why, but they do). Accordingly, I am not actually using my own autopush server, since it seems unnecessary.

But I went ahead and tried running this and opening the site where my push notifications are working, along with my current config that sets this URL to something non-meaningful that won’t function. I didn’t really see anything in the browser console that I thought was attributable to this push notification stuff; the console seemed to be filled with proprietary logging from the website that I was using to receive push notifications.

Then, after this, Firefox inexplicably crashed. I was able to find in my journalctl log the line:

firefox-esr[1972]: Error flushing display: Broken pipe

But other than that, I don’t know why Firefox crashed, and I’m not convinced that I successfully located the debug log. Still, given that Firefox stopped making the pointless extra connection to that Google Cloud service, and my push notifications still work magically, it seemed like I solved my issue. I suppose I could try to get my Autopush server running again, maybe on a weekend, point at it, and do a test where it would actually connect, but is that even worth it? My current understanding of this issue is basically:

  • Firefox has an always-open connection to a Google Cloud service that is reportedly for Push Notifications
  • Pointing this always-open connection to some other URL that is invalid and fails causes the connection not to happen
  • When the connection fails in this way and does not happen, push notifications still work fine
  • If I run my own Autopush server and point the URL to my server, then push notifications stop working fine for some reason

Given that the Autopush server includes a ton of infrastructure (Google Cloud services, an Amazon database, dependencies on languages like Java and Rust) and has this really complex interplay of database logging and cloud service interactions for every single Firefox user, all of that is pretty pointless for my use case. It seems better to just point the URL at nowhere and presumably let Firefox fall back to this apparent case where the push notifications are between me and the web site providing them; then things just work and there is no weird middleman.


That depends on whether you truly want to know how Web Push is actually possible without an Autopush server. Your experience seems to challenge both Mozilla’s and LibreWolf’s claims.