I’m sorry for pasting this as you guys already seem to have everything under control and already saw this post. My apologies.
Heh… hitting that link results in a bad SSL cert… That’s confidence inspiring.
Oh I just typed it out instead of pasting. Let me see what’s wrong. Fixed
The website itself has a bad SSL certificate, which could in principle mean that a secure connection over HTTPS isn’t possible.
It doesn’t inspire confidence in the author’s claims against Purism’s security if the author doesn’t maintain proper security certificates on the website.
To me it is someone blowing their own opinion out of proportion. Some of it is, I guess, a point, but all of it seems trivial and straw-man-like.
Either way, I’m not moved in the slightest. The way the article is written does seem like a great way to get people to click on links.
Still. The website is secure and uses https… I fixed it already
That roughly matches my assessment. The author looks to be another Rust fanboi…
The biggest single issue is they start by assuming a compromised machine. Guess what? If you have a compromised machine, you have a compromised machine. And? Why bother getting root? What you actually want is almost certainly in .mozilla or similar anyway (bank passwords and the like).
We already had these discussions multiple times in this forum and the “insecurity” problems mentioned are always the same.
- Problem: People are confused about privacy vs. security. Android and iOS are the most secure operating systems available, and they are designed to stop the user from making mistakes. That’s their security model. But if we follow this discussion, Linux in general is not very secure… If you install a virus with sudo you are in trouble. But it is your fault. And if Linux were really insecure, it would not be used on most of the servers out there. And PureOS is not less secure than any other Linux distro.
- The kill switches are useless because (add random argument here). Yes, they are useless if the NSA has a problem with you and wants to spy on you explicitly, but then you also shouldn’t trust your friends, your family, and everybody else. The hacking scenarios mentioned are soooooooo complicated that they don’t make any sense… It is simply nice to know that a random application at a specific time (for example during an important meeting) is not able to access any of your sensors, which you can’t guarantee on Android or iOS.
- The boot process is not safe: This is also true, but it is only a major problem if someone has physical access to your phone, and if that happens, Android and iOS are not secure anymore either. In any other scenario you have to install a virus with sudo (your fault again).
- Privacy: No one who criticizes Linux and its security mentions privacy. On Android and iOS the security model is: nobody can spy on you… except us and all the organizations we give your data to. But in my opinion privacy is more important than security. I can change the security level on my machine with AppArmor, by not clicking on random emails, by only installing things from the repository, and with a lot of other tweaks (there are plenty of tutorials on the web). But on Android or iOS you can’t change the privacy.
- “The majority of the hardware/firmware is still proprietary” - This is wrong; it is definitely NOT the majority! A few parts are proprietary, but Purism has communicated very well what is open and what is not, and they have tried their best to get rid of most of the binary blobs. It is simply not possible to make everything open source.
- I don’t get the modem problem: It is not a strong barrier, yes, but you have to hack a Librem 5 very specifically to overcome this barrier. It’s not like a normal SoC where nobody knows what is happening inside…
The main point here is: if you install OPEN SOURCE apps from the repository, you are 99% safe, because you or other people know how these apps work and what they do in the background. And nothing is more secure than knowing the source code. Security is always related to freedom. If you as a user can do everything on your phone, a very good hacker can do that too. But that does not mean that this will happen.
Again, if Linux were that insecure, why would anyone use it?
The real question here is: do you want root privileges or not? More privileges, more responsibilities!
And Purism can only do what is possible at the moment, and if there is no free modem they have to deal with that… And I am not afraid that the NSA is spying on me, because if they want to, they will, and they are not dependent on my smartphone. I am afraid of companies like Google or Apple, which collect all user data and every activity I do on and with my phone.
My last word to this article is: Complaining is always easier than changing something!
On their website:
OpenBSD is lacking in a lot of ways. Many of its exploit mitigations are half-baked/useless
Yeahhhhhhhhh… go tell that to the security researchers that keep finding vulnerabilities that affect every system except OpenBSD.
My respect for this random stranger on the internet has gone down by a lot.
EDIT: Also, because this is Linux, there will be a virtually infinite number of updates you can get. Compare that to my last phone, which got 1.5 years of feature updates and 0.5 more of security updates.
While refuting posts that make exaggerated claims like this line-by-line isn’t a valuable use of time, it is worth making some more general statements about security as it pertains to our approach because it applies beyond any individual post from a critic.
People who come to us from an Android, Windows or iOS background often have issues with our approach because it’s so radically different than theirs. We have different constraints than Android, Windows, or iOS. We factor user control as opposed to vendor control heavily in our design and that often means rejecting security features that those communities accept. Those OSes make very secure cages, but unfortunately those cages restrict the user even more than the attacker. That’s largely the point of their approach–security against attack is the marketing story, but the prime motivation behind their measures is to prevent the user from changing settings or installing software the vendor doesn’t approve of (and in the case of iOS, the ability to remotely revoke software). This allows the OS, for instance, to enforce carrier restrictions on whether to allow tethering, install 3rd party applications by default that the user can’t remove, etc.
This approach is why we went with PureBoot instead of UEFI Secure Boot on our laptops, for instance. We wanted the ability to detect tampering but rejected an approach where the user would have to get the blessing of a vendor to boot the OS of their choosing. We chose to solve the problem in a way where the user was in control of their own keys and their own computer.
Our approach is to lay the foundation where the user can trust the system and has the tools to secure it, without depending upon Purism or other vendors as the anchor of all trust. We think the strongest foundation we can build trust on top of is free software and that is why we are pushing initiatives like reproducible builds so much. While it’s not impossible to inject malicious code into free software, and there are certainly examples in the past, it’s definitely much more difficult to do so without detection long-term than when dealing strictly with binaries. Will we consider some of the more advanced sandboxing kernel features for application in the future? Perhaps (and we already plan to do so for userspace apps via bubblewrap+flatpak), but we will always balance that with the desire to allow the user to control their own computer.
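The reproducible-builds point above reduces to a simple mechanical check: if two independent parties build the same source and get bit-identical artifacts, their hashes match, and a tampered binary becomes detectable. A minimal sketch of that verification step (the byte strings below are placeholders, not Purism’s actual tooling or artifacts):

```python
# Sketch of the verification step behind reproducible builds: two
# independent builds of the same source should hash identically.
# The byte strings are placeholders for real build artifacts.
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

official_build = b"\x7fELF...same bytes..."     # vendor-published artifact
independent_build = b"\x7fELF...same bytes..."  # third-party rebuild

if sha256_of(official_build) == sha256_of(independent_build):
    print("hashes match: build is reproducible, no hidden changes")
else:
    print("hashes differ: the published binary does not match the source")
```

In practice the comparison is between a vendor’s published artifact and a rebuild by an independent party; a mismatch doesn’t prove malice, but it flags the binary for inspection.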
Beyond that, it’s silly to dismiss hardware kill switches entirely just by thinking of some scenario where they wouldn’t address a particular threat. They are a tool in the toolbox, and as such can be incredibly powerful when used thoughtfully (and properly) against many different threats. When used in concert with software switches within the OS itself you can get even more fine-grained control and protection from a wider array of threats, given that even the software switches within the OS are more trustworthy than, say, the “location services” software switches in Android.
What you see in PureOS on the Librem 5 today is the foundation, the start, not the end, of our hardening plans. We have to start with a solid foundation before we can build more security controls on top of it.
For what it’s worth, I’m planning on writing up a general-purpose Librem 5 hardening guide before Evergreen ships, to guide people through some of the proper use cases for kill switches as well as how to further lock down the OS on top of the hardening we have in place by default.
I love this idea. I will look forward to seeing that guide.
That and making blanket statements like “X is useless”.
Case in point: “Mic kill switch is useless because there is a sophisticated attack using the gyroscope” that could allow speech to be listened to (at low quality and might require sophisticated post-processing to get any useful information out of it).
So straight off the bat, the mic kill switch will completely defeat an attacker who is attempting to listen in using the mic. So it raises the bar for the attacker. That’s a good thing.
Secondly, it assumes that neither the current nor any future version of the operating system software mediates access to the gyroscope. Even if unrestricted access to the gyroscope is currently an issue, it is something that can be addressed in software in the future.
As you say, if the attacker has root access then the attacker can bypass any operating system controls BUT you should be trying really really hard to avoid an attacker having root access. If the attacker already has root access then you have probably already lost.
Thirdly, as you say, it assumes that the attacker is already running random code on your phone in order to perform this attack. Your main goal should be to avoid that situation in the first place.
I’m not sure but web browsers may already be blurring the information available to a script from sensors like the gyro - so that the gyro functions for basic rotation detection but without being able to eavesdrop on speech. Clearly a web browser would be the easiest way to get you to run random code, and the web browser is a good place to restrict access to the gyro over and above what restrictions the operating system imposes.
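The kind of mitigation described above can be illustrated with basic sampling theory (the numbers below are illustrative assumptions, not any real browser’s limits): a signal component above half the sensor’s sampling rate (the Nyquist limit) cannot be faithfully reconstructed, which is why capping the gyroscope’s sample rate degrades speech eavesdropping while leaving slow rotation detection intact.

```python
# Sketch: why capping a gyroscope's sampling rate blocks speech recovery
# but not rotation detection. All numbers are illustrative assumptions.

def nyquist_limit(sample_rate_hz: float) -> float:
    """Highest frequency faithfully representable at a given sample rate."""
    return sample_rate_hz / 2.0

CAPPED_RATE = 60.0        # hypothetical OS/browser cap on gyro sampling (Hz)
SPEECH_COMPONENT = 300.0  # rough low end of voiced-speech energy (Hz)
ROTATION_GESTURE = 2.0    # turning the phone over happens at a few Hz

limit = nyquist_limit(CAPPED_RATE)
print(f"Nyquist limit at {CAPPED_RATE} Hz sampling: {limit} Hz")
print("speech component recoverable?", SPEECH_COMPONENT <= limit)  # False
print("rotation gesture detectable?", ROTATION_GESTURE <= limit)   # True
```

The design choice this illustrates: a rate cap is a graduated restriction, not an on/off switch, so legitimate uses (screen rotation, step counting) keep working while the high-frequency information an eavesdropper needs never reaches the application.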
Other case in point: “Network kill switch is useless”
His point about exfiltration is fair. If an attacker is already running random code on your phone, it can indeed batch up data for exfiltration because surely you will eventually enable the network. (Technically, you might only enable the network when at home where additional infrastructure may detect and thwart the exfiltration but let’s take his point.)
However it completely misses the point about why I might want to use the network kill switch. For example, I don’t like the fact that WiFi is used to track me and identify me as I move about in public spaces. For that use case, the WiFi kill switch works as advertised. I can turn on the WiFi when in public only if I actually intend to use WiFi. Or I can leave turning it on until I get home.
The anti-exfiltration use case is a very narrow view on all the reasons why a person might want to use the WiFi kill switch.
Some of madaidan’s arguments don’t make much sense, but he and Micay do raise a couple of valid points about better kernel security in Android vs Debian. The problem is that they seem to be exclusively focused on kernel hardening, often fail to address the larger issues beyond the kernel, and refuse to acknowledge that Purism may also have ways to do things like secure and verifiable boot. Just because it hasn’t yet been implemented on the Librem 5 doesn’t mean that Purism won’t do it. It is hard to argue about how the OpenPGP card will be used when it is not yet implemented.
I’m not going to rehash all the arguments that I already made in that thread, but let me add some additional points that I didn’t mention:
- Android may have a more hardened kernel than Debian (and by extension the Librem 5), but Android literally has millions of pieces of malware created for it, and a lot of it can be found in the Google Play Store, so there is a high chance that you will install malware on an Android device. With the Librem 5, where you are getting all your apps from the PureOS Store and most of the mobile apps are converted FOSS desktop applications that have spent years in the Debian repositories, the probability that you are going to install malware in the first place is very low.
- Android and Windows 10 are operating systems designed to monetize their users’ personal data for targeted advertising. Now that Apple is switching to services over device sales, it will likely begin to harvest its users’ data to better market its services to them. Google, Microsoft and Apple all collaborated willingly with the NSA before Snowden’s revelations exposed how they were sharing people’s private data with the government. Since then, Google and Apple have made it a policy to resist government access, but one has to question these companies’ commitment to your privacy, considering their earlier cooperation with government surveillance.
- Google in particular, but also Microsoft and Apple to a lesser degree, encouraged developers to create software on their platforms that is based on exploiting people’s personal data. You can avoid a lot of this spyware by installing an AOSP derivative (such as LineageOS, /e/ or GrapheneOS) and only using apps from F-Droid, but that takes a lot of work on your part, compared to getting a phone with Linux preinstalled. The Librem 5 will be preconfigured to use a safe app store, where all the code is free/open source and collection of users’ personal data is strongly frowned upon, with a system of badges that will inform you whether an app violates your privacy.
- Google spends a lot of time hardening the kernels that it takes from mainline Linux, but what that means is that you have an out-of-date kernel in your phone, and Google doesn’t guarantee that the kernel is updated when Android is upgraded to a new version. The Linux kernel is typically 1 - 1.5 years out of date when you buy an Android phone, so it may be 3 - 3.5 years out of date by the time you stop using the phone, and most phone models stop getting updates after 2-3 years on the market, which is an even bigger security threat. Yes, that Android kernel may be hardened, but what good is that when you are no longer getting security updates after 2-3 years and you are using an ancient kernel? In contrast, with the Librem 5 you are getting lifetime software updates, and the phone can run the latest mainline kernel, so it will receive the latest security fixes. Because its drivers are all open source, the community can maintain them, and because it uses chips that are manufactured for many years, unlike the integrated mobile SoCs used by Android phones, which are only manufactured for 1-2 years and only get 2-3 years of updates, we can count on years of firmware updates from the manufacturers. The fact that a Linux phone can count on so many years of updates has huge security benefits, because old security holes get patched instead of remaining exploitable.
PS: madaidan uses a pseudonym and some of his code commits are found in GrapheneOS, so I confused him with Daniel Micay. He got really offended that I had confused him with Micay, but he causes this kind of confusion by posting articles on the internet under a pseudonym. As you can see from the thread, I found out that both he and Micay were quite obnoxious, so in the end, I decided that it wasn’t worth arguing with them any further.
According to their logic, the lock on their front door is useless because it can’t stop dynamite. Bet they still use theirs, though, and would refuse to buy a house that couldn’t have one installed.
This is already not true. The gyroscope is a sensor attached via some bus (in laptops, and likely the L5, usually USB). It gets accessed via the standard file-like device access (/dev/something), which means you can restrict access to it simply by setting its mode to 700 and then using setfacl to allow access for specific applications. There’s already been a bunch of work on getting desktop users able to run everything inside firejail (a kind of lightweight Docker). The “standard” config for firejail totally hides /dev (and most everything else) from the jailed application.
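As a rough sketch of the permission scheme described above (the device path is a stand-in so this runs anywhere unprivileged; on a real system you would target the actual gyro device node as root):

```python
# Sketch: lock a device node down to owner-only (mode 700, as described
# above), the first step before re-granting specific apps via setfacl.
# A temp file stands in for the real /dev node so this runs unprivileged.
import os
import stat
import tempfile

dev = tempfile.NamedTemporaryFile(delete=False).name  # stand-in device node

os.chmod(dev, 0o700)  # owner-only: other users and their apps lose access
mode = stat.S_IMODE(os.stat(dev).st_mode)
print(f"mode is now {oct(mode)}")  # prints: mode is now 0o700

# On a real system, re-grant a single trusted user/app with an ACL, e.g.:
#   setfacl -m u:trusted-user:rw /dev/<gyro-device>
# (left as a comment: setfacl needs root and a real device node)

os.unlink(dev)  # clean up the stand-in file
```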
As for the wifi kill switches, there’s a much simpler reason to want them. It’s a “do not disturb” mode which maximizes battery life.
and that should be in the constitution for the XXI-st century revision
I believe the question that we should ALL ask ourselves is:
“how do we create AWARENESS and easily-accessible OPPORTUNITIES (or why not ? a REWARD system) for ALL people that use NETWORKED technology or feel FORCED to use it in order to have access to HUMANITY ? for the purpose of strengthening the weakest links while simultaneously NOT weakening the hardest links in the human gene pool.”
No name, no date, no footnotes – except for some links back to other pages on the same site with all the same defects. Useless.
I always find it interesting to see/hear things from different and unexpected angles. It shines light - even if partly flawed - on areas that may have weaknesses to address. Even if those weaknesses are exploitable only in special circumstances, it’s good for the user to be aware of them so they can manage them with their behavior (instead of the system making the decision for them). And it makes sure @Kyle_Rankin does not run out of things to do (coding or writing)
Now for some “TL DR” stuff…
I think the main thing is - as has been said here a couple of times - that the base assumptions (call it a security model for now) are different, and in that sense the Android/iOS model and the L5 model (I’m not attaching Linux to this since it doesn’t include hardware) are answers to different questions, to different needs and goals. And I’d like to remind everyone that there are other things at play here than “only” security (including privacy in this), as there is the business model and the ecosystem of selling information (and most of it is rather general, although the personal info section is more important). The latter is so ubiquitous that Android/iOS are more or less just support platforms to enable it (among other things) and to make the user feel like they are getting something in return (like security against some things, or usability, etc.).
I’ve been working on this idea for some time, based partly on complex network theory (and I can only hope I can translate it here), that there is a kind of built-in “tug-of-war” in every network service connection, where you have the individual user and the system negotiating/struggling to set the limits that each needs to achieve and maintain their respective security/safety/privacy. Both want to ensure their continued existence and neither wants to be used in a manner they do not condone. It’s a balancing act that may not have a stable equilibrium, as the needs shift slightly but constantly (sometimes in obvious ways, like when there is a threat or attack). It’s about whose security/safety/privacy is more important and to what level it needs to be ensured. The individual gives more value to privacy, but the system needs information to trust you, to let you in, and to make sure you are behaving acceptably. This can be applied to governments and similar social systems as well. Some choices are easy, as an individual can choose not to use a system or service, but the real problem lies in the ones which are huge de facto standards - almost mandatory to use if you want to be part of the world around you (or just services that keep you alive) - with which the system(s) can be made to demand a lot. All in the name of security.
But my point is not about that well-known model; it’s about understanding that system(s) and individual(s) have different needs that are in flux and are rarely (if ever) compatible for long periods - unless one rules over the other dominantly. Once the individual gives something away, control of it is lost forever, as they cannot change and rebuild and patch like the system(s) can. System(s) as such are not (mostly) evil, they just are. But they can be repurposed to do nasty things, and complex systems have a lot of unintended consequences (on top of the intended ones that haven’t always been well thought out) - on both points, I suggest some light reading in the form of “Weapons of Math Destruction”. The scale can be big or small.
Soooo… Having the L5 model is a step towards gaining control in this “tug-of-war”, but there are more ways that we are connected to various systems that we do not (yet) have control over. That is not a reason not to try to plug these holes in the dam - it’s still less water taken in, even if there are still other holes (the pumps have a better chance of keeping us afloat longer, to strain the analogy). The fact that it’s difficult to do it this way and the road is long is not a reason not to do it. And I don’t believe it’s possible to be without being connected to system(s) - it’s about finding ways to make sure that both the individual and the system(’s owner/admin) can feel safe and secure.
This may seem blindingly obvious and/or I may have missed it but @Kyle_Rankin, has this difference in approaches between closed systems (A and i) and L5 been laid out? It might be an assumption to think that people understand it fully - what it means, what it requires from user behavior, what it doesn’t do etc. - and having that baseline stated clearly might be needed. I know it’s been talked about in bits here and there. Maybe a topic for a blog post and/or FAQ and/or About?
I think for the average customer it is completely unnecessary. In fact it would just scare off people who would have otherwise bought a Librem 5.
Have it in some deep dive material sure, but don’t plaster it on the main page for it. I don’t believe this is unethical either as Purism has every intention of helping their customers position their devices to be secure and private.