Secure enclave availability on Librem phones

https://www.bunniestudios.com/blog/?p=6031

and more specifically…

2 Likes

This is a good summary of the situation. It’s not that we don’t understand the concerns, the motivation behind them, or even the security benefit of a discrete secure enclave. It’s just that most of the traditional approaches are not what we want, which means we’d need to make something new, possibly from scratch. That would take time and additional resources.

In the meantime, we are investing in using the OpenPGP smart card for similar functionality where it makes sense, because there are also some benefits to that approach: the pros and cons are well known, the standards are public, and we can benefit from a lot of prior art. That, combined with porting PureBoot over to the Librem 5, should help build a reasonably decent foundation we can build on top of.
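For a concrete (if simplified) picture of what using the OpenPGP smart card looks like from an application’s point of view, here is a rough sketch that shells out to GnuPG. It assumes gpg and scdaemon are installed and the card holds a signing key; the key ID is a hypothetical placeholder, and the actual integration on the Librem 5 may well differ.

```python
# Minimal sketch: delegate a detached signature to an OpenPGP smart card via GnuPG.
# Assumes gpg + scdaemon are installed and the card is provisioned with a signing key.
import subprocess

CARD_KEY_ID = "0xDEADBEEFCAFEF00D"  # hypothetical key ID stored on the card

def card_sign(path: str) -> bytes:
    """Ask GnuPG for a detached signature; the private key never leaves the
    smart card, gpg only forwards the digest to it for signing."""
    result = subprocess.run(
        ["gpg", "--local-user", CARD_KEY_ID, "--detach-sign", "--output", "-", path],
        check=True,
        capture_output=True,
    )
    return result.stdout  # binary OpenPGP signature packet

if __name__ == "__main__":
    sig = card_sign("boot-manifest.txt")
    print(f"got {len(sig)} bytes of signature from the card")
```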

2 Likes

I could see the benefit of adding additional microcontrollers to handle things like digital wallets. The problem I see with putting too much into a single secure enclave is that secure enclaves can be hacked too. For example, SIM cards behave like a secure enclave: they are a physically separate CPU that communicates with the host over a specific protocol (for SIM cards, the host is the mobile CPU). Even those can be hacked. You only need an enclave because you do not trust the host CPU’s defenses. But if you put too much code into the enclave, you increase the attack surface to the point that it becomes almost as vulnerable as the host CPU. For example, I would not trust my digital wallet running in the same secure enclave that performs attestation. At that point, it becomes just another computer to worry about. If you trust public cloud computing or even local VMs, then you are also trusting the same virtualization technology that makes secure enclaves extraneous. If you wanted true CPU isolation, you would have implemented multiple physical servers instead of VMs.

For the “Respects Your Freedom” certification program, giving the user less than full control over their device sounds like an anti-feature. Perhaps this less-than-full-control feature could be optional, either as a separate download or as a hardware accessory. Purism’s smart card feature can provide a physically separate CPU with the right type of card. But for very secure authentication, you need something like the Mooltipass, which has a separate CPU, display, and input. If the host CPU is not trusted, then neither are the input device and display connected to it.

Apple once added an “am I rooted” flag to their APIs, but removed it because it was trivially defeated on a rooted device. For that reason, I do not hold software attestation in high regard. If you can trust cloud computing as a valid practice, then you can trust secure boot without an enclave.

There are open source designs out there; see Keystone and Sanctum. Nevertheless, I’m sure this is a ton of work. I can’t even imagine how much. But what if there was a path to funding, and a strong revenue opportunity, while maintaining complete independence like you did with crowdfunding so far? Might that help? I have a few ideas in this regard…

Open source phones (indeed, open source devices in general) could be incredibly disruptive even to the likes of Apple. And to my limited knowledge, you’re further along than anyone. The question then is how to compete with the giants whose business model is predatory without being predatory yourself. I think there are ways, but they all depend on the concept of security I’m driving at here. A bit of a long story, but I can definitely explain…

Again, this is “solved” by seL4, an open-source formally verified microkernel. Everyone interested in security, privacy and open-source should check it out.

1 Like

“You only need an enclave because you do not trust the host CPU’s defenses” – well, I strongly disagree with that. The key feature of an enclave is secure attestation: it lets you verify the software running on a host you’re communicating with. In this respect there are no CPU defenses to speak of; with secure attestation, an enclave gets you into a whole different territory.

Let’s look at it from the data sovereignty standpoint, because everyone on this forum probably agrees with its basic tenets. A user should be in control of their data, right? Well, this means, for example, that the user ought to be allowed to send their data to others, right? This also means that, when sending data to others, the user should be assured of how their data will be used, right? Indeed, data you never use or share is, well, useless, right? OK, so the only way the user can be assured of how their data will be used is if the remote node, where it will be used, can securely attest to the software that runs there. No OS kernel is able to produce the chain of trust required for this to work, because to perform secure attestation, the attesting party must itself be verified remotely as well. Unless it’s possible to mark each valid kernel with a unique signed key, and unless such a key is never assigned to a compromised kernel, a kernel is useless in this picture; only a hardware system can do it. And yes, there are ways to break that too, of course. Until we have fully homomorphic encryption with generic computation inside of it, there’s little you can do that’s 100% unbreakable. But enclaves are a good start.
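To make the attestation step concrete, here is a minimal verifier-side sketch. It is not any particular vendor’s quote format: the quote layout, key names, and expected measurement are assumptions for illustration, using the Python cryptography package.

```python
# Minimal remote-attestation verifier sketch (illustrative; not a real vendor protocol).
# The enclave is assumed to sign (nonce || measurement) with a device key whose public
# half is endorsed by the hardware vendor or, ideally, a community root of trust.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

# Hash of the software build we are willing to trust (placeholder value).
EXPECTED_MEASUREMENT = bytes.fromhex("aa" * 32)

def verify_quote(device_pub: Ed25519PublicKey, nonce: bytes,
                 measurement: bytes, signature: bytes) -> bool:
    """True iff the quote is fresh (covers our nonce) and the reported
    measurement matches the software we expect to be running remotely."""
    try:
        device_pub.verify(signature, nonce + measurement)
    except InvalidSignature:
        return False  # not signed by a key we associate with genuine enclave hardware
    return measurement == EXPECTED_MEASUREMENT
```

The part this sketch glosses over is exactly the hard part: establishing that device_pub really belongs to genuine, uncompromised enclave hardware rather than an emulator, which is what a hardware root of trust (and not a kernel alone) is supposed to provide.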

There is a whole host of use cases like this that go well beyond localized protection. Credential management, credential delegation, all of blockchain finance, all of the data sovereignty movement, and all of the data verification and fact-checking movement require this security model. There is no way to get around this with half-measures.

Ah, I get what you are talking about now.

So basically, you want to guarantee how your data is used (or that it can be made to disappear) on hosts that you do not own and control. If that were possible, the billions spent on creating DRM for movies would have figured it out by now. If they have not figured it out by now, then it is probably not possible, like secure encryption with three or more parties.

Also, I find a software and network ecosystem that cannot function without the approval of other parties abhorrent, as would others in the free software movement.

1 Like

That’s interesting indeed.
Lately, when downloading AV material from the LBRY network through the desktop app on Linux, I get a ‘confirmation’ message along the lines of “you are trying to download something that was shared with you by other people…”, as if they intentionally want you to know that you might be punished if you share… but go on and share whatever you want with us… while you can…

I actually don’t understand why you need hardware to do this. What is the difference between a hardware secure enclave and a software secure enclave? I would say that the software one is safer, since it is formally verified and the code running in a hardware enclave is not. I have worked with TrustZone, for instance, so I have at least some knowledge about it.

As far as I know this is what secure boot protects against. It checks the signatures of all stages of boot all the way to the kernel and makes sure that they are signed properly.
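Roughly, that chain looks like the sketch below; stage names, key handling, and the signature scheme are simplified assumptions, not any specific bootloader’s implementation.

```python
# Simplified sketch of a verified-boot chain: each stage checks the signature on the
# next image before handing over control. Key storage, rollback protection, and
# measured boot are all omitted; names and structure are illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def signature_ok(pub: Ed25519PublicKey, image: bytes, sig: bytes) -> bool:
    try:
        pub.verify(sig, image)
        return True
    except InvalidSignature:
        return False

def boot(stages):
    """stages: ordered list of (name, public_key, image_bytes, signature)."""
    for name, pub, image, sig in stages:
        if not signature_ok(pub, image, sig):
            raise RuntimeError(f"refusing to boot: bad signature on {name}")
        print(f"{name}: signature OK, handing over")
    # ...the last verified image (the kernel) would be executed here
```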

You have probably missed EMM/UEM technologies, which target exactly that area. For companies, data protection is crucial, so EMM usually achieves it by enforcing security-hardening policies and running apps in secured, encrypted containers, which may be volatile or non-volatile (as policy defines).
But of course, once the data leaves the enclave (is passed to a party that does not participate in the data control community), the data is essentially leaked. That is supposed to be prevented by measures like DLP.

What will stop someone from virtualizing a host with an emulated enclave of an FSF RYF compliant device (from the time of enrollment to the time that the data is accessed) and using the non-hardware enclave to extract the “protected” data?

You have to start somewhere, yes, but you need to start somewhere that is acceptable, not start just anywhere for the sake of starting.

To my way of thinking, anyone who starts talking about putting blackbox chips inside devices automatically raises some red flags unless they explicitly spell out how it is going to work and ultimately unless there are open hardware implementations.

Digital signature in and of itself does not necessarily solve the problem - because if a camera manufacturer is compromised or is under the control of a state actor, or if there is a niche brand “camera” that no one has ever heard of, there is no need for sophisticated technology in order to subvert this system. In some respects, this idea parallels the problems with Certificate Authorities.

(Another tricky aspect is common core chips i.e. many different brands of camera may use some chips in common to implement the core of the camera. For this idea to make sense, the signature would have to reflect the overall manufacturer, not the supplier of the common chips.)
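Mechanically, the check being discussed is trivial; the hard part is the trust list itself, which is where the Certificate Authority parallel bites. A rough sketch, with hypothetical manufacturer entries:

```python
# Illustrative sketch: checking a photo's provenance signature against a list of
# trusted manufacturer keys (the entries here are hypothetical). Note what this does
# and does not prove: it proves which key signed the file, not that the depicted
# scene is real, and a compromised or rogue manufacturer key defeats it entirely,
# much like a rogue Certificate Authority.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

TRUSTED_MANUFACTURERS = {
    # "ExampleCam GmbH": Ed25519PublicKey.from_public_bytes(b"..."),  # hypothetical
}

def provenance(image_bytes, signature):
    """Return the name of the manufacturer the image verifies against, or None."""
    for name, pub in TRUSTED_MANUFACTURERS.items():
        try:
            pub.verify(signature, image_bytes)
            return name
        except InvalidSignature:
            continue
    return None
```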

Whatever the solution, you have to ensure that the cure is not worse than the disease.

I don’t know whether there is a technological solution to “credibility”.

I also think that “fakes” are part of freedom of speech. Would we want social media platforms blocking the upload of all images that lacked the metadata signature or lacked a verifiable signature? Would we even want social media platforms allowing all such images but tagging them as “fakes” or “possible fakes”?

There is a social and legal question beyond what is technologically possible.

2 Likes

Thanks for your response, so much to unpack here…

In no particular order:

“Would we want social media platforms blocking the upload of all images that lacked the metadata signature or lacked a verifiable signature?” – no, we clearly don’t want that, and I am in no way advocating for that. What we want is the ability to distinguish between fakes and non-fakes. I’d be very surprised if you disagreed with this.

“Digital signature in and of itself does not necessarily solve the problem” – you are correct, and this gets very complicated. But at the same time, your argument can be interpreted as “there is no perfect solution, so we don’t need any solution”. I disagree with that. Shifting the attack surface from “anyone-with-half-a-brain” to “state-actors-only” is a very meaningful step in the right direction.

“To my way of thinking, anyone who starts talking about putting blackbox chips inside devices automatically raises some red flags unless they explicitly spell out how it is going to work and ultimately unless there are open hardware implementations.” I agree with this. I also never stated that I prefer a black-box or closed-source implementation (and I’m not sure why people keep saying this, frankly). Yes, 100%, ideally you would want an open source chip with a community-based (blockchain-based) root of trust. This does not currently exist, but will exist soon, given support from forward-looking organizations such as Purism. Keystone and Sanctum are real open source enclave projects that are most likely struggling for lack of funding. To me this warrants attention and support from the open source community, given how much benefit this could bring to users in terms of security and data sovereignty rights.

“Whatever the solution, you have to ensure that the cure is not worse than the disease.” I agree. But please help me understand how I am even close to straddling this boundary in my thinking.

Please bear in mind that my comments are just to move the discussion forward, to lay down some markers.

The problem is that it can happen whether you or I want it, advocate for it, or advocate against it. So we need to anticipate the possibility and consider the consequences.

I am lukewarm on that until I understand all the implications.

Anyway, I’ve called out some of my friends for sharing “fake news” on social media and often the response is: probably / I know - and I don’t care.

1 Like