Just recently learned about the Librem product line and am really amazed by this community. I’m working on a piece of software designed to run under secure enclaves (e.g. cryptocurrency wallets/identity/personal data storage) and looking for pointers to information on the type and architecture (if any) of an enclave available on the Librem hardware, especially the phones.
It would be really cool to use seL4 for this: a formally verified microkernel. It would provide the same kind of isolation, but with verified software. https://sel4.systems/
The issue, of course, is attestation. To create the right trust framework, we should be able to somehow verify the software on the remote node. Enclave attestation (SGX being, of course, fatally overcomplicated) provides such a mechanism, at least in principle. With microkernel software, even if it is verified, it’s unclear whether this is even possible. But I am happy to be educated otherwise.
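To make the requirement concrete, here is roughly the shape of an attestation exchange, regardless of whether an enclave or a microkernel sits underneath. This is a minimal Python sketch with illustrative names, not SGX’s or any vendor’s actual API:

```python
# Minimal sketch of remote attestation: the device hashes ("measures")
# the software it booted and signs that measurement plus a fresh nonce
# with a key the verifier trusts. Illustrative only.
import hashlib
import secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # in reality: fused into the device
attestation_pub = device_key.public_key()   # in reality: published by the vendor

def attest(booted_image: bytes, nonce: bytes):
    measurement = hashlib.sha256(booted_image).digest()
    return measurement, device_key.sign(measurement + nonce)

# Verifier side: the fresh nonce prevents replay; the measurement is
# compared against the hash of the software the verifier expects
# (e.g. a specific verified seL4 image).
image = b"...kernel image bytes..."
nonce = secrets.token_bytes(16)
measurement, sig = attest(image, nonce)
attestation_pub.verify(sig, measurement + nonce)        # raises if forged
assert measurement == hashlib.sha256(image).digest()    # software is as expected
```

The hard part on a general-purpose system is the measurement step: something below the kernel has to do the hashing and hold the key, which is exactly the role the enclave hardware plays.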
It would be best if the code in the enclave could be verified; at the moment I know of no HW crypto wallet that has been verified. It is surely possible to verify them with the right tools. Functional correctness against a specification, like the one for seL4, would be a good start. The advantage of seL4 over secure enclaves is that even secure enclaves need software to run, and to be really secure that software needs to be verified. I have not worked with SGX, but I have worked with TrustZone, and I would trust seL4’s separation much more than TrustZone’s (even though TrustZone provides HW separation).
If it’s not running now, I think the porting job is very small, since it’s already running on the IMX8MM-EVK. I don’t know exactly what the difference is, if there is one, between that board and the one the Librem 5 is using. Here is a list of the supported platforms: https://docs.sel4.systems/Hardware/
If you are looking for the part that can be trusted to wipe its credentials if someone tries to brute-force the secret, that will be the smart card. It is intended to store secrets, not run code. Maybe a custom microprocessor smart card could be used to run custom code? Phones generally do not have smart card readers, but the Librem 5 is designed to put the user in control, and that means it has no need to impose vendor controls, unlike the walled gardens of Apple and Google.
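For illustration, the wipe-on-brute-force behavior is essentially a retry counter in front of the secret. A toy Python sketch of what the card enforces in hardware:

```python
# Toy sketch of a smart card's PIN retry counter: too many wrong
# attempts and the stored secret is zeroized. A real card does this
# in tamper-resistant hardware; this just shows the logic.
class TamperingCard:
    MAX_TRIES = 3

    def __init__(self, pin: str, secret: bytes):
        self._pin, self._secret, self._tries = pin, secret, 0

    def unlock(self, pin: str) -> bytes:
        if self._secret is None:
            raise RuntimeError("card is wiped")
        if pin != self._pin:
            self._tries += 1
            if self._tries >= self.MAX_TRIES:
                self._secret = None          # zeroize the secret
            raise PermissionError("wrong PIN")
        self._tries = 0                      # correct PIN resets the counter
        return self._secret
```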
I have no experience with these, so I am just throwing it out there. The Mooltipass is open source, but you might need a number from the manufacturer to unlock it for replacing the code.
We take a slightly different approach to secure enclaves, because the enclaves as implemented in most phones are designed less to protect your secrets, and more to prevent you from controlling your own hardware.
We believe in open standards and user control, which is why we are making plans around the OpenPGP smart card reader in the Librem 5 as the foundation for these kinds of functions.
Hi Kyle, nice to meet you. I’d love to discuss this with you, because I think only a fully integrated enclave creates a complete security picture necessary for a truly secure crypto-wallet or a similarly sensitive identity application. Any chance we can have a brief chat? I would cherish an opportunity to explain this thinking to you, because I think what you are building is amazing, but this type of stuff will become critical in the next few years.
We would not want to add any solution that wasn’t open hardware and free software. I understand the thinking within the crypto-wallet community and the general ideals behind secure enclaves, and how many want to leverage the same key-enforcement machinery that locks users out of their devices to also perform other cryptographic operations securely.
This is why we are taking the approach we are with OpenPGP smart cards: I think they solve many of the issues either directly or indirectly (and they have the advantage of being removable and replaceable, unlike an enclave chip that’s part of the PCB). However, if you have a separate USB security token that works with Linux, there’s no reason you couldn’t plug it into the bottom of the Librem 5.
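For concreteness, using such a card from GNU/Linux is just the standard GnuPG flow; the private key never leaves the card, which only returns the signature. A sketch (the key ID below is a placeholder):

```python
# Sketch: sign a file with a key that lives on an OpenPGP smart card.
# Requires GnuPG with the card inserted; "0xDEADBEEF" is a placeholder
# key ID. gpg hands the digest to the card, which signs it internally.
import subprocess

subprocess.run(
    ["gpg", "--detach-sign", "--local-user", "0xDEADBEEF", "document.pdf"],
    check=True,
)
# Writes document.pdf.sig; check it later with: gpg --verify document.pdf.sig
```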
Very respectfully, I don’t think the above correctly summarizes my thinking. Also, our goals match with regards to making sure that hardware and software remains open, as that is the only viable path to security.
I still would appreciate being able to chat about this, but here’s the main reason why I don’t believe security tokens of the type you’re describing sufficiently tighten the screws on security, even if they are full-featured smart cards with a secure CPU. The gist of the matter is the dire need for a secure UI. Today, the easiest way for an attacker to penetrate a key storage facility of any kind is to compromise the user interface, as has been done on multiple occasions, for example in the case of the Ledger wallet’s laptop software. To prevent such incidents, a secure enclave must be integral to the device and have the ability to control both the screen and the input devices, including the biometric sensors. Anything short of this is breakable.
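To sketch what I mean by a secure UI (all names here are hypothetical; this is the shape of the protocol, not an implementation): the enclave must refuse to sign anything the user has not confirmed on a display and input device the enclave itself controls.

```python
# Hypothetical sketch of confirm-on-trusted-hardware signing. The point:
# even if the host OS and its UI are fully compromised, nothing gets
# signed unless the user saw the real request on the enclave's own
# screen and confirmed it on the enclave's own button.
from dataclasses import dataclass

@dataclass
class Transaction:
    to: str
    amount: int
    def serialize(self) -> bytes:
        return f"{self.to}:{self.amount}".encode()

class TrustedDisplay:                # stands in for an enclave-owned screen
    def show(self, text: str) -> None:
        print(f"[secure display] {text}")

class TrustedButton:                 # stands in for an enclave-owned button
    def confirmed(self) -> bool:
        return input("[secure button] confirm? (y/n) ") == "y"

def enclave_sign(tx: Transaction, display: TrustedDisplay,
                 button: TrustedButton, sign) -> bytes:
    display.show(f"Pay {tx.amount} to {tx.to}")    # host cannot alter this
    if not button.confirmed():
        raise PermissionError("user did not confirm on-device")
    return sign(tx.serialize())                    # key is used only now
```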
Very curious to hear your opinion on the above, but to be very clear, I am neither proposing to lock the user out of their device or software, nor to supply them with anything proprietary or closed-source.
Thanks for clarifying, I think I understand your use case and concerns more. I don’t see us developing a discrete secure enclave that has full control over the hardware to that level for the Librem 5 (at least not any time soon). It seems to me that a special-purpose device designed specifically for this use case and threat model might be more effective than trying to apply it to a general-purpose computer.
You are quite possibly correct, for the time being. To clarify, though, I’m looking much more broadly than just crypto wallets: this includes any sort of authentication, as well as interacting with a variety of sensitive applications, starting with banking. Eventually even social media might require this type of security; think of how one could completely (or mostly) eliminate bots from the picture. I am rather confident that this kind of security will eventually be required on a general-purpose device, but as you point out, not yet.
That said, given the amount of expertise in your network, I’d be very curious to keep the conversation going. Perhaps, if and when I can find the right kind of funds, I will explore building something. I would MUCH rather collaborate with a company such as yours than look to build something entirely separate.
Thanks for answering the questions, I think I got the answers I needed.
One last thing. If ever you have a few minutes, this article of mine provides more details for the architecture I’m looking to build. Hope this makes sense to you. Always welcome your thoughts.
digital camera and smartphone manufacturers can include cryptographic chips into their devices. Such chips can securely sign every image taken, and the signature can later be used for verification
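(For concreteness, the signing described there would look something like the following; a sketch assuming a per-device Ed25519 key, not the article’s actual design. In a real camera the private key would live inside the crypto chip and never be exportable.)

```python
# Sketch of per-image signing with a device-held Ed25519 key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # stays inside the chip
public_key = device_key.public_key()        # published for verification

image = open("photo.jpg", "rb").read()
signature = device_key.sign(image)          # stored as image metadata

# Anyone with the public key can check the image was not altered:
public_key.verify(signature, image)         # raises InvalidSignature if modified
```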
That raises some privacy issues, however.
Suppose I am employed by the government and I am a whistleblower and I have taken a photo that proves that the government is committing a crime or lying or doing something where it would be in the public interest for it to come to light.
I surely don’t want that photo to be traceable to my camera.
Even in less extreme situations, I may not want two images both taken by me (that is, by the same camera) to be able to be connected to each other.
(That said, research has been done in using camera (sensor) artifacts to trace an image to a camera even without a digital signature.)
There are a couple of answers to the above.
The digital signature is just metadata, and everyone who is concerned about it can use exiv2 to remove the signature. (I would certainly do that; see the sketch below.) The implication, however, is the question of whether being signed becomes the norm, being unsigned becomes the norm, or neither.
This is then also problematic from the perspective of the power of defaults. People who know nothing about any of this sacrifice just a little bit more of their privacy, if signing is default behavior. A whistleblower who knows nothing about any of this may sacrifice much more than his privacy. This would particularly apply if camera manufacturers do this “secretly” or without widely advertising the capability. (For all I know, this may already be the case. If you examine metadata for a JPEG you can see proprietary opaque fields like the MakerNote(?).)
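The stripping step itself is trivial, assuming the signature travels in ordinary image metadata and the exiv2 CLI is installed:

```python
# Sketch: remove all metadata (Exif/IPTC/XMP) from the file in place,
# taking any embedded signature with it. Assumes the signature lives
# in metadata rather than in the pixels themselves.
import subprocess

subprocess.run(["exiv2", "rm", "photo.jpg"], check=True)
```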
The digital signature is only as good as the robustness of the crypto chip. It would presumably be a high prize for, say, a state-based actor to be able to compromise the chip and start signing deep fakes to make them look less fake.
Signature or no signature, falsehood spreads well on the internet.
“A lie gets halfway around the world before the truth has a chance to get its pants on.” (attributed to Winston Churchill)
In other words, some number of people will believe and/or spread a deep fake image, whether it is signed or not.
As for privacy, that’s definitely a solvable problem, and you don’t even need to strip metadata; just use one of the zero-knowledge primitives out there. For example, I think you can have a signature that proves that one of the keys from a given set was used, but not which one. Etc.
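The primitive being described is a ring signature: the verifier learns that some key in the set signed, but not which one. A toy sketch for illustration (an AOS-style ring over a deliberately tiny, insecure Schnorr group; real deployments would use a vetted construction over a standard curve):

```python
# Toy AOS-style ring signature. INSECURE demo parameters: illustration only.
import hashlib
import secrets

q = 1019          # prime order of the subgroup
p = 2 * q + 1     # safe prime (2039)
g = 4             # generator of the order-q subgroup

def H(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int(h.hexdigest(), 16) % q

def keygen():
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def ring_sign(msg, pubkeys, j, xj):
    """Prove knowledge of one secret key among `pubkeys` (index j, key xj)
    without revealing which one."""
    n = len(pubkeys)
    c, s = [0] * n, [0] * n
    u = secrets.randbelow(q - 1) + 1
    c[(j + 1) % n] = H(msg, pow(g, u, p))
    i = (j + 1) % n
    while i != j:                       # fill the rest of the ring randomly
        s[i] = secrets.randbelow(q - 1) + 1
        commit = (pow(g, s[i], p) * pow(pubkeys[i], c[i], p)) % p
        c[(i + 1) % n] = H(msg, commit)
        i = (i + 1) % n
    s[j] = (u - xj * c[j]) % q          # close the ring with the real key
    return c[0], s

def ring_verify(msg, pubkeys, sig):
    c0, s = sig
    c = c0
    for i in range(len(pubkeys)):       # walk the ring; it must close
        commit = (pow(g, s[i], p) * pow(pubkeys[i], c, p)) % p
        c = H(msg, commit)
    return c == c0

# Three cameras; camera 1 signs. The verifier learns only that *some*
# camera in the set produced the signature.
keys = [keygen() for _ in range(3)]
pubs = [y for _, y in keys]
sig = ring_sign(b"image-hash", pubs, 1, keys[1][0])
print(ring_verify(b"image-hash", pubs, sig))    # True
```

Verification never identifies the signer: it just walks the ring and checks that the chain of challenges closes, which is exactly the anonymity-set property the whistleblower scenario needs.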
From a more organizational point of view (companies and non-profits: anyone offering a “company phone”), a secure enclave for work products and business data would probably help Linux phones too, especially when the user can be sure that the company isn’t doing anything it shouldn’t either. But such a solution would need some kind of user hierarchy (viewers, users, admins; there should be a way to set up policies for when a person leaves, dies, etc.) or something similar (really, I haven’t thought this through yet).
“Why bother?”, the audience asks. Well, a certain very big global cell company has this kind of product that they market as very secure, even to government users. Except they also sell tools at security events to penetrate them. Funny that…
Well, exactly. I disagree with the stance @Kyle_Rankin expressed, not that I have any say. Enclave-style security is required precisely to protect the user. Lacking it (for example, lacking any capability for secure verification and attestation of application software) leaves users critically exposed, in the name of giving them control over their software and devices, I gather. But giving users too much control necessarily means giving too much control to those who are trying to breach their devices and accounts.
A device with a readily accessible root mode can only be safely used by a handful of professionals. The safety of an average person requires an entirely new approach.
No disrespect to the team, and I understand that resources are limited and have to be intelligently managed, and that this isn’t currently a priority. But in the long term this is absolutely critical.