Librem 5 security

TL;DR: Other smartphones can boot securely, preventing malicious software from being injected into the boot chain. Could the Librem 5 get a similar feature?

I’ve noticed that the i.MX8/i.MX6 documentation lists ‘High Assurance Boot’ and ‘secure key storage’ features, which is pretty interesting. I wonder if it might be possible to get the Librem 5 to boot securely, meaning if it detects unsigned/tampered firmware, it won’t boot, or it will boot but lock down or wipe the secure key storage.

Lots of phones offer this sort of functionality, but the OEM controls the keys burned into the eFuses, so typically it’s abused to limit user freedom and to pull dirty tricks like voiding the user’s warranty and putting scary warnings on the boot splash. But it would be interesting to see a security implementation that preserves the goal of ‘if an adversary tampers with my phone, I want it to lose access to its own encrypted storage’ but doesn’t hand over control of the system to the OEM.

Or perhaps Purism could control and securely sign some pre-bootloader security firmware, but that firmware (in addition to being open source) could be specifically designed to maximize freedom while preserving security: it would let you modify your system in any way you please, but if it detected any changes not signed with a previously supplied, user-generated key, it would forcibly forget its encryption keys, which you may or may not have chosen to encrypt your storage with.

Or maybe a combination of both plans. You could supply the device with its efuses unburnt, and let the user decide ‘do you want to install a Purism supplied secure pre-bootloader, or do you want to install your own with your organization’s own keys and code, or do you want to just forget all that stuff and live with the fact that physical access can be used to install a keylogger or some other malicious software into your boot chain?’

There are lots of interesting possibilities here for how to handle security, with a freedom-respecting company at the helm.

7 Likes

Setting aside all the TEE and hardware vulnerabilities, how will it be ensured that an attacker cannot simply install their own OS or firmware and extract the “hardware-secured” private keys, as Beniamini did with Qualcomm’s secure element?
Are signed software and a root of trust even compatible with Purism’s goal of a free and open source phone?

Edit: http://bits-please.blogspot.de/2016/06/extracting-qualcomms-keymaster-keys.html

1 Like

I think using it is a good idea, but keep in mind that this can easily go south

Yes… though only if the device owner and end user is the only one with the signing keys. Assuming that there aren’t any vulnerabilities in the trusted world bootcode (I’ll give you a minute to stop laughing), you can use it to ensure that nobody can subvert your phone by installing boot-time trojans. The main opposition from our perspective to commercial phone usage of trusted enclaves is that we are not the ones who control that root key and as such are not the actual device owners.

Qubes OS, for instance, makes use of a TPM - https://blog.invisiblethings.org/2011/09/07/anti-evil-maid.html and https://www.qubes-os.org/doc/hcl/

2 Likes

I’d love to hear from Purism on this topic. :+1:

2 Likes

I haven’t heard anything about it since this initial post, but Purism is nominally working to implement the Heads firmware, which I think either implements a TPM itself or makes use of a TPM (I’m no expert)

https://puri.sm/posts/purism-collaborates-with-heads-project-to-co-develop-security-focused-laptops/

2 Likes

This topic has received very interesting content. Most of it is not related to TrustZone itself, though, and rather focuses on another part of the available security measures: the bare-metal level, boot, and static core root of trust measurement.
@jeff if you can split the thread between Trustzone and boot security, feel free!
(also the forum engine is currently lacking multiquote)

@simne7 You asked if signed software and hardware-based root of trust are compatible with the project ethics.
Here is what I found, please correct me when I am wrong!

Trusted boot vs measured boot
NXP iMX boot security takes the trusted boot approach: it chains signed modules, each of which must be validated for the boot process to continue, with the root of trust burned into the chip’s fuses. Another classic method is measured boot, where you first launch the machine and afterwards verify that no tampering or attack occurred.
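As a rough illustration of the difference, here is a minimal Python sketch (the image contents and reference hash are made up; real firmware obviously does none of this in Python):

```python
import hashlib

# Hypothetical boot image and its known-good reference hash (illustrative only).
image = b"bootloader-image-v1"
expected = hashlib.sha256(image).hexdigest()

def trusted_boot(image, expected):
    """Trusted boot: verify first, refuse to continue on mismatch."""
    if hashlib.sha256(image).hexdigest() != expected:
        return "halt"  # the boot chain stops here
    return "run"       # control passes to the verified image

def measured_boot(image, log):
    """Measured boot: always run, but record a measurement for later attestation."""
    log.append(hashlib.sha256(image).hexdigest())
    return "run"       # tampering is only detected afterwards, from the log

log = []
print(trusted_boot(image, expected))        # run
print(trusted_boot(b"tampered", expected))  # halt
print(measured_boot(b"tampered", log))      # run (but the log now holds the bad hash)
```

The key design difference: trusted boot trades availability for integrity (a tampered device refuses to start), while measured boot keeps the device bootable and pushes detection to a later attestation step.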

Documentation (iMX6)

  • iMX6 reference manual - IMX6DQRM
  • iMX6 Linux High Assurance Boot User Guide - IMX6HABUG
  • Secure boot for iMX6 using HABv4 - AN4581
  • Code Signing Tool (CST) available for download with NXP registration - HAB4 API and HAB CST User Guide
  • iMX6 UL Security Reference Manual - IMX6ULSRM (available under OEM request and NDA)

Boot stages:
1 BootROM (read-only)
2 Bootloader (e.g. U-Boot, roughly the ARM equivalent of coreboot on x86)
3 Linux kernel

Trusted boot process
The fused root of trust is a hash of multiple Super Root Keys (SRKs). On the iMX6 you can have 4 of these SRKs so you can revoke compromised ones (with yet other fuses), and you choose one of them for a given boot.
Each Super Root Key is a public RSA key.
The bootloader image is signed with the corresponding private RSA key. Simply put, it will be authenticated only if the RSA signature verifies and the public key matches the one committed to the fuses.
There is a Public Key Infrastructure, a tree derived from the root of trust, with subkeys and certificates.
The chain of trust built around the PKI tree is implemented as a sequence of commands written in the Command Sequence File (CSF), stored after the bootloader image.
First the CSF is authenticated with keys and certificates derived from the PKI tree. If it passes, the bootloader image must pass authentication too; then comes the next step, loading the bootloader, and possibly continuing the chain of trust by authenticating the Linux kernel, and so on.
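To make the sequence concrete, here is a minimal Python sketch of that HAB-style flow. Everything here is simplified and made up (key material, table layout, sizes), and HMAC stands in for the actual RSA signature verification, which the Python standard library does not provide:

```python
import hashlib
import hmac

# 1. Four Super Root Keys; only a hash of the whole table is burned into fuses.
srk_table = [b"srk0-public-key", b"srk1-public-key",
             b"srk2-public-key", b"srk3-public-key"]
fuse_srk_hash = hashlib.sha256(b"".join(srk_table)).digest()

def verify_srk_table(table, fuse_hash):
    # BootROM recomputes the hash of the presented SRK table and compares
    # it against the fused value before trusting any key in it.
    return hashlib.sha256(b"".join(table)).digest() == fuse_hash

def verify_stage(key, image, signature):
    # Stand-in for the RSA check: the image is accepted only if its
    # signature verifies under the selected key.
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# 2. Sign, then authenticate the CSF and the bootloader in sequence.
key = srk_table[0]
csf = b"command-sequence-file"
bootloader = b"u-boot-image"
csf_sig = hmac.new(key, csf, hashlib.sha256).digest()
boot_sig = hmac.new(key, bootloader, hashlib.sha256).digest()

assert verify_srk_table(srk_table, fuse_srk_hash)
assert verify_stage(key, csf, csf_sig)          # CSF authenticated first
assert verify_stage(key, bootloader, boot_sig)  # then the bootloader image
print("chain of trust OK")
```

Note how any single substitution, an SRK table that does not hash to the fused value, or a stage image whose signature does not verify, breaks the chain before the next stage ever runs.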
The bootloader image can be encrypted in addition to being signed: this uses other independent modules, the Cryptographic Accelerator and Assurance Module (CAAM) and the Secure Non-Volatile Storage (SNVS).
I found IMX6HABUG page 2, the ‘HAB for dummies’ write-up on the encrypted U-Boot image layout, and the vulnerability link further below especially informative.

CAAM stream cipher, full-disk encryption…
CAAM and SNVS can later be used for stream ciphers (AES-256…), for example full-disk encryption.
Much like the Android SHK and the Apple Secure Enclave, the CAAM stream cipher uses a hardware-enforced key that is rather ill-documented. This key, called the Data Encryption Key (DEK), is derived from:
1 a randomly generated key (DEK, KEK?)
2 a device-specific key burned in fuses, some sort of UID (secret key/OCOTP Master Key?)
3 a passphrase (?)
The DEK is stored in a blob, the DEK blob, which is apparently not accessible through software: to me it looks more like the robust Apple Secure Enclave scheme, with the stream cipher happening in a black box, than the Android SHK scheme where FDE keys can be extracted, as explained in Beniamini’s post linked above.
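The general “derive one key from several inputs” pattern listed above could be sketched like this. This is purely illustrative: the input names, the KDF choice, and the sizes are my assumptions, not the documented CAAM/SNVS internals, which (as noted) are a black box:

```python
import hashlib
import os

# Illustrative stand-ins for the three inputs listed above (all made up):
random_key = os.urandom(32)               # 1. randomly generated key material
device_uid = b"fused-device-unique-id"    # 2. device-specific value burned in fuses
passphrase = b"user passphrase"           # 3. optional user-supplied secret

# Combine the inputs with a slow KDF so the result depends on all three;
# losing or wiping any one of them makes the derived key unrecoverable.
dek = hashlib.pbkdf2_hmac("sha256",
                          passphrase,
                          random_key + device_uid,
                          iterations=100_000)
print(len(dek))  # 32 bytes, usable as an AES-256 full-disk-encryption key
```

The useful property this illustrates: because one input is fused into the chip, the derived key is bound to the device, and because another can be forcibly forgotten, the encrypted storage can be rendered unreadable on tamper detection.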

So, to try and answer your question @simne7: the chain of trust is not a complete black box. The U-Boot bootloader is open source, and you can program most of the pre-bootloader sequence (CSF) using the Code Signing Tool, which relies on standards like OpenSSL… You can even tweak the CST; check HAB CST UG Appendix B, ‘Replacing the CST Backend Implementation’.
It is far from fully documented either, as open hardware would be, especially the CAAM… And I guess most of that documentation is in the one file under NDA…

Boot end-user control
@TungstenFilament “The main opposition from our perspective to commercial phone usage of trusted enclaves is that we are not the ones who control that root key and as such are not the actual device owners.”
Nice point.
First, there will be fused data that not even Purism can program: the Master Key device ID is set by NXP. We can only take them at their word that they do not keep a record of it…
But other fuses are programmable, especially the SRKs, meaning you could choose your own private RSA keys to sign the bootloader with. If you are really concerned about security, be aware that security on the host platform where you run the CST is of utmost importance, as it stores the private keys and acts as the certificate authority for the PKI.
So @nicole.faerber, would we get to choose between off-the-shelf bootloader and the option to program fuses ourselves?
Do you think we could get access to the NDA-only IMX6ULSRM, or its equivalent if (when) you ship the iMX8?

Audit and issues
I have no skill to thoroughly audit the boot process, but its general structure may look OK.
It is quite complex, though, and security does not go well with complexity. Researchers have found vulnerabilities: they managed to run arbitrary bootloaders using stack overflow bugs. NXP errata here.
@taylor-williamc Trammell Hudson working on the Trusted Platform Module is nice, and it might have to do with Purism’s possible announcement of the ME going from neutered to disabled? :slight_smile: Although he seems to specialize in x86, he could still audit the general iMX structure, I guess. Maybe ARM experts like Gal Beniamini could chime in?

Lastly, I think we should keep in mind that the Librem 5 project will first make a big push toward privacy from the GAFAM-NATUs. Security is much harder, I believe, and anyone setting up a security plan against nation-level intelligence agencies should keep in mind that, at 2M, this is a rather small project :slight_smile: Found these kinds of measures at Hudson’s site, damn oO

5 Likes

Is it a security concern when I hand somebody my phone, or do they have to escalate privileges to do anything harmful/malicious? My Linux knowledge is pretty basic, so even though I searched, I probably didn’t understand the answer in front of my face.