A fairly obvious remote attack vector against machines whose BIOS/UEFI firmware is stored on an internally-flashable SPI ROM chip is to use a root-level exploit and then to flash custom firmware to the chip. This has the potential to defeat much of the security of the boot chain and, as a result, of everything above it.
Chromebooks block this vector by means of a screw on the motherboard that must be physically turned in order to flash the ROM. (The screw just (dis)connects the chip’s write-protect pin appropriately.)
Some new Macs block this vector by using Apple’s T2 chip instead of a traditional SPI flash device. (Perhaps the T2 chip is internally flashable, but AFAIK it is not supported by common tools like Flashrom. Figuring out how to flash it is likely to be non-trivial for months if not years to come.)
What about Librems? How do they block this vector? (And if they don’t, then would (de)soldering the chip’s write-protect pin, as suggested by Peter Stuge and others, be acceptable within the warranty?)
AFAICT, there are two possibilities open to a remote attacker who has gained root-level access to an internally-flashable PC with Heads installed and enabled:
1. Flash a ROM image that skips the steps of measuring the bootloader and of authenticating to the user via a 6-digit TPMTOTP number. This is tamper-evident, but detecting it requires the user to notice that the PC has stopped prompting them to check that number against their hardware token (or to perform a hardware dump of the ROM contents and check it against a known-good image). Many users would not notice this, and against those users the attack would succeed entirely.
2. Do what should perhaps be called a “BadHeads” attack, which would be much less tamper-evident. Something like:
(i) measure the existing firmware;
(ii) build a new Heads image that carries a record of the existing firmware’s measurements, communicates those to the TPM (instead of measurements of itself), and processes the result via TPMTOTP as usual, so that the 6-digit number matches the user’s expectation;
(iii) flash this new image to the ROM.
I am not yet certain whether step (ii) is possible.
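My intuition for why it might be: as far as I understand it, the TPM never sees the firmware itself, only the digests that get extended into its PCRs, so a malicious image that replays recorded digests should arrive at the same PCR values and therefore unseal the same TPMTOTP secret. Here is a minimal sketch of that arithmetic (assuming a SHA-256 PCR bank; the blobs and digests are made up purely for illustration):

```python
# Sketch only: simulate how a TPM PCR is extended with measurement digests.
# The resulting PCR value depends only on the digests fed in, not on who
# computed them, so replaying recorded digests reproduces the same PCR.
import hashlib

def pcr_extend(pcr: bytes, digest: bytes) -> bytes:
    """New PCR value = SHA-256(old PCR value || digest)."""
    return hashlib.sha256(pcr + digest).digest()

# Digests the *genuine* firmware would have measured (made-up example data).
genuine_digests = [hashlib.sha256(blob).digest()
                   for blob in (b"coreboot stage", b"Heads initrd", b"boot config")]

pcr = bytes(32)                      # PCRs start out all zeroes
for d in genuine_digests:
    pcr = pcr_extend(pcr, d)

# A "BadHeads" image that has merely *recorded* the digests can extend the
# very same values and reach an identical PCR state, so a secret sealed to
# that PCR would still unseal.
replayed = bytes(32)
for d in genuine_digests:            # replayed from a stored list
    replayed = pcr_extend(replayed, d)

assert replayed == pcr
print(replayed.hex())
```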
A Chromebook-style hardware switch for flashing would make both these attacks impossible for a remote attacker.
Yes, if the user chooses to ignore the feature within Heads that alerts them to tampering (the TOTP code), then attack #1 would work. It would be better for the attacker simply to set a new secret, though: rather than hiding the 6-digit code, the firmware would just display some other code, and a user who isn’t checking the code at each boot (as they should) wouldn’t detect it.
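For anyone wondering what checking the code actually involves: the number Heads displays is a standard RFC 6238 TOTP value, derived from a secret that is sealed against the firmware measurements in the TPM and also enrolled in the user’s phone app or hardware token at setup time (typically by scanning a QR code). Both sides compute, roughly, something like the sketch below (the secret shown is just a placeholder), and the user compares the two 6-digit results at each boot:

```python
# Sketch of an RFC 6238 TOTP computation, i.e. the same maths a phone app runs.
# 'secret' is a placeholder; the real secret is generated when Heads is set
# up, sealed to the TPM, and only released when the measurements match.
import hashlib, hmac, struct, time

def totp(secret, timestamp=None, step=30, digits=6):
    counter = int((time.time() if timestamp is None else timestamp) // step)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# e.g. '492039': the phone/token and Heads should show the same 6 digits.
print(totp(b"example-shared-secret"))
```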
Attack #2 is also possible, but it would need to be custom-tailored for each user: each user’s BIOS will be slightly different (it contains their custom GPG keyring and settings), so the attacker would need to pull down that particular BIOS and couldn’t just use a standard Heads ROM. We have tested this type of attack and it does work! Our plan is to mitigate it for remote attackers by setting the ROM to read-only mode inside Heads at boot time. We also have some plans to mitigate it in the future for local attackers, but we aren’t ready to announce anything along those lines yet.
Currently we have added patches to Heads so that you can flash the ROM from within Heads itself, but we haven’t yet added the feature that sets the read-only bit when it boots into the OS. Once that feature is enabled, though, an attacker would need to be physically present and reboot into Heads in order to reflash the ROM.
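Conceptually, the read-only step amounts to switching on the flash chip’s software write-protection for the whole BIOS region just before Heads hands control to the OS. The sketch below is only an illustration of that idea rather than the code we ship; it assumes a flashrom build with write-protect support, and the exact option names, syntax, and protected range will vary by flashrom version and by board:

```python
# Illustrative sketch: enable SPI flash software write-protection over the
# whole chip before handing off to the OS.  The option names and the range
# below are assumptions; they differ between flashrom versions and boards.
import subprocess

FLASHROM = ["flashrom", "-p", "internal"]

def run(extra_args):
    print("+", " ".join(FLASHROM + extra_args))
    subprocess.run(FLASHROM + extra_args, check=True)

# Protect the entire flash (a hypothetical 16 MiB chip), turn on the
# status-register write protection, then print the resulting state for the log.
run(["--wp-range", "0,0x1000000"])
run(["--wp-enable"])
run(["--wp-status"])
```

In practice the protection also needs to latch until the next reset (for example via chipset-level protected ranges), so that it cannot simply be undone again from the running OS.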
It’s important to remember that our goal with this is to detect tampering while still allowing the user to completely control their device. That’s why, by default, Heads alerts the user to tampering but explicitly doesn’t prevent them from booting into a tampered system. We offer a particular “unsafe boot” option within Heads that allows exactly this (but sets the console background to red to warn the user that this is risky), so they can boot into a tampered-with system to inspect it or otherwise fix it. Because innocent tasks like kernel updates can trigger this, we need to be careful about anything that would lock a user out.
Thank you for your answer, @Kyle_Rankin. Could you or someone else share how one would go about checking said code? I don’t use Heads and don’t know much about it, but if there are measures I need to be taking at every start-up, I’d definitely like to know about them.
That is a matter of psychology. Personally, I suspect that if a user is prompted to check a code, they are more likely to (remember to) do so than if they are not prompted to. But your disagreement with me about this is a minor quibble: if we wanted to settle it, and had the resources to do so, we would try the two variants of the attack on a sufficiently large sample of users to enable a statistically meaningful conclusion to be drawn.
The more important point is that without protection against internal flashing, this sort of remote attack will succeed against some set of users who would otherwise be protected.
Again, thank you for confirming this.
I know. That is the reason for steps (i) and (ii) in my post above.
Thank you. This is good to know.
It is not as strong a protection as a hardware switch, however. A hardware switch “just works”. Heads, on the other hand, is thousands of lines of code. A flaw in Heads could potentially render the ROM capable of being flashed from the OS or from firmware in other devices.
I am well aware that the original developer of Heads is also the lead developer of Thunderstrike 1 and 2, and is therefore familiar with that sort of risk and how to mitigate it. Even so, a hardware switch would not be amiss.
“If you open the case, it blows up in your face!”
Seriously, though: by “local attackers”, do you mean attackers with physical access?
I realise that mitigating a physically-present attacker who is using a port is possible, per Thunderstrike 1 and 2, e.g. via the IOMMU, disabling option ROMs, and so on.
I am not sure how to mitigate against an attacker who is using a SOIC clip, other than:
using obscurely-headed case screws, with tamper-evident seals/varnish over them; and/or
following Apple’s approach of using a BGA chip instead of a SOIC chip (to increase the difficulty of attaching to its contacts), and choosing a chip that few people know how to read from or write to via hardware tools.
I am intrigued to see what Purism has in the pipeline.
Seems reasonable.
I still think a hardware write-protect switch is in order on this front.