PureBoot not so secure: a potentially simple attack vector for someone who has access to the device

Looking at the details of PureBoot BIOS verification, the flow is roughly this (sketched below):

The BIOS sends measurements to the TPM
The TPM releases the secret
The BIOS generates the OTP
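
As I understand it, the equivalent flow with standard tools would look something like the following. This is only a conceptual sketch using tpm2-tools and oathtool, not the actual Heads code, and the object/file names (totp.ctx, /tmp/secret) are hypothetical:

```sh
# Conceptual sketch of the measure -> unseal -> OTP flow (not the actual
# Heads implementation; totp.ctx and /tmp/secret are hypothetical names).
tpm2_unseal -c totp.ctx -p pcr:sha256:0,1,2 > /tmp/secret   # fails if the measured PCRs changed
oathtool --totp -b "$(base32 < /tmp/secret)"                # derive the one-time code shown to the user
shred -u /tmp/secret                                        # wipe the plaintext copy
```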

The problem is that, for the BIOS to be able to generate the OTP, the secret needs to be stored in RAM (if I am correct).
Since I assume the attacker can have access to the device,
injecting a new malicious BIOS that will later pass OTP verification is simple:
remove the DIMM module and install a temporary replacement that allows dumping memory to a third-party device.
Trigger a boot of the original BIOS to steal the OTP secret (intercept it from RAM).
Build a malicious BIOS that does not even talk to the TPM, just generates the OTP.
And voilà… we have a malicious BIOS that will pass.
@Kyle_Rankin, can you comment on that matter?

1 Like

For starters, I suppose I wouldn’t call an attack that requires swapping out RAM multiple times, analyzing it for secrets, and building a new custom firmware with that secret embedded “simple”, but more importantly, it’s not exactly a fast thing to do with an unattended computer. Also, since one can read the contents of the flash chip from userspace later on, a modified firmware can’t hide indefinitely; it could be detected within userspace itself by comparing hashes. In fact, I’ve considered in the past implementing something like that within userspace (most likely within the initrd) to perform that additional verification at boot.
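
For illustration, that kind of userspace check could look something like this. It is only a rough sketch, not an existing Heads feature, and the file paths are made up:

```sh
# Rough sketch of userspace firmware verification (not an existing Heads
# feature; file paths are hypothetical). Requires flashrom and root.
flashrom -p internal -r /tmp/bios-dump.rom     # read the SPI flash contents from userspace
sha256sum -c /root/known-good-bios.sha256      # verify the dump against a stored checksum file
# (the checksum file records the hash of /tmp/bios-dump.rom taken while the firmware was known good)
```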

In any case, people facing that level of advanced physical threat should certainly consider additional physical tamper-evident countermeasures (glitter nail polish on screws, etc) like we offer in our anti-interdiction services. There are also additional discussions of these sorts of things within the Heads Threat Model document, where Trammell also discusses various countermeasures for different threats.

If you look at the code for the unseal-hotp script, you can see that Heads has considered this attack vector and has made it pretty difficult to achieve. In general throughout the Heads code, whenever it is dealing with secrets you will notice they are erased immediately after they are no longer needed (or if there is an error).

The secret is stored in the TPM’s NVRAM but does briefly reside in the ramdisk (/tmp/secret); however, that secret is shredded and erased immediately after it’s used (and during any error).
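
The general pattern looks roughly like this; it is a simplified sketch rather than the literal unseal-hotp code, and the helper functions and file names are placeholders:

```sh
#!/bin/sh
# Simplified sketch of the secret-hygiene pattern described above
# (not the literal unseal-hotp script); the two helper functions
# stand in for the real steps.
SECRET=/tmp/secret

# Placeholder for the real TPM unseal step (totp.ctx is a hypothetical name)
unseal_secret_to() { tpm2_unseal -c totp.ctx -p pcr:sha256:0,1,2 > "$1"; }

# Placeholder for the real OTP step (counter handling omitted)
generate_hotp_from() { oathtool --hotp -c 1 -b "$(base32 < "$1")"; }

cleanup() {
	# Wipe the plaintext secret whether we succeed or hit an error
	shred -n 10 -z -u "$SECRET" 2>/dev/null
}
trap cleanup EXIT

unseal_secret_to "$SECRET" || exit 1
generate_hotp_from "$SECRET"
# cleanup runs on EXIT, so the secret never lingers in the ramdisk
```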

[Edited because at first I was reviewing the wrong unseal file that was included with tpmtotp as an example, and didn’t contain the shred commands. The correct file we actually use in Heads does contain the shred command.]

5 Likes

Thanks for the explanation @Kyle_Rankin,
Forgive my paranoia, but with tampered firmware, checksums can’t be trusted.
I will definitely accept the challenge and try to beat that security model soon :wink: (waiting for the Librem 14)
I’m not saying it is bad; it solves most attack vectors. However, a determined bad actor with enough resources can beat it.
I once saw a DDR3 interceptor in action: an additional device hooked between the RAM and the motherboard that recorded operations. So for someone who knows what to look for… (if those were available for DDR3, they surely exist for DDR4)
I know that realistically it’s a three-phase attack:
1st phase - recon - record the boot sequence
2nd phase - analysis of the recording, to find the secret
3rd phase - implanting the actual backdoor.
but it is possible for actors like the NSA or any other intelligence agency.
So it triggered my imagination.
But to be realistic, 99.99% of us are not a target for such a bad actor.

These checksums would be signed with the user’s GPG key, of course. The underlying question is whether a tampered boot firmware could then also mask the contents of the firmware on the flash chip from userspace, and I don’t believe that it can. It certainly couldn’t hide from hardware flashing tools, and I don’t believe it could within software either.
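
For illustration, signing and later verifying such checksums could look something like this. This is only a sketch with made-up file names, not the exact PureBoot workflow:

```sh
# Sketch only (hypothetical file names; not the exact PureBoot workflow).
# Record and sign a checksum of the firmware with the user's GPG key:
flashrom -p internal -r /tmp/bios-dump.rom
sha256sum /tmp/bios-dump.rom > firmware.sha256
gpg --detach-sign firmware.sha256                 # produces firmware.sha256.sig

# Later, verify the signature and re-check the current flash contents:
gpg --verify firmware.sha256.sig firmware.sha256
flashrom -p internal -r /tmp/bios-dump.rom && sha256sum -c firmware.sha256
```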

3 Likes

A person facing this kind of threat would not have such a thing as an “unattended computer”.

So that really leaves physical violence as the threat model, and there’s only so much that a computer can do about that.

1 Like

Far too many security architects forget about this sort of thing. They model for the most extreme and least likely threat first, without really considering the victim in their threat model, and will throw away a security measure if it doesn’t happen to suit that extreme case.

The better approach to me, and the one I try to take, is to start with the more likely threats that impact the most people, resolve those, and then work toward the extreme threats.

A great example of this kind of thinking is folks who focus on and worry about 0-day defenses, but don’t have a patch management system in place so they can efficiently patch 20-day-old bugs.

Applied to PureBoot, it means we focus first on the threat from a remote attacker installing a kernel rootkit and possibly attempting to persist that rootkit through a modified BIOS. That’s a far more likely attack, and one that would impact a far larger group of users, than an advanced Evil Maid cold boot attack. Along the way, though, if we design our measures well, we can start to nibble away at those advanced Evil Maid attacks and make them impossible, or at least impractical.

With the right approach you are more likely to come up with security measures that protect the average person from the threats that they are most likely to face, and do it in a way the user may actually use, instead of disable. If you start with the spy threats, you end up with high security measures the average person will just disable.

2 Likes

The beauty of the Librem 5 is that it makes it more realistic to avoid an “unattended computer”. Fewer Evil Maids in the world …