Couldn't microcode with backdoors pose the same threat as IME?

The IME is basically a second processor which has god-mode access to the entire PC, and we are unable to see what it's doing. I don't know much about microcode, but from what I understand it's low-level code which runs on your CPU. If my understanding of microcode and the IME is correct, wouldn't a CPU running malicious microcode have the same powers the IME does? And am I right to assume this would be the case for any CPU, be it AMD, ARM, etc.? If not, please explain.


Assuming that the microcode is only for the CPU and only affects how the CPU reacts to specific instructions and/or data, it would actually have less power than the ME/PSP. The ME can lock out parts of system memory from CPU access, read from and write to whatever memory it desires, and emulate any device it's programmed to. The CPU (at least the "main" CPU, the one you paid for) in an Intel system (and probably an AMD system as well) is just one device on a bus, and it has a lower priority/permission level than the ME, which lives on that same bus.

That does not, of course, make malicious microcode any less dangerous. It's still quite capable of doing horrible things to your system that would be incredibly hard to detect. The only saving grace is that microcode updates are never stored persistently in the CPU itself. So even if evil microcode has hijacked the "update microcode" routine to prevent you from overwriting the evil copy with a clean one, you still have options: power off the system, use an external chip programmer to write a clean microcode blob into the BIOS image on the motherboard's flash chip, then put the hard drive into another system and make sure that whatever microcode the OS loads at boot time is likewise clean.
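As a small sketch of that last check, here is how you might inspect which microcode revision the kernel believes is loaded. This assumes a Linux-style `/proc/cpuinfo` layout where each logical x86 CPU reports a `microcode` field; the sample text below is illustrative, not from a real machine.

```python
def microcode_revisions(cpuinfo_text):
    """Collect the distinct microcode revisions reported per logical CPU."""
    revs = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("microcode"):
            revs.add(line.split(":", 1)[1].strip())
    return revs

# Illustrative sample in the /proc/cpuinfo format; on a real Linux x86
# system you would pass open("/proc/cpuinfo").read() instead.
sample = """processor\t: 0
microcode\t: 0xf4
processor\t: 1
microcode\t: 0xf4
"""
print(microcode_revisions(sample))  # prints {'0xf4'}
```

If the set contains more than one revision, or a revision you don't recognize, that's a reason to look closer; of course, truly malicious microcode could also lie about its own revision, which is why the external-programmer route above is the stronger remedy.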

Yes. Compromised microcode could undermine any kind of hardware security (such as page-table permissions), create intentional vulnerabilities akin to Spectre, or even open extra backdoors triggered by the data being processed. There is an interesting discussion of this:

For modern CPUs it's necessary to have the vendor's signing key to create custom microcode. With some old CPUs from AMD it is possible to upload custom microcode without any checks. But as said, in every case the update needs to be re-uploaded at every boot, either by the BIOS or by the operating system. Since you control both of those in the case of a Librem machine, it's your own choice whether you permit them to upload new microcode from the vendor.
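If you do choose to permit an update, one simple precaution is to compare the blob being loaded against a digest you recorded from a copy you trust. A minimal sketch, assuming you have such a known-good digest (the firmware path in the comment is hypothetical and varies by distro and CPU model):

```python
import hashlib

def microcode_digest(blob: bytes) -> str:
    """SHA-256 digest of a microcode blob, for comparison against a
    digest recorded from a copy you trust."""
    return hashlib.sha256(blob).hexdigest()

# Hypothetical usage on Linux, where OS-loaded updates typically live
# under /lib/firmware (exact filename depends on the CPU family/model):
#   blob = open("/lib/firmware/intel-ucode/06-8e-09", "rb").read()
#   assert microcode_digest(blob) == KNOWN_GOOD_DIGEST
print(microcode_digest(b"example"))
```

This doesn't replace the vendor's signature check; it only tells you that the blob hasn't changed since you last inspected it.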

The existing answers more or less cover it.

Same powers? No
Bad powers? Yes

Some additional comments are:

  • the writeable microcode may be limited in how it can modify the operation of the CPU: there may be core instructions that remain safe even in the presence of malicious microcode. Of course, you may not know which instructions those are, and for an arbitrary CPU the set of safe instructions may be empty
  • writeable CPU microcode is not the be-all and end-all of malice, because the CPU could have malice hard-wired in, i.e. present even with benign writeable microcode; the only true antidote is an open CPU design

No, because a CPU may not have microcode at all. Or, more to the point, it may not have writeable microcode.

I believe that typical ARM implementations do not have writeable microcode. So whatever malice is there, if any, is baked in; at least it can't be hacked in after the fact.

So the key questions regarding microcode are:

  • does the CPU even have microcode? (usually the answer today is “yes”)
  • is the microcode baked in to the CPU or can it be modified after the fact?


One other comment on the Intel ME … as I understand it, the homunculus CPU is intentionally undocumented, intentionally unauditable, and its code is intentionally obfuscated. Very little is known about it. Probably relatively little research has been done into its specific security problems, at least by comparison with the main (x86) CPU, whose problems in recent years have been widely discussed and documented, although a few security problems with the homunculus CPU have come to light over the years.

You really couldn’t design something that is worse from a security point of view.

This is "security through obscurity" at its worst. The verifiable security of an Intel CPU is inversely proportional to the potential potency of malice: the potency of malice can grow without bound while verifiable security goes to zero. That's a pretty dangerous combination.


but maybe it’s a perfect design … for the god-mode-actor that’s profiting from that …


Well, yes, from other points of view it may be a perfect design. Just the worst from a security point of view.


Personal Security and National Security are often at odds with each other.
