I’m not sure that Intel would be the one to “want to”, but proprietary infrastructure and the lower levels of code are the easiest places to conceal a backdoor or rootkit… though it’s a fairly sophisticated type of attack.
Purism customizes the IME image on the IME storage chip, removing every module not required to keep the computer running (there is one more that could be stripped, but doing so makes the computer shut down after ~30 minutes of use). The theory is, since it’s modular, the modules which would allow remote control are successfully removed. Of course, as all the modules are closed source and use model-specific undocumented registers, we can’t know this. It’s probably safe. Then again, this is Intel, which has been cutting corners on security to win on performance for years, so…
Note that even with most of the IME removed, it still has a direct connection to the integrated LAN port. If you want to further reduce the risk of remote control, plug in a separate ethernet/wireless card and use that (technically, the IME could gain access to it via the PCI bus by hijacking the main CPU, but doing so without detection, given the limited brain of the IME code, is unlikely).
Or try to find a board which doesn’t have the NIC integrated into the chipset the IME controls (good luck with that with Intel; try IBM’s POWER9 boards or maybe some server boards). Recent AMD boards might be an option; I haven’t looked. First-gen Ryzen boards let you trade problems: the PSP (AMD’s IME equivalent) can’t access the network, but the southbridge chipset is known to have security issues, and it has access to the NIC. I believe the current-generation Ryzen boards have that issue fixed (the southbridge is an AMD product too), but I’ve not gone looking for problems with them.
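(On Linux you can at least see where each NIC physically sits before deciding whether to trust it. Here’s a small sketch using the standard sysfs layout — it maps each network interface to its PCI address by resolving the `device` symlink. As a rough heuristic only: a function directly on the root segment, e.g. `0000:00:1f.6`, is usually chipset-integrated, while add-in cards show up on another bus number behind a bridge.)

```python
import glob
import os

def nic_pci_addresses(sys_net="/sys/class/net"):
    """Map each network interface to its PCI address by resolving the
    sysfs 'device' symlink. Virtual interfaces (lo, bridges, ...) have
    no such symlink and are skipped automatically by the glob."""
    addrs = {}
    for link in glob.glob(os.path.join(sys_net, "*", "device")):
        iface = os.path.basename(os.path.dirname(link))
        addrs[iface] = os.path.basename(os.path.realpath(link))
    return addrs

if __name__ == "__main__":
    for iface, addr in sorted(nic_pci_addresses().items()):
        print(iface, addr)
```

Cross-check the printed addresses against `lspci -t` to see which devices hang off the chipset and which sit behind a separate bridge.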
Personally, my next upgrade will probably be to an AMD home-server board, which exposes its management engine on a dedicated NIC. Yes, that relies on AMD not putting anything bad in there unintentionally (if it were intentional, they could bake it into the CPU and it would be essentially impossible to prevent).
Aye, the only problem is I absolutely can’t justify a Threadripper. You have to need serious multi-core compute power for that, or I suppose more than 24 PCIe (gen4) lanes. The home-server boards are X470/X570 and take the mainstream Ryzen CPUs, but with the server management features and suitable for rack mounting (at half the price or less). Something like the ASRock X470D4U.
There’s a non-trivial probability that there’s a backdoor somewhere in the IME to aid surveillance efforts, but it’s far from a near-certainty. For the typical user, the real concern is that undocumented, closed-source blobs that deep in the system lead to difficult-to-patch issues when nation-state tools get leaked, as happened with ETERNALBLUE. But this is all speculation. As much as these types of things tend to be.
That is all I know about this but perhaps you find it useful.
It’s really frustrating. AMD, of course, has issues with respect to the Spectre family of vulnerabilities and there’s speculation (there’s that word again!) about PSP. Having this Coke/Pepsi scenario is rather untoward. I don’t like it one bit.
… wouldn’t help at all in this case. The problem is the existence of the “secondary” processor (actually, it’s the main processor - you can’t even boot your system if it fails) which only runs code that has been signed (or encrypted?) by the manufacturer’s private key. It wouldn’t matter if it were a RISC-V chip there if it’s still got that “someone else’s code” requirement.
I’m confused by your response. Can you clarify? I’ll start by clarifying my comment: the idea of being “publicly auditable” was intended as a requirement for all CPU specifications. So if there’s a chip within a chip, as the IME and PSP are architected, it would, in my mind, fall under that umbrella of auditability. In my mind, that would override “someone else’s code”. Can you show me where you think I’m missing something?
Couple that with the fact that the homunculus CPU can alter the state of the main CPU, and it’s not good. You would be asking whether you own your own computer - or someone else owns it on your behalf.
I don’t know of any way you could really “publicly audit” the instruction set (on any CPU), but even supposing you could do so on both the homunculus CPU and the main CPU, and that you had verifiable source for the code the homunculus CPU runs (a highly optimistic assumption), you still wouldn’t be in control of your own computer. (For example, if the homunculus CPU opened a backdoor, you would know about it but you couldn’t close it.)
@kieran and @TungstenFilament - I understand now. The difference in our dialogue comes down to normative versus positive viewpoints: I’m taking a normative stance and you’re responding from a positive one.
No issue here. I thought perhaps there was something intractable about the IME/PSP situation that I was unaware of. Nope, just a choice by the manufacturer to embed an enterprise-friendly backdoor. We have no disagreement.
I have no problem with the ME/PSP requiring signed code. I have a problem with not knowing what that code is. Heck, the thing is simple enough that it wouldn’t even be a big deal to lack the original source code; there are still enough of us who can read assembly to trace and recreate it from the signed binary (a bit expensive, but easily within reach of a crowdfunded or corporate review). The problem is the existence and use of model-specific registers (MSRs) and undocumented model-specific opcodes. The MSRs are often documented, letting you query things like the temperature sensors embedded in the CPU. But there are additional undocumented instructions. Sometimes we can figure out what they do via trial and error and timing attacks, but doing so is incredibly time-consuming (expensive), and you can never be certain what an undocumented opcode did. They can (and often do) even change the behaviour of other opcodes.
This means the first time your reviewer of the PSP code hits an undocumented opcode, you have no idea what happens next. Heck, they could release the original source code, with an inline assembly section for those undocumented opcodes, and you’d still have no idea what happens.
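(For contrast, the documented side is genuinely open to inspection. On Linux, the msr driver exposes each core’s MSRs as a seekable device file, so reading the thermal status register is just an 8-byte pread at the register’s address. A sketch — the register number and bit layout below are from Intel’s public documentation; it needs root and `modprobe msr` to actually run:)

```python
import os
import struct

IA32_THERM_STATUS = 0x19C  # documented Intel thermal status MSR

def rdmsr(msr, cpu=0):
    """Read one 64-bit MSR via the Linux msr driver.

    Requires root and the msr kernel module ('modprobe msr')."""
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        (value,) = struct.unpack("<Q", os.pread(fd, 8, msr))
    finally:
        os.close(fd)
    return value

def degrees_below_tjmax(therm_status):
    """Bits 22:16 of IA32_THERM_STATUS: degrees Celsius below Tj,max."""
    return (therm_status >> 16) & 0x7F

if __name__ == "__main__":
    print(degrees_below_tjmax(rdmsr(IA32_THERM_STATUS)))
```

That’s what a documented register looks like: a published address and a published bit layout you can verify. The undocumented ones give you neither.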
Yep. That’s why I prefaced my comments with the assumption that you could actually publicly audit the instruction set i.e. there exists complete and correct documentation for the CPU and you can verify that the CPU behaves exactly as described in the documentation (and in this case that would have to apply both to the homunculus CPU and the main CPU).
Or worse, they don’t even know they hit an undocumented opcode. As the sandsifter authors noticed:
On Intel processors executing in 64 bit mode, the 66 override prefix appears to be ignored, and the instruction consumes a 4 byte operand, as it does without the prefix. Most disassemblers misinterpret the instruction to consume only a 2 byte operand instead (those that assume a 4 byte operand still miscalculate the jump target, assuming it is truncated to 16 bits). This difference in instruction lengths between the disassembled version and the version actually executed opens opportunities for malicious software. By embedding an opcode for a long instruction in the last two bytes of the physical instruction, the physical instruction stream can hide malicious code in the following instruction. Disassemblers and emulators, thrown off by the misparsing of the initial instruction, miss this malicious code in the subsequent instructions. (…)
As a demonstration of the impact on emulators, we created a program that runs as a benign process in QEMU, but executes a malicious function when run on baremetal (figure 7). The same program, analyzed in IDA, objdump, Capstone, or Visual Studio, will also appear to not execute the malicious code.
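The desynchronization that quote describes is easy to model with a toy decoder. This is a deliberately simplified sketch, not real x86 length decoding: two decoders are given different length rules for one hypothetical opcode, and every byte after that instruction parses differently — the “malicious” bytes vanish into an operand in one view while standing alone in the other.

```python
def decode(code, length_of):
    """Split a byte string into 'instructions' using a length oracle."""
    insns, i = [], 0
    while i < len(code):
        n = length_of(code[i])
        insns.append(code[i:i + n])
        i += n
    return insns

# Hypothetical ISA: for opcode 0xE9 the "CPU" consumes a 4-byte operand
# (6 bytes total), while the "disassembler" assumes a 2-byte operand
# (4 bytes total) -- mirroring the prefix confusion in the quote above.
# Opcode 0xCC stands in for a malicious one-byte instruction.
cpu_len = lambda op: 6 if op == 0xE9 else 1
dis_len = lambda op: 4 if op == 0xE9 else 1

# Byte 4 is another 0xE9: the disassembler parses it as a second long
# instruction that swallows the 0xCC bytes into its operand, while the
# CPU's parse leaves them as standalone instructions.
code = bytes([0xE9, 0, 0, 0, 0xE9, 0, 0xCC, 0xCC])

print(decode(code, cpu_len))  # CPU view: the 0xcc bytes decode on their own
print(decode(code, dis_len))  # tool view: the 0xcc bytes hide in an operand
```

Once the two parses diverge at byte 4, they never have to resynchronize — which is exactly why the QEMU-vs-baremetal demo in the quote works.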