Do CPUs have ONLY the capacity to execute particular instructions and NOTHING ELSE?

I’m worried that Intel or AMD might be able to do something like this:

If these particular private-key generation steps that Bitcoin Core performs are being executed, replace them with these particular steps instead.

The corporations could steal people’s crypto like that.

  1. Would this be possible for them, if they wanted to?
  2. Do modern operating systems, such as Debian or PureOS, have any measures against such malicious CPUs?

CPUs have undocumented features. Intel even has an extra core running its own OS, MINIX. Look it up.
However, to do real harm, it would need a way to communicate. To avoid this, Purism does not pair it with Intel networking hardware. For your scenario, it would not only need to send out the keys it found, it would also need to receive updates to adapt to new versions of the targeted crypto software. Theoretically possible, but rather unlikely.

Also, Purism removes the potentially malicious code from the CPU.
Look up “Purism Intel management engine”, especially the blog posts that will show up.

The OS has no (reliable) way of detecting malicious behavior of the machine that executes it. If you are male, go back to the beginning of this paragraph, and continue reading.

Now, either you are not male, or you just don’t give a **** about my instructions. I can’t force you to obey my commands. Only if you reply will I know you were not caught in my infinite loop of reading that one paragraph :wink:

In pop culture, the idea that you cannot know whether what you see is reality was popularized by The Matrix (the movie).

In computers, the level of the OS that can access everything without restrictions is called ring 0. Your applications run in higher rings and only see what ring 0 allows them to see.
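
A tiny, purely illustrative sketch of that point (not from the posts above): on x86 a user-space program can at least read which ring it is currently running in, because the current privilege level sits in the two lowest bits of the CS segment selector.

```c
/* Sketch only (x86-64, GCC/Clang inline assembly):
 * the current privilege level (CPL) is the two lowest bits of CS.
 * A normal user-space program prints ring 3; the kernel runs in ring 0. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t cs;
    __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
    printf("CS selector = 0x%04x, current ring = %u\n", cs, (unsigned)(cs & 0x3));
    return 0;
}
```

Anything below ring 0 (the Management Engine, SMM) is invisible from here, which is exactly where the talk of ring -1 and ring -2 below comes from.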

As modern CPUs have a life of their own, it became necessary to speak of a ring -1, and even a ring -2.
To make things worse, if you update the firmware of your CPU, replacing it with a clean one from Purism, and it says “success” … how do you know the CPU is not just pretending it did that? For 100% certainty you need a hardware flash tool that writes directly to the chip.
And even then, how do you know there is not a second, hidden firmware that is activated by a secret command? You don’t.
The movie on that is called Inception.

Sorry if you will now never again trust anything with a chip inside. Well, you shouldn’t trust them. But also, you shouldn’t have asked, Neo.
Now you need to follow the white rabbit all the way down the rabbit hole. (*)

You’re welcome.

(*) While I thought that was a clever pun, I just realized that The Matrix already cites Alice in Wonderland, and Neo actually is supposed to go down the rabbit hole. I wonder if Alice met Bob at the mad tea party and they had a chat about cryptography in a crazy environment where you can trust nobody.


Are you asking about the laptop or the phone?

Yes - and there really isn’t much that can be done about this. As the previous reply suggests, if the CPU can’t be trusted then your computer can’t be trusted. Simples.

There are two potential issues with private key generation. 1. Is the key intentionally weak? 2. Is the key strong but leaked? There is the vague possibility of using a dedicated external chip to generate strong keys but clearly the CPU can subvert that, if it wanted to.
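
One partial mitigation that gets mentioned in this context (just a sketch under the assumption that you have two or more independent entropy sources, not something proposed above): mix the sources, so that every one of them would have to be bad for the key material to be weak. For illustration the sources here are the kernel CSPRNG and the CPU’s RDRAND instruction; a dedicated external chip could play the same role. And yes, the CPU doing the mixing could still cheat, which is exactly the point made above.

```c
/* Sketch only (Linux, x86-64, compile with -mrdrnd): XOR-combine 256 bits
 * from the kernel CSPRNG with 256 bits from RDRAND. Both sources would
 * have to be compromised for the result to be predictable. */
#include <stdio.h>
#include <stdint.h>
#include <sys/random.h>
#include <immintrin.h>

int main(void)
{
    uint64_t os_part[4], cpu_part[4], key[4];

    if (getrandom(os_part, sizeof os_part, 0) != (ssize_t)sizeof os_part)
        return 1;                                   /* kernel CSPRNG failed */

    for (int i = 0; i < 4; i++) {
        unsigned long long r;
        while (!_rdrand64_step(&r))                 /* retry until RDRAND succeeds */
            ;
        cpu_part[i] = r;
    }

    for (int i = 0; i < 4; i++)
        key[i] = os_part[i] ^ cpu_part[i];          /* weak only if both are weak */

    for (int i = 0; i < 4; i++)
        printf("%016llx", (unsigned long long)key[i]);
    putchar('\n');
    return 0;
}
```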

In theory you might put all secure communication outside the CPU (between the CPU and the other party e.g. in the network chip itself, as an additional offload) but again there would be various ways that that might be subverted.

If this is your biggest concern then use a computer that does not have an Intel or AMD CPU but instead has a CPU from a company that you do trust (even though you will probably not have a basis to trust that other company either).

Without a way to communicate, it could still weaken your cryptographic operations in a subtle way, so that when you use that cryptography in the world at large, it is not secure. (The Debian weak keys bug comes to mind, though obviously that wasn’t caused by CPUs meddling with instructions.)
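
To make the Debian example concrete: roughly speaking, the only input that still varied in the broken generator was the process ID, which caps the number of possible keys at a few tens of thousands, so an attacker can simply regenerate them all and compare against an observed public key. A toy sketch (the toy_keygen function is purely hypothetical, not the actual OpenSSL code):

```c
/* Toy sketch of a keyspace collapsed to the process ID (hypothetical
 * derivation function, not OpenSSL): enumerating every possible PID
 * recovers the seed behind an observed "weak" public key almost instantly. */
#include <stdio.h>
#include <stdint.h>

static uint64_t toy_keygen(uint32_t pid)             /* stand-in for key derivation */
{
    return (uint64_t)pid * 0x9E3779B97F4A7C15ULL;
}

int main(void)
{
    uint64_t victim_pub = toy_keygen(12345);          /* the observed public key  */

    for (uint32_t pid = 1; pid <= 32768; pid++) {     /* historic default PID_MAX */
        if (toy_keygen(pid) == victim_pub) {
            printf("seed recovered: pid = %u\n", pid);
            break;
        }
    }
    return 0;
}
```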

The catch would be that it would be difficult to pull off and the risk of discovery would be quite high over the long term.

Better if you can design a ‘bug’ in the CPU that can plausibly be claimed to be an innocent mistake and then exploit that bug at a later date to get some more specific exploit code onto it from some random network node that has no apparent connection with you. That way, there’s far less awkward explaining to do.


where would Micro$oft be without its gaming user base? or without GitHub?

the new Halo Infinite (6th in the series) has a bigger budget (approx. $500 million) than the latest Avengers blockbuster movie (approx. $400 million).

not to mention that practically the majority of “PC” users only give a crap about hardware when it can run x-game at minimum requirements. non-free CPUs are a big concern, but to make matters even worse you have a second dedicated computer running inside your “black box” - and that is the GPU on the graphics card, which is an even bigger mystery than the CPU.

how did we reach this point? we did not listen to RMS 35 years ago and we STILL consider non-free software and hardware design LEGAL.

on the software side there is the copyleft GPL to counter copyright (the thing which makes non-freedom-respecting software code legal, and which can potentially turn-to-the-dark-side ANY non-copyleft piece of software around)

for documentation purposes there is the GFDL (GNU Free Documentation License), which complements the GPL.

sadly there isn’t anything sufficiently powerful currently being done to counter PATENT law in the realm of hardware and firmware design and manufacturing (it’s non-free by default if it needs to be guarded by a patent)

good topic! thanks to everyone who contributes …

later edit:

here is a perfectly good example of how a state-of-the-art cryptographic scenario can be ruined