Eric Schmidt, Ex-Google, Wants a Kill-Switch on Your CPU

Eric Schmidt, the former Google chairman, told Reuters in a recent interview that high-end processors should have kill-switches.

“Knowing where the chips go is probably a very good thing. You could, for example, on every chip put in essentially a public-private key pair, which authenticates it and allows it to work.”
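For what it's worth, the scheme Schmidt describes is just challenge-response authentication with a per-chip keypair. Here is a minimal sketch of the idea, using textbook RSA with deliberately tiny toy numbers; all names and numbers are invented for illustration, and real silicon would use hardened key storage and constant-time crypto:

```python
# Toy sketch of per-chip challenge-response authentication:
# each chip holds a private key, and a verifier (the vendor, or a
# would-be kill-switch operator) holds the matching public key.
# Textbook RSA with tiny primes -- illustration only, not secure.
import hashlib
import secrets

# Hypothetical per-chip RSA keypair (demo-sized primes).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, "burned into" the chip

def chip_sign(challenge: bytes) -> int:
    """The chip proves possession of its private key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verifier_check(challenge: bytes, signature: int) -> bool:
    """The verifier checks the response against the chip's public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)
sig = chip_sign(challenge)
print(verifier_check(challenge, sig))   # True: authentic chip passes
```

Note that the same mechanism that "authenticates it and allows it to work" is exactly what turns into a kill switch: the verifier only has to stop answering, or strike the chip's public key from its whitelist, and the chip no longer "authenticates".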

What he won’t tell you is that this is already a reality, as I learned after my air-gapped system and Pixel phone were wiped remotely for researching “silent speech interfaces”, a topic Google would rather the public not know about. There is no security when silicon trojans are inside every CPU.

Can Purism make a laptop that isn’t vulnerable to what Eric Schmidt wants?

1 Like

Why should a tool maker be held accountable for what others do with the tool? If it is sold, it is sold. No ownership, no responsibility.

From the article:
Marvell’s experience is one of a myriad of examples of how chipmakers lack ability to track where many of their lower-end products end up,

And this is how things should be.

2 Likes

Yes, that’s why we have no serial numbers on bullets…

1 Like

How is that workable for a computer that is not online? Even if online, a firewall could interfere with operation of this mechanism. I expect that the check could be subverted too through clever use of network hackery.
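The tension in that question can be made concrete: a remote-authorization check has to pick a policy for the moment the server is unreachable, and both options are bad for the operator. A minimal sketch, with all names invented:

```python
# Sketch of the design dilemma for a remote kill-switch check on a
# machine that may be offline or firewalled. When the authorization
# server is unreachable, the chip must either fail open (keep
# running, so a firewall trivially defeats the switch) or fail
# closed (brick every air-gapped machine). Function names are
# hypothetical; no real vendor API is implied.

def authorize(query_server, fail_open: bool) -> bool:
    """query_server() returns True/False, or raises OSError if blocked."""
    try:
        return query_server()
    except OSError:
        return fail_open  # no network: the policy decides

def blocked():
    """Simulates an owner-controlled firewall dropping the check."""
    raise OSError("firewalled by the owner")

print(authorize(blocked, fail_open=True))   # True: firewall defeats the switch
print(authorize(blocked, fail_open=False))  # False: air-gapped machines brick
```

Either branch supports the point above: fail-open is subvertible with a firewall rule, and fail-closed is unworkable for computers that are simply not online.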

Intel must be fairly close to this already.

I’m pretty sure that close to 100% of Purism customers will think that remote disable functionality on “your” CPU is a terrible idea.

Indeed. Something that regular forum participants here are keenly aware of. That doesn’t mean that there are a lot of alternatives at the current time i.e. alternatives to trusting the CPU manufacturer (whether an Intel CPU in your laptop or an ARM CPU in your phone).

Not sure what you are implying here. Could you elaborate?

That implies that there is a big case for open design. The days of closed design are numbered, and aware users will switch to RISC-V (or similar initiatives), even if performance is initially lower. Nobody likes to be suddenly switched off by some idiot politicians in a country that runs on sanctions and lives off stolen property.

If you can identify the bullet that kills people, then the bullet manufacturer can be sued for selling bullets to minors, mentally unstable people, misuse in general …

As far as I remember, some lawmakers suggested a law to force bullet identification. Of course this law has no chance at all of being implemented (most politicians know their sponsors).

In free countries, the owner of the phone should have a right to have or not have any features they want on their phone. The fact that Google and Apple don’t respect your rights doesn’t mean that those rights don’t or shouldn’t exist. Almost anything can be used as a weapon.

The US has complex export laws that prevent the shipping of technology to hostile countries that can use the technology in weapon systems. Perhaps we can put kill switches only in CPUs that are shipped to Iran and North Korea and other hostile countries. That’s about as aggressive as the restrictions should get.

Such a law would be sick. It would accomplish nothing, except perhaps move the bullet-making industry out of the country. Also, there is a difference between identifying the manufacturer of goods and proving that he sold them to the evil-doers.

Moving responsibility for action from user to manufacturer is against basic logic.

In software, this topic has been discussed extensively by the FSF, OSI, and others, and they came to the conclusion known as “No discrimination against fields of endeavor”, which boils down to: if your license forbids using the software to kill people, then it is not a free license.

This sounds shocking, but either you hold users responsible for their actions, or you end up arbitrarily killing[1] hardware manufacturers or software developers for something that they have no control over.

As for hardware with remote kill switches: if I were a manufacturer, I would never do such a thing. First, it is against my view of the world, as explained above. Second, I don’t like the prospect of being held accountable for actions not my own, should the remote switch fail.

To sum up, the kill-switch on a CPU is a terrible idea:

  • It is against the freedom of people to use computers as they wish,
  • It moves responsibility to the wrong place: from the actual performer of the evil action to the manufacturer of an all-purpose tool,
  • Hardware manufacturers (or software developers, social media platforms, etc) are not law enforcement institutions and should not be,
  • It gives governments yet another tool to oppress their own citizens; it feels awfully similar to the anti-terrorist “solutions” that have been deployed after 9/11 and are now being abused,
  • It will backfire on you sooner or later - backdoors tend to be discovered and used by third parties,
  • It would most probably be rendered ineffective in exactly the cases where it would actually matter. Russia may be a sinking ship, but it still has considerable resources. And the very thought of using such a tech against China just makes me laugh.

[1] Or imprisoning, or making bankrupt… Same thing really.

4 Likes

Not only that, the US even has laws to prevent the import of high-tech equipment into the country… :stuck_out_tongue: In this way the US falls farther behind than it already is.

1 Like

How do US laws prevent high-technology imports? What body of laws regulates the import of technology into the US?

1 Like

Hunter Biden screens the companies!

Maybe I’m wrong, but the last I heard of was a ban on advanced 5G equipment from China… The general vehicle for this is sanctions, sanctions, sanctions, justified by danger to national security.

That applies to the kill switch case, but not the tracking case. Retaining control of a thing shifts responsibility. Marking a thing does not. And if you want to find out where the responsibility for misuse falls, all you need is to learn about the supply chain.

Yes, but at least some believe this is because it’s pragmatic to stay focused instead of trying to fix all the wrongs:

why not immediately begin using all the tools, mechanisms and strategies used for FOSS advocacy to advocate for these other causes? The TL;DR answer is simple: because these tools, mechanisms and strategies are highly unlikely to have any measurable impact on those other causes, while using them for these other causes would ultimately minimize software freedom and rights unjustly.

Regardless of the politics of it :slight_smile: … just about everyone in this forum wants to own their own hardware. That means no one anywhere for any reason gets to operate a remote kill switch on your computer - unless it’s operated by you yourself because your computer has been lost or stolen. So we are all in furious agreement.

Even if someone thought that a remote kill switch were a good idea … “silicon trojans” raise more questions than they answer.

Can I audit this silicon trojan code?
What other functionality is in this silicon trojan code?

It isn’t entirely clear to me to what extent this topic is a theoretical discussion. Does the silicon trojan already exist? The Intel homunculus CPU probably comes close.

1 Like

It seems possible that the US is falling behind China on the technology front. If so, then the only way to remain safe would be to ban all technology that we can’t keep up with, so as to better understand its vulnerabilities before relying on it. Hardware viruses are a real possibility. But controlling other people’s devices to keep society safe is actually very unsafe.

That’s the fastest way back to the stone age (except for a nuclear war).