Why don't we sign firmware for USB devices in 2019?

Is there not a way to devise a USB standard/protocol where we can sign the firmware and prevent attacks like BadUSB or Rubber Ducky?

Off the top of my head …

If it were simple, it would already have been solved. As a general guide to what this problem is up against, consider the comment from early on:

There’s no way to get the firmware without the help of the firmware, and if you ask the infected firmware, it will just lie to you

First of all you need to define what you are concerned about. There are two scenarios.

  1. You trust the original manufacturer but you are worried about the possibility that at some point since then, the device has been tampered with.
  2. You don’t trust the original manufacturer.

Looking at scenario 1, there are a number of possible solutions.

(a) the manufacturer prevents update of the firmware, period, i.e. the original firmware has no capability to replace itself

The downside is that if a functional, performance, security or other issue is discovered after manufacture, it can’t be fixed. Either the issue is tolerated or the device is thrown away. For a $5 flash drive, that might be acceptable and a small price to pay.

New features can never be added (but in an era of predatory governments that might not be a bad thing).

It is also not completely robust because someone with physical access to the device could either physically replace the chip that stores the firmware or replace the entire device with an apparently (but not actually) identical device. It is however robust against any USB host-based attack.

(b) the manufacturer prevents update of the firmware unless the new firmware is digitally signed by the manufacturer

(maybe this is approximately what you are suggesting)

A significant improvement over the current situation from a device-integrity point of view. It requires no changes to USB (since it is implemented entirely within the device itself) and no logic on the host side. The new firmware comes from wherever, the host provides it to the device, and the device either accepts it or it doesn’t.
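To make that concrete, here is a rough sketch of what the device-side check could look like, modeled in Python purely for readability (a real check would live in the device’s bootloader; the image layout, the key handling and the choice of Ed25519 here are my assumptions, not how any actual USB controller does it):

```python
# Sketch of option (b): the device only accepts firmware updates signed by
# the manufacturer. The layout (64-byte Ed25519 signature appended to the
# image) and all names are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- manufacturer side (done once, at release time) ---
manufacturer_key = Ed25519PrivateKey.generate()
firmware_image = b"example firmware image v2"          # stand-in for a real image
signed_update = firmware_image + manufacturer_key.sign(firmware_image)

# The matching public key is baked into the device (ROM/OTP) at manufacture
# and can never be changed, which is why no USB or host changes are needed.
DEVICE_BAKED_IN_PUBKEY = manufacturer_key.public_key()


# --- device side (runs inside the device on every update attempt) ---
def accept_update(blob: bytes) -> bool:
    """Return True only if the blob carries a valid manufacturer signature."""
    if len(blob) <= 64:                                 # Ed25519 signatures are 64 bytes
        return False
    image, signature = blob[:-64], blob[-64:]
    try:
        DEVICE_BAKED_IN_PUBKEY.verify(signature, image)
    except InvalidSignature:
        return False
    # Only at this point would the device erase and reprogram its flash.
    return True


print(accept_update(signed_update))                     # True
print(accept_update(b"EVIL" + signed_update[4:]))       # False: image was tampered with
```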

The downside is that, in a sense, that approach is anathema to us: you can’t run your own software on the device, and the software on it is beyond your control and can’t be changed. Perhaps you can live with that for a keyboard or a flash drive.

The downsides regarding physical replacement in point (a) still apply.

In addition it might be possible to downgrade the firmware, thereby reintroducing a previously fixed security issue (since older versions of the firmware still bear a valid digital signature).
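A common way to close that hole (again just an illustration, not something proposed above) is to put a monotonically increasing version number inside the signed image and have the device refuse anything older than what it has already run. Continuing the assumptions of the earlier sketch:

```python
# Sketch: rollback protection layered on top of the signature check.
# Assumes the first 4 bytes of the *signed* image are a little-endian
# version counter, and that the device persists the highest version it
# has ever run. Layout and names are illustrative assumptions.
import struct

highest_version_ever_run = 7   # would live in the device's own NV storage


def accept_version(image: bytes) -> bool:
    if len(image) < 4:
        return False
    (version,) = struct.unpack_from("<I", image, 0)
    # A validly signed but older image is rejected, so a previously fixed
    # security issue cannot be reintroduced by re-flashing an old release.
    return version >= highest_version_ever_run
```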

There would be serious downsides if a manufacturer’s private key escaped into the wild, and keys may never be able to be revoked or expired (under the assumption that the device has no access to the current time and no access to the internet).

(c) the device could come with a physical firmware write-protect switch

This protects against the situation where the device remains in your custody at all times but is connected to computers that you don’t trust.

For more sophisticated devices in particular, a physical switch could be replaced by a software switch.

While this doesn’t directly relate to the problem that you may be suggesting (i.e. the opposite: the computer doesn’t trust the device), it is part of the picture because of “unsafe sex” between devices, e.g. a flash drive that is plugged into a compromised computer and then later on plugged into another computer.


Thanks. I was thinking (b) and (c) in combination. That would be the best approach.

I understand secure boot doesn’t gain much respect in the open source world, but I kind of do like the idea of key signing for devices and isolation of loading. If you could somehow design a protocol where you could sign the firmware YOURSELF (i.e. hash it after purchasing or updating it) so you could periodically check it for tampering, then that would make me happy. I would know that my device is safe. The write switch would be ideal here too.

Not all devices support reading out the firmware - and even if they did, if the device has been compromised with bad firmware, the firmware can lie to you (as per my quote above). So it is difficult to verify the device after the fact (which is why “checking for tampering periodically” is difficult). Hence it may be better to focus on ensuring that compromise does not occur in the first place.

For the same reason, it may be difficult to sign the firmware yourself. Maybe you can download it from the manufacturer, sign it yourself and then flash it to the device. However, if you can’t download it from somewhere and you can’t read out the firmware, then there is nothing to sign.

But let’s say you can get the firmware one way or another.
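In that case the nearest do-it-yourself equivalent is purely host-side: record a hash of the image you decided to trust and check it before every flash. A small sketch (the file names are my own placeholders):

```python
# Sketch: pin a SHA-256 of a firmware image you obtained and decided to
# trust, then refuse to flash anything that doesn't match it later on.
# File names are illustrative.
import hashlib
from pathlib import Path

PIN_FILE = Path("firmware.sha256")      # where the pinned hash is kept


def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def pin(image: Path) -> None:
    """Record the hash of an image you have decided to trust."""
    PIN_FILE.write_text(sha256_of(image) + "\n")


def ok_to_flash(image: Path) -> bool:
    """Check a (possibly re-downloaded) image against the pinned hash."""
    return sha256_of(image) == PIN_FILE.read_text().strip()
```

Note that this only tells you the file on the host is the one you pinned; it says nothing about what the device is actually running, which is exactly the limitation above.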

Signing anything yourself raises all sorts of issues. It would require specific support in the device just to allow it at all. If you can sign it, then so can the bad guy; the mere possibility of self-signing opens up additional avenues of attack.

Honestly, for a $20 keyboard (or mouse) or a $5 flash drive, I reckon (a). I can live without bug fixes and new features - and at the same time live without compromised USB peripherals.

In case it is not obvious, in the case of a flash drive, the write-protect switch for the firmware is independent of any write-protect switch for the drive content. It would be valid to write-protect the firmware but not the content, valid to write-protect both, and it isn’t for me to say that it is invalid for someone to want to write-protect the content but not the firmware.

In some implementations they are not independent, because the drive content write-protect switch is enforced in firmware, so if the firmware is compromised then so is the content write-protect switch. So it goes without saying that both switches should be implemented in hardware if possible.

Of course most flash drives don’t even offer a write-protect switch for the content.

Does this not concern you with regard to the phone’s ROMs and flash components?

I’m sure Purism has considered all these issues. There are limits to what can realistically be achieved, especially for a v1. If you want more comprehensive and more specific information, you would have to ask Purism directly. As the phone is not released yet, it is not possible to investigate the full security picture of individual components of the system.

I believe that boot path security is on their radar.