Can I trust a no-WiFi Purism laptop to generate crypto private keys?

I’m especially worried about the random number generation (RNG). If the RNG is rigged, I have a false sense of security. One day, my crypto could simply be stolen because someone knew all the private keys that could possibly be generated using this laptop.

In order to rig the RNG (in either the CPU or the OS), which individuals within the Purism company would have had to be malicious actors?

Anybody involved with the system firmware or OS, I’d imagine. Luckily, you can replace both of those if you’re not confident in their trustworthiness.


But then no one needs to rig the RNG if the platform itself is compromised: if someone keeps a copy of every random number generated and sends it somewhere, the RNG can be ‘perfect’ and it will all be to no avail.

Whether it’s rigging the RNG or rigging the underlying platform, you get to inspect the source code and decide for yourself.

I wouldn’t necessarily recommend replacing the source code for the RNG. Unless you are an expert, you could easily make it weaker, not stronger, by using your own implementation.
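To make that concrete, a minimal sketch of the safer route: instead of writing your own generator, draw key material from the operating system’s CSPRNG (backed by `/dev/urandom` on Linux) via Python’s standard-library `secrets` module, which is designed for cryptographic use.

```python
import secrets

# Draw key material from the OS CSPRNG via the stdlib `secrets` module,
# rather than from a homemade generator.
key = secrets.token_bytes(32)  # 256 bits of key material
print(key.hex())
```

This still means trusting the OS and the kernel’s entropy pool, of course, which is exactly the trust question being debated here.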

Another option, if you are concerned about subtle flaws in the RNG that a source code inspection might not detect, is to use a hardware RNG (HWRNG) device. However, that just pushes the problem somewhere else: do you trust the HWRNG device?

The ARM SoC in the Raspberry Pi contains a built-in HWRNG, but do you trust it? Most(?) recent Intel CPUs contain a built-in HWRNG, but do you trust it? Or you can choose from any number of USB HWRNG devices, but do you trust them?

It is generally going to be harder to audit a HWRNG device than to audit RNG code, but perhaps also harder to compromise the device retrospectively.
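Whatever source you settle on, kernel pool or external HWRNG, you can at least run crude statistical sanity checks on its output. Passing them proves nothing about security, but failing them is a red flag. A minimal sketch (reading from the kernel pool via `os.urandom`; you could point it at a device file such as `/dev/hwrng` instead, if your system exposes one):

```python
import os

def monobit_ratio(data: bytes) -> float:
    """Fraction of 1-bits in the sample; should hover near 0.5."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (len(data) * 8)

sample = os.urandom(4096)  # swap in a read from /dev/hwrng to test a device
print(f"1-bit ratio: {monobit_ratio(sample):.4f}")  # a healthy source lands near 0.5
```

Real test suites (e.g. the NIST SP 800-22 battery) go far beyond this single check, but the principle is the same.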

“…because someone knew all the private keys that could possibly be generated”: the full set of possible keys is already known for every crypto implementation. What provides the protection is not that the possible keys are unknown, but that nobody can test all of them in a reasonable time frame.
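To put rough numbers on that, here is a back-of-envelope sketch; the guess rate is an arbitrary, very generous assumption:

```python
# Even at a wildly optimistic 10**18 guesses per second, exhausting a
# 256-bit keyspace takes an astronomical amount of time.
keyspace = 2**256
guesses_per_second = 10**18
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace // (guesses_per_second * seconds_per_year)
print(f"{years:.1e} years")  # on the order of 10**51 years
```

So the defense rests entirely on the keyspace actually being that large, which is exactly what a rigged RNG would quietly take away.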

I think what you mean is compromising the RNG to limit the size of the pool of keys, and this basically boils down either to being an expert and inspecting it yourself, or to deciding whom you trust. Currently, if you are not an expert, you are trusting someone, or some group of people, to be doing this correctly.
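To illustrate what “limiting the pool of keys” buys an attacker, here is a toy sketch (the function names and the seed-space size are invented for illustration, not drawn from any real attack): a rigged generator that seeds from only 2**16 possible values makes every “random” key enumerable in seconds.

```python
import hashlib
import random

def weak_key(seed: int) -> bytes:
    """Toy 'rigged' key derivation: the seed space is only 2**16 values."""
    rng = random.Random(seed)               # deterministic once seeded
    return hashlib.sha256(rng.randbytes(32)).digest()

victim_key = weak_key(12345)                # seed unknown to the attacker

# The attacker simply walks the entire (tiny) seed space:
recovered_seed = next(s for s in range(2**16) if weak_key(s) == victim_key)
print(recovered_seed)  # prints 12345
```

The keys look perfectly random individually; only the size of the pool they are drawn from has been sabotaged, which is why this kind of flaw is so hard to spot from outputs alone.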

Personally, I find it easier to trust a group of people who know that everyone in the world can look over their shoulder at any moment than a group of people with only a smaller, less well-informed group of peers looking over their shoulder… But it is still trust: I am not a crypto or RNG expert, and I have to trust that make.random() returns a random number and not the number 4 every time…

With all of this said, it is my understanding that WiFi has no impact on the generation of random numbers. Time may have an impact, so not having internet access (and in turn no time server) may have some effect on the RNG, but this typically comes down to microseconds, which would be impractical to manipulate in any meaningful way.

Why don’t you just use a hardware RNG like the Librem Key? (It has that feature built in.)


Does it work with rngd?

What bitrate can it provide?

Maybe just ask this specific question on the Nitrokey forum, or check the documentation for the Librem Key/Nitrokey.

Forgive me, but that is too general; I’m confused. Which “both” do you mean?

“Firmware” could mean a lot of things. Also, there is firmware present in the Librem 13/15 that is still binary/proprietary; how can you replace that if you are suspicious? Or maybe you were referring to something else…

“Both” = system firmware and OS. Ultimately, you have to trust someone/something. If you don’t trust Intel or Intel’s firmware blobs, then you can’t be sure of anything. If you trust Intel but not Purism, you could build the firmware yourself from source (assuming you trust that the coreboot source is “safe”, or have the ability to audit it yourself) and use whatever OS you wanted as well. But this is true for any device on which you might generate crypto keys.

Do we HAVE to? Does Intel trust us enough to liberate the code? Or is it just one way: I have to trust them, but they don’t have to trust me…

Forgive me, this isn’t directed at you; you’ve done your best to answer…

That only directly applies to the CPU firmware. Even if you had the source code for that, trust would only be as good as your trust in the underlying CPU hardware. For that you would need a fully open-‘source’ CPU, i.e. not Intel.

Precisely. That’s why I said that the lowest level of trust you can give to something/someone is also a measure of the MOST important aspect: freedom. From this, one can reach the conclusion that if you can’t trust something 100%, you are not fully free. And if you aren’t fully free, that means you also can’t defend against the one holding the “leash”. So all this talk about encryption and privacy/security only applies to those not holding the “leash”, but rather to other individuals/entities who are in the same “boat” as we are.