Rowhammer on Librem 5 (L5)

Somewhat: https://www.vusec.net/projects/eccploit/ (I didn’t read it all).

2 Likes

You could write in Rust, Haskell, or even Python, and avoid whole classes of traditional bugs.
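As a minimal sketch of one such bug class (a made-up example, nothing from Geary): in C, reading past the end of a buffer or dereferencing a null pointer is undefined behavior, while in Rust both become explicit, checked conditions.

```rust
// Sketch: out-of-bounds reads and "null" values are defined,
// checked conditions in Rust rather than silent memory corruption.
fn main() {
    let buf = [10u8, 20, 30];

    // Checked access returns Option instead of reading stray memory.
    assert_eq!(buf.get(1), Some(&20));
    assert_eq!(buf.get(99), None); // out of bounds: None, not garbage

    // There is no null pointer: a possibly-absent value must be an
    // Option, so a forgotten "null check" is a compile error.
    let maybe: Option<u8> = None;
    match maybe {
        Some(v) => println!("got {v}"),
        None => println!("nothing here"),
    }
}
```

Plain indexing (`buf[99]`) would still be caught, but at runtime as a panic rather than as undefined behavior.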

1 Like

Geary, however, is written in Vala.

Yes, this is correct. I am very pro-Haskell and other pure functional programming languages. Referential transparency at least makes it possible to formally reason about a program’s functional behavior.

1 Like

Can formal verification prove the absence of bugs?

Yes:

formal verification is the act of proving or disproving the correctness

Yes, pretty much. It is not exactly the same thing as with mathematical proofs, but similar. You can at least prove functional correctness, which means no buffer overflows, no null pointer exceptions, no reads or writes outside allocated memory, no deadlocks, no race conditions, and so on. But you could have made an error in the specification, which would then exist in the implementation too. Specs are a lot smaller and more high-level than implementations, though, so the likelihood of errors there is smaller. If you are interested in formal verification, I recommend reading up on seL4, a formally verified OS kernel. Here is one paper to start with: https://ts.data61.csiro.au/publications/nicta_full_text/7371.pdf
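As a toy illustration of what “proving functional correctness” means (a hypothetical example, nothing to do with seL4’s actual proofs), here is a Lean 4 sketch: the theorem is a machine-checked guarantee that holds for every possible input, not just the inputs a test suite happened to try.

```lean
-- Hypothetical spec: clamp never returns a value above 255.
def clamp (n : Nat) : Nat := min n 255

-- The proof obligation: for every input, the output respects the bound.
theorem clamp_le (n : Nat) : clamp n ≤ 255 :=
  Nat.min_le_right n 255
```

seL4 does this at vastly larger scale: the theorem is that the C implementation refines an abstract specification of the kernel’s behavior.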

Most Rowhammer tests like this one don’t work on ARM, because they rely on x86’s CLFLUSH instruction, although there are WebGL and Drammer attacks that do work on ARM. Arm doesn’t think the WebGL attacks are likely to work with Mali GPUs, because of their memory structure and timing, but I have no idea whether the same is true for the Vivante GPU in the Librem 5.

The abstract to this paper about Rowhammer on ARM says:

The rowhammer bug belongs to software-induced hardware faults, and has posed great security challenges to numerous systems. On x86, many approaches to triggering the rowhammer bug have been found; yet, due to several different reasons, the number of discovered approaches on ARM is limited. In this paper, we revisit the problem of how to trigger the rowhammer bug on ARM-based devices by carefully investigating whether it is possible to translate the original x86-oriented rowhammer approaches to ARM. We provide a thorough study of the unprivileged ARMv8-A cache maintenance instructions and give two previously overlooked reasons to support their use in rowhammer attacks. Moreover, we present a previously undiscovered instruction that can be exploited to trigger the rowhammer bug on many ARM-based devices. A potential approach to quickly evicting ARM CPU caches is also discussed, and experimental evaluations are carried out to show the effectiveness of our findings.

Unfortunately the paper is behind a paywall, so I can’t read it.

Vala is generally compiled to C with GObject, which does memory allocation/deallocation through reference counting, so I assume that Geary avoids many of the memory bugs that occur with standard C/C++. On the other hand, Vala with GObject won’t prevent data races between threads the way Rust does. GObject has functions like g_weak_ref_set() to help prevent memory leaks, but most programmers are likely to use strong references and run into deallocation problems when objects hold shared pointers to each other.
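The strong-vs-weak reference problem isn’t specific to GObject. As a rough analogy (using Rust’s Rc/Weak rather than Vala, and a made-up parent/child structure): two objects holding strong references to each other form a cycle that reference counting can never free, while a weak back-reference breaks the cycle.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Hypothetical parent/child pair, analogous to two GObjects
// referencing each other.
struct Parent {
    child: RefCell<Option<Rc<Child>>>,
}

struct Child {
    // A weak back-reference (in spirit like g_weak_ref_set) does not
    // keep the parent alive, so the pair can still be deallocated.
    parent: RefCell<Weak<Parent>>,
}

fn main() {
    let parent = Rc::new(Parent { child: RefCell::new(None) });
    let child = Rc::new(Child {
        parent: RefCell::new(Rc::downgrade(&parent)),
    });
    *parent.child.borrow_mut() = Some(Rc::clone(&child));

    // The Weak pointer doesn't add to the strong count, so dropping
    // `parent` at the end of main actually frees it.
    assert_eq!(Rc::strong_count(&parent), 1);

    // The back-reference still works while the parent is alive.
    assert!(child.parent.borrow().upgrade().is_some());
}
```

If `Child` held an `Rc<Parent>` instead, the parent’s strong count would be 2 and neither object would ever reach zero, i.e. a leak with no compiler or runtime warning.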

1 Like

???

It worked for me. I downloaded it. (I don’t have a subscription to whatever web site that was on.)

My bad. I must have clicked on the wrong link.

If anyone wants to read it, here is the direct link:
https://dl.acm.org/doi/pdf/10.1145/3266444.3266454

1 Like

With certain caveats, yes. To a degree, you still have to trust the C library, the compiler, and the hardware.

seL4 goes to a fairly extreme length, trying to prove that the specification matches the C code, and that the C code matches a binary produced under certain conditions. Most formal proofs don’t go that far, and there are still occasional bugs in the toolchains that pop up from time to time.

1 Like

In the proof of seL4, the compiler and the C library are not in the trusted computing base.

Well, that’s only mostly accurate… It’s assumed the computer being used for the verification is not so compromised that it hands back arbitrary data. If you, for example, use hexdump to verify that some compiled file is as it should be, you are relying on hexdump itself not being compromised. And how do you check that hexdump is not compromised? You can look at its size to make sure it’s too small to contain a sophisticated attack, but that assumes stat(2) is not compromised. These are generally reasonable assumptions, and they can be mostly verified by comparing how long things take relative to some estimate: if hexdump really is the 60K it claims, copying it should be near-instantaneous, so do it a million times and see how long it takes.

The verification of seL4 is very particular about what is in the TCB and what is not. The case you are talking about is the computer doing the verification being compromised. Since the proof and the code are open source, an attacker would have to compromise every computer running the verification, which is not a feasible attack, so I assume that the proof is correct. It may not even be possible, for reasons that the verification engineers working on seL4 could explain. There is another project that I think deals with this kind of problem, called CakeML.

Does the formal proof of seL4 take into account Rowhammer (or, for that matter, random Intel or other CPU bugs)?

I don’t think it takes Rowhammer into account, since that can be triggered by a user process without the OS being involved, but if I remember correctly they found a hardware bug and reworked their hardware model to get around it and complete the proof. It has been a few years since I read the papers, though. When designing a CPU, a lot of strict verification methods are used, probably including formal verification for some parts.

For Rowhammer, I’m not sure… you could keep blank rows on either side of kernel code to protect it, but user programs may still be able to step on each other.

And seL4 does partition caches, which helps against certain types of cache side channels, but Spectre is a pretty deep and subtle bug, so I doubt there is full protection.

A formal proof of these things would have to have a formal description of the hardware’s behavior, which Intel isn’t giving out to just anyone. And newer research shows the fundamental flaw isn’t in the cache but in the contents of speculated registers being accessible, and who knows how that might interact with seL4 internals?

1 Like