…and maybe we don’t yet want to accept that?
While recently reading a lot about Purism and coreboot, I noticed that they tend to emphasize their connection to FLOSS.
Yet while I can browse their repositories, I cannot seem to find the actual Memory Reference Code (MRC) there in plain text.
Since I also knew that they readily use or support those FSPs, that might explain why I cannot just go there and spot the silicon initialization code for my own laptop’s CPU, the north bridge, or the south bridge.
(And my laptop’s CPU is merely of the SandyBridge generation, yet not even its initialization source seems to be fully available in coreboot.)
Likely all of this is just contained in that opaque FSP blob, and thus not readable by humans.
So I started to philosophize a little and asked myself quite a few things:
How can coreboot still be supporting CPU generations as late as Cometlake?
Because my thinking is that the later an Intel CPU generation, the worse its chip documentation must be, to the point where you can no longer initialize the chip because you simply don’t know how, and the old source code no longer works.
(And Intel doesn’t want you to initialize things in the first place, since otherwise you might stumble across hidden “features”.)
Aren’t the latest chips the most complex ones, and don’t they possibly harbor the newest and darkest backdoors, which the (FLOSS) community just hasn’t had time to discover yet?
And I’m not even talking about the ME (quite frankly, it must have been an error in the universe that such a recent ME got hacked at all), but rather about e.g. Intel SGX, and even wilder security features yet to come.
And integrating not only the NB (north bridge) but the SB (south bridge) too into a single CPU chip hasn’t really reduced the already huge complexity of Intel chips, has it?
I suspect that in generations to come there might even be a tiny little radio in the CPU which no one knows about, since it is of course undocumented.
And yet coreboot will support that hypothetical Intel CPU, since it ships with that neat little FSP blob, which takes care of the radio too.
Are they going to support those chips for as long as there is an FSP for them?
Disclaimer: I don’t like the FSP approach, so I haven’t researched it much. But I’d guess those FSPs aren’t decreasing in size?
What is the latest CPU architecture whose initialization firmware or FSP has been fully reversed or leaked, or otherwise is open for that matter?
I.e. this must include that pesky Intel reference code as well as the MRC.
I’d go with “SandyBridge”, due to the AMI BIOS source leak back in the day.
Now guess what: SandyBridge is 8 to 9 years old, which may mean that every “open” BIOS implementation since then isn’t actually fully open.
Has anyone ever been able to reverse an FSP for the later processor generations, such as the *lakes?
On the other hand, why has the coreboot community still been able to neutralize or even disable the ME on a processor generation as late as Cometlake? (Refer to the Librem 14, which also uses coreboot and has a disabled ME.)
Does coreboot by any means make use of the secure-enclave technology (SGX) that comes with Skylake and later processors?
What about their use of further technologies such as TXT, STM, BiosGuard, BootGuard, to name a few?
Btw: By “open” I’m just referring to the source code being human-readable on a (gratis) website such as GitHub, or coreboot’s own site, for example.
So a leak, for instance, is only “open” for a limited amount of time.
And by BIOS I’m collectively referring to the CPU reset code as well as everything from PEI (Pre-EFI Initialization) and DXE (Driver Execution Environment) up to the last instruction that still originates from the flash chip, before control is finally handed to the boot media.
Btw2: This post might appear a little coreboot-heavy. I’m still posting it in this forum since Purism, too, has put great work into coreboot, especially when it comes to the later CPU generations.
I think your post needs to come with a glossary i.e. XAD.
I think everyone (in this forum) recognizes that closed-source code is a problem and that the more of it that there is, the less desirable it is e.g. from the perspective of auditing for correctness, including auditing for backdoors.
I can only touch upon your questions.
Two things have been done with the Intel ME.
- Remove inessential modules.
- Disable it after completion of the essential activities.
How is it possible that this can still be done as recently as Cometlake? Two answers:
a) Because Intel deigns to allow it. In other words, Intel could suddenly deign not to allow it for some future generation and beyond. And some would stop using Intel.
b) Because the TLAs demand it (in the case of the second thing).
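As a concrete illustration of the second of those two things: tools like me_cleaner disable the ME by setting the so-called HAP (“High Assurance Platform”) bit, a strap in the flash descriptor that asks the ME to halt after its essential early-boot work. A minimal Python sketch of the bit-flip itself, under the assumption of a purely made-up strap offset (the real HAP/AltMeDisable location differs per chipset generation, and me_cleaner looks it up properly):

```python
def set_hap_bit(image: bytes, strap_offset: int, bit: int) -> bytes:
    """Return a copy of a flash image with one descriptor strap bit set.

    Illustrative only: the real HAP/AltMeDisable bit lives in the flash
    descriptor's PCH straps, at an offset that varies by generation.
    """
    data = bytearray(image)
    # Set bit `bit` of the little-endian strap word at `strap_offset`.
    data[strap_offset + bit // 8] |= 1 << (bit % 8)
    return bytes(data)

# Toy 32-byte stand-in for a flash image (a real dump is megabytes),
# with a hypothetical strap word at offset 0x10:
image = bytes(32)
patched = set_hap_bit(image, strap_offset=0x10, bit=16)
# Bit 16 of the strap word lands in byte 0x12, bit 0.
print(hex(patched[0x12]))  # -> 0x1
```

In practice one would dump the flash with an external programmer, run me_cleaner over the dump (which can also remove the inessential modules, the first of the two things above), and write the result back.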
I would doubt it. This is the antithesis of open source.
However I don’t speak for Purism in any way.
It wouldn’t surprise me if there is a documented WiFi implementation in the CPU in some future generation. It might still need an antenna though.
It would be a step in the wrong direction.
PS Judging by the number of Intel CPU bugs, Intel CPUs have become too complex, regardless of anything to do with the FSP.
You got me =) At least on DuckDuckGo I couldn’t find a viable explanation for “XAD”.
I’ll try to explain every acronym that I use in my future posts. Unfortunately, when speaking of Intel, there are a lot to explain.
That’s indeed a good point!
Maybe they even have another secret chip already, which hasn’t been discovered yet; hence they’d rather focus on the secret chip than attempt to re-hide/seal the ME, which the coreboot folks already know way too well.
I’m happy with my MacBook Pro from about 2012. All steps since have been going in the wrong direction if one were to ask me…
But I must admit that I’m somewhat conservative regarding technological advance.
Though a more complex FSP is likely going to be the result.
But yeah, there is so much stuff that I certainly don’t need nor want.
On another note, I researched the FSP issue further and learned that the developer who goes by “KaKaRoTo” at least tried to shed some light on the darkness of early silicon initialization by attempting to fully reverse an Intel FSP.
Unfortunately, he appears not to have published any further posts on the matter after being warned by Intel over their intellectual-property nonsense.
Naïve me. I should’ve expected it to end that way.
I wonder if one day I’ll still be able to do the early silicon initialization on my Mac using my own firmware…?
(Better not trying until the end of home office though ;P)
Yes. I was teasing you. eXcessive Acronym Density.
At this point, the x86 architecture has 42 years of cruft to deal with, and it has to compete with chips like the Apple M1 with 16 billion transistors. Processor design has gotten so complex that it seems like the best route is to reduce the size and complexity of the instruction set for decent performance per watt.
So far it looks like RISC-V will have even better performance per watt and better performance per mm² of die than ARM, so it seems like the best policy to switch to leaner and more efficient ISAs. All of this is exciting from a freedom perspective. Switching from x86 to ARM gives us chips that can run on 100% free software. Switching from ARM to RISC-V will give us free/open source CPU cores. I can see a future where we design the GPU, VPU, neural processor, and the other parts of the modern SoC in software and have them all execute on very efficient RISC-V cores, so we get to a 100% free/open source SoC.
What does “FSP” mean here?
Intel’s “Firmware Support Package”
That’s true (started ugly, didn’t really get any better 1) but then
“they” have been saying that for a while. Some RISC ISAs have come and gone without denting x86’s market share, except in the embedded (appliance) market.
I think it is better to be agnostic about what does or will give better performance or better performance per Watt (both of which are important, in their own specific product areas) - unless backed up by figures.
As with any claims about performance, including performance per Watt, the benchmark has to be representative of a target workload and the scenario not rigged to give an unusually good benchmark result.
Of course the “freedom” considerations are very important to us but won’t be important to many.
For me at least the “security” considerations are also very important (i.e. does the CPU even work properly? I don’t want “fast but buggy”.).
1 Since “no one” programs in assembly language today, it doesn’t matter so much whether the ISA is ugly or beautiful, inconsistent or consistent. There’s a reason why RISC is facetiously held to stand for Relegate Important Stuff to the Compiler. The compiler has to maximize the potential of the ISA, RISC or CISC, otherwise real world compiled code won’t perform as well as benchmarks, regardless of how good your ISA or your CPU is.
Hahaha, I really had to laugh in real life since I didn’t know that one.
I don’t know if we can expect this. The RISC-V ISA is open, but a core implementation may be closed, as far as I understand.
Most implementations are proprietary, but there are currently 4 groups that have created free/open source implementations of RISC-V, and a bunch of companies (NXP, Alibaba, nVidia, Huawei, etc.) are now collaborating to work on the free/open source RISC-V cores in the OpenHW Group.
Anyone can take those cores and put them into an FPGA to test them (as shown by bunnie Huang’s recent project), and once they have a debugged design, they can start making ASICs with them.