VisionFive 2 RISC-V

Right, and this is exactly why I would only accept firmware blobs that give me certain freedoms, one of which is the ability to control them (like swap them out) and the freedom to copy them anywhere I want - also to back them up for later restore. On my Linux system no vendor has the ability to swap out firmware blobs - WTH!? These blobs sit inside my root partition, over which only I have control, not some $vendor.

And this is how it ought to be, nothing else is acceptable, and this is also how firmware blobs in Linux are for the most part handled. Firmware files for mainline kernel drivers are stored in kernel.org’s git:

https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/about/

Full version control, full transparency - except for the licensing of some binaries there, which I personally have a bit of a problem with. But in general this is the kind of ‘good’ firmware blob. Full user control.

Since this can IMHO be a security problem, one could also think about signing firmware. It could go like this: you, at your own discretion, install a firmware. If you deem it ‘good’ then you sign it with your private key. The kernel firmware loader can then check the signature each time it loads the firmware and refuse to load it should the signature not match. This ensures that no malicious firmware can be sneaked into your system - as long as you keep control over your private key.
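To make that concrete, here is a minimal sketch of such an owner-signing scheme in Python, using the `cryptography` package with Ed25519 keys. All file names are made up, and note that the mainline kernel does not verify detached firmware signatures like this today - the verify step below just stands in for what a loader could do:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_firmware(private_key, blob_path, sig_path):
    """After vetting a firmware blob, sign it with *your* private key."""
    blob = open(blob_path, "rb").read()
    with open(sig_path, "wb") as f:
        f.write(private_key.sign(blob))

def verify_before_load(public_key, blob_path, sig_path):
    """What a loader could do on every load: refuse the blob unless the
    signature matches the key you control."""
    blob = open(blob_path, "rb").read()
    sig = open(sig_path, "rb").read()
    public_key.verify(sig, blob)  # raises InvalidSignature on mismatch
    return blob

# Round trip with a freshly generated key pair and a dummy blob:
key = Ed25519PrivateKey.generate()
with open("/tmp/fw.bin", "wb") as f:
    f.write(b"\x7fFW-dummy-image")
sign_firmware(key, "/tmp/fw.bin", "/tmp/fw.bin.sig")
try:
    verify_before_load(key.public_key(), "/tmp/fw.bin", "/tmp/fw.bin.sig")
    print("signature OK - safe to load")
except InvalidSignature:
    print("refusing to load: blob or signature was tampered with")
```

The point is that only the holder of the private key can produce a valid signature, so any swap or tampering of the blob gets caught at load time.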

A firmware hidden away in some storage location you know nothing about will not protect you. In many cases this storage is not true read-only ROM but flash memory. (Hardly anyone uses real ROM, since the firmware would then have to be bug free at production time - it can not be updated later. And even where it is ROM, there is usually a way to upload RAM patches, which in effect is a runtime firmware upload.) So most of the time firmware resides in r/w flash and can be updated. But you may not know that it is flash and updatable, and you may also not know the method for updating it, because that method is as proprietary as the flash content itself. Someone else can know it, though! And since you don’t know, you also can not know how to protect yourself from a malicious update. So a bad actor could change your hidden firmware without you noticing. This would not be possible if the firmware is under your control, in your sight, and possibly signed by a key that only you control.

So, still no, I do not see the point or advantage of hidden-away firmware blobs.

And again, yes, totally, I also want to get rid of these blobs, don’t get me wrong. But I would like to partition the problem. Longing for full freedom is one thing, a good thing and one that we all strive for, no doubt. But I also want reasonable ways to deal with the current reality. And let’s be honest, the situation will not get better, but much much worse. So we can either limit ourselves to ancient hardware or come to terms with the current situation somehow and figure out workable paths that do not compromise on core ideals (they must not limit the freedom to use free software in any form or fashion, and must not endanger security) but still enable us to use current hardware.

Cheers
nicole

2 Likes

Talking about evil, I feel like the devil’s advocate when I say that I do not think Intel had evil plans with this. Taking a step back and trying to have a kind of neutral look at it, the ME is a pretty logical development for a chip maker that has to satisfy the requirements of millions of customers, many of them large organizations. Imagine a rollout of hundreds or thousands of devices in such an organization. You can not reasonably have an army of admins running around from one computer to the next, updating systems, fixing non-booting machines etc. You want to be able to do this remotely, from a central location. That’s what the ME, the Management Engine, was initially designed for - or at least that’s what I think happened.

So they put this extra small CPU core into the CPU package, which, for such management tasks, needed to be pretty tightly integrated and sitting at a very low level. It had to be, if you e.g. want to be able to reinstall the OS through it, which makes sense in such a large-org environment: “My laptop does not work anymore!” - “OK, wait, we will reinstall the org’s standard OS image, you can log in again in a few minutes.”.

But then there are also other functions that are hard to realize with just the APU, like some power management tasks, RAM training etc. So they thought, hey wait, we have that ME that has access to all of that and that is already running a piece of code. Let’s stick this in there too!

And so the functions of the ME grew and grew. And since it is also a “security domain” (it can reinstall the OS remotely!) it had to be “protected” etc. etc.

That’s at least the story that I have in my head when I think about how this could have come into being.

But like anything, this obscure piece of computer can of course also be exploited, and since it is not under our control this means it is a potential risk - which has already been proven many times.

So evil? Well, I do not think so, at least not initially. You can of course also speculate about conspiracies with three letter agencies, since these will try to pressure anyone in their favor, and maybe there are backdoors. But I seriously doubt that the ME was purposefully developed by Intel to allow this. It just kind of “naturally” happened over time.

All that does not make it any better or the ME more digestible or desirable. No one likes it. And AMD’s version of it, the Platform Security Processor (PSP), is in no way better than the ME, it’s just a bit different. And ARM? Well, ARM is moving in such a direction too. Many functions now get moved into the so-called ATF, the ARM Trusted Firmware. Most of it is still open source, but some parts are beginning to become pretty obscure. But I expect ARM to get into rough waters soon anyway, so I am not so super concerned about them anymore, more about who is coming after that :wink:

Cheers
nicole

2 Likes

That’s really the point I was making, with respect to firmware: “free” is not binary, it is not black and white. There are shades of grey regarding what you can do, what benefits you get and what potential problems you get.

The shades of grey include things like … is it truly ROM? is it not persistent at all? is it persistent but can’t be updated directly by the operating system? is it persistent and can be updated directly by the operating system? Those all have different implications.

On the question of signatures … yes, allowing me to sign the firmware addresses the downgrade attack but doesn’t address the more concerning issue of vendor signatures. If you look at the Purism messaging around who controls the keys, and why they have taken their approach to boot integrity, then vendor-signed firmware is a bad direction to go in, because it directly transfers control from the owner of the computer to the signer of the firmware.

(You would think anyway that Purism has the downgrade attack covered through the normal boot integrity mechanism, not to mention encryption of the root partition.)

(In the worst case, to pick up @Skalman’s example, the firmware could theoretically contain a time bomb so that it just stops working after a certain time. The assumption here is that all three versions in my original comment contain the same time bomb and that the versions differ only in their other functionality and in their vulnerabilities.)

So the shades of grey include things like: who signs the firmware?

Intel ME takes the signing to the next level. Not only is it signed (so you can’t alter it), but it is encoded in such a way that you can’t even meaningfully look at it. It is the ultimate black box. That makes it difficult even to assess the security risk.

Intel could fix this by removing the encoding and publishing the source. In doing that they wouldn’t be giving up any control - derivative works would be useless, since any modified firmware would still have to be vendor-signed - and of course they wouldn’t be giving up any functionality. They would just be showing the world that they have nothing to hide - and no, I’m not going down the rabbit hole of three letter agencies. :wink:

All true.

But we also have to conclude that all of this very same logic applies equally to RYF (“Respects Your Freedom”, the FSF’s hardware certification) devices.

The most important point I want to make here is that RYF does not protect us, it does not make anything more secure and in some cases it even limits freedom. That was my main point.

The storage location of the firmware alone does not solve any problem. In this regard I find RYF superfluous and even harmful.

Cheers
nicole

3 Likes

I think the goal should be a 100% open source computer, everything everywhere open.

This may not be economically feasible, so as close to that goal as we can get should be what we are aiming for.

:worried:

Also, the newer Intel video cards like the A770 require the Intel ME to work, so I really suggest Purism stop using Intel and go for SiFive RISC-V CPUs or LibreSOC CPUs.

1 Like

Yes, it is very unfortunate that Intel ties this closed ME further and further into their ecosystem. We have to watch this very closely, and at the point where a fully working ME becomes unavoidable for the system to work reasonably, we will at the latest have to part ways with Intel - unless Intel opens up the ME in some way too.

But the problem really is that there are, right now at least, no serious alternatives. The AMD PSP can not be inhibited at all (in contrast to Intel’s ME, which can at least mostly be disabled via the HAP bit). The RISC-V ecosystem looks promising since it seems to inspire more openness at the other silicon block vendors (GPUs etc.), but a free ISA is in no way a guarantee for that. The ISA is just one piece of the puzzle.

But what I hope for is “leading by example”. For the past decades the silicon industry was super closed; pretty much everything in it was highly proprietary, patented etc. That’s how companies like ARM made their fortune: they “license” this “IP” (intellectual property, I hate this term) to others, who combine ARM’s proprietary blocks plus some of their own into silicon. Or Intel, AMD etc., who do all of that on their own, down to the silicon level, with their own fabs and silicon processes. This very proprietary way of working has served the silicon industry for many years, basically since the invention of the silicon chip in the 1960s.

But we are now reaching an inflection point, I think. A similar change that happened in the software industry is now starting to happen in the silicon industry and by that I mean the change that free and open source software has brought.

Two or three decades ago the world was almost 100% dominated by fully proprietary software. Software was treated as the holy grail of the industry, pretty much like proprietary silicon technologies are now. Software was a great revenue and profit machine! Once developed, you can make copies that cost almost nothing (and this was before the internet, when you still had to make physical copies on disks :slight_smile:). The same goes for silicon IP: once developed, it does not cost you anything anymore, except lawyer fees probably. So it’s a great way to make a big profit!

But it also comes at a price. If you are the sole owner of that IP then you are also the sole proprietor of the ecosystem around it - additional tools, development tools, derivative development, drivers, applications etc. - and you alone have to build and maintain all of it.

With the advent of free and open source software (FLOSS) the case was made that such FLOSS software has the potential and quality to be equal to commercial offerings. Soon the proprietary makers discovered that they could leverage that too and lower their burden of proprietary development cost by using FLOSS where possible without sacrificing their “IP”.

I worked in the embedded electronics industry for most of my career and took part in this development. In the beginning it was very hard to find chips you could develop for with FLOSS at all; almost everything came with its own proprietary, Windows-only toolchain, libraries and even debug tools. You had to spend thousands of dollars just to set up a development seat - today an almost laughable idea. Today most chips come with a FLOSS toolchain, use standard debugging interfaces that are cheap to source, and their software development kits (SDKs) are expected to be very close to FLOSS, with as little proprietary binary code as possible.

Why?
I think because of two things that got recognized and valued. First, of course, on the proprietor side (the chip makers): they save a lot of development cost by not having to maintain their own proprietary stack of tools. They can focus on their product and on what makes it really special and shine. But on the user’s / developer’s side, too, FLOSS is now fully accepted and more and more expected. FLOSS allows developers to look into the code, to modify it to their needs and requirements, to tailor, mix and match what they need to build their awesome next product. They are freed from the shackles of proprietary tools and all the confinements these came with; more and more you can use the same toolchain for many different chips and projects, which lowers the entry barrier massively.

And I think this exact same thing is now happening in the silicon industry through RISC-V. RISC-V is leading by example, and the RISC-V foundation is doing a pretty good job fostering this new sharing economy in the emerging new silicon industry. Now silicon integrators can, for the first time, tune every bit of their CPU-based silicon to their specific needs. That was almost impossible with ARM cores; only ARM itself (with very few exceptions) was able to do that. And I am not even talking about Intel or AMD here, since they play in a whole different league: they do not license anything to anyone, they are the sole providers in their world.

The other thing the silicon industry is starting to recognize through the concerted RISC-V efforts is that by sharing not only CPU designs based on the same ISA but also the necessary building blocks - an internal bus system, a memory controller, basic blocks for interfaces like UART, I2C, SPI, USB or PCIe etc. - they can develop their own special chip, with what they see as their unique business proposition, way faster and a lot cheaper.

Western Digital is a good example of that, a huge sponsor of the RISC-V foundation and ecosystem. Why? Because they do not care at all about the CPU core they use in their hard disk or SSD controller chips. Their special knowledge and IP sits around that CPU core: the motor control logic, the error correction algorithms etc. This is what makes their product, not the little CPU core. With RISC-V they now have the means to influence that CPU core and its ISA in their favor, and they of course benefit a lot from the work that others do, which in the end provides them with better and better CPU cores to use in their storage controllers - for free. So instead of paying ARM a fortune for CPU core licenses they have little interest in (apart from needing one), they now invest in RISC-V: no more royalties for the CPU core, and their contributions to the ecosystem come back to them in the form of other people’s work. Win-win.

Another good example are the tiny microcontrollers used in so-called motor controllers, chips that control speed, torque, position etc. of motors. Take your digital camera, for example: in an average digital camera there are over a dozen such motor controllers at work! Focus, aperture, lens shift etc. The IP of the motor controller silicon makers is not in the CPU core of these controllers but in the driver logic around them. They do not care whether the tiny CPU core is an ARM or anything else. In recent years ARM has been the most convenient choice, which is why most of them are based on tiny ARM M0 cores. But this is not a requirement. And here RISC-V will thrive - I actually predict that embedded microcontroller cores will be one of the first high-adoption fields of RISC-V. The makers of such motor controller chips (like e.g. Toshiba) can save millions and millions of dollars by switching from ARM to RISC-V, with little effort and without losing much, if anything at all. They will even benefit, because they can finally tweak the CPU core itself as they see fit, possibly improving their product even further. Coming back to the digital camera example: over a dozen motor controllers means over a dozen ARM CPU core licenses to pay. Even if a single one does not cost much, it adds up, and cost reduction has always been a big incentive for the industry to turn.

If I were in ARM’s position I would be really frightened. Their microcontroller business is their bread-and-butter business, billions of cores per year. I do not have the numbers, but I would expect this to be in steady decline from now on, which is probably also one of the reasons why SoftBank is more and more desperately trying to get rid of ARM. It’s becoming a money pit.

Back to Purism: we are sadly not in a position to make our own silicon. I looked into it and it is simply not feasible. We would have to invest a multi-million-dollar budget to make a single silicon product and would have to sell tens of thousands of devices with it afterwards. I would love to! But right now we can’t, I’m sorry.

Concerning the possible alternatives you mention, SiFive or LibreSOC: I am afraid these are not real alternatives. SiFive does not make chips, but we need chips. SiFive tries to be something like ARM: they develop silicon building blocks and try to license them to silicon makers - that’s what their StarFive offshoot in China is, a silicon cooperation in which they are invested. We are not in a position to make our own chips based on their or anyone else’s IP. LibreSOC is a nice project, but they too are “only” developing IP, not a chip. We (Purism) did even sponsor LibreSOC, so we are invested in it, yes. But what we need to make a product is real silicon, the hardware, and making that silicon is immensely expensive.

If you (or anyone else) know someone with deep pockets (starting at at least $10 million), we would love to get in touch and discuss opportunities for developing a real-world SOC chip, as libre and free as possible, to be used in our and other products. Seriously, point them our way and we will do everything we can to make it happen!

But until this generous donor comes along, we have to look at the silicon currently emerging from sources like T-Head, StarFive, Allwinner, GigaDevice etc. - do you notice something? Hm? All Chinese…

And among the SOCs currently available and announced, I have to say: while these are nice, especially since they show what is already possible, they are IMHO not yet good enough for a product. Great RISC-V showcases! But not ready for consumer products yet. Maybe for some tinkerers who want to ride the first wave - I am for sure one of them! But for a consumer product? No, I don’t think these are usable yet. The closest right now is the StarFive JH7110, but even that lacks quite some performance; you would be pretty disappointed with it in, say, a laptop. This is not something we could sell yet. Maybe some, yes, but far from enough to recoup the development cost. Nevertheless I am looking into it, and hopefully we can find a way to make some for those interested.

But I am pretty confident that the RISC-V ecosystem is picking up a lot of pace now. It is just a matter of time, maybe two years from now, for a decent RISC-V based SOC/CPU to emerge and when this happens, we will for sure be among the first to make a product with it! :slight_smile:

Cheers
nicole

9 Likes

I feel like a VisionFive 2, with open source drivers, firmware and software would be good enough for me.

Would developing open source firmware and drivers for it really cost 10 million USD?

I understand the memory controller needs a firmware blob; maybe we could overlook that for the time being.

I would really love 3D acceleration, but if a simple 2D display driver was all that could be done then I guess that would have to do.

Hopefully they will release higher performing chips, with open source drivers and firmware going forward?

Does it though? There are people who are attempting to run those cards on ARM, and IIRC the only thing stopping them from working is the driver flaking out because it was only ever tested on x86.

Unless you can point out a source for this claim, I call this anti-Intel FUD (as weird as this sounds).

1 Like

Nicole, I think you should gather up your thoughts on a blog - few people have the hands-on insights and experience and want to write about it. The Purism forum is a good place, but it’s not ideal for finding your newest thoughts, or even older ones. And those deserve to be widely read!

8 Likes

What do you consider “firmware for it”? Does it include the WiFi module? Bluetooth? eMMC firmware? CPU microcode (if any)? The mask ROM for the dozen chips on board? The firmware for the HDMI controller and Ethernet controllers?

Also, do you do it with or without docs/schematics?

Worst case, I can see how such a project could burn through 10 million USD.

It doesn’t make a whole lot of sense to do it against the vendor, because once they roll out a new generation you’re back to square one. As you mention, it’s better to invest the money in convincing vendors to release the firmware in the open, as a long-term strategy.

1 Like

Cool!
Then I would suggest you get yourself one! It is a really nice board and has everything one would need to get started - four USB 3 ports, two gigabit Ethernet ports, HDMI, a micro SD card slot (via SDIO) and even a PCIe NVMe M.2 slot!

Oh, here you got me wrong. Developing a custom SOC silicon chip would cost way north of $10 million. The only firmware that I am currently aware of in the VisionFive 2 is the DDR4 PHY firmware; there may be some more tiny ones - I have not looked into it very deeply yet. I could imagine e.g. the HDMI PHY/DSI also having a tiny bit, but I do not know (yet).

Yes, I would think so. It is pretty specialized, confined and tiny; while it would be nice to have it freed, it is not super relevant either.

I do not know how far along the Imagination drivers for the GPU in the VisionFive 2 are by now, but Imagination has publicly stated that they want to release a blob-free driver set for it. So even if it is not there yet - and as I said, I am not sure about it - it is pretty sure to come.

I need to more seriously look into the JH7110, geez… just so many other things to do!

Cheers
nicole

They have Libreboot on some ARM Chromebooks.

Ideally, I would love to see all firmware 100% open source, but that may not be economically feasible.

I doubt it cost them ten million dollars to get Libreboot running on those ARM Chromebooks.

Libreboot touches only 1-2 chips on the board: the CPU, and the embedded controller.

Not Wifi, Bluetooth, Ethernet, HDMI, eMMC, USB, TPM, and the myriad others that make up the complete functionality of a computer. Stopping at 2 chips out of 20 hardly makes something open.

1 Like

I agree; I would like to see a 100% open-source RISC-V computer.

Everything at every level completely open source.

I would make this a thing if I had the money to do so.

That said, it would be nice to have a computer that is as open source as possible; that seems to align with Purism’s business plan, no?

Of course!
And that’s exactly what we have been doing since 2014. But we can only do so much, and the availability of chips on the market is what limits us. So at some point we have to choose: either go out of business, or make some compromise - as small as possible, but still falling short of the final goal. We need to make money to be able to invest in more open development, so at some point we have to bite the bullet and look for the smallest evil.

Take the Librem 5 as an example. We really tried super hard to make it RYF. We even jumped through quite some hoops to get there, like confining the necessary firmware parts (DDR4 init, display controller firmware, TPS firmware etc.) in a separate flash chip, away from the operating system. We even put significant money down and convinced a WiFi chipset maker (RedPine at the time) to move the firmware from a blob in user space onto the card itself (which to me makes no sense - it’s the same firmware, just stored somewhere else, and all of a sudden it’s RYF conformant!). Just last year we were notified that the RedPine M.2 card will not be available anymore - besides that, it had quite some issues. There are no (somewhat current) WiFi chipsets on the market (none!) that do not require a firmware download at run time, and the dozens I looked at no longer allow attaching an external SPI flash or similar to hold it. So the only way is a download through the operating system - which would lose FSF endorsement if it comes from the same source the OS comes from, and then the whole device loses RYF. So what are we doing now? We create even further hoops to jump through: the “firmware jail”, a separate storage (the SPI flash) that either gets copied or mounted so that the kernel can load the firmware from there. Apart from sticking to the letter of the FSF’s rules so as not to lose the endorsement, this brings only issues, nothing else. But we did it nonetheless.
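To illustrate the mechanics only - this is a rough sketch, not our actual implementation, and all paths are hypothetical - a firmware jail boils down to something like this: the blobs live on the separate SPI flash, which gets mounted at boot, and its contents are staged into a directory that the kernel’s firmware loader is configured to search:

```python
import shutil
from pathlib import Path

# Hypothetical locations, for illustration only:
JAIL_MOUNT = Path("/mnt/firmware-jail")   # separate SPI flash, mounted read-only
STAGING_DIR = Path("/run/firmware-jail")  # extra kernel firmware search path

def stage_firmware():
    """Copy blobs from the separate flash into the directory that the
    kernel's firmware loader has been told to search."""
    for blob in JAIL_MOUNT.rglob("*"):
        if not blob.is_file():
            continue
        target = STAGING_DIR / blob.relative_to(JAIL_MOUNT)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(blob, target)

if __name__ == "__main__":
    stage_firmware()
```

The kernel can be pointed at such an extra directory with the firmware_class.path= boot parameter, so a driver’s normal firmware request gets satisfied from the jail instead of from the root filesystem.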

Cheers
nicole

6 Likes

I wish I could win a billion dollars and make the dream of a 100% open-source computer a reality.

Hopefully, as time goes on, we will get more RISC-V offerings.

Personally, I think the sweet spot would be a device like the VisionFive 2, and if I needed a workstation, I would go for a Talos II desktop.

I simply do not require cutting-edge performance, and I doubt that most people do either for web browsing, YouTube viewing, word processing, or email.

I would be very interested to see if, in the near future, we couldn’t get a RISC-V phone, tablet, laptop, etc. from Purism.

I understand that we may never have 100% open source, but we can take baby steps, right?

Well, this A770 card requires PCIe 4.0, DDR4, ReBAR and an Intel ME from 10th gen upwards to work 100%. As far as I know, on ARM or AMD, or even on Intel with the ME disabled or an older ME, it just does not work at all.

GNU Libreboot just touches the BIOS chip, not the EC.

Yeah, no, I’m not buying any of what you’re saying. Intel itself acknowledges the ARM porting effort:

1 Like

In any case, I’m still not seeing a citation (which you previously requested).

The suggestion that on x86 it will only work if the ME is enabled could be true or might be completely false. Does @carlosgonz own the necessary Intel discrete GPU hardware in order to cite personal experience? Or, if not, can a link to relevant discussion or documentation be provided?

However, a statement that on x86 it will only work if the ME is enabled is not as such a claim about what happens on platforms that are not x86. Could Intel somehow engineer it in some twisted way so that the two claims are not connected? I’ll keep an open mind on that until evidence emerges.

It is already stated above that even without the discrete GPU, you are forced to keep the ME enabled if you want S3 (suspend-to-RAM).

How much worse could it get in future generations of Intel CPUs?

If it reached the point where you simply had to keep the ME enabled (e.g. HAP bit non-existent or ignored) then this specific question about the discrete GPU would be irrelevant.

In many circumstances it could be a reasonable opinion that one ought to move away from x86, but as Nicole notes above, it isn’t that simple.

I would love to see Intel face genuine pressure to lift its game. In some weird way that could come from Apple more so than RISC-V at the current time.

1 Like