RISC-V M-Class effort, and Purism donation

Interesting information about Purism helping to fund a hybrid RISC-V CPU/GPU. I saw this in a post by nickh in the Librem 5 Matrix channel.

12 Likes

Right :slight_smile:
We see a bright future for RISC-V in general, and we see a strong need to develop free additional building blocks around the RISC-V core. Keep in mind that the RISC-V foundation is only concerned with the ISA. There are some free implementations of the ISA, but making a CPU or even an SoC takes a lot more, partially complex, building blocks, like a GPU/VPU. That’s why Purism puts its money where its mouth is and contributes to the effort :slight_smile:

Cheers
nicole

18 Likes

Cool. And I will certainly buy a Librem device built on the RISC-V architecture.

2 Likes

That’s awesome, we really need to move in that direction. I hope to see a full open hardware stack in the next 3-4 years with RISC-V on mobile, and I hope to see a desktop/laptop solution from Purism built with the upcoming POWER10 in the next 1-2 years.

@nicole.faerber I know @todd-weaver wrote some years ago that you are not interested in desktop system parts, but looking at the success of Raptor CS, I think you should join this market by selling something like their Blackbird, maybe a more user-friendly version of it, with an easier firmware update process (I know they are working on it) and a better solution for open graphics (they are using the ASpeed AST2500).

No need for POWER10. RISC-V can power laptops, desktops and servers.

RISC-V for the desktop won’t be ready for another 6-7 years; POWER10 should be there next year.

1 Like

Won’t happen in laptops. As far as I know, the POWER9 CPUs are between 90W and 190W and designed for servers, so using them in a desktop is already kind of an alternative use case (which they fulfil pretty well, I think). But it’s not going to happen in notebooks, and I expect the same for POWER10.
Devices are pretty much designed around power budgets, which are pretty stable absent a major breakthrough in cooling or efficiency research. I would roughly categorise CPUs as follows:

- <10W: fanless portable
- <30W: ultrabook
- <90W: mobile workstations/gaming notebooks and mediocre desktops
- 100-200W: desktop and server
- >200W: high-end server or gaming

And on the POWER10 release, from this article:

“We expect that we will go into the market as we originally planned in late 2021 or early 2022 with our 7 nanometer product family,” Bob Picciano, senior vice president in charge of IBM’s Cognitive Systems division, tells The Next Platform.

So I think that to release a desktop in the near future, POWER9 would be the road to go. But I don’t know if it would be good to add another architecture to their portfolio, as it needs more resources without the benefit of reusing expertise across devices.
So my suggestion would be to team up with Raptor and make their mainboard more user-friendly by putting all the nice streamlined coreboot things on it and shipping it with PureOS pre-installed in a nice package.
But I’m not sure about the profits that can be gained in the $2000-plus desktop market this would clearly be in. And if there is no profit, I think Purism should focus first on the profitable devices and grow further. It wouldn’t provide anything new that is hard or impossible to do right now, like the L5 does.

So I see both RISC-V and POWER on the desktop more as a long-term goal, and this donation as an early investment in it. But nothing for the next 5 years. I expect us to be stuck with Intel/AMD/ARM for some more time, with POWER staying too expensive and not suitable for any mobile device, including notebooks.

But that’s all me looking into the crystal ball and guessing. :smiley:

2 Likes

Thanks for your reply. POWER9 is too power-hungry for a laptop, I agree, so I can still hope for a CPU + mATX combo with a better GPU chipset, like Vivante.
I doubt they will form a partnership with Raptor; the two companies feel very different to me.

There is an OpenPOWER Summit in August; I hope they will share more info on POWER10, and I hope they can make it happen next year, but after reading your post that’s just my hope.

Btw, we are not really stuck on AMD/Intel, because the Blackbird is a thing; I just prefer to give my money to Purism rather than Raptor.

Anyway, Debian supports ppc64le, so I don’t think this architecture would be a problem for PureOS or Purism.

You are right. What I meant is stuck at a reasonable price point. With an unlimited budget many things are possible, but to be profitable and to build a customer base there are constraints, and I feel that Raptor is already hard on the line, if not over it. Purism devices are already perceived as pricey by some.
So POWER fails for me from this angle. Which is natural, as IBM targets it at servers and puts a lot of money into things like lots of PCIe 4.0 lanes, a massive RAM interface and multi-CPU setups, which are all expensive to do; we wouldn’t use them in a desktop but would have to pay for them. So I don’t see it as a worthy option unless you have pretty deep pockets to spend on it, which isn’t Purism’s market in my perception.

Probably ECC RAM too?

After reading the information about the Libre RISC-V SoC on its Crowd Supply page and its NLnet grant application, it is clear that it is a long way from a marketable SoC, but I want to congratulate Purism on helping to fund it. It shows real commitment on Purism’s part to put money into the development of the Libre RISC-V, since it will be many years before it produces anything that is remotely marketable, and it could easily fail.

From what I am able to gather from looking at the source repository, the project at this point consists of Luke Leighton writing Python code that uses nMigen to generate Verilog, which will run in a 28nm FPGA. Leighton is figuring out how to do a lot of basic stuff that only the proprietary chip designers know how to do, but he is being advised by Mitch Alsup, who helped design the Motorola 68000 and AMD Opteron, and is drawing on old papers on the design of the CDC 6600, which was the first supercomputer. It is really cool that old engineering knowledge is being reused and put into the public domain.
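
To give a flavour of what that workflow looks like, here is a minimal nMigen sketch of my own (a toy counter, not code from the project): hardware is described as Python classes, and Verilog is emitted from them for the FPGA toolchain.

```python
# Minimal nMigen sketch (my own toy example, not from the Libre RISC-V
# repository): a 4-bit counter described in Python, then converted to
# Verilog for an FPGA toolchain.
from nmigen import Elaboratable, Module, Signal
from nmigen.back import verilog

class Counter(Elaboratable):
    def __init__(self, width=4):
        self.count = Signal(width)

    def elaborate(self, platform):
        m = Module()
        # On every clock edge, increment the counter (wraps on overflow).
        m.d.sync += self.count.eq(self.count + 1)
        return m

if __name__ == "__main__":
    top = Counter()
    # Emit synthesisable Verilog; this is the step that feeds the FPGA tools.
    print(verilog.convert(top, ports=[top.count]))
```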

The other part of the project consists of Jacob Lifshay writing Rust code to implement a Vulkan GPU in software, compiled with LLVM. The idea is that the Libre RISC-V will be optimized to run LLVM-generated code very fast, so there won’t be as much of a penalty for doing this in software rather than hardware. I would assume that the VPU will be implemented in the same way, but it doesn’t look like any code has been written for that at this point.

Leighton estimates that the 50,000 Euro grant from NLnet is 0.5% of the total amount needed, so a total of 10 million Euros to complete it. My concern is that Leighton has his attention divided, since he is also trying to crowdfund the EOMA68, a modular, eco-friendly computer.

5 Likes

Hm, I guess I have to agree. I just had a look at their FAQ, most of which sounds alright.
It didn’t even bother me too much that they state there is no way around Intel ME.
But at the point where they state

Talos™ II is truly one of a kind and is additionally protected against unauthorized hardware clones by patents and/or patents pending

in a “it’s not a bug, it’s a feature” tone, I don’t know why I would support that.

3 Likes

Hi, the thing about the EOMA68 development is that it is mostly time spent waiting for other people to answer questions about component supply, or for factory PCB assembly to complete, and so on. It is “Project Management”, in other words. The early stages involved sustained full-time development.

What I will do is apply for a 2nd NLnet grant to be able to pay other people to do the development, injection molding, etc.

So I have time to focus on the design of the processor.

This processor is still a huge jigsaw: it needs the basic infrastructure capable of handling hybrid workloads put together, and it needs both hardware and software development, done at the same time. No fabless semiconductor company is going to tackle this: they all just license MALI or PowerVR, or put in a PCIe interface and tell their customers to use an external GPU.

One important thing to appreciate is that the design is scalable. The only reason for tackling such a low initial performance target is to keep the NREs (non-recurring engineering costs) down. If we do iterative development where the masks cost USD 7m, we burn through money like it grows on trees. If, however, we stick to something smaller such as 40nm, the MPW test runs cost way less. Once proven, we can tackle larger designs with confidence.

4 Likes

@lkcl, Thanks for taking the time to comment on the Purism forum about your projects.

I would love to see the EOMA68 in production (even if I’m not enthusiastic about the A20 processor).

I’m curious about a number of details of the Libre RISC-V. Are you planning on implementing the VPU in Rust, or have you not yet decided? I’m asking because I’m wondering if we will be able to take a Rust GPU/VPU and run it on any processor. Will the Kazan 3D code be tightly bound to the RISC-V architecture, or can it be migrated to other processors? It would be awesome to have a free GPU/VPU that could run as software on an x86 or ARM PC.

Are you still planning on implementing this in a 28nm FPGA, or will you start with a 40nm MPW?

Hi, sorry it is taking me so long to get back to you; I have to focus really hard for sustained periods of time.

The Kazan software is a Vulkan-compliant userspace library and is primarily a SPIRV compiler which, unlike “standard” GPU drivers, will translate directly from SPIRV to LLVM IR, and from there directly into assembly code on the target machine, using the standard LLVM JIT engine.
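
To illustrate just the last hop of that pipeline (LLVM IR straight to native code via the JIT), here is a toy sketch using the llvmlite Python binding. It is not Kazan code, and Kazan starts from SPIRV rather than hand-written IR, but the JIT principle is the same:

```python
# Toy sketch of the "LLVM IR -> native code via the JIT" step, using the
# llvmlite Python binding. NOT Kazan code (Kazan is Rust and starts from
# SPIRV); this only demonstrates the same final JIT hop.
import ctypes
import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

# Hand-written LLVM IR for a function that doubles its argument; in Kazan
# this IR would instead be generated from a SPIRV shader.
ir = r"""
define i32 @double_it(i32 %x) {
entry:
  %r = add i32 %x, %x
  ret i32 %r
}
"""

mod = llvm.parse_assembly(ir)
mod.verify()
target_machine = llvm.Target.from_default_triple().create_target_machine()
engine = llvm.create_mcjit_compiler(mod, target_machine)
engine.finalize_object()

# Fetch the JIT-compiled machine code and call it from Python via ctypes.
fn = ctypes.CFUNCTYPE(ctypes.c_int32, ctypes.c_int32)(
    engine.get_function_address("double_it"))
print(fn(21))  # prints 42
```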

Normally, because the GPU is totally separate (behind a PCIe interface), all of the above has to happen… oh, except the compiled assembler and the data have to be serialised and shipped over the PCIe bus to the GPU! Talk about insanely complex!

Jacob is actually designing the SPIRV shader compiler to be independent even of LLVM, so there is nothing, some time down the road, stopping a gcc backend from being added.

And yes, one of our intermediate milestones is to get this working first on x86. It should be clear why: we do not want to be tackling two or more unknowns at once, particularly as the RISC-V LLVM backend - even without any hardware acceleration to support 3D - is under active development.

We therefore need a stable base to work from. I explain it in more detail here:

https://groups.google.com/a/groups.riscv.org/forum/?nomobile=true#!topic/isa-dev/JlKZdzS6VtQ

The speed grade of the FPGA does not matter; we do, however, need absolutely massive ones, or to split the design across a network of smaller FPGAs.

I am talking to someone who wants to do an open ASIC cell library, using alliance/coriolis2 to do the layout; it may be the case that we first do a 180nm test chip.

The nice thing about cell libraries is that they scale well as long as you don’t go mad, e.g. with the 3D FinFETs. Also, coriolis2 is driven from Python, i.e. you divide and conquer by module, using each successive layer to construct the next, block by block, writing a Python application to choose what to do at each phase.
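
As a purely hypothetical sketch of what I mean by that divide-and-conquer pattern (made-up helper names, not the real coriolis2 API), each level is laid out on its own and then becomes an opaque block for the level above it:

```python
# Purely hypothetical sketch of the divide-and-conquer layout pattern
# described above. The helper names are made up; this is NOT the real
# coriolis2 API. Each level of the hierarchy is "laid out" on its own,
# then treated as a single opaque block by the level above it.

def fake_place_and_route(name, cells):
    # Stand-in for the real tool: just records the hierarchy.
    print(f"laying out block {name!r} from {cells}")
    return name

def build_alu(par):
    return par("alu", cells=["adder", "shifter", "mux"])

def build_core(par):
    alu = build_alu(par)
    regfile = par("regfile", cells=["sram_32x64"])
    # Finished sub-blocks become opaque "cells" of the next layer up.
    return par("core", cells=[alu, regfile, "decoder"])

def build_soc(par):
    core = build_core(par)
    return par("soc", cells=[core, "uart", "ddr_phy_stub"])

build_soc(fake_place_and_route)
```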

We can therefore start at 180nm and quite reasonably expect to be able to reuse most of the layout to do a 40nm or 28nm chip.

The tricky bit will be the DRC (Design Rule Checks). coriolis2 does not have the same sort of checks as Mentor and Synopsys; however, there is an online company that can do DRC up to 45nm.

So there is actually a way to do this.

We still have to get a PLL block, a DDR3/4 PHY, and a USB2 PHY. These are all analog and have to be customised not just to the geometry; they often need to be customised to the foundry.

9 Likes