Possibility of A Purism POWER9 or RISC-V System Within Near Future?

That’s what I’m waiting for. I hope that after Evergreen some manpower will switch to this.

How well does Power architecture handle x86 code through emulation?

Walking away from the library of software available on that platform just seems crazy to me. The viability of a product reliant on RISC-V or POWER, for me, is going to depend strongly on how well it can handle x86 software.

How well does that library of software compile for Power or RISC-V?

How much of the code is architecture-dependent?

Most of the software the userbase here wants will compile natively for RISC-V and POWER, so you’re not really losing that much if it can’t run x86 code at all.

As for x86 through emulation: it would range from terrible to passable. Emulating another CPU usually comes with a performance penalty of around 80%, or thereabouts (JPC works this way).

Fortunately, there’s also the option of instruction set translation, which is what QEMU does. Instead of using a software VM to track virtual registers, an instruction cache, and so on, it translates the instructions (including JITted instructions) as they get loaded for execution, and the translated code then runs at native speed. The only catch is that code optimized for one architecture is often anti-optimized for another. This is especially true when translating CISC code to RISC hosts (x86 -> ARM, or in this case x86_64 -> RISC-V). Translating CISC code for a CISC host often still loses some performance, but not as much. This means POWER9 systems will likely be able to run x86_64 applications through QEMU’s static user-mode targets (qemu-user-static) without serious performance loss.


The only x86 software I would care about running would be a few proprietary games, but given that the POWER CPUs would probably be a lot more powerful than mine (my fastest computer has dual 2009 Intel Xeon X5560s), and the fact that I am fine with fairly low performance in games (I mostly care about performance when rendering, compiling, and web browsing), this probably wouldn’t be much of an issue.

But then you also need a decent enough GPU in the Power / RISC-V ecosystem for our hypothetical future Purism product.

That’s also what the JVM does (if so configured): Java bytecode is translated into native instructions when the code is needed.

Java does have two advantages there, though: it is already a virtual environment, so the points at which that translation has to happen are easy to define and implement, and all bytecode routines are verified at load time as being well-behaved.

Easy enough to include a Radeon GPU in a POWER9 system. There is the minor issue that the FSF guidelines are kinda idiotic about hosting binary firmware on the OS storage device, so you’d have to come up with something janky to upload the runtime firmware to the GPU (or select one of the Radeon cards that ship with an acceptable firmware image pre-loaded).

As for the JVM, there are strong parallels to what QEMU does; basically, QEMU treats the foreign program like JVM bytecode. Where things get tricky is in running things like the JVM on top of QEMU, since the JVM dynamically rewrites the code it’s executing. QEMU does some black magic to detect when that happens and re-translates the modified executable pages. This means there is significantly more of a performance penalty for using Java (or PyPy, or most JS implementations, or C#) via cross-architecture QEMU than there is for simple C programs. What are most modern games using? C#. Not sure how well POWER9 could handle them. Then again, most of them have a tiny native wrapper around launching Mono, so you may be able to just use the POWER9-compiled version of Mono to run them (probably with a shim to load the underlying system libraries).


Tricky indeed, since the JVM’s native code generation is by definition architecture-dependent and cannot simply be taken from x86 and recompiled for another CPU architecture.

So the options are:

  • Use the JVM in a mode where it does no native code generation, i.e. as a pure interpreter - but that runs noticeably more slowly, particularly on low-end CPUs

  • Hope that someone has already done the work to implement native code generation for the target ‘new’ ISA.

As you note, this problem isn’t limited to Java but applies to any interpreted environment that offers native code generation as an option.


The solution is actually considerably more elegant than that. QEMU loads the original code page; then, when it first goes to execute it, it marks the page read-only, translates it, and executes the translated version. When the JIT fires up and tries to rewrite the code page, the kernel raises a write-to-read-only-memory fault, which QEMU intercepts. QEMU then invalidates the translated page, marks the underlying page read/write, and tells the kernel to resume the program where it left off. The JIT then proceeds without even noticing anything happened (except that the first write took longer than it should have). The next time the page is executed, it again gets marked read-only and translated.


OK, neat, as an interim solution, but the second bullet point above is a better long term solution i.e. native execution in and native code generation for the new target environment.

Your comment above about anti-optimized code applies doubly if we have translated x86 code converting bytecodes to x86 code at run time, which is then translated from x86 to native.

Anyway, putting aside stuff that is explicitly ISA-dependent, there is still a truckload of portable open source code that should just compile natively.


Yeah, I’m mainly interested in Windows software. I imagine a lot of that is hardware-specific, but I get what you mean. For most people this is not a big concern, and I can understand that.

I suppose once Wine or Proton are stable on it, I’d be happy.

What does everyone think of the desktop that Raptor released, which is talked about in the video above? Dual-socket POWER9 with a rather generous number of PCIe slots?

With reference to https://www.raptorcs.com/TALOSII/ which model specifically?

Generally: expensive

Plus not necessarily open hardware (per Nicole’s comment above).

Does WINE do that? i.e. support cross-architecture operation, so that Windows x86 executables would have both the operating environment (system calls, library calls, etc.) and the instruction set emulated (or, in the latter case, translated)? I’m sure it would be possible to do, but I am talking about out-of-the-box.

Wine doesn’t work cross-architecture, or rather not without QEMU managing the instruction set translation. It might let you run Windows ARM programs on Linux ARM, if you compiled it for ARM. In theory, you could compile Wine for ARM (or POWER) and then have QEMU translate only the actual executable. It would probably require a bit of customization to make QEMU aware of what needs translating and what doesn’t. That said, Wine + QEMU does currently function, well enough that a Raspberry Pi 3 can run x86 games from the late ’90s at reasonable performance. A CISC host system, or just one more powerful than a Pi, can probably manage more modern titles without issue.


I know that it does provide access to libraries, etc. specific to Windows, but I have no idea if it does anything beyond that. I’m going to assume no. I just said it the way I did, because IF it is available on the platform then most of my concerns are solved.

Wikipedia says “yes” but the problem here is that (I assume) there are no Windows programs for Power or RISC-V.

So your option is WINE for x86 + Windows program for x86 - all under QEMU for Power or RISC-V.

Yes, in theory. That’s what I meant by possible. Someone could integrate the two pieces of software to let WINE handle the system calls etc. and QEMU handle the program’s code.

Just my 2c but the whole point of a move to RISC-V would be for the openness so running Windows programs doesn’t really cut it.


Do note that there’s a fair amount of open source software which is Windows-centric. It looks like winelib is available on POWER9 (ppc64le) and on riscv64. This means those open source Windows programs can be compiled against winelib and will run quite quickly and quite happily on POWER9 or RISC-V. It seems Raptor Engineering paid its devs to do the winelib port, and they have expressed an interest in doing the Wine-QEMU integration, but we’ll have to wait and see what comes of it.


The Talos II is not really their desktop solution. That’s the Blackbird and, in the future, the Condor. Those systems are still expensive if you compare them to AMD-based systems, but you get a totally free system. You are in control of your computer.

I think Nicole’s comment above was outdated or just ill-informed. The SiFive HiFive Unleashed requires binary blobs, as does any other RISC-V board I have looked into. So from what we are seeing, OpenPOWER is more free than RISC-V.

OK. Can you post the correct link?

China has a fundamental problem (roughly the same problem that we, as Purism customers, are aware of). “All” the world’s hardware (design) comes from the US, and “all” the world’s software and services come from the US. China has plausible reasons not to trust the US government, and in any case there is no concrete basis to trust any of the US technology. It may be all good, or it may be backdoored; there is no way to verify.

China seems more interested in RISC-V than OpenPOWER - as the basis for eliminating the need to trust the US government and for eliminating its dependence on US tech, as far as the CPU goes. (India too. Other countries too. OpenPOWER seems more limited in its geographic scope.) Provided that they continue the open model of RISC-V and release their improvements to and implementations of RISC-V back as open hardware, I think RISC-V has greater potential.

I can imagine a future Purism product range based on RISC-V which, like the Librem 5, comes in two variants: the “Made in China” variant and the “Made in US” variant. The CPU is fabbed in the respective countries, ostensibly from the same underlying open design. Buy whichever you are less mistrustful of. :slight_smile:

However imagining a future and being right about it are two different things. :slight_smile:

Oh, and to answer the original question, I don’t think any of this is “near future”.

As you say (or imply), a system is more than a CPU - and blobs can creep in as part of other components.

Nvidia has “said” that it will use RISC-V in future graphics cards, which would be a small step towards bringing the graphics card into the fold (for non-entry-level desktops and laptops that actually have a dedicated graphics card).


RISC-V is not entirely “open”. The RISC-V ISA is an open standard, but the organisation that controls it acts more like a closed shop, from the look of it. Judging from the references below, the RISC-V Foundation ignores external contributions and makes it difficult to access technical documentation.

It Turns Out RISC-V Hardware So Far Isn’t Entirely Open-Source

NLNet Grants approved, Power ISA under consideration

Libre RISC-V Open-Source Effort Now Looking At POWER Instead Of RISC-V

The ISA (instruction set architecture) only defines the CPU instructions, not how they are implemented in silicon. So you can still be forced to use binary blobs to use a chip that implements the open RISC-V ISA.
It seems that the RISC-V ISA is only gaining traction with the major players because it lowers the cost of IC development, not because it protects user freedoms.


On top of all this, POWER CPUs are a proven technology with decades of experience behind them. Longevity is a serious concern for any platform, and I’d feel much more confident developing on or for the POWER architecture than RISC-V, at least in the current landscape.