I admit I didn’t, but I was more on the wishful-thinking side of things.
Anyway, it’s possibly not as complicated as you make it. Of course, libraries are needed.
> C algorithms for all of the math library functions will be an Addendum to the Posit Standard when it is completed. I already have most of the 16-bit functions done.
Maybe it would be possible to add a compiler option that substitutes posit types for float types (which will probably only work for code that doesn’t rely on UB or representation tricks), and that implicitly converts when interfacing with non-posit libraries (which should be a comparatively cheap operation).
However, the question is: would float<>posit conversion really be needed that often? The context here was a new CPU+GPU, so there is no legacy burden. The CPU+GPU would only have native implementations for posits, but could offer very fast conversion ops, purely for compatibility if needed.
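To make the “comparatively cheap” claim concrete, here is a software sketch of what a single posit-to-float convert instruction would have to do, for 16-bit posits with es = 2 as in the 2022 Posit Standard. The function name is mine; in hardware this is a short priority-encode plus shift, roughly comparable to an IEEE normalize.

```c
#include <stdint.h>
#include <math.h>

/* Decode a 16-bit posit (es = 2, per the 2022 Posit Standard) to double.
   Illustrative software model of a one-instruction hardware convert op. */
static double posit16_to_double(uint16_t p) {
    if (p == 0x0000) return 0.0;
    if (p == 0x8000) return NAN;              /* NaR: Not a Real */

    int sign = p >> 15;
    if (sign) p = (uint16_t)(0u - p);         /* two's complement for negatives */

    uint16_t body = (uint16_t)(p << 1);       /* drop the (now zero) sign bit */
    uint16_t first = body & 0x8000;
    int run = 0;                              /* regime: run of identical bits */
    while (run < 15 && (((body << run) & 0x8000) == first)) run++;
    int k = first ? run - 1 : -run;           /* regime value */

    int rem = 16 - (1 + run + 1);             /* bits after sign, regime, terminator */
    if (rem < 0) rem = 0;                     /* maxpos/minpos have no tail bits */
    uint32_t tail = (rem > 0) ? ((uint32_t)p & ((1u << rem) - 1)) : 0;

    const int es = 2;
    int e_bits = rem < es ? rem : es;
    uint32_t exp = (e_bits > 0) ? (tail >> (rem - e_bits)) : 0;
    exp <<= (es - e_bits);                    /* truncated exponent bits read as 0 */
    int f_bits = rem - e_bits;
    uint32_t frac = (f_bits > 0) ? (tail & ((1u << f_bits) - 1)) : 0;

    /* value = (1 + fraction) * useed^k * 2^exp, with useed = 2^(2^es) = 16 */
    double value = (1.0 + ldexp((double)frac, -f_bits)) * ldexp(1.0, 4 * k + (int)exp);
    return sign ? -value : value;
}
```

Note there is exactly one branchy part (the regime run); everything else is fixed-width bit manipulation, which supports the point that conversion ops could be cheap.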
You could implement a standards-compliant Vulkan library that only serves as a wrapper for identical functions taking posits as parameters. But again, I’m not sure compatibility is a hard requirement. It would be a completely independent architecture, without the requirement to be backwards compatible with proprietary software, and that is the only reason you would need it.
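The wrapper pattern itself is trivial: the float-ABI entry point converts at the boundary and forwards to a posit-native core. All names below are hypothetical (not real Vulkan symbols), and double arithmetic stands in for the posit ops.

```c
/* "Posit-native" core, modelled here with double as a stand-in. */
static double scale_posit_native(double v, double s) {
    return v * s;
}

/* Hypothetical float-ABI shim a compatibility layer would export. */
float scale_compat(float v, float s) {
    /* float -> posit convert, posit op, posit -> float convert */
    return (float)scale_posit_native((double)v, (double)s);
}
```

The only real cost is the pair of converts at each API crossing; everything inside the library stays in posit form.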
It seems to be a problem similar to supporting little/big endian: in 99.999% of the code written it doesn’t even matter; only when interfacing with the network or a file does one need to pay attention.
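The endian analogy can be made concrete: byte order is fixed once at the I/O boundary and ignored everywhere else, which is exactly how float<>posit conversion could be confined to library and API boundaries.

```c
#include <stdint.h>

/* Serialize/deserialize a 32-bit value in big-endian ("network") order.
   In-memory arithmetic never cares about byte order; only these two
   boundary functions do. */
static void put_be32(uint8_t out[4], uint32_t x) {
    out[0] = (uint8_t)(x >> 24); out[1] = (uint8_t)(x >> 16);
    out[2] = (uint8_t)(x >> 8);  out[3] = (uint8_t)x;
}

static uint32_t get_be32(const uint8_t in[4]) {
    return ((uint32_t)in[0] << 24) | ((uint32_t)in[1] << 16)
         | ((uint32_t)in[2] << 8)  |  (uint32_t)in[3];
}
```

Written this way the code is correct on any host, little- or big-endian, without a single `#ifdef` outside the boundary layer.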
As I said, no, I didn’t think it 100% through, but it does seem possible.
EDIT: @lkcl, forgive my ignorance; I had no clue who’s hiding behind that nickname. I totally understand there’s already an overwhelming amount of work to do, without the need to complicate things even more.
I just dreamed a bit as this would be such an amazing opportunity.
The article states posits are “straightforward to implement in hardware”, which I read as “no more complex, and possibly simpler, than IEEE 754”. Together with the paragraph on quires, which claims up to a sixfold performance gain, I was wondering whether an architecture that has only posit instructions, plus float<>posit conversion instructions, could achieve these goals:
- compatibility with IEEE floats, plus a smooth migration path
- minor or no performance penalty for typical workloads when running in compatibility mode (convert, mul-add, mul-add, mul-add, convert)
- a manageable effort to adapt the compiler