About that AI use case. While it's true that an AI model can be run on a CPU, it won't be anywhere near as fast or efficient as what you get from a proper GPU. And in this case "proper GPU" often means one of the newer, bigger cards, which don't fit in a Librem server, nor (as far as I know) do they have proprietary drivers available (again as far as I know, the open drivers are not an option for AI). Some newer CPUs have NPU cores (Apple's chips may have those), but their relative help is not on par with GPUs, especially when compared to price. There is a third option of separate NPU cards, but then price, usability and drivers are a problem again. It's hard to say whether and how big the closed blobs are that they all have and require, and how much work would be needed to utilize them properly on a Librem, so none of those seem like a likely help anytime soon.
If your AI model is small and efficient, the current server actually is powerful enough, but then comes the question, "could I have it a little bit faster and a bit bigger", which is a never-ending rabbit hole. A Librem Mini can run several models. Even the L5 runs a mini model (slowly, but still).
So, for the AI use case, these servers are not the best suited even if the specs were a bit better (but they will do if you optimize your model).