As a note: there is an Alpaca GUI available from Flathub that may take up a bit of the Librem 5's resources, but it may be more user friendly, with better UX.
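For anyone who wants to try it, a minimal install sketch via Flatpak. The app ID below is an assumption (it should be the Alpaca Ollama client on Flathub); verify it with the search first:

    # Find Alpaca on Flathub and confirm its app ID
    flatpak search alpaca

    # Install and launch it (assumed ID: com.jeffser.Alpaca)
    flatpak install flathub com.jeffser.Alpaca
    flatpak run com.jeffser.Alpaca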
What is the benefit of putting AI on a Librem 5? It seems to me that the Librem 5 has difficulty enough just running a trimmed-down version of PureOS. Without several high-speed cores, lots of RAM, and a big hard drive or SSD, any AI program wouldn't stand a chance at doing anything productive. What we need to do is use the Google model: build a data center in your house and log in to it from your Librem 5. Without external data centers, your average Samsung phone wouldn't be much better than your Librem 5 at crunching data.
In a typical application (example: navigation), your Android phone sends the address you entered into the navigation program to a data center. The data center does all of the navigation and communications calculations and sends the results back to your Android phone. So your phone appears to have superpowers, but it's not much more than a glorified dumb terminal. If we take away all of the data centers, your Android phone gets very stupid, very quickly. The Librem 5 can't do much better unless you add a lot more resources, like some kind of data center to greatly expand the available hardware. If you try to add AI to the phone's extremely limited hardware, I doubt that it will work at all. Put the AI on a server at home and log in to it from your Librem 5.
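A minimal sketch of that home-server setup, assuming Ollama as the model server; the hostname (homeserver.local) and model name below are placeholders for your own:

    # On the home server: expose Ollama on the LAN, not just localhost
    OLLAMA_HOST=0.0.0.0:11434 ollama serve

    # On the Librem 5: send a prompt over the local network
    curl http://homeserver.local:11434/api/generate \
      -d '{"model": "tinyllama", "prompt": "Hello", "stream": false}'

The phone only has to send text and render the reply; all of the heavy computation stays on the server.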
Plus sends a copy of the addresses etc. to the Google-borg for future advertising / law enforcement / …
FTFY.
That would be my approach, but isn't the essence of it that you own your own Purism hardware and you can do what you want with it, including impractical things that cause problems?
This is kind of what I am doing.
I have a Mini with several AIs installed: TinyLlama, Samantha Mistral, and Dolphin Mistral. The Mini works more as a server than a workstation, as I do most of my work on a docked Librem 5. When I want to use an AI, I ssh into the Mini from the Librem 5 and run it that way.
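For the curious, roughly what that looks like, assuming the models run under Ollama and the Mini is reachable as mini.local (both placeholders for your own setup):

    # From the docked Librem 5: jump straight into a model's chat prompt
    ssh -t user@mini.local ollama run dolphin-mistral

    # Or log in normally and pick a model from there
    ssh user@mini.local
    ollama list                  # show installed models
    ollama run samantha-mistral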
TinyLlama would probably work (slowly) directly on the L5. But for me it isn't worth taking up the precious disk space!
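If anyone does want to try it on-device anyway, a rough sketch, assuming Ollama's Linux install script works on the phone's arm64 PureOS image (untested here); the quantized TinyLlama weights are on the order of 600-700 MB, which is the disk space in question:

    # On the Librem 5 itself; expect slow generation on the i.MX 8M CPU
    curl -fsSL https://ollama.com/install.sh | sh
    ollama run tinyllama   # pulls roughly 600-700 MB of weights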