How-to: Installing AI on the L5 and running it locally offline with ollama.ai
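
For anyone skimming: the local install itself is only a few commands. Here is a minimal sketch, assuming the official install script (ollama.ai now redirects to ollama.com) supports the Librem 5's arm64 PureOS and that you have a few GB of disk free for the model weights:

```bash
# Install ollama via the official convenience script
curl -fsSL https://ollama.com/install.sh | sh

# Pull one of the smallest models in the library, then run it;
# after the initial download everything works offline
ollama pull tinyllama
ollama run tinyllama "Hello from my Librem 5"
```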

As a note: there is an Alpaca GUI available from Flathub. It may take a bit more of the L5's resources, but it may be more user friendly, with a better UX.
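
If you want to try the GUI route, a sketch of the Flatpak install; the app ID here is an assumption, so verify it on flathub.org first:

```bash
# Install and launch the Alpaca GUI from Flathub
# (app ID com.jeffser.Alpaca is an assumption, check flathub.org)
flatpak install flathub com.jeffser.Alpaca
flatpak run com.jeffser.Alpaca
```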

What is the benefit of putting AI on a Librem 5? It seems to me that the Librem 5 has difficulty enough just running a trimmed-down version of PureOS. Without several high-speed cores, lots of RAM, and a large hard drive or SSD, an AI program wouldn't stand a chance of doing anything productive. What we need to do is use the Google model: build a data center in your house and log in to it from your Librem 5. Without external data centers, your average Samsung phone wouldn't be much better than your Librem 5 at crunching data.

In a typical application (example: navigation), your Android phone sends the address you entered into the navigation program to a data center. The data center does all of the navigation and communication calculations and sends the results back to your phone. So your phone appears to have superpowers, but it's not much more than a glorified dumb terminal. Take away the data centers and your Android phone gets very stupid, very quickly. The Librem 5 can't do much better unless you add a lot more resources, some kind of data center that greatly expands its hardware. If you try to run AI within the phone's extremely limited hardware, I doubt it will work at all. Put the AI on a server at home and log in to it from your Librem 5.
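
For ollama specifically, that split is easy to sketch, because the CLI client talks to the ollama server over HTTP on port 11434 and honors the OLLAMA_HOST environment variable. The server address below is a hypothetical LAN IP; substitute your own machine:

```bash
# On the home server: make ollama listen on the LAN, not just localhost
OLLAMA_HOST=0.0.0.0 ollama serve

# On the Librem 5: point the stock ollama client at the server
export OLLAMA_HOST=http://192.168.1.50:11434   # hypothetical address
ollama run tinyllama "How far is the nearest data center?"
```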

Plus sends a copy of the addresses etc. to the Google-borg for future advertising / law enforcement / …

FTFY.

That would be my approach too, but isn't the essence of it that you own your Purism hardware and can do what you want with it, including impractical things that cause problems?

This is kind of what I am doing 🙂

I have a Mini with several AI models installed: TinyLlama, Samantha Mistral, and Dolphin Mistral. The Mini works more as a server than a workstation, as I do most of my work on a docked Librem 5. When I want to use one of the models, I ssh into the Mini from the Librem 5 and run it that way.
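
In case it helps anyone replicate this, the whole workflow is a couple of commands; the hostname and username are placeholders for your own setup:

```bash
# From the docked Librem 5: run a model on the Mini in one shot
# (-t allocates a terminal so the chat session is interactive)
ssh -t purism@mini.local ollama run dolphin-mistral

# Or open a shell first and pick a model from what is installed
ssh purism@mini.local
ollama list
ollama run samantha-mistral
```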

TinyLlama would probably work (slowly) directly on the L-5, but for me it isn't worth the precious disk space it would take up!
