How-to: Installing an AI on the L5 and running it locally offline with ollama.ai

This is kind of what I am doing :slight_smile:

I have a Mini with several AI models installed - Tiny Llama, Samantha Mistral, and Dolphin Mistral. The Mini works more as a server than a workstation, as I do most of my work on a docked Librem 5. When I want to use an AI, I ssh into the Mini from the Librem 5 and run the model there. A rough sketch of that workflow is below.
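Here is a minimal sketch of what that looks like in practice. The hostname `mini.local` and the user name are placeholders for your own setup; the model names are the ones listed in ollama's model library.

```bash
# From the docked Librem 5, open a shell on the Mini
# (hostname and user are hypothetical - substitute your own):
ssh purism@mini.local

# On the Mini, download a model once (names as in ollama's library):
ollama pull tinyllama

# Then start an interactive chat session with it:
ollama run tinyllama

# List the models currently installed on the Mini:
ollama list
```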

Tiny Llama would probably work (slowly) directly on the L5, but for me it isn't worth taking up the precious disk space!
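If you do want to try it on the L5 itself, something like the following should work. This assumes the install script from ollama.ai runs on the L5's aarch64 PureOS, and that you have enough free disk space for the 1.1B-parameter model's weights:

```bash
# Install ollama on the L5 itself (assumes aarch64 Linux is supported):
curl https://ollama.ai/install.sh | sh

# Pull and run the smallest of the three models mentioned above:
ollama run tinyllama
```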
