Mycroft (Open Source) A.I. Assistant... for Librem 5?

I would suggest this as a great addition to the Librem 5 - an open source voice assistant would be a welcome option.


I remember seeing MyCroft at Akademy - cool idea indeed! From what I know, it still requires a powerful computer to do the computing :frowning:


mycroft-core/picroft seems to be running on a Raspberry Pi 3, too. Just tried it on my laptop: First impression is good.

Thought of installing this on the phone. A Linux release is available, but I never thought about the power consumption.
However, beyond the fancy assistant features, I think the ability to call emergency services by voice from the lock screen is important, and could potentially save the user in a life-threatening situation.


You forgot to read the FAQ on that git repo though

Q2) Can I run this with the Raspbian desktop GUI?

Sadly, not really. A Raspberry Pi is powerful, but still not well suited to do everything at once. You can add other basic services on top of Picroft, but the desktop GUI requires too many additional resources and neither Mycroft nor the GUI end up running well.

Since the Librem 5 is going to be slightly more powerful and have more RAM it’s probably going to work, but it’s also very likely going to be a bad experience.

You forgot to read the FAQ on that git repo though

No, I read this point in the FAQ. I’m aware that it wouldn’t be a good end user solution in its current state (performance- and energy-wise), and I never thought of it as a solution to “just install and be happy with it”. I just wanted to point out that it should be possible to play with it on the Librem 5.

Another possibility would be to implement something similar to the current Mycroft for Android solution: mycroft-core running on a more powerful system, while on your Android device you only have a companion app that connects to mycroft-core via WebSocket on the same network. Not the best solution, but well…
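For reference, that companion-app approach mostly comes down to exchanging JSON over Mycroft's WebSocket message bus (which listens on port 8181 by default). A minimal sketch in Python - the host address is an assumption, and actually sending requires the third-party `websocket-client` package:

```python
import json

# Assumed LAN address of the machine running mycroft-core.
MYCROFT_BUS_URL = "ws://192.168.1.50:8181/core"

def utterance_message(text):
    """Build the JSON message the Mycroft message bus expects
    for a typed/spoken utterance."""
    return json.dumps({
        "type": "recognizer_loop:utterance",
        "data": {"utterances": [text]},
        "context": {},
    })

def send_utterance(text, url=MYCROFT_BUS_URL):
    """Send an utterance to a running mycroft-core over its WebSocket bus.
    Needs the third-party 'websocket-client' package and a reachable host."""
    import websocket  # pip install websocket-client
    ws = websocket.create_connection(url)
    try:
        ws.send(utterance_message(text))
    finally:
        ws.close()
```

The companion app would additionally listen on the same socket for `speak` messages carrying the assistant's reply.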

It’s all just a starting point. Nothing you could use “out of the box” - but a cool thing to play with…:smile:


I don’t think you need a full desktop GUI to run a voice assistant. A configuration GUI, maybe, and some messaging-app-like interface. But for the moment, a text editor could handle the configuration, and a simple zenity script could serve as a message-box GUI to the voice assistant. It is hacky, but should keep resource usage low.
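A hacky-but-light front end like that could be sketched in a few lines. Everything here is illustrative (the dialog titles, shelling out to the `zenity` binary), and how the answer actually comes back from Mycroft is deliberately left out:

```python
import subprocess

def zenity_entry_cmd(prompt="Ask the assistant:"):
    """Build the command line for a zenity text-entry dialog."""
    return ["zenity", "--entry", "--title=Assistant", f"--text={prompt}"]

def ask_via_zenity(prompt="Ask the assistant:"):
    """Pop up the dialog and return what the user typed, or None
    if the dialog was cancelled or zenity is not installed."""
    try:
        result = subprocess.run(zenity_entry_cmd(prompt),
                                capture_output=True, text=True, check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None
    return result.stdout.strip()

def show_reply(text):
    """Show the assistant's reply in a zenity info box."""
    subprocess.run(["zenity", "--info", "--title=Assistant", f"--text={text}"])
```

Gluing `ask_via_zenity()` to the message bus and `show_reply()` to the responses would give the minimal "message box" experience described above, with almost no resident memory cost.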

The current desktop GUI seems to be either:

- a web app,
- a Qt 5 app,
- or a KDE Plasmoid in a KDE desktop.

If the Librem 5 uses a modified GNOME, it may be possible to write a GNOME extension to communicate with the Mycroft service. The caveat is that the power consumption of the Mycroft service itself is not known. I’m not sure whether the desktop GUI or the service itself is the bottleneck.

Edit: After some searching, I found a GNOME extension for Mycroft. Not sure if it works on the Librem 5.


Or maybe I misunderstand the FAQ. If “full desktop GUI” means installing a desktop environment, i.e. only a headless Pi could handle Mycroft, then it is fair to say it may not work on the phone. I don’t own a Pi, so I can’t test which of these is true.

@uau7j7woi7: It’s quite hard to say what performance impact mycroft-core will have without a Librem 5 dev board, and what it means for battery life is even harder to say before the final Librem 5 is available.
I did a quick test on my Raspberry Pi 3B+ with Debian/PureOS ARM64 and Phosh running. The good news is, it doesn’t use as many resources as I thought: base OS 100 MB with a load average of 0.15; after starting Phosh, 340 MB with a load average of 2; after starting mycroft-core, 510 MB and a load average of 3. The load average goes down a bit when mycroft-core is idle.
While using basic functions, the system didn’t feel much slower with mycroft-core enabled. But this was only a really short test - don’t read too much into it.
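For anyone who wants to repeat that kind of measurement, the two figures quoted (memory in use and load average) can be read straight from the OS. A small sketch, assuming a Linux system (the `/proc/meminfo` path is Linux-only):

```python
import os

def current_load():
    """Return the 1-, 5- and 15-minute load averages (Unix only)."""
    return os.getloadavg()

def mem_used_mb():
    """Rough memory-in-use figure from /proc/meminfo (Linux only):
    MemTotal minus MemAvailable, converted from kB to MB."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])  # values are given in kB
    return (fields["MemTotal"] - fields["MemAvailable"]) // 1024
```

Sampling these before and after starting Phosh and mycroft-core reproduces the comparison above.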


As others have pointed out, Mycroft developers appear to have made progress in improving Mycroft’s efficiency, even going as far as creating a self-hosted version [no external server/internet required].

Has anyone taken a look at incorporating Mycroft into PureOS and/or the Librem 5 [or even Librem One]?

This would be cool to have eventually. However, asking for this at or near launch is a pipe dream. (Not that you mentioned it at launch.) Purism’s developers have a monumental task ahead of them getting the phone released.

I do see this, or something like it, becoming a reality, but not from the Purism developers in the near future.

I’ve ported Mycroft to FreeBSD, and at the end of the “day” (after some weeks) I managed to get the command-line version to run and respond to “Hey Mycroft, how is the weather today?” and such. One needs a very recent Python stack for this (and a bunch of fixes for FreeBSD). I don’t know anything about the efforts to get a GUI-like front end working, and if I read the Mycroft postings correctly, they have a lot of work to do.

This is nothing at the moment for our beloved L5.



Please don’t

See this post/thread:


Did you read their CTO’s reply?

Personally, based on this response, their stated goals, and the fact that the entire project is open source, I’m willing to give them the benefit of the doubt regarding their intentions.
Ultimately, we can self-host - either as a server for multiple users or privately - and avoid using their servers entirely if we are worried about potentially nefarious schemes…


Thank you for the replies

Is the speech recognition online or offline?

Also wikipedia says: “Its code was formerly copyleft, but is now under a permissive license.”.

Eh, I don’t see myself using a voice assistant until I can say “computer what is the current stardate” and Majel Barrett responds.

A more sane approach is to host a server for that (maybe as part of Librem One?), but it should be a free feature, since it only requires a single server that can handle thousands of requests without much I/O resource usage.

Then a client could run even on low-end hardware like an Android watch, where a voice assistant is a more useful case because of the lack of a keyboard, and for things like setting timers, alarms, checking the weather, etc.

I didn’t look into the code, but such things can be a potential security risk, because the assistant can actually do things on your device, such as reading/sending messages, accessing contacts, etc. So you need to trust the server side.
Also, these assistant services have been used as an attack surface many times in past years - I remember there was a way to unlock an iPhone through a Siri bug a few years ago - so it’s not something a security-minded person would want to use on a phone.

Not really; according to they are currently “Coordinat[ing] to build community team” to create a personal server. Nothing usable yet. And to process the data in a reasonable amount of time you might need a GPU that TensorFlow can use, which AFAIK today translates to an NVIDIA graphics card with non-free drivers (or a lot of CPUs).