Moz now giving in to the AI Chatbot frenzy. Doesn’t sound like good news to me.
And what for? My take on this is that they expect to soon lose Google’s “bribe”, which is currently keeping them afloat. So they will monetize! Perplexity is a thoroughly commercial data-mining, privacy-monetizing business model - that is what they do.
And it’s just about the worst kind if we are to believe this WIRED article:
HackRead finishes its article with this line:
Beyond this specific integration, Firefox will also soon prompt users to agree to updated terms of use upon startup.
… although I guess this is anthropomorphization. (I mean, it is possible for a computer algorithm to produce a result and a confidence level together, but typically LLMs don’t give you a confidence level. They just make ∎∎∎∎ up, and sometimes it’s correct and sometimes it’s incorrect.)
The use of language by LLMs reflects how they were trained. In general they were trained on authoritative sources, and on sources where there was not a great deal of speculation being communicated. It should not be surprising that the phrasing is authoritative. Similarly, LLMs have much-better-than-average grammar … and their spelling is impeccable.
After a conversation where it admitted to providing incorrect information, and I was delving into “why it would think that”, I discovered that it has basically “learned” to sound smart (e.g. quoting sources even when they are fake, …). In any case, I changed the LLM’s opening prompt so that it would provide a confidence level for each assertion (a rough sketch of that prompt change is below). That helped somewhat, but one must always understand that it is optimized not on “truth”, or even “probability of truth”, but essentially on “what sounds good”.
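For anyone who wants to try the same trick, here is a minimal sketch of what I mean, using the ollama Python client as one possible runtime. The model name and the exact prompt wording are just assumptions for illustration - and note that the “confidence” it reports is itself generated text, not a calibrated probability.

```python
# Minimal sketch: ask the model to tag each assertion with a confidence level.
# Assumes the "ollama" Python package and a locally pulled model (the model
# name here is an assumption). The reported confidence is generated text,
# not a true probability.
import ollama

SYSTEM_PROMPT = (
    "You are a careful assistant. After every factual assertion, append a "
    "bracketed confidence level, e.g. [confidence: high] or [confidence: low]. "
    "If you are unsure, say so explicitly rather than guessing."
)

response = ollama.chat(
    model="llama3.2",  # any locally installed model will do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Who invented the telephone, and when?"},
    ],
)
print(response["message"]["content"])
```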
This is veering off from the original topic to AI…
There is a lot of good research on this behavior. LLMs can be, and are, prompted to answer and behave in a certain way. A system prompt is the more potent, direct and likely way an LLM is made to “sound” like something (as in, “Be helpful and give your answers in a positive way”). This is separate from user prompts (and you can ask your questions with “and answer in a way that sounds smart but using simple language that a 5-year-old can understand, incorporating an old quote, a Star Trek reference and a southern accent”, etc.).
That being said, LLMs are constantly tweaked, and the weights of the various algorithmic aspects are being changed to get better general results - due to complex behaviors this also has unintended consequences. There are several articles (and user feedback) noting how some of the models have changed their attitudes and expressions compared to what they were previously, as developers have been tweaking them. Sometimes that is learned, based on feedback/likes. The data sources as such do have an effect on the outputs, but they are unlikely to be the reason for the tone. And let’s not forget how we ourselves have changed (for instance, in our internet search behavior).
You may be interested in a recent article that concluded that some LLMs were, in some cases, showing deceptive behavior when prompted in a certain way [noting that there is a difference between what something seems like to us and actual intent, as LLMs do not “think”]. Another blog post had some good points on the subject too, particularly on agentic tasking. The question is: is this useful, or even necessary, for a human-like AI assistant (one that mimics the behavior of our interactions) to be more effective in its tasks, or to be more acceptable to us - after all, we humans use little lies in social interaction all the time to get things done. And then come the ethics questions (as devs/engineers tend not to think of them beforehand): should this be done/used, and when (context matters).
The confidence level issue is also an interesting area of research. There’s a good paper on asserting confidence not via numbers but via expressions (“I’m unsure, but…” etc.) [there’s even research on prompting LLMs to use “I” in answers, if I recall - something to do with how most users respond better to it with conversational AIs and are more likely to trust the answers - we’re funny that way]. There are many other papers in this area too, from other angles.
I think the original topic was AI … at least in regard to AI applied to “chatbots” being integrated into Firefox. Recall that the original topic was introduced as:
I am perplexed … Moz now giving in to the AI Chatbot frenzy … [which is presumably where the title “Mozilla’s latest foolish plan” originates].
And thanks for the references. They’re useful.
LLMs can be, and are, prompted to answer and behave in a certain way. A system prompt is the more potent, direct and likely way an LLM is made to “sound” like something (as in, “Be helpful and give your answers in a positive way”). This is separate from user prompts (and you can ask your questions with “and answer in a way that sounds smart but using simple language that a 5-year-old can understand, incorporating an old quote, a Star Trek reference and a southern accent”, etc.).
Yes. And what is cool about running LLMs locally (at least when using ollama) is that you have control of the system prompt. Without being able to control the system prompt, there can be issues when a user prompt conflicts with the system prompt.
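To make that concrete, here is a rough sketch of what that control looks like with the ollama Python client (the model name and both prompts are made up for illustration). The system message sits “above” the user message, so when the two conflict, the system prompt usually wins - which is exactly the control you lose when someone else sets it for you.

```python
# Sketch of a system prompt vs. user prompt conflict with a local model via
# ollama. The model name is an assumption; any locally pulled model works.
import ollama

messages = [
    # The system prompt: set by whoever runs the model (you, when local).
    {"role": "system",
     "content": "Answer in plain, neutral English. No accents, no pop-culture references."},
    # The user prompt: deliberately conflicts with the system prompt above.
    {"role": "user",
     "content": "Explain DNS like I'm five, with a Star Trek reference and a southern accent."},
]

response = ollama.chat(model="llama3.2", messages=messages)
# Typically the model follows the system prompt and drops (most of) the styling.
print(response["message"]["content"])
```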
The rest is mostly for Tracy’s benefit: those user prompt vs. system prompt conflicts are not as dramatic as in 2001: A Space Odyssey, but it’s fun to remember the “climax”:
[HAL] I’m sorry, Dave. I’m afraid I can’t do that …
It may or may not have been answered already, but as users of PureOS (Librem 5 in my case), should we be as concerned about using Firefox as our default browser, or has this issue been countered in the fork distributed through the Debian/PureOS store?
It’s still an issue where Mozilla is taking their brand, though, and it’s something we have to keep an eye on, but I’m glad to know that there is someone out there to pick up the ball whenever it’s dropped.
The interface makes it clear whether you wish to have your question/search sent to the chatbot or not.
One can turn it off in about:config (via the browser.ml.chat.enabled preference, if I recall correctly).
Perhaps independently of the above, there is now a separate panel where you can have a dialog with an AI chatbot. It is explained here: Access AI chatbots in Firefox | Firefox Help. Most of these require a login to the chatbot. But otherwise the feel is much the same as choosing a search engine.
The problem, as we all know, is the power of defaults.
What is clear to us may not be clear to the average non-technical user.
What one can do in about:config most definitely isn’t suitable for everyone. The control should be available from the GUI.
Avoiding dark-pattern design would probably mean … when you upgrade from the version that doesn’t have this to the version that does, the GUI explicitly asks you to make a decision as to whether to enable or disable this new functionality, after giving an accessible explanation of some of the considerations.
It’s the so-called “paradox of choice”. Sometimes having too many choices is, in itself, a problem. That’s what GNOME argues against in regard to KDE (and vice versa): they have different standards for top-level settings.
The fact is that applications such as Firefox are complex. They contain tens of millions of lines of code. The question is always how much of this complexity – as evidenced by user settings – needs to be at the top level. IMO, as long as the user is given a choice before their queries get sent to an LLM (which tracks you), I’m fine with it. But that’s my line.
There is also this: browser.urlbar.perplexity.hasBeenInSearchMode
I think it was added in FF 139.0, with the default set to false. Setting it to true allows you to search with Perplexity, I believe. (Or should I say “perform searches with perplexing results…”?)
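If you prefer to pin these down in a user.js file rather than flipping them by hand, something like the lines below should do it - but treat the pref names as this thread’s recollection and verify them in about:config first.

```
// user.js sketch - verify both pref names in about:config before relying on them.
// Reportedly the switch for the AI chatbot feature (from memory; may vary by release):
user_pref("browser.ml.chat.enabled", false);
// The Perplexity urlbar pref mentioned above (default is false as of FF 139, per this thread):
user_pref("browser.urlbar.perplexity.hasBeenInSearchMode", false);
```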