Mozilla's latest foolish plan

I am perplexed…

Moz is now giving in to the AI chatbot frenzy. That doesn’t sound like good news to me.
And what for? My take is that they expect to soon lose Google’s “bribe”, which is currently keeping them afloat. So they will monetize! Perplexity’s entire business model is commercial data mining and privacy monetization - that is what they do.
And it’s just about the worst kind if we are to believe this WIRED article:

HackRead finishes its article with this line:

Beyond this specific integration, Firefox will also soon prompt users to agree to updated terms of use upon startup.

So we know what to expect

3 Likes

Even if this gambit succeeds, I doubt it can fully replace the $400 million and up that Google pays Mozilla annually.

1 Like

Yes, they can’t afford to drop Google.

1 Like

It may not be a case of Mozilla’s choosing to drop Google but rather that circumstances simply change underneath them.


I don’t blame Mozilla for seeking adequate alternative revenue. It’s just a question, for me, as to what price they (really we) pay as a consequence.

Maybe you can block access to Perplexity, and hopefully it can be disabled by configuration anyway.
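
As a sketch of the “disable it by configuration” hope: recent Firefox builds gate the AI chatbot sidebar behind the browser.ml.chat.* prefs, and whether those will also cover the Perplexity integration is an assumption on my part. Something like the snippet below, appended to the profile’s user.js (the profile path is a placeholder), makes Firefox re-apply the prefs at every start:

```python
# Sketch: append prefs to user.js so Firefox re-applies them at each start.
# Assumption: the Perplexity integration is governed by the same
# browser.ml.chat.* prefs as the existing AI sidebar; adjust if it isn't.
from pathlib import Path

# Placeholder profile path -- check about:profiles for the real one.
profile = Path.home() / ".mozilla/firefox/xxxxxxxx.default-release"

prefs = {
    "browser.ml.chat.enabled": "false",  # hide/disable the AI chatbot sidebar
    "browser.ml.chat.provider": '""',    # no preselected chat provider
}

with open(profile / "user.js", "a", encoding="utf-8") as f:
    for name, value in prefs.items():
        f.write(f'user_pref("{name}", {value});\n')

print("Wrote prefs; restart Firefox to apply.")
```

Blocking perplexity.ai at the DNS or hosts-file level would be the blunter fallback if the prefs turn out not to cover the integration.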

Is anyone surprised that AI (LLM) hallucinates??? This is a problem that is by no means limited to Perplexity.

3 Likes

G is not going to be paying them for long, if it hasn’t already stopped - as per Mozilla buys an ad metrics company - #27 by JR-Fi (or more recently Firefox could be doomed without Google search deal, says executive | The Verge). They need alternative sources of money - and so does the whole ecosystem, as browsers and the internet are not cheap (but Perplexity is a shitty plan B, if their other browsing ideas are anything to go by).

2 Likes

Did I not mention Mozilla scraping a week ago?

2 Likes

When I first started dealing with LLMs (recently), I was very surprised at the confidence with which they made stuff up. I’m no longer surprised.

2 Likes

… although I guess this is anthropomorphization. (I mean, it is possible for a computer algorithm to produce a result and a confidence level together, but typically LLMs don’t give you one. They just make ∎∎∎∎ up, and sometimes it’s correct and sometimes it’s incorrect.)
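
To illustrate the parenthetical: per-token probabilities do exist under the hood, they just aren’t surfaced in chat interfaces, and they measure “how expected is this text”, not “is this true”. A minimal sketch with Hugging Face transformers (GPT-2 is an arbitrary small model picked for the example):

```python
# Sketch: a model *can* report how probable its own output was, even though
# chat front-ends rarely show it. Token probability is only a rough proxy
# for confidence, not a truth score.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=5,
    do_sample=False,            # greedy, deterministic continuation
    output_scores=True,         # keep per-step logits
    return_dict_in_generate=True,
)

# Log-probability of each generated token under the model itself.
scores = model.compute_transition_scores(out.sequences, out.scores,
                                          normalize_logits=True)
new_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
for tok_id, logp in zip(new_tokens, scores[0]):
    print(f"{tok.decode(tok_id)!r}: p={torch.exp(logp).item():.2f}")
```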

1 Like

How LLMs use language is based on how they were trained. In general they were trained on authoritative sources, and on sources where not a great deal of speculation was being communicated. It should not be surprising that the phrasing is authoritative. Similarly, LLMs have much-better-than-average use of grammar … and their spelling is impeccable.

After a conversation where it admitted to providing incorrect information and I was delving into “why it would think that”, I discovered that it has basically “learned” to sound smart (e.g. quoting sources even when they are fake, …). In any case, I changed the LLM’s opening prompt so that it would provide a confidence level for each assertion. That helped somewhat, but one must always understand that it is optimized not on “truth” or even “probability of truth”, but essentially on “what sounds good”.
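
For what it’s worth, the opening-prompt change was along these lines - the wording and the helper below are illustrative, not my exact setup or any particular API:

```python
# Sketch: asking for a confidence level via the system ("opening") prompt.
# The wording and message format are illustrative, not tied to a specific API.

SYSTEM_PROMPT = (
    "For every factual assertion you make, append a confidence level "
    "from 0-100% in square brackets, e.g. 'X was released in 2019 [60%]'. "
    "If you are not reasonably sure, say so instead of guessing."
)

def build_messages(user_question: str) -> list[dict]:
    """Assemble a chat in the common system/user message format."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Who maintains the Firefox enterprise policy templates?")
# Send `messages` to whatever LLM backend you use; note that the reported
# percentages are still generated text, not calibrated probabilities.
```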

1 Like

This is veering off from the original topic to AI…

There is a lot of good research on this behavior. LLMs can be, and are, prompted to answer and behave in a certain way. It’s the more potent, direct and likely way an LLM is made to “sound” like something (as in, “Be helpful and give your answers in a positive way”). This is separate from user prompts (and you can ask your questions with “and answer in a way that sounds smart but using simple language that a 5-year-old can understand, incorporating an old quote, a Star Trek reference and a southern accent”, etc.).

That being said, LLMs are constantly tweaked and the weights of the various algorithmic aspects are being changed to get better general results - due to complex behaviors this also has unintended consequences. There are several articles (and user feedback) noting how some of the models have changed their attitudes and expressions compared to what they were previously, as developers have been tweaking them. Sometimes that is learned, based on feedback/likes. The data sources as such do have an effect on the outputs, but they are unlikely to be the reason for the tone. And let’s not forget how we ourselves have changed (for instance, in our internet search behavior).

You may be interested in a recent article that concluded that some LLMs were in some cases showing deceptive behavior when prompted in a certain way [noting that there is a difference between what something seems like to us and actual intent, as LLMs do not “think”]. Another blog post had some good points on the subject too, particularly on agentic tasking. The question is: is this useful or even necessary for a human-like AI assistant (one mimicking the behavior of our interactions) to be more effective in its tasks, or to be more acceptable to us - after all, we humans use little lies in social interaction all the time to get things done. And then come the ethics questions (as devs/engineers tend not to think of them beforehand): should this be done/used, and when (context matters).

The confidence level issue is also an interesting area of research. There’s a good paper on asserting confidence not via numbers but via expressions (“I’m unsure, but…” etc.) [there’s even research on prompting LLMs to use “I” in answers, if I recall - something about how most users respond better to it with conversational AIs and are more likely to trust the answers - we’re funny that way]. There are many other papers in this area too, from other angles.
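
The idea is easy to sketch: somewhere a score gets mapped to a hedge phrase before the answer is shown to the user. A toy illustration (thresholds and phrasings are my own, not from any paper):

```python
# Toy sketch: turning a numeric confidence into a verbal hedge, roughly the
# way some of that research surfaces uncertainty to users.
# Thresholds and phrasings are illustrative only.

def hedge(answer: str, confidence: float) -> str:
    if confidence >= 0.9:
        return answer
    if confidence >= 0.6:
        return f"I think {answer[0].lower() + answer[1:]}"
    if confidence >= 0.3:
        return f"I'm unsure, but {answer[0].lower() + answer[1:]}"
    return "I don't know enough to answer that reliably."

print(hedge("The setting can be changed in about:config.", 0.45))
# -> "I'm unsure, but the setting can be changed in about:config."
```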