Is Google’s AI Reading Your Private Messages? Why It’s Time to Consider a Secure Alternative

Always worth considering: even if you are already using a secure alternative, if you send content to another person and that other person has been assimilated into the GoogleBorg, then Google is still potentially reading your not-so-private messages.
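The usual partial mitigation is end-to-end encryption, so that the provider only ever stores ciphertext. Here is a minimal sketch using the python-gnupg wrapper, assuming GnuPG is installed and the recipient's public key is already imported (the address is illustrative):

```python
import gnupg

# Uses the local GnuPG keyring; assumes GnuPG is installed and the
# recipient's public key has already been imported.
gpg = gnupg.GPG()

message = "Meet at the usual place at 18:00."

# Encrypt to the recipient's key: only their private key can decrypt,
# so a provider in the middle sees ciphertext only.
encrypted = gpg.encrypt(message, ["bob@example.com"])
if not encrypted.ok:
    raise RuntimeError(f"encryption failed: {encrypted.status}")

# This ASCII-armored blob is what actually travels through Gmail et al.
print(str(encrypted))
```

The limit is obvious, though: if the recipient decrypts inside a Google surface, or simply replies in plaintext, you are back where you started.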

This, to me, is a completely unsatisfactory situation, and one that needs legal protection. But do governments care?

4 Likes

Does anyone?

1 Like

Reading my post, did you come away with the impression that I care? :wink:

2 Likes

No, I do not believe the concept of caring means deferring responsibility to someone or something else.

1 Like

So do you “care” and, if so, how do you deal with this problem?

2 Likes

I have little interest in the operations of other people, their tools and/or workflows, and I no longer have any ambitions of converting them over to alternatives either. Instead, I am primarily focused on neutering my third-party dependencies to centralize and reclaim all power/trust. The only measure of caring I have is the absence of self-violation, particularly when I do not consider compromise a viable option.

1 Like

I suppose this is a “second-party problem” (?), between A and B, as in: whom can you trust. But in a network of A, B, C and D, parts of the content may be forwarded, and the problem persists as long as any endpoint isn’t completely safe. Would the optimal situation be one where the end user isn’t able to use AI (prevented by the system, limited in how they can access and re-use your message/data), or one where they can be trusted not to use it (they could, and there could even be a private way to do it, but you trust them not to) - or both (or neither)?

The blog post annoyingly singles out Google, which is the biggest player in private email, but MS has this too in organizations (no need to invoke Recall). There is, however, a big difference between a “free service” and contractual services where certain promises are made not to use data - guess which one has even a slight monetary penalty attached if the promises are broken. And then there are other, smaller entities doing this too - although with some of those, at least, it can be done so that the data stays inside the organization’s own system, and they can be sure it’s not used in training.

Governments probably care, although often only eventually, after something happens. Or, alternatively, we get stuff like the EU’s DMA (doesn’t this sound like a picnic: Commission organises DMA compliance workshops with Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft (during June and July 2025)).

1 Like

For me it’s not so much about trust. Let’s say that I am A. B has consented to Google’s predation in exchange for a free (or subsidised) service. I have not consented. In fact, I have not even been asked. However, by communicating with B, my content is now subject to predation.

Oh, goody. Who wouldn’t want to go on a DMA compliance workshop? :slight_smile:

2 Likes

Meta has been reading your keystrokes, including text you typed and then deleted without ever posting. I think Alphabet does the same now - Meta has done this since around 2007, and Google since maybe 2015?

With A.I. we have a new man-in-the-middle, and a German company found that some mails were being translated, as if personalized to the receiver, when the receiver used another language.

And I think Purism has observed this: an information change through A.I. translation like that. And yes, if some folks click “accept” to get some cushy features, it will likely change the message, much as spam filtering does, or to put some nicer language inside.

And yes, it reads the message. Google has read messages from its beginning.
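If the worry is an A.I. layer silently rewriting messages in transit, one hypothetical countermeasure is for the sender to sign the exact bytes that were sent: any “helpful” translation or personalization then breaks the signature on the receiving end. A minimal sketch with the cryptography package (the keys, texts, and the tampering step are all illustrative):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Sender side: sign the exact bytes being sent. The key pair is generated
# here for the demo; in practice the public key is shared out of band.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

original = "Wir treffen uns um 18:00 am üblichen Ort.".encode("utf-8")
signature = private_key.sign(original)

# Hypothetical middlebox: an A.I. layer "helpfully" translates the mail.
delivered = "We will meet at 18:00 at the usual place.".encode("utf-8")

# Receiver side: verification fails if even a single byte was changed.
try:
    public_key.verify(signature, delivered)
    print("message verified: delivered exactly as sent")
except InvalidSignature:
    print("signature check failed: message was modified in transit")
```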

1 Like

Just in time, Proton is offering their version of a privacy-respecting AI service. This seems to be a middle ground somewhere between the worst offenders and running your own AI on your own computer. See this blog announcement for more details. Although it seems muuuuch better than the alternatives when it comes to privacy and security, there’s very little info on (other) ethical aspects, which is a shame.
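For comparison, the “running your own AI on your own computer” end of that spectrum can be as small as the sketch below. It assumes an Ollama server on its default local port with a model already pulled (the model name is illustrative); the prompt and the answer never leave the machine.

```python
import json
import urllib.request

# Assumes a local Ollama server on its default port; "llama3" is an
# illustrative model name for whatever you have pulled locally.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = json.dumps({
    "model": "llama3",
    "prompt": "Summarise: the meeting moved from 17:00 to 18:00.",
    "stream": False,  # ask for one complete JSON response
}).encode("utf-8")

request = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# Everything stays on localhost: no third party sees prompt or answer.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```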

1 Like