Signal's new president: what are the risks?

What I believe to be a new danger has arisen.

It’s exactly what it sounds like. And this is a problem for everyone for one simple reason: the organization she put together to protest social issues within large IT corporations believes that every technology and media company has an obligation to monitor and mitigate the harm done by the products they produce and the services they provide.
I think it’s reasonable to imagine that her idea of “harm” includes any memes or information contrary to anything the Google Walkout organization asserts to be true. Therefore, to mitigate this, Signal will very likely have to do many things the vast majority of its users will not like.

She claims she “brings a clear understanding of the environment shaped by the dominant tech business model, and of what it takes to build tech that rejects this model,” but why would someone work in the industry for 17 years if she actually disliked what was going on the whole time?

The best-case scenario I’d be willing to bet on is that Signal just focuses on improving its ESG (aka Social Credit) score instead of implementing desired features but mostly leaves users alone, like Firefox. But I am very much expecting Signal to be revealed to be no longer fit for purpose. I want to be told I’m wrong here. I want to be told I can still trust Signal, because I don’t think anything else strikes the same balance between privacy and convincing people you know to actually use it. But I can’t understand WHY Signal would pick such an obviously ambitious person to lead the company, someone who got her degree in Rhetoric, is still an advisor at the FTC, and runs an “AI Ethics” group (which to my knowledge mostly just does social engineering), unless they were going to start censoring Signal, de-platforming people from Signal, or placing backdoors in Signal’s encryption.
I’ll grant that she’s been on the board for two years now. But that just means 1. this warning is way overdue, and 2. adverse changes could very well drop on Signal users tomorrow.

It’s not like I don’t know there are alternatives. But asking a typical person to contact you through this QR code, or this string of text, or to create and share a public key, is just going to be met with blank stares and feigned consideration, followed by them just using familiar plaintext or refusing to talk to you anymore. And that’s if they don’t get angry at you for caring about it.


Nah. You should use it, but be careful with the data you transmit. Share important information only on paper, or through direct face-to-face communication without phones or computers.

Signal itself is fine. But… the messages get decrypted on phones running an OS that feeds every piece of received information to an AI, which tries to match your behavior to an advertising target, or to nudge your behavior toward a specific target.

However, you should not lose your connection to people because of these information transmission devices. In the future, you could use encrypted mail, a secure non-commercial messenger, or self-hosted systems.

Right now, Signal is good… just migrate to a more secure solution later and delete your and your family’s messages. For history, try keeping physical copies on paper at home, for example, if you still use Google and co.

It depends, I daresay entirely, on whether “services they provide” covers what a user says. For example, if I offer to sell drugs on Signal, strictly speaking, I am providing the service (such as it is), not Signal. Signal is providing (“providing” used loosely, because the word implies intent) a vehicle for the service. If she agrees with my logic, then we’re good. She in all likelihood won’t, though, and will determine that it is in everyone’s best interest to prevent drug deals from happening via Signal communications, which I imagine would require monitoring and flagging and thus undermine the whole e2e encryption scheme.

I say this knowing nothing about the woman, but I’ve seen enough of society’s overreactions to be confident in my statement. I will also qualify my statement as a “prediction” versus a “condemnation” because I still believe in the principle of innocent until proven guilty.

…but I still wouldn’t switch to Signal until I see what happens first.


There is a possible way to stop people from saying things you don’t like without undermining the encryption: build a large database of badwords, or a machine learning model designed to flag things that have a high probability of being badwords, into the app itself. This can be as annoying as you want. You can have the app lecture users on what they should say instead, or just shadowban or outright ban them, all while letting Signal’s servers know only that a certain phone number/user has been banned; a rough sketch of the idea is below.
This would make you a huge jerk, but it’s easy to be a huge jerk when you don’t see people who disagree with you as even being human, and that can happen to anyone who spends too much time anywhere on the internet.
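
To make the mechanics concrete, here is a minimal sketch of what that could look like, assuming the check runs entirely on the device. Everything in it (BADWORDS, report_ban, the fake /ban endpoint) is invented for illustration and is not Signal’s actual code:

```python
# Hypothetical on-device flagging, per the idea above. All names and
# endpoints here are made up; this is not Signal's code.

import hashlib

# The "large database of badwords": here just a small lowercase set.
BADWORDS = {"forbidden phrase", "another forbidden phrase"}

def is_flagged(message: str) -> bool:
    """Check the decrypted message against the blocklist, locally."""
    text = message.lower()
    return any(bad in text for bad in BADWORDS)

def report_ban(phone_number: str) -> None:
    """Tell the server only that this user is banned.

    The message text never leaves the device; the server learns just a
    hashed identifier, which is why the poster argues the encryption
    itself stays intact."""
    user_id = hashlib.sha256(phone_number.encode()).hexdigest()
    print(f"POST /ban {user_id}")  # stand-in for a real network call

def handle_decrypted(message: str, sender: str) -> None:
    # Runs only after decryption, so nothing changes about the wire format.
    if is_flagged(message):
        report_ban(sender)

handle_decrypted("this contains a forbidden phrase", "+15550100")
```

The design point is that no ciphertext is touched and no message content is uploaded; the server only ever learns a ban verdict, which is why such a scheme can claim to leave the encryption itself intact while still policing speech.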

Well, if the app is reading the messages after they’ve been decrypted, I would say that that in itself is undermining encryption.

I presume, though, that the code written to do the reading and flagging lives on a server somewhere (as I understand it, that’s how all such systems work). But let’s suppose, as elaborated above, that all that code is built into the app and stays on the device, such that no other entity sees anything. In that case, I would agree with you that the encryption scheme’s integrity remains intact.

So moving on from that, I think the argument would become whether Signal is intended to provide “secure communication” versus “free communication,” as in “freedom of speech.” I suppose I don’t have the answer to that. I myself would not tolerate having my messages scanned to see if I’ve earned any punishments when I’m having a private conversation, but I can’t come out and say Signal would be wrong for doing that, because it’s Signal’s messaging app (and by “doing that” I mean the example provided above, where it all occurs on the device and never leaves it, such that my messages still remain private). If they want to implement that feature, OK. I just won’t be using it.

Thread on this over at the Signal forum if anyone wants to check in there:

https://community.signalusers.org/t/signal-blog-a-message-from-signals-new-president/46991

Therein is the problem. There is no objective, complete definition of “harm” and in many cases, even where some standard of harm is reached, there isn’t anywhere near universal agreement on what we should do about it.

Perhaps technology companies should focus first on the harm that they directly cause as actors themselves i.e. through privacy invasion, tracking, data collection and laundering, opaque algorithms, …

When they reach the ideal of not themselves being actors, i.e. solely playing the role of allowing communication between two parties, then they should rely on “safe harbour” and butt out.

I guess my opinion therefore is about as far away as is possible from hers i.e.

every technology and media company has an obligation to monitor and mitigate the harm done by the products they produce and the services they provide

As a thought experiment, let’s simplify the world. Imagine that the only means of electronic communication is via fixed line phones. Is it possible that sometimes harm is done and that that harm is either done directly using the phone or is coordinated using the phone? Yes, almost certainly.

But what would we tolerate by way of “monitor and mitigate” in our fixed line phone calls?

The threshold of what is acceptable (to some) has been lowered since the time that that world ceased to exist. I believe that this, in large part, is because it is possible. Technology has made greater levels of intrusiveness possible and sooner or later that intrusiveness then becomes the norm.

Pay the mortgage? I don’t know. I think there are a lot of people who don’t like the jobs that they are in - and many different reasons why they don’t do something about it. :wink:


Perhaps, in their calculus of the harm of their actions vs. the harm of their inaction, they have determined their actions to be less harmful. It is, after all, one of many possible ways to calculate.

So yeah there isn’t going to be consensus on what constitutes harm nor what actions should be taken nor what degree of inaction is acceptable. Inaction is in a way an action unto itself in that calculus.

Just another thought to throw into the mix. After all these are complex things with no binary answers but rather gradients of values.


For sure. If you put my first two paragraphs together, they could certainly be using a different definition of “harm” from me. So what I consider to be their harm, they don’t consider to be harm at all.

Not necessarily “at all” just possibly less harm than the perceived benefit. It doesn’t have to be harm/no harm, it can be degrees of more/less.

Whatever you think of any particular person currently believed to be in charge, the man problem with Signal is still that it is centralized. Even if you think the current leadership is good, you always have to worry that it may change in the future.

I think the right thing to do from a user freedom perspective is to switch away from Signal to something decentralized like Matrix or XMPP.


I think your typo (? is it a typo?) is… apt :smiley:

What if your conversation partner activates some sort of feature and shares his conversations with some server? Would he be able to share the texts you sent him? That way, you would have a culture where everyone observes each other and is able to downcredit other people in the social credit system.


More information has arrived, although it’s vague. Signal wants users to contribute financially somehow. Without knowing more, that doesn’t really affect my primary concern: that something I say on Signal’s service could cause Signal to take steps that adversely affect me. I get the impression that nothing’s been firmly decided yet.
Some funding methods might make me feel better, since the business won’t want to anger its paying users too much, and “if the service is free, you are the product.” But I might have to pay for other people to convince them to switch, since the average person doesn’t think about that. Also, sending money means sending more than just a phone number, and it requires that you haven’t hurt your bank’s ESG score badly enough for them to kick you out. Unless you use crypto, like the one Signal already has, but it’s not like that’s perfect either.

You would, yes. Let’s elaborate on this a bit: say the feature that was activated is some sort of backup or logging feature that requires the cleartext of the conversation to be saved somewhere else, sort of like the way Google will put your photos in the cloud whenever you take a picture (though, personally, I can’t think of a reason one would want to do this, since the messages are stored, encrypted, on Signal’s servers [right?], but we can ignore my opinion on the “why” for now). I don’t see any practical reason for storing only one side of the conversation (only the side of the person who enabled the feature), so I would be assured that yes, both what is being sent and what is being received would end up on a server somewhere else. This feature would certainly undermine the encryption scheme by bypassing it completely; a minimal sketch of that failure mode is below. As far as the mentioned social credit system, is that something the new president has mentioned? It may be a red herring, in that it has been introduced into the discussion purely as conjecture.
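
As a sketch of how such a backup feature would sidestep end-to-end encryption, assuming invented names (BACKUP_URL, on_message) that have nothing to do with Signal’s real client:

```python
# Hypothetical "backup" feature from the scenario above. BACKUP_URL and
# on_message() are made up for illustration; this is not Signal's code.

import json

BACKUP_URL = "https://backup.example.com/log"  # invented endpoint

def upload(entry: dict) -> None:
    """Stand-in for an HTTP POST of one plaintext entry to the backup server."""
    print(f"POST {BACKUP_URL} {json.dumps(entry)}")

def on_message(direction: str, contact: str, plaintext: str) -> None:
    # Fires for both sent and received messages once they exist in the
    # clear on the device, which is why the other party's words get swept
    # up too, regardless of their own settings.
    upload({"direction": direction, "contact": contact, "text": plaintext})

on_message("received", "+15550100", "this was end-to-end encrypted in transit")
```

Nothing cryptographic breaks here: the plaintext simply takes a second trip after decryption, which is why one party’s settings can expose both parties.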

They are stored on Signal’s servers if you, or someone you talked to, set it up with a PIN, which is why I never set one. If you don’t, AFAIK they’re only on your phone and your recipients’ phones. Calls are not stored. I tell people not to set a PIN no matter how much the app asks, but unless I’m right there with them, I can’t stop them.

I brought up ESG scoring and called it “social credit” because it really is basically social credit for corporations, except that its rules are decided entirely by private enterprise (i.e. banks) with essentially no government oversight, or any other kind of oversight. It’s a big part of why every publicly traded business keeps getting involved in political issues even when it makes zero sense to do so. And it can influence the decisions that entities like Signal make.

Is that for backup purposes? Or for using multiple devices? I can’t think of another reason to want that.

Also, from that article:

ESG scores are calculated by companies that use their own formulas and methods to quantify and measure how well publicly traded companies are meeting ESG metrics

At least now I have proof that it serves no real purpose.

Yes, purely hypothetical in the context of this discussion, but it’s happening in China, and if it’s happening in China, it will be happening to us a few years later. :wink:

To be clear, I am talking about actual social credit for individuals, not “social credit” for corporations.

It says it’s for backups when you first launch the Signal application. But it may also play a role in keeping conversations synced across multiple devices; I don’t know, I haven’t tried that, because I use the calling feature more often than the instant messaging feature.

Here’s a recent, and heavily biased, article that she co-authored. It’s all about seizing power to keep the vile, evil right wing from having any influence over anything, under the pretense that current policy covering basically all interactive computer services is somehow too biased in favor of promoting the opinions of Republican voters, whom you shouldn’t like because they’re bad.
I’d like to opine more on it, but that would just start a flame war.
