Signal's new president: what are the risks?

True, best to keep this discussion on the track it's currently running. It's been pretty civil so far.

This sentence serves no purpose. It says “the ESG score reflects the ESG metric”. You can say that about any metric; you've mixed up bad writing with a bad metric.

Similarly, I find it funny to describe it as a “social credit”. You could just as well call the amount of money in your pocket a “social credit”: everyone has some, it limits your opportunities when interacting with others, and it influences decisions. I thought “social credit” meant something more than that. Note also that Chinese “social credit” is not decided by private entities and is entirely overseen by the government. So by that definition, the Chinese system is not “social credit”?

I think that particular fear is overblown. Companies will optimize for whatever metrics they like. Don't agree? Switch to a distributed messaging system.

3 Likes

Yep. After reading that the new Signal president wants to kill people whose political views are right wing (well, sort of), it sounds as if moving off Signal is the way to go. That breathtaking level of intolerance isn't compatible with my values.

2 Likes

Where did you read that?

2 Likes

Last link posted (above) before my post. Headline is:

These Machines Won’t Kill Fascism: Toward a Militant Progressive Vision for Tech

co-authored by the incoming president of Signal.

Yeah, the headline is clickbait. The subhead is more representative of her agenda:

The left must vie for control over the algorithms, data, and infrastructure that shape our lives.

As you perhaps implied, I don’t actually want anyone - left or right - shaping or controlling my life, except me myself of course.

1 Like

It sounds like some of you are just concerned that someone with a different viewpoint than you has a prominent position in a company which makes software you use.

I think there’s something being overlooked that is very likely to ensure none of these potential problems actually happens. Signal’s product is encryption. Without encryption, it’s just another messaging app and every phone ships with one by default. If they give up on encryption they give up on having a product. That would suggest taking this sort of action would require a rather wild degree of incompetence. This is quite a lot to speculate solely based on differing viewpoints.

Another point that makes compromising the encryption unlikely is the linked document supporting trans people. The whole reason that group might need support is that they're subject to attack. Weakening encryption would not help them or any other group that is being attacked; it would do the opposite. It would remove a platform they can use to talk freely with people they trust. And as any privacy activist will tell you when a government wants to put a backdoor in encryption, it weakens the encryption overall and the government won't be the only one using the backdoor. The same principle applies here, and while I don't know her story specifically, “privacy activist” seems to be one of the labels that has been applied to her. So in this way as well, when you suggest she might compromise the encryption, what you're suggesting is not just a difference of opinion but a truly stunning degree of outright incompetence.

So far nothing here suggests that she is incompetent. If you’re really concerned that she would do anything suggested here then you would want to look for that first. Without some evidence of that you’re just implying she’s incompetent because your views differ in areas outside of the application you’re concerned about.

Of course if the goal is to boycott Signal because you disagree with the expressed views of the new president, then that's entirely different. Just go do that; don't try to make up reasons to justify it. That's entirely unnecessary.

3 Likes

I’m disappointed that you repeated the clickbait. It’s much different to call for killing compared to wanting to be influential.

2 Likes

Well, yes, I am concerned when that viewpoint is “users of interactive computer services should be kept from communicating certain things”.
I am not saying she is incompetent. One does not get to such leadership positions without being competent. In fact, I am saying she is dangerous, wants to be dangerous, and that her competence makes her dangerous to the groups of people she doesn't like (and very likely to anyone who resembles them).
And compromising the encryption does not endanger vulnerable groups if the provider of the interactive computer service they use takes extra steps to protect them. The service provider can refuse to share, or automatically delete, the data once a user has been identified as belonging to a group they want to protect. Or they can refuse to collect data until the client flags the User or Group as being likely to be something they don't like.
As I mentioned, there are ways for the client to determine whether a User or Group aligns with a service provider's values without sending them any message contents. An offending User or Group can simply be deleted. Might this endanger vulnerable users that looked offending to the algorithm? Maybe; it all depends on how stringent the algorithm is, or just what constitutes a “Group” (that is, the server may have no persistent knowledge that a group even exists and you're just sending messages to a bunch of endpoints at once).
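
Just to illustrate the shape of what I mean, here's a rough Python sketch of that client-side flow. To be clear, this is purely hypothetical: the function names, the term list, and the flag format are all made up, and nothing like this exists in Signal's client today.

```python
# Hypothetical sketch of the client-side flagging described above.
# Nothing like this exists in Signal's client; the names, the term list,
# and the flag format are all invented to illustrate the data flow.

OFFENDING_TERMS = {"badword1", "badword2"}  # stand-in for a provider-supplied policy list


def classify_locally(message_text: str) -> bool:
    """Runs entirely on the device; the message text never leaves it."""
    return bool(set(message_text.lower().split()) & OFFENDING_TERMS)


def maybe_flag_group(messages: list[str], group_id: str) -> dict | None:
    """The only thing ever sent to the server is a flag plus an identifier."""
    if any(classify_locally(m) for m in messages):
        return {"group_id": group_id, "flag": "policy_violation"}
    return None


# The server could then delete the Group or refuse to store its data,
# without ever having seen a single message.
print(maybe_flag_group(["hello there", "badword1 meetup tonight"], "group-123"))
```

The point is that the server only ever receives a flag and an identifier; the contents never leave the device, yet the provider can still act on the User or Group.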

The groups of people you mention will of course be unhappy with any changes I've speculated on, if those changes happen in a vacuum. But if there's a big news story about Signal “harboring and protecting the far-right” or something like that, I'd expect the conversation to be quite different, and for Signal to take steps the moment pressure starts mounting. That dynamic has been in place for AT LEAST the past two years; there just hasn't been a cause for a pressure campaign against Signal yet. Under such circumstances, if a vulnerable person complains about the overall weakening of encryption, their voice will be drowned out by bitterness and spite.

There are very likely going to be warning signs in the source code of the client of the practices that concern me. I’m trying to warn people to be on the lookout for these things.
Until they’re present, Signal should theoretically be safe.

1 Like

There’s a problem with using an algorithm. Let’s use a very simple example.

You have an abuser and an abuse victim on the app. The theoretical algorithm correctly identifies the abuser. Your suggested actions happen and the abuser is kicked off the app. The abuse victim talks about their abuse to someone using the app. At this point, the algorithm is going to have a very hard time differentiating between the abuser and the victim. That’s hugely problematic. And it’s not going to be made okay by having a human review the situation.
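
To make that concrete, here's a toy sketch (the marker words are invented and this isn't any real system) showing how a naive flagging rule ends up treating both people the same:

```python
# Toy illustration of the problem: a naive local classifier can't tell an
# abuser's message apart from a victim describing the same abuse.
# The marker words are invented; this is not any real system.

ABUSE_MARKERS = {"hurt", "threaten", "hit"}


def looks_abusive(message: str) -> bool:
    return bool(set(message.lower().split()) & ABUSE_MARKERS)


abuser_msg = "I will hurt you if you tell anyone"
victim_msg = "He said he would hurt me if I told anyone"

print(looks_abusive(abuser_msg))  # True
print(looks_abusive(victim_msg))  # True -- same flag, very different people
```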

An additional complication: the actions taken by the abuser are illegal. Perhaps the abuse victim is underage. Kicking the abuser off the app is one thing, but it doesn't necessarily stop the abuser from finding ways to contact the victim. And if someone at Signal reviews the chat, they may be legally liable if they don't report it. That's on top of the moral issue that it contradicts the stated intent you're concerned about in the first place.

I understand that you're not using the word “incompetent” to describe any of these things, but getting the company into this sort of situation, one that protects no one and goes against the encryption ideas that are the main reason for the app to exist in the first place? Well, I think it would be hard to describe that as anything but incompetent bungling. There are huge problems with the behavior you're afraid of, and I don't need to share your viewpoint on anything to notice that.

That’s the point I’m trying to get across here. What you’re afraid of isn’t just a new direction that some people will like and others will not. It would be the whole app and company imploding. And that wouldn’t accomplish anyone’s goals.

For my part, I’m speculating on what you describe as incompetent behavior because the article states:

  • We believe that every technology and media company has an obligation to monitor and mitigate the harm done by the products they produce and the services they provide — whether it’s content distribution, search, artificial intelligence, or social networking.

And I don’t know how one would achieve that while also maintaining any kind of privacy.

1 Like

Surveys are one example.

1 Like

Could you elaborate?

You can quantify almost anything you do by recruiting a large number of people and surveying them, as is done in medicine. The goal is to find those who have used something the company offers (search?), those who haven't, and compare how well they are doing in life. Then you can take actions that you think will improve the outcome and compare again.

There are other, more reliable ways to monitor your impact. A/B testing, controlled trials, self-reporting, etc. Those don’t obliterate everyone’s privacy.
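
For instance, a survey-based comparison could look something like this rough sketch (the field names and numbers are invented, just to show that no message content is involved):

```python
# Toy sketch of the survey idea: compare self-reported outcomes between
# people who use the service and people who don't, without touching any
# message content. The field names and numbers are made up for illustration.

from statistics import mean

survey_responses = [
    {"uses_service": True, "wellbeing_score": 7},
    {"uses_service": True, "wellbeing_score": 6},
    {"uses_service": False, "wellbeing_score": 5},
    {"uses_service": False, "wellbeing_score": 6},
]

users = [r["wellbeing_score"] for r in survey_responses if r["uses_service"]]
non_users = [r["wellbeing_score"] for r in survey_responses if not r["uses_service"]]

print(f"users: {mean(users):.1f}  non-users: {mean(non_users):.1f}")
# Repeat the survey after each product change and compare; no private
# messages are ever read.
```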

2 Likes

Are you suggesting taking surveys to see if people are feeling like they’re being mistreated on the platform?

I’m saying that even if you don’t know how to do someone else’s job, that doesn’t mean that that person is incompetent.

3 Likes

I never said anything about anyone being incompetent. Despite that fact, someone else inferred that I was thinking that. All I have ever said in a matter-of-fact way is that I don’t know how they would achieve monitoring without seeing message content.

Fair enough, I must have misread your comment. But now I don't see what point you were trying to make :stuck_out_tongue: Anyways, the quote mentioned monitoring harm, not monitoring message content. Harm may be an outcome of messages, but watching for harm doesn't imply violating privacy.

No, but (in the context of a messaging app like Signal) how do you watch the harm if you can’t see the messages?

Harm isn't done to messages, it's done to people. Choose a focus group, choose volunteers. Choose your employees, choose businesses, choose governments. Watch those, and you won't have a privacy problem even if you decide the only way to evaluate what effect your business has on society is to watch the messages you pass through for those consenting participants.

1 Like

A concrete example of harm you can cause that is independent of what your business does is underpaying your staff. Another example is causing vendor lock-in. Yet another is freeloading on some externality you don't pay for.

1 Like