Imagine an Internet of Snitches, each device scanning whatever data it has access to for evidence of crime. Beyond the OS itself, individual phone apps could start looking for contraband. Personal computers would follow their lead. Home network file servers could pore through photos, videos and file backups for CSAM and maybe even evidence of copyright infringement. Home routers could scan any unencrypted network traffic. Your voice assistant could use machine learning to decide when yelling in a household crosses the line into abuse. Your printer could analyze the documents and photos you send it.
It’s not much of a surprise to most people that their devices, especially their phones, are snitching on them to the hardware vendor (or app developer). Some people are surprised to discover just how much. I already wrote a post, Snitching on Phones That Snitch on You, that focused on how much data idle Android and iOS devices send to Google and Apple respectively, described how we avoid those problems on the Librem 5, and even explained how to use OpenSnitch to track any attempts by a malicious app to snitch on you.
So we know most devices and proprietary apps track people to some degree (even paying customers), and that the problem has extended to cars. While many people don’t like the idea of this, they also shrug it off, not just because they don’t feel empowered to do much about it, but also because their data is “only” being used for marketing purposes. Someone is profiting from the data, sure, but their data isn’t being used against them.
Yet we are starting to see how your data can be used against you. Police routinely get location data from data brokers to track suspects without having to get a warrant. Even private groups have paid data brokers to dig up dirt on people, leading to a Catholic priest’s resignation after location data revealed he used the Grindr app and frequented gay bars.
Crossing the Rubicon
So companies capture and sell our data, and the police and private groups sometimes buy that data to look for crimes. But up to this point, the “snitching” that devices did on you was indirect: devices would send data to vendors or app developers, who would sell it to brokers. The only time a vendor might search your data and alert the authorities was when scanning files you had shared that were stored on the vendor's own servers. Actually scanning a person's device for potential contraband was a line companies wouldn't cross.
This past week, however, Apple crossed that line. Apple announced in their new child safety initiative that they will scan all customers’ iPhone photos for CSAM (Child Sexual Abuse Material) before they are backed up to iCloud. Plenty of other groups have already weighed in on the risks and privacy implications of this particular move for iPhones and the EFF in particular has explained the issues well, so I won’t cover any of that here. What I will discuss, instead, are the broader implications of crossing the Rubicon into client-side scanning of devices for potential evidence of crimes.
Read the rest of the post here: