Google and Apple partnership for Contact Tracing


Based on a previous comment of yours I assumed this to be the case. Apparently, that was an incorrect assumption on my end.

That’s not how this works. Are you sure you read the correct spec? It seems like you’re either misinformed, or confusing it with another contact tracing scheme.

This is the one where your phone shouts random strings, and other phones within earshot “write down” the random strings they hear. When you get sick, your list of random strings for the past 14 days or so gets uploaded. Periodically, everyone else downloads the new list of “suspect” random strings. Their app checks whether it “heard” any of the new “words” on the list and, if so, informs the user that they may be at risk.
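The check described above boils down to a set intersection. Here is a minimal sketch in Python, with random bytes standing in for real beacon payloads; the 16-byte size, the in-memory storage, and the download mechanism are all assumptions for illustration, not the actual implementation:

```python
import secrets

# Identifiers my phone has "heard" over the past 14 days (a real app
# would persist these locally; random bytes stand in for real beacons).
heard = {secrets.token_bytes(16) for _ in range(100)}

# Simulate the downloaded "suspect" list: identifiers broadcast by
# confirmed cases, one of which my phone happens to have heard.
suspect_list = {secrets.token_bytes(16) for _ in range(50)} | {next(iter(heard))}

# The app's periodic check is just the intersection of the two sets.
matches = heard & suspect_list
if matches:
    print(f"Possible exposure: {len(matches)} matching identifier(s)")
```

Nothing in either set points back at a person; the only signal is whether the sets overlap.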

It’s not protected. It’s public data. That’s the whole point: there’s nothing in that database that could be traced back to you, unless you gave away the private keys with which those random strings are signed and told the recipient “these keys belong to me, kieran”. And that would be like giving away your SSH private key: unbelievably stupid of you to do, but not a flaw in the SSH protocol.

I sure hope not! This isn’t going away in a couple of weeks’ time, and there will be other epidemics/pandemics that would greatly benefit from having contact tracing from the get-go rather than having to wait 4 months before we get our shit together. Either way, this is a moot point. The only things that are uploaded are the randomly generated strings of confirmed cases, nothing that can be traced back to you.

Not really. Since when has the mainstream media ever correctly reported on anything technical? Unless Google or Apple explicitly say that these APIs, which are merely system calls, will only be available to certain organisations, I wouldn’t read too much into what an industry known for its lack of understanding of technical matters, and for its inaccurate reporting on anything to do with “these magical thinking machines”, has to say on the subject.

Because the thing hinges on cryptography. And you want your cryptography algorithms developed by someone who actually knows what they’re doing. That’s why the API for generating and signing these random strings becomes part of the OS: to ensure that app developers have access to verified implementations and won’t be tempted to write their own.
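To illustrate why a verified implementation matters, here is a rough sketch of how rotating identifiers could be derived from a device secret. This is an assumption-laden illustration, not the actual Apple/Google construction: the real spec defines its own key-derivation scheme, and plain HMAC-SHA256 merely stands in for it here.

```python
import hashlib
import hmac
import secrets

# A long-term device secret that never leaves the phone (size assumed).
tracing_key = secrets.token_bytes(32)

def daily_key(day_number: int) -> bytes:
    # Derive one key per day from the device secret. If you test
    # positive, only these per-day keys would ever be published.
    return hmac.new(tracing_key, f"day-{day_number}".encode(), hashlib.sha256).digest()

def rolling_identifier(day_number: int, interval: int) -> bytes:
    # One broadcastable identifier per time interval within the day,
    # truncated to a small beacon-sized payload (16 bytes assumed).
    mac = hmac.new(daily_key(day_number), f"interval-{interval}".encode(), hashlib.sha256)
    return mac.digest()[:16]

# Consecutive identifiers look unrelated to an eavesdropper...
a = rolling_identifier(100, 0)
b = rolling_identifier(100, 1)
assert a != b
# ...but anyone holding a published daily key can regenerate and match them.
assert rolling_identifier(100, 0) == a
```

Get any of these derivation details subtly wrong in a home-grown version and the identifiers become linkable, which is exactly the kind of mistake an OS-provided implementation is meant to prevent.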

Because nobody would trust that app if it were only available from Google, so they instead just provide a cryptographic API that allows anyone to make the right system calls in order to generate one of those random strings. This means governments, health organisations, but also FOSS developers could write their own implementation, so anyone will be able to find an app that satisfies their trustworthiness requirements.

I think you’re confusing two different meanings of the word API. This is not a REST API where you have to pay for an account in order to access it. This is a local API: a set of methods with predefined method names and arguments to be used on the local machine. Aka: a “library”.

There is of course the server that collects all the uploaded data from known infections, and that will obviously also use an API in the traditional sense like you understood the word. Since they want to prevent jokers from spamming the system with false reports, I’m going to assume, based on what I’ve read so far (namely that it’s going to be the hospitals who upload the data) that uploading will require an account that will only be given out to verified health organisations. Whether they get charged for this or not is not really relevant. We’re talking about potential privacy invasions, let’s stay on topic.

You’re entitled to your scepticism. However, experts in the field (e.g. virologists, health administrations), who don’t have anything to gain by invading your privacy (they’re not the government, nor are they coerced by it), seem to think that this would be enormously beneficial to both their work and the public in general.

And their arguments make sense: it won’t stop the virus from spreading, but it will allow people to be notified they may be at risk before they show symptoms. These people can then get tested, so their contacts can receive a warning as well, or at least self-quarantine. This won’t help the people that they’ve already spread the virus to before being notified, but it will allow them to prevent spreading it further afterwards.

Um, NO. People on this thread have shown opposition to this very idea without even understanding what they’re talking about. Paranoia is running rampant. “Not on my phone!”; “Please, Purism, don’t react to this so that nobody will notice you and require you to install this on your phones as well”, etc… This is no longer speculation; it’s a call to action to oppose the scheme.

If you read what I wrote above, which explains exactly what this is and how it works, you’ll realise that it’s just a harmless system library that generates random strings. This library can then be used by applications to do secure, anonymous contact tracing. What’s being offered is a way to end the lockdowns sooner without infringing on your privacy. But apparently you would rather be responsible for more deaths, or give up your right of free movement for longer, than accept an elegant, privacy-preserving solution that would let us regain our freedoms sooner, with minimal casualties.

Also: “behaving like” is not namecalling. Words have a meaning. Please use them properly.

Not sure what country you live in, but over here, the healthcare institutions are pretty open about this stuff and have set up their own information sites that don’t depend on the mainstream media. We know that people die due to the virus; that’s not even under discussion anymore. If that’s the best you can offer to the discussion, I suggest you stop bothering.


That was the bit that I misunderstood. Thanks for clarifying that.

True. But the data from this pandemic would not be applicable to a future pandemic. Current data would presumably be misleading if it is used and pointless if it is not used. So there should be a way of securely deleting current data once this pandemic is over even if you choose to leave the app installed.

However that doesn’t go to the heart of my point, which is: will governments force you to hand over your contact tracing data if you are suspected of a serious crime (or under other circumstances)?

It is already very likely that governments will force you to hand over your phone. We’ve already covered that topic in this forum.

What then stands between the phone and the contact tracing data? Nothing? A user-supplied passphrase that in many countries the user can be compelled to hand over? A biometric that in many countries the user can be compelled to hand over or that the government already has anyway? Something stronger?

I understand that the answer will differ widely between countries.

The answer to the question “will governments force you to hand over your contact tracing data” could be “no, trust us, we would never do that, we’re from the government” and many people will be mistrustful, and think that that answer is not good enough. In some countries that mistrust is very appropriate.

The answer could be “no, we are legislating to ensure that that does not happen and we are doing that before anyone downloads the app”. We are telling national security agencies that even if they think that contact tracing data will help in a terrorism or other security investigation, they can %^&* off.

Except with data that resides on your phone?


I don’t understand how there’s no tracing back to you (“tracing” is right there in the name). Isn’t that the whole point of this system? Or is it like broadcasting “I, <name>, have covid”, after which your phone checks that random string against its list of encountered random strings to see if one matches? I could see that being acceptable, except that you’d have to be within Bluetooth range of the infected person in order to receive that broadcast. If it’s some other vehicle of notification, then I don’t see how it could be completely anonymous.


One detail that I haven’t noticed being mentioned in this thread is a clear explanation of the basic problem that Google and Apple are trying to solve. My understanding of it is as follows:

This is a new way to use Bluetooth. They are specifying a new Bluetooth profile. In the same way that you need a separate profile for file transfer or dial-up-networking or human interface devices or whatever, it turns out that you need a profile for contact tracing using these random identifiers.

And since this is a new Bluetooth profile, there was no operating system API for using it in apps, so they’ll each have to write one of those, too. iOS and Android don’t let apps access the raw hardware or low level APIs; it’s all through high-level APIs.

Since there are already contact tracing apps out there, clearly you can abuse existing Bluetooth functionality to achieve a similar end result, but it’s not “the right way”. It might break Bluetooth functionality for other apps, or overload additional semantics onto existing data fields in a way that stores up problems for the future. And by having a proper API for it, Android and iOS will be able to properly handle power and lifecycle management for contact tracing apps.


Well, since the data is worthless after 14 days (say 1 month, to be on the safe side), there would be no reason to keep it around beyond that timeframe. For future pandemics the required retention period may be longer or shorter, but an app update can take care of that.
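Enforcing such a retention window is trivial once each recorded identifier carries a timestamp. A sketch, with the 14-day period and the (identifier, seen-at) record layout as assumptions:

```python
from datetime import datetime, timedelta

# Assumed retention window; an app update could change this value.
RETENTION = timedelta(days=14)

def purge(records, now):
    """Drop every (identifier, seen_at) pair older than the retention window."""
    return [(ident, seen) for ident, seen in records if now - seen <= RETENTION]

now = datetime(2020, 4, 20)
records = [
    (b"old-beacon", now - timedelta(days=30)),  # past the window, gets dropped
    (b"new-beacon", now - timedelta(days=2)),   # still relevant, kept
]
records = purge(records, now)
```

Running the purge on every app launch (or via a scheduled job) would guarantee that nothing older than the window ever survives on the device.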

For the rest we can’t draw any conclusions until we see the actual implementation. E.g. will this be an API that just generates and returns random strings, which the app is then supposed to store in its datastore? Or will this be an API that generates, returns, but also keeps track of its generated random strings beyond the app’s control?

Server side there may be more cause for concern. Even if we only get the last 14 days’ worth of data when we download it, who’s to say they’re not keeping more? Or keeping it perpetually? That would probably not even violate the letter of the GDPR, since no personally identifiable data is kept, but it could be data-mined to see which anonymous clusters of people hang out with each other. And if you combine this with other means of gathering part of that data, e.g. who’s connected to whom on social media, you could probably widen the connections or draw other conclusions from said data. This is something we need to keep an eye on, and that should be regulated.

True, but it’d be easier for them to just check the logs from the cell towers, or your location timeline.

As long as a warrant is involved, I’m not really concerned. It’s kind of odd that we would think searching your home is perfectly acceptable with a search warrant (which can only be obtained by court order and presumably requires you to have a good case already), yet baulk at the idea of someone searching our phone under similar circumstances.

Random searches, on the other hand, no. But that’s again not a flaw with the scheme. Just because the government can ask for your SSH keys doesn’t mean SSH is broken. Just because it can ask for your banking credentials doesn’t mean online banking is broken.

No argument there. Privacy needs to be legislated.

Evidently. I meant: access to the data that’s outside your phone does not lead back to you. It’s impractical for the government to ask every citizen to hand over their phone on a regular basis to collect such data, and if they could, you’d have a bigger problem on your hands. And again, not a problem with the protocol.

Now, government-mandated apps that use this protocol? I’d be wary of those too. Because they could be doing a lot more than just sending out and listening for random strings. They could upload everything to some government server, together with e.g. your phone number, in which case these random strings could be traced back to you. That’s why it’s important that the protocol be open, so that anyone can write their own app that uses it.


@jrial says above that you download lists of random identifiers that have been seen by people who have tested positive.

You check for your own identifier on that list.

Your random identifier changes every 15 minutes. So you have many identifiers.


Not exactly. When you test positive, your random identifiers are uploaded. Not those you have seen.

People then download these identifiers, and their app will tell them if there are identifiers it has seen.

Basically, the way you explain it, your app would know “I have been seen by someone who has the virus”, whereas in reality it’s “I have seen someone who has the virus”.


Another way of looking at that though is … since you can’t guarantee that you won’t be forced to hand over data, it may be prudent not to collect the data in the first place. You can’t be forced to hand over something that you don’t have.

The contact tracing app is extending the broken model of: collect it first, then oops we can’t control it. Repeat.

More and more data is being collected. We should push back when we can. I would have thought that anyone intending to get a Librem 5 in order to escape the clutches of Apple and Google would embrace that. It is just unfortunate that COVID has happened before we have our phones.

Not sure who that “we” is. :slight_smile:

I think enough discussion in this forum has illuminated why people feel differently about data that is held on a computer e.g. forced disclosure of keys / forced use of biometrics.


I see. So there’s a database of “known infected” and every “beacon,” we’ll say, that you encounter is compared with those in that database or list. If you encounter one, or volunteer to (maybe just the latter?) then yours gets added to the collection.

I suppose if the metadata (IP address, etc) is scrubbed after the fact, that could be OK. On paper.


I think it makes a big difference if accessing the data requires that you, the subject, are made aware of the access to the data. Much of the problem with data collection is when the data is held on someone else’s server and you have no idea who is and isn’t reading it for whatever purpose. It weighs on your mind. You feel like you’re being followed everywhere. (Or is that just me…)

At least if the police arrest you and seize your phone and its contents, you and everyone around you knows what’s happening. The issue gets confronted, head on. And (hopefully) they can’t do it routinely to everyone. I take some comfort from that.

And, if you hold the data yourself, then presumably you can delete it and never have to worry about it again. If it’s on someone’s server you have no proof it ever gets deleted. It might come back and bite you in twenty years’ time.

Now whether you have proof that your phone isn’t secretly uploading the data to a server anyway is another question.


Not really. The model is: I generate and broadcast random data. And if there’s a reason for me to inform the people I’ve been in contact with (i.e. the people who received my broadcasts) that they may be in danger, I can have that data uploaded.

Can this be abused? Of course. Everything can be abused. For example, the government could place devices that listen to this chatter in certain places, and if they suspect you of having committed a crime at one such location, they could ask for your data and compare it to what they recorded.

But this is expensive (expensive to install and expensive to maintain) and convoluted, and there are easier ways to place you at the crime scene, so I’m not particularly worried about this attack vector.

They also know that as soon as they abuse this, people will start removing the apps. Which means they can’t be used for their intended purpose anymore: keeping us safe when the next one breaks out, and minimising impact on the almighty Economy. It’s really in their interest NOT to abuse this system.

There’s nothing wrong with some healthy scepticism, but when determining an opponent’s likely actions, always take into account their incentives and disincentives.


Not these days. Your phone’s contents might be seized remotely without you even being aware of it.

Otherwise yes it very much matters where the data is stored and who controls it and who accesses it.


Here’s my concerns.

  1. What is the opt-in mechanism? Is this going to be a setting (real or fake), or simply by having Bluetooth on? When it gets fully baked into the OS, will the Bluetooth switch actually turn the radio off?

  2. In the spec, it says, “ • If diagnosed with COVID-19, users consent to sharing Diagnosis Keys with the server.” What is the consent? Getting tested?

  3. Once baked into the OS, would a 3rd party app even be needed? The spec claims an API to start, then full OS integration.

  4. When someone tests positive and their keys are uploaded, are the keys recorded with any personal information connecting the keys to the person?

  5. Suppose there’s an investigation into a crime. Phone records are subpoenaed to figure out who was in the area. A few arrests are made and the tracing data from the suspects’ phones is gathered. To get a better picture of who was close to the crime, the data from the suspects’ phones could then be uploaded under the claim that it belongs to an infected person. With that, the people who were within Bluetooth range of the crime would be warned of a possible contact; they would likely then go to get tested, at which point they could be questioned.

Would there be an easier way to figure out who was near? Probably, if the location has security cameras. However, this data could connect people that meet outside of camera and cell coverage, for both good and bad reasons. It wouldn’t be a discovery method for the suspect, but it could reveal accomplices, though by disrupting a lot of innocent folks.

It’s all anonymized data, until it’s not.


a simple data redirection, back-up and bulk-collection-mass-storage for present or future on-demand “scrutiny” has been known to take place for quite some time now… the Snowden revelations have already happened, or have we forgotten that already?

this time is just “asking politely” … :mask:


Didn’t forget, just pointing out that this “Contact Tracing” works for other uses than COVID contacts and is yet another piece of the puzzle.


Two answers for that:

a) The keys are supposed to be random, meaningless, unique numbers. They shouldn’t carry any other information. If implemented properly, it should not be possible to infer anything from a key itself (or even from the entire set of available keys).

b) However clearly it is up to the testing agency what personal information they collect and record and whether they then associate that with the keys. I guarantee you that right now essentially all of your personal information will be associated with the testing process i.e. I doubt many countries, if any, allow anonymous testing. This is particularly the case while the testing process is slow and results don’t turn up for some days.

Let’s speculate a few months into the future. Test results are now available in minutes. They could allow anonymous testing and they could use your (current) key as the identifier for the test. So no personal information would need to be provided. Will they allow that? Who knows.

However that would prevent the government from doing proper random testing (since you might end up getting tested twice), as distinct from testing of walk-ins who suspect that they may have a health problem.

This is scary stuff. It suggests a much bigger agenda. Please can I have my Librem 5 already? :slight_smile:

In regional areas, location via cell tower is not very accurate i.e. when within a coverage area. Bluetooth tracking would be a nice adjunct to the excessive burden of surveillance that already occurs.


In regional areas, location via cell tower is not very accurate i.e. when within a coverage area. Bluetooth tracking would be a nice adjunct to the excessive burden of surveillance that already occurs.

When only in range of one or two towers, yes, but get three and the margin of error gets pretty small, and this will only get more accurate with cantennas popping up everywhere.

My thoughts on the keys being linked to an identity relates almost entirely to how the keys are handled once they leave the device. If there is a button in the settings to send keys, that’s one thing, but if they are copied over to the computer system of whatever facility did the test, then it’s almost guaranteed they will be tied to an identity. If we can trust that it is only the public keys being recorded, then at least only past keys are identified and not future keys.


The assumption of my comment was “one tower” (and no Stingrays :slight_smile: ).

I believe that that will occur - since you presumably cannot upload them to the necessary web site yourself. Hence the desire for anonymous testing - so that it doesn’t matter so much how the facility handles your identifiers.


Some of you missed that some of the links in this thread mention DP^3T, Decentralized Privacy-Preserving Proximity Tracing. In their documents, digital-rights researchers explain the implementation very clearly, and they have already implemented it: you can find source code under an open licence for Android, iOS, an SDK, and the server.

We need to push governments to adopt this implementation


One thing that should be stated is that there is a difference between

the app that will come from the “Google and Apple partnership for Contact Tracing”

and

the actual app that you will be “encouraged” to install in your country.

It seems like a lot of countries are going it alone (Singapore, Germany, Australia, France, …) and all the apps work differently.

So it is difficult to have a meaningful discussion about what the risks are of the app that applies to any given person.

I suppose that all the apps will be incompatible too. So if I travel to a different country while running the app, it won’t necessarily be exchanging data correctly with the people around me. It could be, though, that once Google and Apple have something up and running, it will swamp all other contact tracing apps and eventually become the sole app, compatible everywhere.