At What Point Is Privacy Worth Giving Up?

Although I highly value privacy, one part of the privacy and security discussion should at least consider the merits of snooping. If someone is plotting crimes on a computer, I somewhat like the idea of them getting caught by some government agency surveilling them. If Iran or North Korea wants to build nuclear weapons, I like to see them caught via hacking of their systems, or better yet, the US government inserting viruses into their systems to impair their capabilities.

Let’s consider a future world where Purism has released a whole line of uncrackable phones, tablets, PCs, and network routers. Eventually they gain access to chip designers and fabs, creating new silicon designed to prevent all snooping and cracking: no backdoors, and no cracking or snooping workarounds. From silicon design to board-level application, the final product is absolutely secure and completely private.

How do we keep foreign governments like Iran and North Korea from buying and using this technology for purposes that we oppose? How do we keep criminal syndicates from using this technology for illegal enterprises? What is the argument that allows the government, at that point, to require Purism to put government-accessible backdoors into their devices? Can we even trust the government? Putting aside the technological mechanisms, how should the system be structured to maintain ethical controls while preventing abuse?


Reasonable questions. My take on this would be:

Nothing is unhackable/unbreakable, and nothing ever will be. With enough manpower, all systems can ultimately be compromised. Consider social engineering, for instance, as another avenue: getting into the minds of the users of secure systems.
If the NSA (or any other spy agency) targets an individual with their resources, specifically and for a good reason, of course I would want them to succeed.

So I wouldn’t worry about catching the bad guys at all.

The issue is to prevent passive mass surveillance.
I just don’t want them to vacuum up all of the public’s data simply because it comes at no cost to them. This is where privacy and encryption come in: first, to avoid producing the data in the first place, and second, to have all data encrypted, so that the three-letter agencies would have to decide whether to pour resources into decrypting it.

That’s how I understand and see the issue.


Why do you assume Iran and North Korea having nuclear weapons is any worse than the countries that already have them? You are talking from one point of reference and assuming all the law and order information you are getting from your country is valid.

The problem is that the will of the people doesn’t always match the will of the government, so laws/state decisions get made without the will of the people. This means that some activity you do today could be illegal tomorrow, even if a majority of people don’t support that decision.

Trading freedom for security is never the right path.


It is never worth giving up.


What’s the aphorism? That when one trades freedom for security, one ends up with neither?

The way I see it, the conversation of privacy vs. security is a similar philosophical trope to free will vs. determinism - only with less metaphysics and more sociology. I imagine it’s different for each person. If we completely take the question of technology out of the conversation and go through some thought exercises about how we’d like our neighbors to act, we might gather fundamental insight. What information would we be comfortable with strangers knowing? What information would we be comfortable with our enemies knowing? Perhaps we might come to some sense of how we’d like our tech to reflect those dispositions.


This is exactly why the rights to privacy, free speech, and arms must always be preserved.


My 2 cents is that people are innocent until proven guilty. If the objective were to prevent crime, knives would be outlawed. In OP’s example, we have people who especially shouldn’t have nukes. I don’t think any sort of subterfuge or preemption should occur unless there is good evidence that they are trying to obtain nukes or already have them. Crime is one thing, and it ought to be dealt with. Potential for crime is another entirely, and I don’t think it is something that can be ethically dealt with. That line is entirely too easy to cross.

So to answer @StevenR, I think the important part is the motivation behind developing super unhackable hardware is “good,” and therefore should be allowed and encouraged to happen. Usage of that technology, to include selling it to “bad” people, is the responsibility of those engaged in such actions. It is those actions that should be judged and, if necessary, dealt with.


Is this a serious question or are you just playing devil’s advocate?

I mean, for one, most of the countries that have them haven’t continually expressed a wish to destroy another country or people. For another, the main country holding them, in alliance with many other countries (NATO), has been able to use them over the last 70+ years to help ensure peace not just in Europe but elsewhere.

It’s unfortunate you asked such a question, when the rest of your post made a lot of sense.


Pretty much this. Iran and North Korea already have countries like the US, and organizations like the EU, to contend with via sanctions, etc. US citizens can’t depend on corporations to do the right thing, or on state/federal governments to legislate and enforce in the interests of citizens over corporations or their own self-preservation. One of the only ways to fight immoral corporations and governments is to contribute to FOSS projects like the GNOME/KDE app ecosystem, putting power back in the hands of citizens and keeping it there. Does improving stuff that keeps your privacy benefit the bad guys? Sure. Improving global air quality also benefits them.

If the government doesn’t want terrorism, maybe the CIA shouldn’t have trained up the Taliban and given them who knows how many weapons, and maybe the Reagan administration shouldn’t have secretly and illegally sold arms to Iran to fund rebels trying to overthrow a Central American government it didn’t like, to name just two examples.

Encryption and privacy improving tech can’t be selectively beneficial because that’s not how it works. Any weakness for one person is a weakness for all users. You can’t make air selectively work for non-‘criminals’.

Also agree with the unhackable bit. Nothing is theoretically unhackable; it’s just a matter of resources and time. The mouse’s goal is to make things inconvenient enough, in resource and time costs, that it becomes impractical for the cat to go after that mouse.


I wonder what the people of those countries with dictators would say about strong encryption, and whether being able to communicate freely would make their lives better?


No. I’m serious. People are the same everywhere. I don’t trust any country’s government having nukes.


I agree with that but like one dy/dx unit less than 100%. Want to add details on why you are confident?

My reason for confidence

It stems from that whole NP-Complete set of problems from a Theory of Computation class I took while going to University at Buffalo. It was the class where you go super deep into Finite State Machines (pay-phones, vending machines, etc.), Turing Machines (software, virtual servers, Docker things, etc.), and languages.

At some point, it clicked for me as we were made to prove that any Turing Machine could be represented by a Natural number, but the possible inputs to a Turing Machine require the Real numbers to represent them. Attacks are in the set of Reals and defenses are in the set of Natural numbers. Abandon hope all ye who enter here into the realm of privacy and security. Not that we shouldn’t try, but we shouldn’t ever be fooled that there’s complete privacy or security.
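The countability argument above can be sketched in a few lines of toy Python (my own illustration, not from the post): given any countable enumeration of "defenses", Cantor's diagonal construction builds an "attack" sequence guaranteed to differ from defense i at position i, so no enumeration covers everything.

```python
# Toy diagonalization sketch: countably many defenses (one per natural
# number) can never cover all possible attack behaviors (infinite
# binary sequences).

def diagonal_attack(defenses, n):
    """Build the first n bits of an attack that differs from
    defense i at position i (Cantor's diagonal construction)."""
    return [1 - defenses[i](i) for i in range(n)]

# A toy enumeration of "defenses": defense i outputs bit (i + k) % 2
# at position k. Any enumeration would work the same way.
defenses = [(lambda i: (lambda k: (i + k) % 2))(i) for i in range(8)]

attack = diagonal_attack(defenses, 8)

# By construction, the attack disagrees with every listed defense
# somewhere, so it was never on the list.
for i, d in enumerate(defenses):
    assert attack[i] != d(i)

print(attack)  # → [1, 1, 1, 1, 1, 1, 1, 1]
```

The same construction works no matter how the defenses are enumerated, which is the point: the list of defenses is countable, the space of attacks is not.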

It spills over into the physical world too. An established army with a fort deemed impenetrable by the enemy’s known weapons neglects the possibility of the enemy inventing a different weapon (skip to about 1:34:30).

People who think they have an air-tight argument for some philosophical position haven’t tested it against the Real-number-sized infinity of possible counter-arguments.

Perhaps @StevenR this is an intractable discussion because it really depends on what an entity is planning on doing with their privacy and/or security instead of merely having it.
(Shit, I wrote that before seeing @Gavaudan wrote the same thing.)


Fortunately your trust hasn’t been necessary or essential in securing peace across the world.

Don’t get me wrong. I wish that the weapons didn’t exist, but am happy they helped to end and prevent wars since their inception. As well as create one of the safest forms of energy production.

Still, we can agree to the disagree.

WRT the topic, I would agree that anything is hackable. The main reason is that no solution can contend with every possible way of reaching it, be that via hardware or software.


I think this is one of those questions that if you ask 1000 people you’ll get 1000 different responses.

To me the core problem is different people have had different experiences that they would like to avoid in the future and there’s no easy solution.

One significant flaw in your logic, in my opinion, is that what you see as “bad” another may see as “good” with almost no objective good/bad or right/wrong.

For me, I think sometimes people look to technology to solve a people problem and that’s not how I think the problem should be reviewed.

IMHO it all boils down to: Privacy is pretty useless without security protecting it, and security is pretty useless without something private to keep secure. As such they’re both equally important and trading off either degrades both.


I guess you’re just more patriotic than me. :wink:


I agree with you there - but to your point - people are the same everywhere.

With some exception, people want to live. There is truth in peace through mutually assured destruction.

People with enough fortitude to become a head of state, dictator, or any other major world power head are very seldom suicidal. Therefore, while you might not trust them with your loved ones or your money, sadly enough, you can probably trust them with nukes so long as all the major powers have them.


Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.


All good questions / good discussion.

I am not building nukes or plotting crimes on my computer therefore there is never a point at which my privacy is worth giving up.

Let’s be realistic. North Korea already has nuclear weapons, and has intentionally made that known. They are not “caught”, nor do they need to be “caught”. Deterrence is not keeping it a secret.

It is doubtful that any surveillance or hacking will prevent any country from building nuclear weapons, in the longer term. You can only get lucky with something like Stuxnet for so long.

It is also arguable that if Purism has released a whole line of uncrackable devices then so could <insert name of axis of evil country> - particularly given that Purism intends the design to be open. (Hence it would be worse than that the government would demand Purism insert a backdoor. The government would also have to demand that Purism stop being open, which would fundamentally undermine what Purism is trying to do.)

So you are at best holding back the tide for so long, while, as Ben Franklin observed, corroding your liberty.

You can trust the government to abuse and exceed its powers. Is that what you meant? :slight_smile:

You don’t. It hasn’t worked in the past. It won’t work now or in the future.

It might work for a while with niche and/or large items. It isn’t going to work with mainstream portable consumer devices.

If you are worried about <insert name of country> getting hold of (future, fully open) L5s then the US government is better to attack via the mobile phone network infrastructure, where things are basically wide open. Hence the Huawei hoo haa.


That would be a huge discussion in its own right.

I don’t think it matters for the question as originally posed because it was a given that the device is unhackable (at least in practice) and hence the government wants Purism to backdoor the device and/or prevent the device falling into the hands of bad actors.

One observation on the subject of “unhackable”: Suppose that an encryption algorithm is intentionally backdoored (as has been alleged without much proof from time to time). Even with perfect chips for everything and open source for everything, the device will still be very hackable for many purposes. So apparently Purism is also going to have to commission the development of encryption and hashing algorithms. This is going to be a long project …
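The point about a backdoored algorithm defeating even perfect hardware can be made concrete with a toy sketch (my own illustration; the escrow scheme, the `ESCROW_KEY` constant, and the XOR "cipher" are all hypothetical, not any real design): each message carries the session key wrapped with a fixed escrow key, so whoever holds that key reads all traffic, regardless of how tamper-proof the silicon is.

```python
# Toy key-escrow backdoor: the "protocol" wraps every session key with
# a fixed escrow key, so the escrow holder decrypts any message without
# ever touching the user's hardware.
import os

ESCROW_KEY = bytes(16)  # hypothetical agency key (all zeros for demo)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(plaintext: bytes, session_key: bytes):
    wrapped = xor(session_key, ESCROW_KEY)  # the backdoor
    # Toy "cipher": XOR the plaintext with the repeated session key.
    keystream = (session_key * (len(plaintext) // 16 + 1))[:len(plaintext)]
    return wrapped, xor(plaintext, keystream)

def escrow_decrypt(wrapped: bytes, ciphertext: bytes) -> bytes:
    # The escrow holder recovers the session key from public material.
    session_key = xor(wrapped, ESCROW_KEY)
    keystream = (session_key * (len(ciphertext) // 16 + 1))[:len(ciphertext)]
    return xor(ciphertext, keystream)

msg = b"attack at dawn"
wrapped, ct = encrypt(msg, os.urandom(16))
assert escrow_decrypt(wrapped, ct) == msg  # read without the user's key
```

The hardware never enters into it: the weakness lives in the algorithm and its published outputs, which is why auditing the crypto matters as much as auditing the chips.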


I don’t think Free software/firmware/hardware means what you think it means.