Nothing private

Yes, that’s exactly what’s going on. Nothing Private uses the Client.js JavaScript library to do its fingerprinting:

If I understand it right, nothingprivate.ml stores the 32-bit fingerprint from Client.js together with the name you enter.

To see the different things Client.js uses to compute the fingerprint, check the demo site at https://clientjs.org/.

Digging into the data there, you might be able to find out how to reduce your uniqueness, or perhaps regularly change things so that your fingerprint changes.

In that case you’re best off using either random readouts, or the most common readouts.
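To make that "32-bit fingerprint" concrete, here's a minimal sketch of the general technique: concatenate a handful of browser readouts and reduce them to a single 32-bit hash. The readout values and the hash function below are illustrative stand-ins - Client.js itself combines many more signals (canvas, plugins, fonts, etc.) and uses a MurmurHash variant internally, if I recall correctly.

```javascript
// Minimal sketch of a Client.js-style fingerprint: concatenate a few
// browser "readouts" and reduce them to a single 32-bit hash.

function hash32(str) {
  // FNV-1a 32-bit hash - an illustrative stand-in for the MurmurHash
  // variant that fingerprinting libraries typically use.
  let h = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0; // force unsigned 32-bit
}

// Hypothetical readouts - in a real browser these would come from
// navigator.userAgent, screen.width/height, navigator.language, the
// plugin list, canvas rendering, and so on.
const readouts = [
  "Mozilla/5.0 (X11; Linux x86_64) ...", // user agent
  "1920x1080x24",                        // screen resolution and color depth
  "en-US",                               // language
  "UTC+0",                               // timezone
];

const fingerprint = hash32(readouts.join("|"));
console.log(fingerprint); // one 32-bit integer summarizing the whole configuration
```

Change any single readout (say, the screen resolution) and the whole fingerprint changes - which is why the advice is either to blend in with common values or to keep them moving.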

Fingerprinting is a really tricky one to deal with. Thankfully you’d pretty much always have plausible deniability if someone could only track you down via fingerprinting. I don’t think it’d be nearly definitive enough to hold any water in court or something of that nature. Tracking via fingerprinting is a method that requires making a lot of assumptions.

There’s plenty of tools that can be used to spoof or block your fingerprints and user-agent. I’m also aware of tools that allow you to change your readouts system-side - beyond the browser and in the OS.

But ultimately the effectiveness of it all is arguable. I think it’d be really difficult to trick the FBI & NSA if they were legitimately after you. For example, I use Mullvad VPN; they could identify that I connected via a Mullvad IP and correlate that with my fingerprint, then start making a suspect out of anyone who connects over Mullvad with even a similar fingerprint, or similar speech/wording patterns, etc.

When I want to track someone for real, there’s five main things I look out for:

  • IP Address Patterns (they may come from the same ip range, general region, or VPN/proxy service).
  • Fingerprint / User-Agent Patterns (similar readouts. Seemingly random or blank readouts can ALSO be used as a correlation, and completely blocked readouts most certainly can be).
  • Patterns in the user’s choice of usernames.
  • Patterns in the user’s choice of info (sex, age, location, etc.).
  • Patterns in the user’s overall speech and personality. (EG: How they talk. What words they tend to choose to use over others. How they structure their sentences and paragraphs. How they deal with grammar. Common spelling mistakes they make. What their interests are. Etc).

I think it’s REALLY hard to mix it all up enough to skirt a serious professional who’s determined to catch you. I think you’d have to use multiple VPN services, change your fingerprint to specific common readouts often (and never change it in the middle of a session), and be very careful not to sound like the same person either - essentially editing your entire personality and putting on a new face.

I figure Tor can help a lot with the IP end of things, but we all know it comes with its own concerns.

In the end since I’m not a criminal or anything, I pretty much just go to the reasonable extent to protect my security/privacy. Meaning just using a VPN so that my ISP or any wiretappers can’t see my naked traffic, and my passwords are safe etc. I figure if you were running from globally-influential three-letter agencies, boy, you’re in for a headache.

Ultimately, it’s incredibly difficult to keep yourself from being identified at least loosely on the internet (they’ll at least know that you’re the same person, even though they won’t necessarily know your real name and info unless you were careless and leaked it). However it’s relatively easy to keep yourself from getting hacked, which is actually a different topic altogether. Anonymity is harder than security nowadays it seems.

In the end I’m not terribly sure what the actual best way to deal with fingerprinting is. That’s a tricky one. I think you might have reasonable arguments between people who say “Use the most common readout possible” and others who would say “Randomize the readout” or “Blank readouts” or “Block the readout”.

It’d be nice if we could get some kind of movement going where everyone on the internet agreed to just use the same fingerprint readout. It wouldn’t be useful anymore in that case, since everyone would be the same! But lmao, good luck with getting that to happen right?


I like your suggestion. We can’t convince everyone on the internet, but if all PureBrowser users have the same readout by default, then we’re anonymized, aren’t we?

Well, if we’re to go that route, we should analyze data collected from the general populace and determine which configurations are most common.

But even if you use all of the most common individual readouts, you’re still going to end up with a combined readout that only a tiny percentage of users have. And one issue would be if certain hardware combinations in the readout would never be found together in a real system.

I’ll bet that the most common readout is that of a MacBook or iPhone model, tbh, considering how standardized they are.

Anyway, yeah if we could have an extension that spoofs people’s user-agent and fingerprints to be the same as the most common readouts and have as many people adopt it as possible, we could probably break tracking via fingerprinting by polluting the fingerprint pool with duplicates.

Identification through fingerprinting and user-agents only works as long as everyone’s is unique. If we wage war on the uniqueness itself and make everyone’s the same, it doesn’t work anymore.
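The role uniqueness plays here can be made concrete with a toy entropy calculation: the smaller the share of users who match your readout, the more bits of identifying information it leaks. The population shares below are made-up numbers, just for illustration.

```javascript
// Identifying information (in bits) carried by a readout shared by a
// given fraction of the population: log2(1 / share).
const surprisal = (share) => Math.log2(1 / share);

// A near-unique readout (one in a million) practically pinpoints you:
console.log(surprisal(1 / 1000000).toFixed(1) + " bits"); // ~19.9 bits

// A readout shared by a quarter of all users barely narrows anything down:
console.log(surprisal(0.25).toFixed(1) + " bits"); // 2.0 bits
```

Making everyone’s readout identical drives the share toward 1 and the leaked information toward 0 bits - which is exactly the war on uniqueness described above.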

The problem, of course, is that some sites use user-agents and fingerprinting to execute important functions or decide how they need to display content. Furthermore, it would make the gathering of statistics far less useful - quite a headache for developers who rely on collecting statistics to tailor their designs and troubleshoot problems. And users of such an extension would need to be able to troubleshoot and allow fingerprinting for individual sites that require an accurate readout. I know some that do.

Although I don’t understand much, I like to see that there’s a solution in sight.
Having a readout that only a tiny percentage have is infinitely better than having a unique readout.
For the screen size, the readout should be the real size of the screen (13″ or 15″); that size is not unique (is it?).

This is why I’d suggest Purism add an addon that blocks JS. They could implement it in several ways: for example, adding the addon but leaving it disabled (with an informational note explaining what it is and how to use it, plus a checkbox to prevent the message from appearing again), or enabling it by default and adding a similar note explaining in an easy way what it does, how to use it, and how to disable it if one would like to.

Edit: Wondering if there is an addon that would by default drop all the cookies from all websites when you close the browser except the websites that you whitelist. IMO, that would be great to have in PureBrowser.


https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/


Thank you.
It’s released under the MIT license, and it’s open source, which makes it FOSS, right? Therefore it could be added to PureBrowser.


Yep. It is FLOSS and could be included with PureBrowser. I think that even if they don’t include it, it could be added to a list of “featured addons”:

NoScript
Cookie-AutoDelete
UBlock Origin
Privacy Badger
DuckDuckGo Privacy Essentials
LibreJS

I already mentioned self-destroying cookies in an earlier post. It deletes cookies as you’re browsing, not just after you close the browser. It’s also open-source.

Personally my browser uses the following addons:

  • uBlock Origin
  • AdGuard
  • HTTPS Everywhere
  • ScriptSafe
  • Privacy Badger
  • Disconnect
  • Decentraleyes
  • Self-Destroying Cookies
  • Proxy SwitchyOmega

Along with fine-tuning in the browser settings themselves.

Each extension/addon serves a purpose. For example, you may point out that AdGuard is redundant for someone who has uBlock Origin, but it really isn’t - AdGuard has filter lists that uBO doesn’t, and it has a feature that can replace Firefox’s “Block Dangerous Content” setting - and does it anonymously using hashing methods.

Self-Destroying Cookies deletes cookies as you browse. ScriptSafe, with my configuration, blocks “unwanted” scripts and fingerprinting and spoofs your user-agent. Decentraleyes locally injects content that CDNs typically provide.

Proxy SwitchyOmega is there to let you use the browser with the Tor Expert Bundle, turning it into your own version of the Tor Browser. You do this by adding a SOCKS5 proxy configuration pointed at 127.0.0.1 port 9050, where Tor listens. Of course you can configure this directly in the browser settings, but the addon provides a convenient button you can press to hop on/off the proxy without digging into the browser’s advanced settings. If you do use this, I strongly suggest setting HTTPS Everywhere to “block all unencrypted requests”, since the Tor exit node can see unencrypted traffic. And obviously Tor is no replacement for a VPN; I use a VPN + Tor configuration.
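For reference, the equivalent browser-side setup can be expressed as a user.js fragment. These are standard Firefox preference names, but double-check them against your Firefox version; the port assumes Tor’s default SOCKS port 9050.

```javascript
// user.js fragment - route Firefox through a local SOCKS5 proxy (e.g. Tor).
user_pref("network.proxy.type", 1);                // 1 = manual proxy configuration
user_pref("network.proxy.socks", "127.0.0.1");
user_pref("network.proxy.socks_port", 9050);       // default Tor SOCKS port
user_pref("network.proxy.socks_remote_dns", true); // resolve DNS through the proxy to avoid DNS leaks
```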

The browser itself is heavily modified too: set to always use private browsing mode, block third-party cookies, and some things in “about:config” or “chrome://flags” are also modified for maximum security (e.g. the hard-disabling of WebRTC).
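As an example of the about:config side, the WebRTC hard-disable mentioned above corresponds to this Firefox preference (a standard pref name, but verify it against your Firefox version):

```javascript
// user.js fragment - hard-disable WebRTC so it can't leak your real IP
// past a proxy or VPN.
user_pref("media.peerconnection.enabled", false);
```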

I’ll be sure to share it soon; I’ve just been on vacation and haven’t had the time, since my dad keeps getting drunk and pulling me around town everywhere. sigh… The issue right now is that I need to provide an installer.

Plus, I’m on Windows right now, so I’d only be able to provide an installer for Win users. I’ll just provide instructions for the rest of you. I’m really just hoping to switch to a Purism system someday - currently I’m on a laptop I’ve had since 2011 (lmao) and Linux doesn’t work on it (it uses an old Nvidia GPU from the era when Linux and Nvidia had a bad relationship, and the computer predates Intel integrated graphics). I’m pretty much stuck in Windows hell right now, but at least I’m using the LTSB version and rooted out the telemetry and such using group policy, deleting services, and some tools.

I’m really torn up about what to do for my next computer. I want a Purism system for my communications, but I know that I need a Windows system for everything else - video games, productivity software, and everything. I may just have to dual-boot on whatever desktop I get until I can afford a Purism machine as a second computer.


I applaud you and your efforts. But it is sad that it has come to this. Why can’t you find a super simple web browser any more? I want Firefox (or whatever, but it has to be 100% free and open source) in a completely modular system. At its simplest I could compile just HTML support, then add the CSS module if I want it, the JavaScript module, WebRTC, etc. I know, I must be smoking some good stuff, eh? I wouldn’t even care if it was just a compile-time thing, as it’s something most users would never care about. But the last time I looked at the source for any of these browsers, I got brain-sick quick.

At this point I just strip down Firefox as much as I care to spend time on it, and then run it in a firejail that I reset each launch. For some things like work, I have a separate firejail that I allow to persist over time.

I mean, I seriously can’t find a browser that can be configured to 100% not write anything to disk. Even with caching off, cookies off, private mode, etc., every single one drops a bunch of crap to disk that it may or may not clean up later.

Sorry, I got a bit ranty there.

Because the internet is designed in such a way that blocking all “the bad stuff” makes it malfunction. The bad stuff is interwoven with the essential code and cookies you need to run a website as intended.

It’s just about impossible to filter out all the “bad stuff” without also blocking content required to display the page as intended. We’re lucky to have people who spend lots of time creating filter lists for content that can be fully blocked without causing undesired issues - and the lists are often created with cosmetic fixes included to keep it from mangling the website visually as well. Those lists run much of our blocking capability today.

If you really only want to display the internet in plaintext and without the ability to log in to any site or service, I suppose you could install a browser on your system, disable everything in its settings, and make your hard disk read-only.

In Firefox you can block all cookies and set your web-cache storage allotment to 0, at least.
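For anyone who prefers a user.js file over clicking through preferences, that could look like the following. These are standard Firefox pref names, but they’re worth double-checking against your Firefox version.

```javascript
// user.js fragment - block all cookies and disable the disk cache.
user_pref("network.cookie.cookieBehavior", 2); // 2 = reject all cookies
user_pref("browser.cache.disk.enable", false);
user_pref("browser.cache.disk.capacity", 0);   // disk cache allotment: 0 KB
```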

Yeah, I was just being the grouchy old timer I am. I really do find it sad “we” let things get to this state though. A million little cuts over the years.

At 24 I’m a bit younger probably. The internet has just always been this way for me - if it was ever totally trustworthy then I’m too young to remember such a time.

I mean, really, is it not getting better? The open-source movements of today didn’t really exist as strongly 20+ years ago, did they? My understanding is that the early internet and computers were pretty much totally closed-source software and firmware all the way down, most of it developed by tech giants and ISPs.

Seems to me that most of the open-source movement is all about reverse-engineering what today’s tech giants did back in the day, and making our own “trustworthy” version of it.

I see it as improving for those of us that are tech-savvy, honestly. Unfortunately it’s also worsening for those that aren’t because they’re on things like Facebook that harvest their data.

But then again, they’ve had plenty of warning, and today’s news should be a huge wake-up call. If they choose to turn a blind eye and use those services anyway, touting the tired old “I have nothing to hide” cliché, then they’ve made their choice and made their own bed.

Anyway, I think we really probably have more power to combat spying now than ever. Spying has increased a thousandfold globally as well - but so has our ability to fight back.

I have no doubt that long ago, when everything was proprietary, bad actors made off with literally everything people did. People just didn’t know, because they didn’t have any means to detect it or do anything about it.

The new technology world will probably be better for those that are intelligent and willing to put forth effort. It’ll be worse for those that don’t and choose to hide behind tired excuses rather than put forth any effort.

Hmm. Well, there are interesting views on this, especially from a crypto point of view. For many, many years it was completely illegal to “export” any crypto from the USA. So tons of stuff just didn’t have secure options because they really couldn’t. Some offered hooks that allowed you to plug in strong security, but it really was an ugly thing to deal with. Look up the history on PGP sometime; Phil Zimmermann was a huge boon during this time.

But as far as free and open software goes, it depends on the person. I was using Slackware in the early 1990s. This was back when it was on a big bunch of floppy disks, which I managed to download over time through my BBS contacts. NCSA Mosaic was the first browser I was really familiar with and, you’re right, looking back at that we had the source code but it was not “free” even back then; it had a restrictive license. But most of the Slackware distro was free and open-source software.

In short, no, it was never really trustworthy. I’m not sure if open-source is stronger now than in the past, by general ratios, but my guess would be that it’s about the same, again, ratio-wise. But the “web”, the web has gotten much much worse as it has become more complex and feature-rich.

Interesting bit of history there that I didn’t know. I think I’ve heard that even today most kinds of encryption require some sort of authorization from the Department of Defense, and that technically everyone using SSL online today is breaking the law - it’s just not really enforced anymore. I forget what it was specifically - anything with more than an 80-bit cipher or something like that?

I think these rules were written during Cold War times and had to do with keeping our methods out of the USSR’s hands, or giving them any new methods to work with, weren’t they? Pretty sure the TL;DR is that the USA wanted to keep their technological advancements as private as possible.

But yeah I think if you’re smart, the modern world is better than ever. It’s just really bad for those that are careless. All in all, the “good guys” and “bad guys” have both gotten way more powerful as the tools have matured.

I think it all comes down to education - which is what it comes down to in a lot of subjects including politics. We have a real education issue in the United States now, and it’s showing in every sector including our political shitshow.

Nowadays it’s dangerous to be careless and ignorant on the internet. Educating yourself is far better than expecting any AV system to nanny you. That’s what I often end up telling people: you are the best AV system, you just need to update your definitions.


I have the full suite of Firefox privacy plugins installed (including CanvasBlocker & User-Agent Switcher), and using them I’m able to successfully defeat tracking from http://www.nothingprivate.ml/

However, this setup still fails to beat fingerprinting at the EFF’s Panopticlick. I still have three very unique values: the hash of the canvas fingerprint, the hash of the WebGL fingerprint, and the browser plugin details. The user-agent uniqueness was fixed, though.

This indicates to me that CanvasBlocker isn’t working as hoped. I’d be curious to see others’ Panopticlick readouts for the WebGL and canvas fingerprint hashes, to see whether other plugins work better at combating this.

I successfully defeated it by turning on privacy.resistFingerprinting in about:config. It seems that’s what’s needed to defeat the site after all.
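To make that setting persistent across profiles, the same flag can go in a user.js file (the pref name is as given above; it shipped with Firefox 55+):

```javascript
// user.js fragment - enable Firefox's built-in fingerprinting resistance.
user_pref("privacy.resistFingerprinting", true);
```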


Sadly, this new parameter was only added in Firefox 55, so it is not yet supported by PureBrowser. Does anyone know when the next Firefox ESR update is coming? I’d love to enjoy these enhanced privacy protections in PureBrowser. Also, if it’s based on Quantum, it’ll be a big upgrade.

According to Mozilla, the next ESR should be Firefox 60 (the next Firefox release, expected on 2018-05-09 according to some sources). But basing PureBrowser on it might be a tough job, since Quantum introduces a ton of changes.

Reference:
https://blog.mozilla.org/futurereleases/2018/01/11/announcing-esr60-policy-engine/