Cloudflare's invisible CAPTCHA probes browsers with JavaScript

Supposedly, it maintains the user’s privacy while it looks “at some session data (like headers, user agent, and browser characteristics) to validate users without challenging them”… and without storing any cookies.

I despise and viscerally resent Google’s reCAPTCHA, of course, for obvious reasons, not to mention the fact that they have been successful in getting it widely seeded throughout the internet.

Maybe Cloudflare’s solution is better… Thoughts?

This is the devil. Why should I enable scripts and take down my defenses - why do I need to compromise my security? And why, oh why, is there no classic manual alternative? This will not work every time, and some alternative needs to exist. This may seem benign, but the same method could be used to limit users in all kinds of ways, for all kinds of reasons.


One might really have to access a website - banking, mobile phone account, etc. - in which case blocking the script that enables the browser check may not be a viable option. I still think it’s unjust and insulting to have to prove my humanness just to visit a website. I’ve sometimes had to complete CAPTCHAs even after I’ve already logged in. :man_facepalming:

I agree there should be an alternative, though. One type of human check I can tolerate is the kind that Hotels(dot)com uses; it presents a grid of cartoon scenes and instructs me to “pick the penguin.” (So far, it has always been a penguin for me. :smiley:) At least this type doesn’t seem like it’s collecting data from me.

But that makes sense. Just because you logged in doesn’t prove that you are human. It would be easy enough for a malicious human to create an account, log in, and then use the resulting credentials in his malicious script.
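To make that concrete, here is a minimal local sketch (all names and the cookie value are hypothetical, and the toy server stands in for any real site): once a session cookie exists, a script can replay it just as easily as a human’s browser can, so the cookie alone proves nothing about humanness.

```python
import http.server, threading, urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The server treats the request as "authenticated" iff the session
        # cookie is presented - whether by a human's browser or by a script.
        if self.headers.get("Cookie") == "session=abc123":
            payload = b"secret data"
        else:
            payload = b"login required"
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# The "malicious script": it never logged in itself; it simply replays a
# cookie that a human obtained once by logging in normally.
req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"Cookie": "session=abc123"})
body = urllib.request.urlopen(req).read().decode()
print(body)  # secret data
srv.shutdown()
```

This is exactly why a site might still challenge you after login: the credentials authenticate the account, not the agent driving it.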

It is true that humanness checking is used to prevent automated attacks against login forms (brute forcing), and for that purpose alone you would be right to question why you have to prove humanness after logging in.

Likewise, it also makes sense to have to prove humanness even just to visit a web site, in contexts such as

  • retrieving information from that web site because, for example, they don’t want a robot scraping the entire contents of the web site (either because of the load that would impose on the web site or, more likely, because they want to keep control of the information and ensure that only ‘fair use’ is made of it), or
  • a contact form that generates, e.g., an email when submitted

and no doubt other contexts.

So I think you should assume that proving humanness will be around for some time to come … in which case you want a way of proving humanness that

a) does not use JavaScript, and
b) does not store any information anywhere beyond the immediate transaction of proving humanness.
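One way to meet both conditions (a sketch of my own devising, not a description of any particular site’s implementation; the secret, question, and answer are all placeholders) is a plain HTML question whose form carries an HMAC-signed, time-limited token - no JavaScript, and no server-side storage beyond the immediate transaction, because the token itself holds everything the server needs to verify the answer:

```python
import hashlib, hmac, time

SECRET = b"server-secret"  # hypothetical; a real one would be long and random

def make_challenge(question, answer, now=None):
    """Return (question, token) to embed in the form; the server stores nothing."""
    expires = str(int((now if now is not None else time.time()) + 300))  # 5 min
    mac = hmac.new(SECRET, f"{answer}|{expires}".encode(),
                   hashlib.sha256).hexdigest()
    return question, f"{expires}:{mac}"

def check_answer(submitted, token, now=None):
    """Recompute the HMAC from the submitted answer; constant-time compare."""
    expires, mac = token.split(":")
    if (now if now is not None else time.time()) > int(expires):
        return False  # challenge expired
    expected = hmac.new(SECRET, f"{submitted}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

q, token = make_challenge("What colour is the sky on a clear day?", "blue")
print(check_answer("blue", token))   # True
print(check_answer("green", token))  # False
```

Because the token travels with the form and expires on its own, the server needs no session, cookie, or database row - condition (b) holds as long as the token isn’t logged anywhere.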

My web sites meet those conditions, but they would also be easily breakable if a human being looked at the behaviour of the web site and then explicitly coded support for breaking the human check into his malicious script - or if a fairly(?) intelligent AI looked at the behaviour of the web site.

Hence you really want a third condition

c) not easily breakable.

With the idea in the OP, it obviously depends on what data is looked at and whether, for example, anti-fingerprinting will cause it to give false results (saying “not human” when the user actually is).

Yes, this is a fundamental conflict. If a CAPTCHA is reasonably effective, and particularly if it is free, laziness and other factors mean that it will be widely used - but if it involves any recording, it also becomes a privacy threat, following you throughout the internet.

Because CAPTCHAs are used to protect login forms, random web sites do have an incentive to find an effective CAPTCHA and use it.

Although only a minor part of Cloudflare’s solution, Private Access Tokens sound like a bad direction to be going in, and they will probably never work on Linux or other open source environments. They probably only work at all on iOS because there every web browser is required to be a wrapper around the one standard built-in browser engine - an approach that is dubious at best.

I have my doubts that any static JavaScript will work for long. Code that runs on the client is, by definition, untrustworthy to the server when the client is malicious. It is unclear how static the JavaScript actually is, and I certainly haven’t inspected it to see what it does.

I guess any solution that dilutes the near monopoly of a Google solution is an improvement.

Reminds me of Eric Joyner’s art of robots and donuts.

This “bonus”: … every opportunity is an opportunity, every change is a risk…

That particular one looks like a Windows exploit, which I reckon will fail fairly quickly for most of us. Maybe they have a Linux variant, although they would have to correctly detect the operating system on which the browser is running in order to serve out the correct variant.

(Digressing further here, I’ve seen some attacks against consumer-class routers that run Linux where the initial malicious shell script just iterates through each architecture, downloading and running a malicious executable for each - the hope being that exactly one of the executables runs and the others all bomb out immediately, or at least that, if more than one runs, they don’t interfere with each other.)