Quantum Cracking Counter Measures Reprise

Fair question. So to clarify, in the early phase of the QC technology S-curve, I would expect exponential improvements because a lot of the technology hitchhikes on semiconductor scaling. You might even see Grover-like square-rooting of keys when they’re sufficiently short (although, even then, not exactly, because you’re going to be taxed by noise to some degree depending on the competency of your error correction scheme). But at some point, you can’t shove any more data into a physical system because the Universe is discrete at its roots. From an information perspective, it’s a giant connected graph incapable of supporting unlimited precision. Therefore, you have to hit a wall at some point.
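For concreteness, here is roughly what “square-rooting the key space” buys you at various key lengths, as a plain Python back-of-the-envelope. The (π/4)·√N iteration count is the textbook Grover figure for unstructured search; the sketch deliberately ignores noise and error-correction overhead, so treat it as a best case:

```python
import math

def grover_iterations(bits: int) -> float:
    """Textbook Grover iteration count, ~(pi/4)*sqrt(N), for an unstructured
    search over N = 2**bits keys (idealized: no noise, no error correction)."""
    return (math.pi / 4) * math.sqrt(2.0 ** bits)

for bits in (40, 64, 128, 256):
    classical = 2.0 ** (bits - 1)                 # average classical guesses
    quantum = grover_iterations(bits)
    print(f"{bits:>3}-bit key: classical ~2^{math.log2(classical):.0f} guesses, "
          f"Grover ~2^{math.log2(quantum):.1f} iterations "
          f"(~{bits // 2} bits of effective security)")
```

Even in that best case, a 256-bit key still leaves roughly 128 bits of search, which is the usual argument for simply doubling symmetric key lengths.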

This is what space probably looks like as it evolves, below comparatively macroscopic QM:

Here’s the comprehensive metatheory from first principles:

https://wolframphysics.org/technical-introduction

Again, this doesn’t describe our Universe specifically, but based on their simulations, it seems likely that one of the possible parameterizations of the metatheory would result in the reproduction of QM. Here’s some analysis of what they’ve found so far:

https://www.wolframphysics.org/technical-introduction/potential-relation-to-physics/basic-concepts-of-quantum-mechanics

More abstractly, here’s a good layman’s intro to the ramifications of spacetime being a quasicrystal, in other words, a structurally complex 4D projection of, in this case, an 8D base reality. Sorry, it’s cringeworthy in bits, but it gets the point across:

https://www.youtube.com/watch?v=w0ztlIAYTCU

While I couldn’t find the part where Klee Irwin discusses the processing limits of the QSN, here’s a decent presentation of the concept of nonergodicity, which might be better termed “probabilistic dehomogenization”, meaning the notion that, at sufficiently small scales, not every allowed state actually gets visited by a quantum system because, well, it runs out of entropy. This is due to (1) the fact that the QSN has only so much memory to offer and, (2) far more significantly, a feedback loop connecting macroscale structure to microscale phenomena, which corrupts the otherwise unbiased state search of a QC operating in the real world, completely apart from ordinary quantum noise. QCs are highly structured artificial machines subject to this nonergodic self-influence, which is anathema to efficient state exploration. It’s quite unclear that good engineering practices will allow us to escape this strange attractor issue.

https://www.youtube.com/watch?v=lTMGiDi7ld4

The channel itself is worth some exploration, although it’s more of a collection of lectures than a tutorial:

https://www.youtube.com/c/QuantumGravityResearch/videos

Now, you might say “so what” because the information density limits of physical systems are astronomical by any measure. Even at one bit per atom, we’re talking Avogadro’s number of bits in a handful of sand. But remember: the bill adds up quickly when we’re talking about cutting key spaces down far enough to be searchable in reasonable serial time; you’re asking the Universe for exponentially more operations every time you cut just one more bit off the space. The Universe isn’t analog, so at some point it has to resist by not actually trying all possible pathways, but merely a subset of them. But you wouldn’t necessarily notice because the QC seems to output a different candidate key every time you turn the crank. And indeed, if you could turn the crank an infinite number of times, you’d see every possible output, although some much more often than others. But that doesn’t mean the computation is ergodic on any given attempt. It just means it’s ergodic in the classical sense of serialized exploration. Therefore, unfortunately, it’s not the machine of Shor or Grover that was featured in the brochure.
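To put rough numbers on “the bill adds up quickly”, here’s the handful-of-sand yardstick from above applied to a few key lengths (Avogadro’s number standing in for “one bit per atom in a handful of sand”, which is of course only a cartoon):

```python
import math

AVOGADRO = 6.022e23   # "one bit per atom in a handful of sand", as a cartoon yardstick

for bits in (56, 80, 128, 256):
    keyspace = 2.0 ** bits
    handfuls = keyspace / AVOGADRO        # mole-sized handfuls needed at 1 bit/atom
    print(f"{bits:>3}-bit key space: 2^{bits} ≈ {keyspace:.1e} states "
          f"≈ {handfuls:.1e} handfuls at one bit per atom")
```

Around 80 bits you’ve already burned the whole handful; each additional bit doubles the tab from there.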


Ah, so you were discussing what would happen if quantum computing as generally understood was not actually possible. Did I get that right?

You’re right, it is cringy. And even with my very limited understanding of the subject, I was able to recognize several flawed arguments in that video, so I don’t trust it one bit. Even if some things may make sense.

Judging by the beginning, the style would be quite fitting for a Myst-style mystery. I dig it.

At least until the part where they build the term “quasicrystal” on a wrong example.

@dpr @dcz I think you two are still missing the point. Did you even look at the other sources?

The Universe is not analog. Shor’s algorithm doesn’t scale the way that QM says it does. If you can crack 40-bit keys like 20-bit keys, then great, but that doesn’t mean you can crack 100-bit keys like 50-bit keys.

Stephen Wolfram is no dummy, for that matter. He’s the brains behind more than three decades of Mathematica.

And specifically, how is the quasicrystal example from the Quantum Gravity Research video wrong?

And to be fair, the computational power of QM has never been tested beyond, if I recall, about (10^6)X speedup. That’s absolutely fantastic for certain applications, but in no way changes the game for properly configured cryptographic keys. Is there any evidence that the scaling factor is any higher than that, let alone infinity?
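One way to see why a ~10^6 speedup doesn’t change the game: a constant speedup factor only shaves log2(factor) bits off the effective key length. Quick check in Python:

```python
import math

for speedup in (1e3, 1e6, 1e9, 1e12):
    bits_saved = math.log2(speedup)
    print(f"A {speedup:.0e}x speedup is worth only ~{bits_saved:.0f} bits of key length")
```

So even a trillion-fold constant advantage turns a 128-bit key into, effectively, a ~88-bit one, which is still far out of reach by brute force.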

On the contrary, there’s plenty of experimental evidence that QC advantage will hit a wall, and the limitations have a lot to do with the structure of space at the Planck scale.

Some will point to protein folding as evidence that QCs have “effectively infinite” scalability, because after all, a protein is an analog QC which is trying to solve the problem of how to optimally fold itself. The problem with this argument is: (1) we don’t actually know the optimal folding topology, so for all we know, it’s just finding the most likely “pretty good” solution that’s at the bottom of a wide energy valley, as opposed to the optimal solution that’s at the bottom of a deep, narrow well; and (2) nonergodicity would explain why proteins all, for the most part, fold in the same way: they’re not trying all solutions to find the optimal one, but rather responding to influences higher up the complexity chain which result in probability dehomogenization that favors their ultimate folding choice. Even if this is wrong, the fact remains that we have no way of knowing how thoroughly they explore their space of allowed shapes before settling on one. In other words, QM’s state exploration has not been tested beyond what I mentioned above.

(The 10^6 factor is a rough conclusion from a recent Google experiment, but it relies on the assumption that the classical algorithm used for performance comparison is nearly optimal. That is, of course, dubious to begin with.)

I think this here is where you claim something that’s generally accepted is wrong.

An orthographic projection of a regular pattern into any other space still yields a regular pattern. You can’t turn a (regular) crystal into an (irregular) quasicrystal by just using a linear projection. The shadow presented was a regular structure, and therefore definitely not a quasicrystal.

The answer to the question that you don’t know to ask is: Supersingular Elliptic Curve Isogeny Cryptography. I use it during some gov work to prevent quantum decryption (whether now or in the future) from being used by them to observe my methodology of performing the XYZ I’m paid to perform, or to hide opsec procedures. If you have questions, contact@brianpwright.info or brianpwright@protonmail.com if you prefer encrypted


I just wanted to say as someone new to this level of opsec, this whole thread is incredibly interesting and very informative, thank you.

Is it really generally accepted that Shor’s algorithm scales to infinity? I don’t think so, any more than I assume that Newton’s contemporaries would have thought that velocity can scale to infinity. (But what do I know. Maybe they did.)

I mean, if that were true, then how could we even have the notion of a Planck scale?

The point isn’t that it works with 40-bit keys but not with 100-bit keys. The point is that we don’t know where the wall is, and likely won’t until we actually try. But the mere fact that a wall exists means that we must progressively slow down as we approach it, just like mass increases as we approach the speed of light. It’s one thing to assume that science works in places we haven’t tested because it works in both more extreme and less extreme cases. But it’s philosophical to assume that because it works up to parameter X, it therefore works at parameter 2X. Newtonian mechanics extrapolates just beautifully until you get to around 10^8 m/s.
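To make the Newtonian analogy concrete, here’s the ratio of relativistic to Newtonian kinetic energy at a few speeds (just the standard special-relativity formulas; nothing exotic assumed):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def newtonian_ke(m: float, v: float) -> float:
    return 0.5 * m * v ** 2

def relativistic_ke(m: float, v: float) -> float:
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

for v in (3e2, 3e5, 3e7, 1e8, 2e8):
    ratio = relativistic_ke(1.0, v) / newtonian_ke(1.0, v)
    print(f"v = {v:.0e} m/s: relativistic / Newtonian kinetic energy ≈ {ratio:.6f}")
```

At everyday speeds the correction is invisible; around 10^8 m/s it’s already close to 10%, and it only gets worse from there. That’s the kind of wall I’m talking about.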

To put it another way, suppose we have 10 qubits, all entangled and in a balanced superposition of states. Then, neglecting noise and assuming an ideal experimental setup, we have 2^20 possible states, as each qubit provides a factor of 4 (https://en.wikipedia.org/wiki/Superdense_coding). Do you really believe that the system is actually in all 2^20 states at the same time? I do, in fact, believe that. But what if I have 100 qubits? Do you believe that the system is fairly (or even unfairly) spread over 2^200 possible states? I rather doubt it. At 1000 qubits, I’d bet the farm that not all 2^2000 states coexist. Add more qubits still, and we’re reaching a lesser and lesser density of allowed states that are actually being explored up to the moment that a measurement occurs. How, then, can one argue that ergodicity – effectively, information density – is always at 100% of what is theoretically allowed, while still assigning any meaning at all to the Planck scale?

Honestly, I don’t mean this flippantly. I’m really trying to see what’s missing here.

I believe you’re correct and it’s good you pointed that out. However, the quasicrystal aspect of the QSN comes from space being a tiling of E8’s. The lady’s tinker toy periodic structure in the video doesn’t really do that idea much justice. They should have shown a Penrose tiling as a better 2D analog. This is a rambling presentation of how Quantum Gravity Research thinks it works in 8D:

https://www.youtube.com/watch?v=_g2JWp6F_xQ
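If you want to play with the projection idea yourself, the simplest toy version of cut-and-project (one dimension down from Penrose, and nothing to do with E8 specifically) is the 1D Fibonacci chain: take the square lattice, keep only the points inside a thin strip of irrational slope 1/φ, and project them onto the strip’s axis. The spacings come in exactly two lengths and never repeat periodically. A minimal sketch, purely illustrative:

```python
import math

PHI = (1 + math.sqrt(5)) / 2                   # golden ratio
theta = math.atan(1 / PHI)                     # irrational slope of the cut line
c, s = math.cos(theta), math.sin(theta)
window = c + s                                 # width of the unit square's shadow on e_perp

accepted = []
N = 30
for i in range(-N, N + 1):
    for j in range(-N, N + 1):
        perp = -i * s + j * c                  # coordinate along e_perp = (-sin, cos)
        if 0.0 <= perp < window:               # the "cut": keep points inside the strip
            par = i * c + j * s                # the "project": coordinate along the line
            if abs(par) < N * 0.6:             # stay away from the sampling boundary
                accepted.append(par)

accepted.sort()
gaps = [round(b - a, 6) for a, b in zip(accepted, accepted[1:])]
print(sorted(set(gaps)))                       # exactly two tile lengths: sin and cos
print("".join("L" if g > 0.7 else "S" for g in gaps))  # aperiodic Fibonacci-like word
```

Same mechanism as Penrose (a 5D-to-2D projection) and the E8-based constructions, just easier to see.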

I’m not sure what you mean by “scales”, but quantum computation is a field somewhat separate from quantum physics, in much the same way that computing is not really a branch of physics; rather, it’s based on mathematics. Even then, apart from decoherence problems, I don’t know of any physical limitations on implementing quantum computation in physical hardware.

Which leads me to believe your assertions are not based on the generally accepted understanding of physics.


I don’t doubt that the algorithm, in the mathematical sense, will provide exactly the scaling that it claims, in terms of how many qubits one would expect to require in order to crack keys of a given length. My contention is that the physical world won’t support endless exponential scaling, regardless of how many qubits you have. It might not slow to zero, but it certainly won’t continue exponentially, even under ideal experimental conditions.

Again: How is it that we can have discrete transactions on the Planck scale (or any finite scale, for that matter), and still assume that we can create machines which have no limit to the number of states which we might explore per qubit? A qubit gives you 4 states, 2 of them give you 16, 3 give you 64, etc. When you divide the number of states by the number of qubits, the number of states per qubit diverges to infinity. Is that not a problem?

And again: Why believe in aspects of QM that have not even been tested, any more than you would expect theories that work at the speed of sound to work at the speed of light? I’m not partial to whether or not it’s generally accepted that QM extends infinitely beyond its experimentally supported domain. I just don’t see any reason to believe that it would, especially because we have explicit experimental evidence that there is a fundamental limit to measurement resolution. Were that not the case, then infinite scaling would at least be a simpler theory.

Which is effectively equal to the physical sense https://www.scienceforums.net/topic/85561-shut-up-and-calculate-is-this-one-of-the-scientific-noble-values-we-should-apply/

I don’t have anything against novel explanations, but presenting them as true in lay forums where people aren’t equipped to evaluate them is not fair. It’s especially bad when those people might then want to make decisions about their security based on what they hear.

Let people know the generally accepted models, and fight for the novel ones where the experts are.


Let people know the generally accepted models, and fight for the novel ones where the experts are.

I do appreciate the “shut up and calculate” adage that has well survived the test of time. As that forum discussion you linked says in various ways, it’s good advice when we have a theory that applies to phenomena far beyond our mental capacity to understand in totality, and in particular for dealing with QM solutions that have unintuitive implications about the way things work that are nonetheless sure to be accurate.

But this issue isn’t that. It’s about the boundary conditions under which QM has been rigorously tested (or tested at all). What we desperately need is experimental evidence of how far we can actually assume ergodicity of any given system – even something as conceptually simple as a bunch of qubits entangled in an unbiased superposition. I think it’s a mistake to propagate the assumption that this is known to be large enough to be threatening to any nontrivial key length. Infinite scaling might be generally accepted (after all, it is elegant), but I’m more interested in what the evidence actually says about the way the world works than what people generally accept.

There’s nothing wrong with being conservative and moving to longer keys with supposedly quantum-resistant key exchanges and symmetric ciphers. It’s probably prudent, with the caveat that those algorithms are of course newer and therefore less widely scrutinized than plain old AES or RSA. So they could end up being less safe (unlikely) or subtly misimplemented (less unlikely).

QM has been extensively tested in all its assertions within various limits. Insofar as computational advantage goes, the best demonstration I’m aware of is the widely reported one this year from Google, which features a speedup factor of about 10^6 relative to a supposedly optimal-ish classical approach.

But… hmm… IBM think it can be done in days:

and this team claims to have done it, and done it better, in under a week on 60 GPUs…

Forehead, meet wall.

My point isn’t that QCs will never be able to do anything a million times faster than your laptop. (I emphatically think that they will.) It’s that the overall maximum acceleration factor, let alone the per-qubit factor for a specific algorithm, is utterly untested in the domain which is relevant to cracking keys – keys which, classically speaking, would be regarded as safe for a lifetime.

So everything I said about 10^6 was wrong. We’re back to some exponentially lesser advantage.

I don’t have anything against novel explanations, but presenting them as true in lay forums where people aren’t equipped to evaluate them is not fair.

Totally agree. But what have I presented as true, apart from the Planck scale being a wall of sorts, and the fact that no credible group has pushed QC supremacy anywhere near key-relevant levels? I’m not sure what specifically you’re referring to, but I’m open to correction.

I’m referring to the claim that there is some inherent limitation in quantum mechanics that makes quantum computing inapplicable above some limit. Aka “quantum doesn’t scale”.

I said that most people here don’t have the capacity to evaluate claims about quantum mechanics. That includes me. So unless you can find a quote from a mainstream quantum researcher supporting this, to me the claim stays a conjecture and independent of truth.


I think you have a worthwhile question. It’s not like the first thing you see when you open a QM textbook is a warning telling you that (1) it’s incompatible with general relativity (GR) and (2) one battleground between the two lives at the Planck scale. So why does QM break down there? (And by “break down”, I don’t mean that it grinds to a sudden halt. I mean that its behavior becomes increasingly aberrant as we approach the Planck scale.)

But why?

So we have the Schrödinger equation which is the workhorse of closed-system analysis in QM:
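$$ i\hbar \,\frac{\partial}{\partial t}\, |\psi(t)\rangle = \hat{H}\, |\psi(t)\rangle $$

(general time-dependent form; Ĥ is the Hamiltonian of the closed system)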

It’s noteworthy that, baked right into it, is this “h bar” constant, which refers to the Planck scale and is in fact the (reduced) Planck constant. (I know you already know this, but maybe some others here don’t.) So immediately this tells us that there’s something important about this scale, although in and of itself, it doesn’t portend any kind of breakdown.

I actually remember the day that I had a personal crisis with the equation because it didn’t seem to account for relativity. At the time, I was too naive to know just how serious the crisis was, so I marched off to my QM professor’s office, and asked for help. What he told me was that while Schrödinger’s equation was good enough for plenty of practical applications, a more honest description of reality was offered by the Dirac equation because it accounted for special (as in, not general) relativity:
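$$ \left( i\hbar\, \gamma^\mu \partial_\mu - mc \right) \psi = 0 $$

(covariant form; the γ^μ are the Dirac matrices)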

And, indeed, h bar was still baked into it, as well. But again, that in itself doesn’t imply any sort of boundary condition.

The problem, trivial as it may sound, is this: the Dirac equation is a differential equation describing how the wave function changes in time and thus, within the confines of certain energy barriers (a “quantum well” of some form, such as an ion trap), how the distribution describing the relative probability amplitudes of all allowed states changes in time. The same could be said of Schrödinger, albeit under stricter velocity limits.

However, at the Planck scale, the derivative upon which the entire equation depends doesn’t work anymore, because now we’re counting pixels of space and time instead of allowing space and time to evolve like the real valued functions of analog math. There’s still plenty that one can learn about the world via integers, but calculus is no longer possible. The closer we get to the Planck scale, the worse that calculus performs relative to what’s actually happening. While certain qualitative aspects of QM are likely to survive down there, they definitely aren’t described by calculus, but rather some form of discrete evolutionary process, hence branchial space, the QSN, or some similar theoretical framework. By the way, it’s public information (albeit probably obscure) that the QGR and Wolfram Physics teams have spent time and effort investigating each other’s theories, and, it seems to me, are drifting toward some form of mutual unification (of those theories, not their organizations).
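Just to make the calculus-versus-counting distinction concrete (this is only a cartoon of the distinction, not a model of Planck-scale physics, and the grid here is arbitrary): once space is a finite grid, the derivative in the Schrödinger/Dirac equation becomes a finite-difference matrix, and “evolution” is just repeated application of a fixed unitary to a finite state vector.

```python
import numpy as np
from scipy.linalg import expm

# Toy discretization: a free particle on a 1D grid (hbar = m = 1). The second
# derivative is replaced by a finite-difference matrix, and time evolution by
# repeated application of one fixed unitary step.
N, dx, dt = 200, 0.1, 0.01
x = np.arange(N) * dx

lap = (np.diag(-2.0 * np.ones(N)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / dx ** 2      # discrete d^2/dx^2

H = -0.5 * lap                                     # free-particle Hamiltonian
U = expm(-1j * H * dt)                             # one discrete evolution step

psi = np.exp(-((x - 10.0) ** 2) / 2.0) * np.exp(1j * 2.0 * x)  # Gaussian wave packet
psi = psi / np.linalg.norm(psi)

for _ in range(100):                               # evolve to t = 1 in discrete steps
    psi = U @ psi

print("norm after 100 steps:", round(float(np.linalg.norm(psi)), 6))    # stays ~1
print("mean position:", round(float(np.sum(x * np.abs(psi) ** 2)), 2))  # drifts from 10 to ~12
```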

So, short answer: QM breaks down at the Planck scale because Dirac’s mathematical assumptions about spatial smoothness stop working at that point. If calculus doesn’t work, then quantum Fourier transforms don’t work. If quantum Fourier transforms don’t work, then Shor’s algorithm doesn’t work.
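For what it’s worth, the quantum Fourier transform in that chain is, mathematically, just the unitary below. A minimal numerical sketch (matrix form rather than a circuit), showing the period-reading step that Shor’s algorithm leans on:

```python
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Quantum Fourier transform on n_qubits as an N x N unitary, N = 2**n_qubits."""
    N = 2 ** n_qubits
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N))
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))        # unitarity check: True

# Applied to a periodic amplitude pattern, the QFT piles the weight onto
# multiples of N/period, which is the readout step in Shor's period finding.
state = np.zeros(8); state[::4] = 1.0; state /= np.linalg.norm(state)   # period 4
print(np.round(np.abs(F @ state) ** 2, 3))           # peaks at indices 0, 2, 4, 6
```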

Long answer: OK – you got me – maybe some clever person out there has found a way to explain why the Planck scale is a mirage, in the sense that it only appears to force pixellation, but is just a numerical illusion created by other inaccuracies in QM. Perhaps if we unified QM and GR, the mirage would disappear and then it’s analog all the way down. So far as I can discern at the moment, though, the experimental and theory efforts are being directed at how to have pixellation coexist with the two in a way that makes them both imply the same evolution – not how to explain away pixellation as an illusion. The direct effect of this effort is to spin off into discrete graph dynamics, which is exactly what the aforementioned teams (and many others) are attempting to construct. (This is what Irwin means when he talks about the “principle of efficient language” in QSN evolution, or Wolfram when he refers to “rulial space” and “branchial space”.) It’s no coincidence, for instance, that in Fang Fang’s description of the QSN, she’s careful to mention that “arbitrary closeness” of nodes is not allowed, because if it were, then space would not be discrete, and the theory would be in direct conflict with pixellation.

Here’s a really succinct statement of the Planck scale’s significance by Frank Heile from Stanford: “The Planck length is the smallest distance scale we can probe with accelerators.” In other words, it’s where the evidence trail stops. Anything beyond that is purely theoretical extrapolation at this point. QM hasn’t been tested down there, nor has anything else. Here’s his full discussion:

And this 2016 article discusses proposed experiments to reconcile Planck pixellation with key implications of GR (while implicitly admitting that pixellation is real even though it’s incompatible with GR’s equations just as with those of QM):

I have no idea what any of this has to do with data access control or see any connection to ciphers… If there is a relation can we ELI5 please?

It doesn’t. The conjecture here is that there are limitations to quantum computing that make it less powerful than quantum computing scientists believe.

However, there is no quote from an actual quantum physicist, meaning that it’s safe to ignore.


It’s safe to ignore the fact that a theory hasn’t been tested beyond certain bounds, and yet trust an algorithm implemented in the real world to scale beyond those bounds? Especially when there’s an explicit theoretical motivation to assume that it stops scaling at a specific point? (Well, of course, it is safe to use longer keys than you otherwise might need to(*). I mean “safe” as “reliable to believe”, in this case.) I don’t think the lack of any quotes implies that theoreticians are unaware of this point. Some of them likely take it for granted, but don’t mention it because the bounds are so huge as to be irrelevant for everything but key cracking. Can you find any of them claiming the opposite, that is, that QM happily scales across the Planck barrier, in particular? Maybe you can. That would be impressive, if so.

Well, it has everything to do with choice of ciphers and key lengths. But in current practice, dcz is correct, because we don’t know where the wall lies for a given cipher or key exchange; if we did, then we could say which lengths are too short for a given duration of required privacy. I could go into details, but there’s a realistic possibility that QCs will suddenly pull ahead of classical computers and stay in the lead for a while, only to be leapfrogged permanently by the latter. If true, this would have clear implications for the tradeoff between key length and privacy duration.

As to Shor’s algorithm and quantum factoring in general, not much has been accomplished to date, even if we compare with existing laptop CPU capabilities (not that that means the question is settled or that it will never work):

“None of the above techniques implement Shor’s algorithm [but have factored larger integers]. They express factorization as a combinatorial minimization problem, solved using a variant of Grover’s algorithm or adiabatic quantum computing. There is no argument for hope that these approaches could scale to factorization of integers of cryptographic interest.”
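To make concrete what the quote means by “implement Shor’s algorithm”: the quantum part’s only job is to find the multiplicative order r of a random base a modulo N (via the quantum Fourier transform); the factors then fall out classically. Here’s the classical skeleton with the quantum step simply brute-forced, just to show the structure (toy sizes only, and it assumes N is odd with at least two distinct prime factors):

```python
import math
import random

def order(a: int, N: int) -> int:
    """Multiplicative order of a mod N. This is the ONLY step Shor's algorithm
    accelerates (with the quantum Fourier transform); here it's brute-forced."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_skeleton(N: int):
    """Toy factoring via order finding. Assumes N is odd with >= 2 distinct prime factors."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g, N // g          # lucky guess: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 1:
            continue                  # need an even order
        y = pow(a, r // 2, N)
        if y == N - 1:
            continue                  # trivial square root of 1; try another base
        return math.gcd(y - 1, N), math.gcd(y + 1, N)

print(shor_classical_skeleton(15))    # e.g. (3, 5)
print(shor_classical_skeleton(21))    # e.g. (3, 7)
```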

(*) By “longer”, I mean “chosen randomly from a larger subset of all possible keys”. This is not equivalent to saying that it’s always safe to use wider flavors of the same class of ciphers or key exchanges. It probably is, in general, but that’s a different analysis, for example:

I couldn’t resist my own challenge, so I did lots more digging…

Scott Aaronson (https://en.wikipedia.org/wiki/Scott_Aaronson) explains that the amount of entropy that can be packed into a given volume is physically limited.

From https://www.scottaaronson.com/democritus/lec20.html :

“To see what any of this has to do with computation, we have to take a detour into the holographic bound. This is one of the few things that seems to be known about quantum gravity, with the string theorists and loop quantum gravity people actually agreeing… We saw way back in the first lecture that there’s this Planck area ℓ_p^2 = Gℏ/c^3. You can get it by combining a bunch of physical constants together until the units cancel such that you get (length^2). Planck himself did that back around 1900. This is clearly very deep, because you’re throwing together Newton’s constant, Planck’s constant and the speed of light and you’re getting a length [area] scale which is on the order of 10^(-69) m^2. The holographic bound says that in any region of spacetime, the amount of entropy that you can put in the region – or up to a small constant, the number of bits you can store in it – is at most the surface area of the region measured in Planck units divided by 4. This is the surprising part: the number of bits you can store doesn’t grow with the volume, it grows with the surface area.”

This is way beyond what we could hope to achieve technologically, because such dense packing would result in the formation of black holes. It’s not surprising that the square root of this area is on the order of the Planck length. Anyway, the surface area of a Planck cube is 6 Planck areas, so this bound is something on the order of a bit per Planck cube. (Some other physicists have made the argument that, if length is quantized, it must actually occur well below the Planck scale. Even if that’s true, Aaronson is talking about pixellation in the sense of limits on spatial entropy density, which is what successful computation ultimately depends upon.)
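Just to put a number on the bound in the quote, here’s the surface-area-over-four calculation for a one-metre sphere, using ℓ_p² = Gℏ/c³ and textbook values of the constants (approximate, and the ln 2 factor is just the nats-to-bits conversion):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

planck_area = G * hbar / c ** 3               # ~2.6e-70 m^2

def holographic_bound_bits(radius_m: float) -> float:
    """Holographic bound for a sphere: surface area in Planck units, divided by 4,
    converted from nats to bits."""
    area = 4.0 * math.pi * radius_m ** 2
    return area / (4.0 * planck_area) / math.log(2)

print(f"Planck area: ~{planck_area:.2e} m^2")
print(f"Holographic bound for a 1 m sphere: ~{holographic_bound_bits(1.0):.2e} bits")
```

Roughly 10^70 bits for a metre-scale region: astronomically large, as I said above, but emphatically finite.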

Now, is it possible that this entropy density limit only exists within one of QM’s “many worlds”, but there is no global limit (because information in parallel worlds isn’t physically relevant information)? In other words, can we have infinite numbers of different parallel worlds representing all possible states of all the bits in the system? If QM is taken literally, then yes, we can. And that would, in fact, be compatible with notions of branchial space if branchial space expanded fast enough to ensure that the fraction of worlds containing the correct solution is equal to the solution’s probability amplitude under the quantum algorithm in question. So in other words, we could have spatial entropy pixellation but still have a total search bandwidth which is limited only by the serial aspect of the particular quantum algorithm. One doesn’t necessarily contradict the other.

This is what Wolfram’s Jonathan Gorard says about consistency between branchial evolution and (ideal) QCs:

Q. “How does quantum computation work within the context of your models?”

A. “In a surprisingly clean way! In short, one straightforward consequence of our interpretation of quantum mechanics in terms of multiway evolutions is the following, very concrete, interpretation of the relationship between Turing machines, non-deterministic Turing machines, and quantum Turing machines: classical Turing machines evolve along a single path of the multiway system (using a deterministic rule to select which branches to follow), non-deterministic Turing machines also evolve along single paths (but now using a non-deterministic rule to select the sequence of successive branches to take), and quantum Turing machines evolve across the entire multiway system itself (i.e. along a superposition of all possible paths).”

When I think about it, he doesn’t say it outright, but I suspect he’s trying to say that branchial evolution does indeed comprehend all possible paths and does so in few enough evolutionary steps to keep pace with what QM would allow to be observed at any given time as a QC executes. In other words, it sounds like he’s implying a guarantee of sufficient, and timely, ergodicity. If I’m understanding him correctly, then I think this is quite remarkable because, judging from what they’ve said in interviews, they never made any effort to design their approach in order to ensure that it was capable of doing so. It just sort of fell out from their simulations. And all the more so because they still haven’t found the rulial parameters consistent with our particular brand of physical laws.

What’s really elegant about this is that you can still have pixellation of entropy (and even spacetime, for that matter), and still have a finite number of worlds, while nevertheless having quantum advantage. So I think you win!

Assuming that a QC can be built with sufficient error correction, this means it’s all back to the serial residue of quantum algorithms. That’s often touted as N^(1/2), but that’s just naive unordered search. In other cases, it can be considerably less or more, and there are plenty of papers on this. Apart from quantum algorithm improvement, the wall gets hit when the serial “megahertz” of the QC isn’t sufficient to search even the residue (or you can’t afford to buy more QCs). That MHz, thus far, has been orders of magnitude slower than classical, and in any event shouldn’t be expected to grow any faster, due to both being dependent upon semiconductor scaling. It’s also quite clear, if for no other reason than error correction overhead and the time required to establish superposition, that QC MHz will always lag classical MHz. So at least, this allows us to put some upper bounds on key privacy time, given a model for how one expects this speed to evolve over time. There’s also still the question of Irwin’s probabilistic dehomogenization in macroscale complex systems. That’s an open question with respect to real-world quantum computers, but as yet there’s not even a theory as to how it would occur.
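For a sense of what those upper bounds look like, here’s the sort of back-of-the-envelope estimate that follows. The numbers are assumptions, purely illustrative: a 1 MHz effective logical Grover-iteration rate (if anything generous for error-corrected hardware today), and the textbook (π/4)·2^(bits/2) serial residue.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def grover_wall_clock_years(key_bits: int, iterations_per_second: float) -> float:
    """Wall-clock time for the serial residue of Grover search: ~(pi/4)*2^(bits/2)
    iterations at an assumed effective logical iteration rate (illustrative only)."""
    iterations = (math.pi / 4) * 2.0 ** (key_bits / 2)
    return iterations / iterations_per_second / SECONDS_PER_YEAR

for bits in (64, 96, 128, 256):
    years = grover_wall_clock_years(bits, 1e6)     # assumed 1 MHz logical rate
    print(f"{bits:>3}-bit key: ~{years:.2e} years of serial Grover iterations")
```

Plug in your own projected iteration rate; the point is that the serial residue, not the parallel magic, sets the privacy clock.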