Hello and good morning.
I decided yesterday afternoon that I would try to write something a bit different for today’s post, rather than just another litany of my depression and despair, since I’m sure any dedicated readers are probably getting almost as tired of reading them as I am of experiencing them. I cannot directly alter the fact that I experience them—if I could, I would—but I don’t have to make it an uninterrupted trail of goo for you all to slog through on a daily basis.
I came up with two, more or less unrelated, ideas, but I’m only going to focus on the first, which is nearer and dearer to my heart and mind, in any case. It’s also been something I’ve thought about on and off for some time. I do wonder what pertinent quote from Shakespeare I’ll find to alter to make the title, but of course, you who are reading will already know the answer.
Don’t spoil it for me, okay? I want to be surprised.
Anyway, the idea I wanted to bounce around today has to do with the question of the discontinuity of reality at a mathematical level.
I’m sure many of you are aware that, from the perspective of quantum mechanics, there is no sensible differentiation in, for instance, location at any scale smaller than the Planck length, which is about 1.6 × 10^-35 meters, or in time below the Planck time, which is roughly 10^-43 seconds.
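If you’re curious where those figures come from, both fall out of combining Newton’s gravitational constant, the reduced Planck constant, and the speed of light. Here’s a quick sketch of my own, with rounded values for the constants plugged in:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2 (rounded)
hbar = 1.055e-34  # reduced Planck constant, J*s (rounded)
c = 2.998e8       # speed of light, m/s (rounded)

planck_length = math.sqrt(hbar * G / c**3)  # comes out near 1.6 x 10^-35 m
planck_time = math.sqrt(hbar * G / c**5)    # comes out near 5.4 x 10^-44 s

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
```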
There are various reasons for this, and I won’t try to get into them here, but it’s broadly accepted by the scientists who work in the field. It’s part of why there is an upper limit to the number of possible quantum states within any given region of spacetime, given, thanks to Bekenstein and Hawking, by the surface area of an event horizon surrounding that region, measured in units of the Planck length squared (one quarter of that area, to be precise).
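To give a feel for that kind of counting, here’s a back-of-the-envelope sketch of mine, using the standard Schwarzschild-radius and Bekenstein–Hawking formulas with rounded constants, for a black hole of one solar mass; it lands near the 10^77 figure I mention in the last footnote:

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
hbar = 1.055e-34 # J*s
c = 2.998e8      # m/s
M_sun = 1.989e30 # kg, roughly one solar mass

l_p = math.sqrt(hbar * G / c**3)  # Planck length
r_s = 2 * G * M_sun / c**2        # Schwarzschild radius, about 3 km
area = 4 * math.pi * r_s**2       # horizon area
entropy = area / (4 * l_p**2)     # entropy in units of Boltzmann's constant

print(f"Horizon area: {area:.3e} m^2")
print(f"Entropy: {entropy:.3e} k_B  (about 10^77)")
```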
Thus, based on the best current understanding of the micro-world, the universe is not so much pixelated as blurry at the smallest scales. Admittedly, these are very small scales—far smaller than we can probe currently, so we may, in principle, be wrong about some of it, and quantum gravity might change our understanding, but there are strong reasons for this assessment.
Now, mathematics—thanks to work threshed out by Newton and Leibniz, building on ground first broken (though no one quite realized it at the time) by Archimedes about two millennia earlier*—can deal with things that are truly continuously divisible.
Those of you who took high school level calculus (or higher) probably recall that a derivative involves finding the instantaneous slope, or rate of change, of a curve describing some function, such as the instantaneous acceleration being the rate of change of the “speed”. The idea is to take the slope of a line connecting two nearby points on the curve and bring them closer and closer together, taking the limit as the distance between them goes toward zero.
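Here’s what that limit looks like numerically, in a little toy sketch of mine using f(x) = x², whose exact derivative is 2x:

```python
def f(x):
    return x ** 2  # toy function; the exact derivative is 2x

x = 3.0
for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    secant_slope = (f(x + h) - f(x)) / h  # slope of the line through x and x + h
    print(f"h = {h:<7} slope = {secant_slope:.6f}")
# as h shrinks, the slope closes in on the true derivative at x = 3, namely 6
```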
Analogously, integrating a function involves finding the area under a curve, and is in a way the opposite of taking a derivative. This involves splitting the area under the curve into rectangles of fixed width (the height of each defined by the value of the curve at that point), adding them together, then taking the widths to be smaller and smaller, approaching the limit of an infinite sum of “infinitesimally” narrow rectangles.
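And the same idea in the other direction, again just a toy sketch: adding up thinner and thinner rectangles under f(x) = x² between 0 and 1, whose exact area is 1/3:

```python
def f(x):
    return x ** 2  # the exact area under this curve from 0 to 1 is 1/3

for n in [10, 100, 1000, 10000]:
    width = 1.0 / n
    # left-endpoint rectangles; the sum creeps toward 1/3 as the rectangles narrow
    area = sum(f(i * width) * width for i in range(n))
    print(f"{n:>6} rectangles: area = {area:.6f}")
```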
These processes are tremendously useful, and can describe the orbits of astronomical objects and the trajectories of projectiles, just to take two simple examples. They are good for describing the universe in many ways, and they often produce useful and accurate answers and predictions, to the best of anyone’s ability to measure.
But that raises my question. Do we currently have the capacity to tell the difference between processes in the universe—say, for instance, acceleration due to gravity—being truly continuous or them being in a sense discontinuous?
We know that the Real Numbers are uncountably infinite, as a matter of pure mathematics. Between any two nonidentical real numbers, however arbitrarily close together, there exist uncountably infinitely many more real numbers, as many—so to speak—as there are real numbers altogether: a Russian doll in which every new doll revealed by opening the previous one has just as many dolls inside it as the original doll did…but even more unlimited than that.
This is, however, not necessarily relevant to reality**. Just to demonstrate that fact: we can calculate Pi (π), the ratio of the circumference of a circle to its diameter, to any number of decimal points we might choose, but it will never come to an end—it’s an infinite, non-repeating decimal number, one of the “transcendental” numbers. Pi has been calculated to 62.8 trillion digits (as of 2021), but that’s not a number we could ever measure as the ratio of the circumference of any actual circle to its diameter.
I’ve read (from a reliable source) that only 39 digits of Pi are necessary to calculate the circumference of the visible universe*** to the fineness of a single hydrogen atom. Now, a hydrogen atom is about 10^25 Planck lengths across, according to a quick search, so that means, in principle, we’d only need Pi to 64 digits or so to calculate the circumference of the universe to the nearest Planck length. That’s a fairly large number of digits, but it’s smaller than the order of magnitude of, for instance, the estimated number of baryons in the visible universe (something like 10^80), and is smaller than the entropy “contained” in even a solar mass black hole****, unless I’m seriously misremembering.
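The arithmetic behind those digit counts, as I understand it, goes roughly like this (a sketch of mine, using a rounded 8.8 × 10^26 meters for the diameter of the observable universe; the answers land within a digit or two of the figures above, depending on exactly which lengths you plug in):

```python
import math

diameter = 8.8e26        # rough comoving diameter of the observable universe, m
circumference = math.pi * diameter

hydrogen = 1.1e-10       # rough diameter of a hydrogen atom, m
planck_length = 1.6e-35  # m

# digits of pi needed to keep the circumference error below the target length
digits_hydrogen = math.ceil(math.log10(circumference / hydrogen))
digits_planck = math.ceil(math.log10(circumference / planck_length))

print(f"Digits for hydrogen-atom precision: about {digits_hydrogen}")
print(f"Digits for Planck-length precision: about {digits_planck}")
```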
So, finally, my question is: how well have mathematicians ascertained whether aspects of reality can in truth be described by equations that are genuinely continuously variable, and whether we could ever tell the difference?
A computer, for instance, could simulate some model of a continuously varying system to a high degree of precision by taking each current state and then applying an approximation of the applicable equations to get the next state, iterating each step in sequence, as if recapitulating the steps that led to the limit defining the derivative or the integral of a function. This would be considered an approximation of the true function, of course, but one could, in principle, get arbitrarily close to it by taking one’s intervals to be arbitrarily small—solving, or at least simulating, the three (or more) body gravitational problem, for instance, by calculating, at each instant, the net effect of each object on all the others, computing the acceleration, applying it, moving each object a tiny step, then recalculating.
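Here’s a bare-bones version of that step-wise approach, a toy sketch of mine with just two bodies and plain forward-Euler steps (a real simulation would use a much smarter integrator and more bodies):

```python
import math

G = 6.674e-11                      # m^3 kg^-1 s^-2

M = 1.989e30                       # central mass, roughly one solar mass, kg
r = 1.496e11                       # starting distance, roughly 1 AU, m
pos = [r, 0.0]
vel = [0.0, math.sqrt(G * M / r)]  # speed for a circular orbit at this radius

dt = 3600.0                        # time step: one hour, in seconds
for _ in range(24 * 365):          # step forward for about one year
    d = math.hypot(pos[0], pos[1])
    a_over_d = -G * M / d**3       # acceleration toward the center, divided by d
    vel[0] += a_over_d * pos[0] * dt
    vel[1] += a_over_d * pos[1] * dt
    pos[0] += vel[0] * dt
    pos[1] += vel[1] * dt

# the slight drift from the starting radius is the cost of taking finite steps
print(f"distance after ~1 year: {math.hypot(pos[0], pos[1]):.4e} m (started at {r:.4e} m)")
```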
But what if it’s not the step-wise approach that’s the approximation? What if the continuously differentiable functions we use to describe things like gravity and electromagnetism and the various quantum mechanical matters are the approximations? What if reality is more Δx/Δt than dx/dt?
Obviously this is a simple enough concept to come up with, and I’m far from the first one to think of it.
My more immediate question is, has anyone demonstrated mathematically just how fine our measurements would have to be to tell whether, for instance, the orbit of a planet around a star follows a truly continuously differentiable path, or whether it is just a step-wise, iterated process? If one were able, for instance, to simulate the orbit of a planet, say, by iterating an approximation each Planck time, and reconfiguring the system at each step to the nearest Planck length, how long, in principle, would it take to be able to tell the difference between that simulation and a truly continuously differentiable motion? Could there, given the constraints upon the nature of reality applied by our best understanding of quantum mechanics and the like, ever be any measurable difference?
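For a crude sense of scale, here’s a back-of-the-envelope sketch of mine; the assumption that each Planck-time step contributes an independent, Planck-length-sized rounding error is pure guesswork about how such discreteness would behave, so treat this as a toy estimate and nothing more:

```python
import math

planck_time = 5.4e-44    # s
planck_length = 1.6e-35  # m
year = 3.15e7            # roughly one Earth orbital period, s

steps = year / planck_time  # Planck-time steps in one orbit: about 6 x 10^50

# toy random-walk estimate: if each step added an independent error of about one
# Planck length, the accumulated drift would grow like sqrt(steps) * planck_length
drift = math.sqrt(steps) * planck_length

print(f"steps per orbit: {steps:.1e}")
print(f"random-walk drift after one orbit: {drift:.1e} m (roughly an atom's width)")
```

Taken at face value, that would put the accumulated difference after a whole year somewhere around the width of a single atom.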
I don’t know if this has been addressed by mathematicians. It may not have any practical implications, since we’re a long way from being able to measure reality precisely enough—or so I suspect—to tell that difference. But I wonder if it’s been worked out just how finely we would need to be able to measure to tell if reality is truly continuously differentiable.
If anyone reading is a mathematician familiar enough with this sort of question to give me an answer, I would love to hear it. Or if you know a mathematician with appropriate expertise, or a physicist of similar expertise, I would dearly like to know if anyone has done any explorations from the mathematical (not simply the practical) point of view regarding this.
That’s it, that’s my subject for the day. I feel that I’ve been very ham-handed and brutally quick in the way I’ve gotten into the subject, and for that, I apologize. I only have the time to write this between my shower and when I leave to go to the train station, so it’s a bit quick and dirty, as they say.
Obviously, I don’t have time or space today to address my other, unrelated question: whether the legality and ubiquity of large-jackpot lotteries of various kinds have changed the general psychology of, for instance, the American people in a way that has decreased “average” ambition and work ethic, providing “bread and circuses” to the masses in a way that has at least contributed to the growing economic disparity between socio-economic levels in the nation (and the world) and the gradual dissolution of the middle class.
I wouldn’t dream of thinking it the only or even the dispositive factor, but I wonder if it might have contributed.
Maybe I’ll write about that tomorrow. Weirdly enough, we may have a harder time coming up with definitive answers for that question than the one I tried to discuss today. Mathematics and physics are easy, in a sense. Biology, psychology, sociology, economics…these things are truly hard to model and describe in useful, predictive ways, because the systems are so complex, with so many variables, both dependent and independent. Even weather, the quintessentially chaotic system, may be more tractable.
I hope this has been more interesting than my usual reflections and projections of gloom. I also hope you all have a very good day, and maybe that you think a bit about what I’ve written.
TTFN

*What a Mary-Sue that guy was! I mean, forget the whole acrimonious priority debate between Newton and Leibniz regarding calculus; these guys were about two thousand years behind the Eureka Man!
**Though it could be, even if distance and time are not limitlessly divisible. For instance, if the Everettian “Many Worlds” description of quantum mechanics is correct, the overall “space” of “universes” created at points of decoherence/branching could be infinitely and continuously divisible, making it a no-brainer as to how many potentially different worlds there might be in that space—not “real” space, but the orthogonal space that contains all the branches of the many worlds. However, that might not be infinitely divisible, either.
***That’s everything that can, even in principle, be seen given the finite time light has had to reach us since the Big Bang.
****The entropy is about 10^77 (in units of Boltzmann’s constant), but entropy is proportional to the natural log (basically, taking a log is the opposite of raising something to a power) of the number of possible microstates in a system, so that number of states is e raised to the 10^77 power, or e multiplied by itself about 10^77 times (a 1 followed by 77 zeros).
