It’s the end of the week, but weakness persists

It’s Friday again, at last, and this is indeed the final day of the work week for me.  I am not expected to work tomorrow, and I think that even if they decided they were going to hold the office open tomorrow, I would not go in.  I am too tired and dispirited to once again throw myself into the gears of the machine just because other people want me to do it*.  Honestly, I feel it’s more likely that I’ll throw myself into real machinery than that I will go to work tomorrow.

Speaking of such throwing, as I was leaving the train yesterday evening, I found myself looking under the engine, seeing where the wheels meet the tracks, and wondering if I would have the guts just to lay my head across the track‒my neck, really‒and let myself be run over.  It would be a quick death, I suspect.

I don’t think I have the guts, though, not right now.  Also, it would be rude to screw up people’s commutes.  But it does carry a weird kind of perverse attraction.

Nothing else of interest is happening, really.  Well, perhaps one might concede that there are many interesting things happening, in the sense of the old curse, “may you live in interesting times”.  Unfortunately, even those types of interesting things that are happening are so…well, almost so trite, so pathetic, so contemptible, so predictable, so “been done already”.  None of the weird, would-be interesting things that are happening are impressive in any sense.

Okay, well, I’ll concede the relative interest and impressiveness of the Artemis II trip around the Moon recently.  It would be more interesting if it hadn’t been something we’d done literally before I was born (three months to the day before Apollo 11’s landing), using computer systems that were‒to use the most conservative definition‒28 iterations of Moore’s Law ago.  So, literally, by more (ha) than one measure of that “law”, we are at least 2 to the 28th times as advanced, computationally, as we were when we first went to the moon.  That’s conservative, because I’ve heard descriptions of Moore’s Law that put the doubling time at 18 months, in which case there would be about 37 doubling times since then.

For those of you for whom exponentials don’t carry quite the visceral impact they ought to carry, 2 to the 28th power is 268,435,456.  So, by more conservative, every-two-year characterizations of Moore’s Law, our current computational powers are more than 268 million times what they were in 1969.  By less conservative estimates they are 137,438,953,472 times, so more than 137 billion times as advanced.
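For anyone who wants to check that back-of-the-envelope math, here is a minimal sketch (the roughly 56-year span from 1969 to now is my assumption; the two doubling times are the ones given above):

```python
# Rough check of the Moore's Law arithmetic above.
# Assumes ~56 years between Apollo 11 (1969) and now.
years = 56

doublings_conservative = years // 2      # 2-year doubling time
doublings_aggressive = int(years / 1.5)  # 18-month doubling time

print(doublings_conservative, 2 ** doublings_conservative)  # 28 268435456
print(doublings_aggressive, 2 ** doublings_aggressive)      # 37 137438953472
```

Both of the big numbers in the text fall straight out of the exponents.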

To be fair, it’s just computer tech that has advanced like that.  The process of engineering rockets hasn’t improved to the same degree, because that’s a large-scale engineering thing, and is more constrained by the rate at which one can directly interrogate nature and build and test technology.  Still, we went from Kitty Hawk to the Moon in about two thirds of a century, but in the more than half a century since, we’ve certainly not extended that streak much.

Okay, to be fair, we got pretty good at sending out space probes and such.  Even then, though, our most distant and still most impressive probes were launched in the late seventies.  There have been some quite impressive things since, and I intend no shade to be thrown at them.  That Pluto thing was very impressive, as are almost all of the Mars missions and the probes sent to other planets (on the other hand, the ISS isn’t that much more impressive than Skylab was).

The things that have improved significantly have largely done so solely because the increased capacity of computers has assisted in modeling and, well, computing things.  So, rocket science has improved to the degree that computer science has improved, divided by the fraction of that improvement that cannot make a difference in how well rockets can be made.  Something like that, anyway.

Yeah, rocket science hasn’t advanced much in my lifetime.  Brain surgery has done a bit better, but not as much better as one might have reasonably expected.

Then again, we’ve certainly improved our ability to make memes and now AI images and videos to make fun of people and express our own loyalties or outrages.  Yes, in a real sense, many of our greatest advances in recent decades have been improvements in our ability to hurl feces at each other like the monkeys we all are.  We appear to be more engaged by such shit-flinging even than we are by sex, which seems mind-boggling.

I say “we” but that broad description does not apply to everyone.  Some people are still more interested in sex.  At first glance, that would seem to be the more evolutionarily stable of the approaches, but it’s an empirical question, so we can really just wait and see which, if either, of the two tendencies prevails in the long run.

To paraphrase Dave Barry, I myself plan to be dead.

Anyway, that’s about all I have to say for this week.  I don’t mean to make a post tomorrow, but of course, as always, that is barring the unforeseen.

I hope you have a good weekend.


*Okay, to be fair, that’s not really the reason.  I do it because I get paid.

It’s not about me. It’s not about you. It’s not even about everyone.

It’s Friday today (as I write this, anyway‒it may be another day entirely as you read it), and I am in the process of heading to work.  I will also be working tomorrow, barring (as ever) the unforeseen.  And that doesn’t just include the foreseen unforeseen; the unforeseen unforeseen (especially that one) can also change what happens tomorrow, in ways that we do not expect, more or less by definition.

Of course, the Tao te Ching advises us to act without expectation, and I suppose that’s pretty good advice.  The universe doesn’t make special deals, such that if you do some particular thing, it will definitely turn out the way you hope.  The universe does what it has always done, and you are not the subject or the object of its action‒you are just one of the innumerable things the universe does.  It did not have to ask your permission, and it will not apologize.  It also does not make exceptions, not as far as anyone can see.

 

Since the beginning

not one unusual thing

has ever happened*.

 

You can imagine and draw a map that looks any way you want, that contains fairy lands and misty mountains and roads that are shorter in one direction than another**, but if your map doesn’t match the actual territory, it’s not going to be useful for traveling through that territory safely and successfully (by whatever reasonable criteria you might judge success).  Likewise, blank spots on the map don’t imply blank spots in the territory, and writing “here be dragons” does not somehow conjure dragons into existence (alas).

Reality is that which actually exists, whether or not anyone “believes” it or “believes in it”, whether or not anyone has been, is now, or ever will be aware of it.  Heck, if eternal inflation and a consequent inflationary multiverse following (for instance) the string landscape are true, then the vast majority of the stuff of reality will never, ever be known, because most of it‒the ever-expanding inflaton field and those bubble universes where local laws are such that complexity cannot exist, as well as those huge stretches of even our universe that precede (or follow) any existence of life‒will never be accessible to conscious experience.

That’s okay.  Man is not the measure (nor the measurer) of all things.  Man is the measure of almost nothing.  Man‒indeed, all life of which we know‒is a tiny little epiphenomenon that exists in a tiny little sphere of nonzero thickness on and around the surface of the Earth.  I’ll try to remember to do the math comparing that volume to the volume of the visible universe and put it in a footnote below.  If it’s not there, I didn’t do it***.

One sometimes hears people say‒often they seem to be trying to make excuses for themselves to believe in some deity or other‒that the universe is exquisitely tuned for life, such that it requires explanation by some “supernatural” means.

When I hear or read such things, my reaction is, “What universe are you looking at?!?”  Almost no place in the universe can be survived by life as we know it, let alone produce it.  The fraction is so close to nonexistent that it is zero to a good first approximation, and a good second approximation, and a good third, and so on.

It may seem that time could possibly give us a bit more comfort than space does, since life on Earth has existed between roughly a fourth and a third of the time since our Big Bang.  But the future of this universe gives every indication of being without end, whereas conditions for large scale matter to exist‒as far as we can tell‒will not last long (not compared to infinity, which to be fair, nothing is, not even TREE(3) or Graham’s number or any other huge but finite numbers).

By the time the last supermassive black holes finish evaporating due to Hawking radiation, which will be about a googol years from now, things will already have been impossible for any kind of life we would recognize for eons of eons.

Of course, it’s conceivable that life will grow to become cosmically important and able to engineer specific ways for the universe to avoid heat death (or whatever is coming), or to make new universes, or whatever.  But that’s a mightily narrow course for the future to thread.

And the time until a straightforward Poincaré recurrence of the current state of our universe makes a googol years seem unnoticeably teensy by comparison.

Anyway, the main point I’m making, if there is one, is that the universe neither promises nor owes you anything.  That doesn’t mean it’s not okay for things to be important to you.  You matter (on the scales we’ve been considering) nearly as much as the whole Andromeda galaxy.

It’s fine for you to try to make your life what you want it to be.  Why not?  There’s no one else who has any legitimate claim to it (not counting children, friends, etc., all of whom could be considered part of “what you want it to be”).  Just don’t expect other people, let alone the vastly bigger number of things that are not people, to be also trying to make your life the way you want it to be.

Okay, that’ll do, pig.  I’m tired (What else is new?).  I’ll most likely write a post tomorrow.  I hope you have a good day.


*I got this haiku from Eliezer Yudkowsky’s Rationality: From AI to Zombies, though I am not sure if it originated with him.

**Actually, I’m not sure how you would draw that.

***I did it, though I initially made a mistake in calculating the surface area of the Earth, as you can see below if you look closely (I forgot to square pi in the denominator).  Anyway, assuming that the depth-to-height range of life on Earth is about 20 km, then the volume for life as we know it is about 1 x 10^19 cubic meters.  The volume of the visible universe on the other hand is 2.6 x 10^81 cubic meters (if my calculations are correct).  That means that the fraction of the universe that is, to our knowledge, amenable to life is 3.8 x 10^(-63), or 0.0000000000000000000000000000000000000000000000000000000000000038 of the volume of the universe.  By comparison, the fraction of your volume represented by one of your tens of trillions of cells is roughly 10^(-12), or .000000000001.  You lose thousands of cells every proverbial time you scratch your nose.  How much do you notice them?  How much less would the universe notice if it scratched all life off?
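For the curious, the footnote’s arithmetic can be checked with a thin-shell approximation.  The 20 km thickness comes from the footnote; the Earth radius and the visible-universe radius (about 4.4 × 10^26 m) are commonly cited values I have supplied myself, and the final ratio is sensitive to exactly which radius figure one uses:

```python
import math

R_EARTH = 6.371e6    # m, mean radius of Earth (assumed value)
SHELL = 2.0e4        # m, ~20 km depth-to-height range for life (from the footnote)
R_UNIVERSE = 4.4e26  # m, comoving radius of the visible universe (assumed value)

# Thin-shell approximation: surface area times thickness
v_life = 4 * math.pi * R_EARTH**2 * SHELL

# Volume of a sphere with the visible universe's radius
v_universe = (4 / 3) * math.pi * R_UNIVERSE**3

print(f"{v_life:.1e}")               # 1.0e+19 m^3, matching the footnote
print(f"{v_life / v_universe:.1e}")  # ~2.9e-62 with these particular inputs
```

Whichever radius one plugs in, the fraction stays dozens of orders of magnitude below anything that could be called “tuned for life”.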

Reality, calories, and joules, oh my!

I had a moment of idle curiosity this morning just before starting to write this.  I recalled the bit of trivia that the average human power output/consumption is something around 80 or 100 Watts.  I wasn’t sure which was more typical, but it doesn’t really matter; the numbers are well within the same order of magnitude, despite having nominally different numbers of digits.

Anyway, I decided to convert that into kilocalories* per day, just to confirm that the typically described numbers match up, because if they don’t, then something very strange is going on.

A Watt is a joule per second**, so to figure out how much energy output (in joules) there is in or from a human per day, you just multiply the watts times the number of seconds in a day (24 hours per day x 60 minutes per hour x 60 seconds per minute, or 86,400 seconds per day).  Multiply that by the above-noted wattage and you get roughly 7 to 8.6 million joules per day.

Now, there are 4,184 joules per kilocalorie, so dividing that into the number of joules yields roughly 1,650 to 2,050 kilocalories a day, which matches the data on basal metabolic rates.  Neat.
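The whole conversion can be sketched in a few lines, using nothing beyond the 80–100 W range, the seconds in a day, and the 4,184 joules per kilocalorie already stated:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds in a day
JOULES_PER_KCAL = 4184

for watts in (80, 100):
    joules_per_day = watts * SECONDS_PER_DAY
    kcal_per_day = joules_per_day / JOULES_PER_KCAL
    print(f"{watts} W -> {joules_per_day:,} J/day -> {kcal_per_day:,.0f} kcal/day")
# 80 W -> 6,912,000 J/day -> 1,652 kcal/day
# 100 W -> 8,640,000 J/day -> 2,065 kcal/day
```

Comfortably in basal-metabolic-rate territory, as expected.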

Of course, they must match up, otherwise there would clearly be some major logical inconsistencies in our understanding of such thermodynamicalish matters.  I don’t suspect that such a mismatch would have survived the scrutiny of scientists much longer than a snowball would last in a blast furnace; in other words, I consider textbook level physics to be pretty darn reliable.  Nevertheless, it is good occasionally to check even such basic things, just to confirm for yourself that your understanding of reality is internally consistent and consistent with that which is measured and described by other people.

This is not to say that I worry about whether my “reality” is significantly different than that of other people.  I don’t.  While I have no doubt that the specific details of my personal experience are unique, this is so only in rather trivial ways.

I’ve not encountered any occurrence or argument that made me doubt whether everyone around me is subject to the same laws of physics as those to which I am subject.  Of course, if tasked or merely bored, I can conceive of ways in which all that I think I know is illusory and/or delusional, as in the argument that precedes the cogito in Descartes’s most famous (non-mathematical) work.

With a bit of effort, one can almost always imagine ways in which the world could be deeply different than it seems.  I’ve been known to do that at length‒indeed, at book length‒myself.  But the fact that a thing can be imagined is not a reason, by itself, to promote a concept into “might actually be true” space.  Presumably, there are limitless such things that could be imagined, but almost by definition (at least as I am using the word) there is only one reality.

Reality, as far as I can see, cannot contradict itself; actual paradoxes cannot be instantiated.  I’d probably be prepared to bet my life on those propositions.  But even if reality could contradict itself, that would also be a fact about reality.  Whatever reality is, it is.

That’s trivial, of course, but sometimes it’s good to be reminded of the trivial things that one carries in one’s background knowledge but rarely considers or reconsiders‒things like the interchangeability of measures of energy and power and heat between different units.

With that full circle moment, I’m going to finish for today.  I’m still very tired, and I’m rather discouraged and despondent and probably other d-words as well.  This blog is all I really do, anymore, but my energy is lagging even for this.  At least I don’t need to do payroll today, since I had to get it done early yesterday…which fact I found out yesterday.

Oh, well.  Please do what you can to have a good day.  And remember, there is no do or do not.  There is only try.


*This is what we call “calories” when speaking of human energy intake and output, but a single “true” calorie is the amount of energy (heat) required to raise the temperature of 1 gram of water 1 degree centigrade (or, well, one kelvin if you want to be pedantish).  A kilocalorie, or what we commonly call a calorie, is enough to raise a kilogram of water by that same degree.

**A joule being the unit of energy in “SI” units.  A joule (energy) is the integral of force with respect to distance, or a Newton-meter.  A Newton is the measure of force, and is a kilogram-meter/second-squared.  So joules have the units kilogram-(meter squared)/second squared.  Watts (a measure of power, or energy per unit time) are joules per second, which fact gives us the fun, lovely phenomenon of having cubic seconds in the denominator of the equation!

The forecast calls for uncertainty

It’s Friday now (as I write this, anyway), and I think that I will have tomorrow off.  But, as some of you may have noticed, the specific plans about my work Saturdays are subject to rather erratic change.  It’s quite annoying; I don’t really like unexpected changes to plan.  I particularly don’t like them when I don’t agree with the reasoning behind them.

Of course, our two most consistently top salespeople at the office contracted when they came aboard not to work on any weekends.  And, as I said, they are consistently our best.  Could there be a causal connection between those facts?  Well, correlation does not necessarily imply causation, of course, but enough correlation should at least shift your credences.

Unfortunately, humans are not naturally good at probability and statistics.  This is part of why I think the subject(s) should be taught in standard education, starting quite early.  Though the subject(s) can be somewhat counterintuitive, the mathematics is not really all that rarefied or difficult, and probability and statistics apply to so much of the world.  On the smallest scales they seem to apply fundamentally.

Anyway, I didn’t come here today to discuss probability and statistics, though obviously I enjoy the subject(s).  So, then, what have I come here today to do or to discuss?  Well, now that I think about it…there is no particular subject.  I don’t know why that should surprise any regular reader, let alone me.

It will probably not surprise you that I have not started playing on Babbel or Brilliant yet.  I do at least look at the apps frequently throughout the day, considering using them and so on.  For whatever that’s worth.

I can allow myself some excuse with Babbel, since it’s difficult to practice a language in a busy office.  But there’s no such reason not to use Brilliant.  Its teaching and exercises are set up in nice, granular ways, so you can do one problem then get called away by work, or whatever, and then go back.

I don’t even mind the rather hokey “experience point” system they use to reward you when you get an answer right.  It’s kind of fun, but it’s not too involved or taken too seriously by the app makers (or so it seems, anyway).  And I definitely have learned new things on the app in the past, and honed and renewed prior skills as well.  So it’s not a waste of time by any means.

The same cannot be as confidently said* about the various apps/sites on which I no longer have accounts.

Of course, time passes‒or whatever it is that time really does‒no matter what we do, and sometimes “wasting” it can be a fulfilling choice.  If we are metaphorical virtual particles then we can behave like them from time to time, not just heading directly to the next interaction, but maybe throwing out an electron-positron pair and then reabsorbing them before they could be detected, or going around the universe and coming at the interaction from backwards in time and behind, as it were, just to show off a bit.

Not everything has to be useful, at least not in too narrow a sense.  Usefulness, like so many things, is in the eye of the beholder.  It is certainly not a universal, general attribute of reality.  So, while it may only rarely be wise to be counterproductive from one’s own point of view, there are times when it’s good‒maybe even useful, ironically‒not to worry about whether something has any point or not.

Yeah, I’m not terribly good at doing that, either.  I don’t know how much of that is due to culture/upbringing and how much of it is genetic or at least neurodevelopmental.  I’d guess it’s not too far from 50/50, but I would not be shocked to find the full truth surprising.

Regarding whether to worry about app usefulness or lack thereof and whether to spend time on the ones that I will have wished I spent time on, well, it’s been said that wisdom, at least a form of it, is the ability to follow your own advice (i.e., the advice you would give to someone else if they were in your circumstances).  I think most people would be able to recognize that, by that particular definition, we are all quite unwise, quite often.

Okay, well, I’ll start to wrap this up.  I really should not be working tomorrow, but if I do, I will almost certainly write a post.  It’s quite unlikely‒I would call it less than 20% likely‒that I will work, but we shall see.  You can check in if you’re “in the neighborhood”.  Don’t look for my posts to be shared on Facebook or Threads anymore, but I do share them on Substack and Bluesky and TWFKAT.  And you can always find them here, directly, and comment if you wish.

Have a good weekend in any case.  That’s an order!


*Well, it can be said, but talk is cheap mother f#cker.  Rather often, people say they are confident and act sure about situations or information that they cannot know with confidence.  I always consider this unwarranted confidence to be a “red flag”, a warning sign that this person’s judgment is unreliable.

Sometimes drunkards walk to interesting places

Well, well, as the oil tycoon said*.  It’s Saturday now and I am actually writing a blog post, as I expected I would.  It’s been three weeks since the most recent prior Saturday morning post (not counting my “non-post” from last week).  But today, this weekend, I am going to work, and so I am writing a post.

I hope you’re proud of yourself.

Okay, well, that last sentence doesn’t really make sense in this context, but I felt the curious and rather inscrutable urge to write it, and there was no real downside to doing so, so I did.  These are the sorts of things that happen in biological, nonlinear, largely subconscious brains that are communicating using language (especially written language, in my case).

A truly efficient, direct, deliberately programmed AI (not a neural net style, LLM type of AI, but one whose algorithm is precise and understood) might not produce such erratic and seemingly peculiar thoughts.  But maybe it would.  Maybe one cannot have actual intelligence, with creativity and the like, without having a system that meanders a bit into the highly tangential.

I suspect this may be so, because in order to grow and gain new knowledge, to be creative, there has to be a capacity to embrace the unknown‒not in an H. P. Lovecraft sense, but more in a sense reminiscent of Michael Moorcock’s** character that strode into chaos and by interacting with it caused it to become a locally specific order***.

The potential paths into the future which one might, in principle, explore are functionally limitless, and may actually be infinite.  It’s not possible to evaluate them comprehensively through any kind of linear logic‒not in the time span available to the universe, anyway.  So, to work things better, there must be a bit of potential for “randomness”, for moving forward into a future that is one’s best guess, or into which one has narrowed down at least some of one’s choices.  Then one can find a “good enough” path or course of action, one which may produce insights and outcomes that were not, in practice, predictable by any finite mind.  (In a way this follows from the fact that, if you can precisely and specifically predict what insight you are going to have, then you have already had it.)

It’s a bit like evolution through natural selection, where the mutations are effectively random, but the survival of those “mutants” is not at all random, at least in the long run, on a large enough scale.  Still, there’s no pre-thinking involved, no teleology, merely “motion” that is constrained (by differential survival due to the facts of surrounding nature).

Even if one has a fairly specific goal, trying to plot out one’s way through the phase space of one’s potential future paths in a very specific and precise and preplanned course is unlikely to be doable.  It may not be preferable even if it were possible.

It may be analogous to trying to get from one location to another in, say, the same city, by following a direct, straight line from one spot to the other.  One probably won’t be able to make any progress at all for very long; buildings and streets and vehicles and the like are probably going to get in the way.  Heck, the very surface of the Earth could be an impediment to any truly straight path, since it is curved****, but we’ll stipulate that you can follow a geodesic (the shortest distance between points on a curved surface).

Anyway, if one precisely follows only a preset straight path, even if one can more or less achieve it, one misses out on many potentially beneficial but unpredictable paths.  Imagine one is heading to one’s usual, mediocre but tolerable, fast food restaurant for lunch, and one only goes straight there without even looking around.  One might well miss seeing all the many other available restaurants, some of which one may find preferable‒perhaps by a great margin‒to one’s “planned” place.

That’s a slightly tortured metaphor, and I apologize for that fact, but I hope you know what I mean.

It doesn’t do‒usually‒to try to make progress by a true random “drunkard’s” walk.  I don’t recall what particular power law the number of possible outcomes follows, but it grows very rapidly, perhaps exponentially, with each new step.  But if one keeps one’s long term goal generally in sight, and one heads in that general direction, adjusting for buildings and railroads and hills and lakes and so on, constantly assessing and, when necessary, adjusting one’s course, one can usually not only get to one’s destination rather well, but one can encounter new sights and new experiences along the way.
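To put rough numbers on that intuition: on a square grid with four choices per step, the number of possible n-step paths is 4 to the n (exponential, strictly speaking, rather than a power law), while a purely random walker’s typical distance from its start grows only like the square root of n.  A small simulation (the grid, step count, and trial count are my own illustrative choices):

```python
import math
import random

random.seed(0)  # fixed seed so the run is reproducible

STEPS = 100
TRIALS = 20_000
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # four choices per step

# The number of distinct n-step paths is 4**n: astronomical even for n = 100
print(4 ** STEPS)

# Yet the typical straight-line progress is only about sqrt(n) steps
total_sq = 0.0
for _ in range(TRIALS):
    x = y = 0
    for _ in range(STEPS):
        dx, dy = random.choice(MOVES)
        x += dx
        y += dy
    total_sq += x * x + y * y

rms = math.sqrt(total_sq / TRIALS)
print(round(rms, 1))  # close to sqrt(100) = 10
```

A hundred purely random steps and you are, typically, only ten steps from where you started: a vivid case for keeping the long-term goal generally in sight.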

Some of these encounters might even make one decide to change one’s goal of travel, having found a better one (by whatever criteria) as one went along.  That’s not going to happen to someone who is dogmatically focused on only one path and only one goal.

Okay, well, that’s my rather stochastic blog post this Saturday.  I hope you are already having an excellent weekend, and that it continues to be excellent (or if it is not yet excellent, that it becomes so in short order).  Thank you for reading.


*To his son, Derrick.

**I don’t remember which character‒it’s not Elric‒or which story.  My apologies.

***Of course, as I think I’ve said before, order is not the opposite of chaos, but is rather a subset of it.

****It is.  Seriously.  There is no reasonable doubt about that fact, and it has been known to humans for at least 2200 years, since Eratosthenes calculated (correctly) the circumference of the Earth using distance along what was effectively a geodesic and the angles of two simultaneous shadows.
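Eratosthenes’ method reduces to one proportion: if the shadow angles at two points on the same meridian differ by θ, the full circumference is (360/θ) times the distance between them.  A tiny worked example (the 7.2° angle difference and the roughly 800 km Syene-to-Alexandria distance are the traditionally cited figures, used here just for illustration):

```python
angle_difference_deg = 7.2  # traditionally cited shadow-angle difference
distance_km = 800           # rough Syene-to-Alexandria distance (assumed)

# 7.2 degrees is 1/50 of a full circle, so the circumference
# is 50 times the measured arc length
circumference_km = (360 / angle_difference_deg) * distance_km
print(circumference_km)  # 40000.0, close to the modern ~40,075 km
```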

You’re so vain, you probably think that nothing matters

I was going to start by saying that I had probably written all I could about Friday the 13th and the fact that there are two in a row‒February and March‒whenever a non-leap-year February has a Friday the 13th, and that a first glance might lead one to think this should happen roughly every 7 years on average*.  However, as I noted the last time I discussed this, because the leap year day is in February, we will not have the two-in-a-row Fridays the 13th as often as we might otherwise; it will not happen every 7 years on average.

Then, this morning, after recalling that today was Friday the 13th, I ran through the next years’ Fridays in my head in the shower, and it occurred to me that the next Friday the 13th in February‒which will be in 6 years, as I noted in the past‒will not be followed by a Friday the 13th in March!  2032 (six years from now) will be a leap year, so there will be 29 days in February, so there will be no Friday the 13th in that March.

The next paired ones, then, will be a further 5 years after that, in 2037 (not a leap year).  It would have been 6 years later, but there are two leap years in that interval, 2032 and 2036, so the next one comes a year sooner than it would otherwise.

It occurred to me that, because of the frequency of leap years, which is almost twice that of the cycles of days of the week, the frequency of those paired dates may well be once every 11 years rather than every 7.  At least those are both prime numbers.  I’m not going to work out some exact formula right now, though.  It’s not really important.
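Rather than deriving a formula, one can simply enumerate.  A February–March pair requires Feb 13 to be a Friday in a non-leap year (in a non-leap year, March 13 falls exactly 28 days after February 13, so it lands on the same weekday).  A quick scan of the coming decades (the year range is an arbitrary choice of mine):

```python
import calendar
from datetime import date

pairs = [
    year
    for year in range(2026, 2061)
    if date(year, 2, 13).weekday() == 4  # Friday
    and not calendar.isleap(year)  # 28-day February => March 13 is also a Friday
]
print(pairs)  # [2026, 2037, 2043, 2054]
```

That gives gaps of 11, 6, and 11 years over this stretch, so the spacing is irregular rather than settling into one clean interval.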

Of course, one could say that nothing is truly important, and I am persuadable along those lines.

There is a Doctor Who Christmas Special (the one from series 5) in which the antagonist/guest protagonist (played by Michael Gambon!) describes a woman in a cryo chamber as “nobody important”, and the Doctor characteristically responds by saying, “Nobody important?  Blimey, that’s amazing.  You know, in 900 years of time and space, I’ve never met anyone who wasn’t important before.”

This is typical Doctor, of course, but it raises the objection Dash (from The Incredibles) voiced when told that everyone is special:  Saying that everyone is important can be the same thing as saying no one is.

Of course, important is in the eye of the beholder.  But then again, the beholder is not important, either, except in its own subjective estimation and perhaps that of a few other, equally unimportant, owners of such eyes.

So, yeah, one could argue relative and subjective importance from local points of view, which is valid but more or less vacuous outside its small scale as far as I can see.  On a cosmic scale, it’s all just dust and shadows.  But you could also say that about the entirety of the cosmos itself.

I guess import has always been subjective, even though people are not inclined to see it that way.  But, of course, people are the products of their “local” forces, and they are not responsible for the laws of nature, nor for the things which have happened in the past that have affected them in the present (which could come under a certain interpretation of “the laws of nature” in and of itself).  I won’t get into all that now.

Going back to the shower, but on an entirely different subject, I was also thinking about the effects of diminishing amounts of shampoo in the bottle on the center of gravity of the bottle.  At the start, when it’s full, the center of gravity is roughly in the geometric center of volume of the whole thing.  But as one uses the shampoo, the center of gravity shifts lower and lower, since the air replacing shampoo in the upper part of the bottle is much less dense than the shampoo or the bottle.

But then, as one gets to the dregs, the smaller and smaller amount of shampoo in the bottle contributes less and less to the overall mass distribution of the bottle and its contents, and the center of mass begins to head back up.  Finally, when the bottle is “empty”, the center of gravity will have returned to almost the same place it was when the bottle was full.
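That dip-and-return of the center of mass is easy to model: treat the bottle as a fixed mass centered at mid-height and the shampoo as a liquid column whose height varies with the fill level.  All the masses and dimensions below are made-up illustrative numbers:

```python
BOTTLE_HEIGHT = 0.20     # m (illustrative)
BOTTLE_MASS = 0.05       # kg, empty bottle, center at mid-height (illustrative)
FULL_LIQUID_MASS = 0.45  # kg of shampoo when full (illustrative)

def center_of_mass(fill_fraction):
    """Height of the combined center of mass for a given fill level."""
    liquid_mass = FULL_LIQUID_MASS * fill_fraction
    liquid_com = (BOTTLE_HEIGHT * fill_fraction) / 2  # center of the liquid column
    bottle_com = BOTTLE_HEIGHT / 2
    return (BOTTLE_MASS * bottle_com + liquid_mass * liquid_com) / (
        BOTTLE_MASS + liquid_mass
    )

for f in (1.0, 0.5, 0.25, 0.0):
    print(f, round(center_of_mass(f), 4))
```

With these numbers, the center of mass starts at mid-height, sinks as the bottle empties, bottoms out somewhere around a quarter full, and climbs back to mid-height once the bottle is “empty”.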

All that’s fairly trivial, well-known stuff, I know.  But it got me to thinking about how much of the laws of physics, such as the laws of gravitation (Newtonian form), are solved using such concepts as the center of mass, which is really just a way of combining and averaging the effects of numerous tiny bits of gravitating material as if they were concentrated at one point.

Much of the mathematics of physics works this way, coarsely approximating the very fine details of reality in a way that provides reliable, reproducible guidelines and can produce testable predictions.

But the granularity of reality doesn’t actually ever go away, not at any level.  Even at the level of the quantum wavefunction of a single “particle”, the actual behavior of the thing as it interacts with the “larger” world is the summation of the effects of all the possible quantum states of that particle superposed upon each other and interacting with other things‒everything‒which are also just collections of superpositions of quanta.  That superposition happens in a “space” that doesn’t directly coincide with the macroscopic space we experience, but whatever its dimensions are, they are real, because they have durable, reproducible effects.

Mathematics may be unreasonably effective in the physical sciences, as Eugene Wigner famously noted, but it seems not to be a refining of description but rather an averaging out, a glossing over, the inking of an underlying rough pencil drawing which nevertheless still constitutes the real, original picture.

It may be that, in a sense, all science is just various forms of statistical mechanics.  We know that, at larger scales, we definitely need the tools of probability and statistics to navigate as best we can the territory of reality.  And yet, we don’t teach this sort of stuff to most people, ever.  I wrote a post about this on Iterations of Zero, if I remember correctly.

I could go on about all this rather easily, I guess, but I am using my smartphone today, and my thumbs are getting sore.  That’s okay; yesterday’s post was probably way too long, anyway.

If I did a video of my thoughts on this I might be able to get into more detail, though it would probably be even more erratic and tangential than my writing.  Still, maybe it would be worth trying.

In the meantime, I’ll write at you again tomorrow.


*Go ahead, do a search on my blog page for Friday the 13th; I’m all but sure it will bring up the pertinent blog posts.

 

Are gravity and frivolity truly opposites?

It’s Wednesday morning (not quite five o’clock yet) and it is February 25th.  There are only ten more shopping months until Newtonmas*.

For those of you who don’t know (and as a reminder for those of you who do know) Isaac Newton was born on December 25th, 1642 (AD**).  Now, there is a parenthetical here:  Newton was born on December 25th by the Julian*** calendar, which was the one used in England at the time of his birth.  By the Gregorian**** calendar, Newton would have been born in early January of 1643.

This might seem to imply that December 25th nowadays shouldn’t be considered Newtonmas, but of course, it’s a closer fit than celebrating the birth of Jesus on that day; supposedly, biblical scholars have found that Jesus was probably born in the summer or something.  As with many things, “The Church” appropriated the popular holidays celebrating the winter solstice and grafted Christian religious significance onto them.

There’s nothing particularly bad about that.  All these holidays and divisions of the year are fairly arbitrary (though celebrating solstices and equinoxes is common enough in multiple cultures, which makes sense because these are objective events in any given year that can be noticed by any culture that is paying attention).

The length of a year is a concrete, empirical fact, as is the length of a day and the length of a lunar orbit around the Earth.  None of them are straightforward multiples of each other, unfortunately‒their periods are incommensurable, like waves that are not harmonically related to each other.

I don’t know how long it would take for their “waves” to come back into some primordial alignment and “start over”, but it’s probably moot, because the length of a day and of a lunar orbit and of the orbit of the Earth are changing slowly.  The moon, for instance, is moving steadily (but very slowly) away from the Earth over time, and so its time of orbit is increasing (since things that orbit farther away orbit more slowly).

I think Kepler’s third law was/is that the period of a planet’s orbit around the sun is proportional to the 3/2 power of the length of the semimajor axis of its orbit.  That power holds just as well on the scale of, say, the lunar orbit, since it follows from the inverse-square force itself, and the laws of gravity are as universal as anything we know.  Indeed, there are materials that are opaque to light, but as far as we know, there are none that are opaque to gravity.  Gravity is nevertheless constrained by the geometry of spacetime, so orbital periods will always lengthen at a faster rate than the distance from the center around which a mass orbits increases.
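A quick sanity check of that 3/2 power for the Moon: using round textbook values for Earth’s gravitational parameter (mu = GM) and the Moon’s mean orbital distance, Kepler’s third law recovers the sidereal month to within about half a day (the small excess comes from ignoring the Moon’s own mass):

```python
import math

MU_EARTH = 3.986e14   # Earth's GM, m^3/s^2 (standard textbook value)
A_MOON = 3.844e8      # Moon's mean orbital distance, m

# Kepler's third law: T = 2*pi * sqrt(a^3 / mu), i.e. T proportional to a^(3/2)
period_s = 2 * math.pi * math.sqrt(A_MOON**3 / MU_EARTH)
period_days = period_s / 86400
print(f"Predicted lunar period: {period_days:.2f} days")  # the actual sidereal month is ~27.32 days
```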

The inability of anything we know of to block gravity is one thing that makes me take seriously the notion that, at some level, there could be more than three spatial dimensions.  If gravity is not confined to three dimensions then nothing that is so confined could stop it; it would merely flow around any obstacle (maybe gravity waves, for instance, can even diffract around matter and energy, though that might not imply higher dimensions).

This is related, indirectly, to the fact that it is impossible to tie a knot in a string in 4 or higher spatial dimensions.

By the way, having those extra spatial dimensions curled up tiny, as is usually presented in depictions of the notions of string theory, is not the only way for them to exist and be undetected.  If most of the forces in the world we know‒the electromagnetic, the strong force, the weak force, and the various matter-related quantum fields‒are constrained to a 3-brane because their strings are “open-ended”, then we could live in a 3-brane (in which all other forces, including matter, are confined) nested in a higher-dimensional “bulk”.  Gravity could be conveyed by a “looped” string, which could pass through the 3-brane, interacting but not being confined to it.  This could also explain the comparative weakness of the gravitational force and might even explain dark matter (and why it is so difficult to detect).

This sounds extremely promising, maybe, but there are issues and hurdles, not the least that strings and higher spatial dimensions are very difficult to detect, if they exist.  Also, it’s very hard to pin down all the implications mathematically in a useful way.

I remember one lunch break when I was still in medical practice when I tried to see if I could work out mathematically if “dark matter” could be explained by a relatively nearby, parallel brane-universe (it would probably be more than one, but one was difficult enough) whose gravity spills over into and overlaps the gravity of our brane-universe.

Here’s a sort of reproduction of some of the scribbling I did then:

Unfortunately, though I could visualize what I was considering and get an intuitive feel for what the math would be like, my precise mathematical skills were just not up to the task of sorting it out rigorously.  Also, of course, lunch was not long enough, and I had many other things on my mind.  Anyway, findings like the “bullet cluster” provide some fairly strong evidence that “dark matter” is something physical within our three dimensions of space.

Okay, that’s enough for today.  I’ve managed not to talk about my depression and stress and self-destructive urges/wishes (except just now, of course), so I hope you’re pleased to have had those things cloaked from you today.

Take care.


*Working out the exact number of days, it comes to 303.  December 25th is 6 days before the last day of the year, so it’s day number 359 in the (non-leap) year.  And today is the 25th day of the second month, and January has 31 days, so today is day 56 of the year.  And, of course, 359 – 56 = 303.
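That arithmetic can be checked directly with Python’s datetime module (using 2026, an arbitrary non-leap year, as the stand-in):

```python
from datetime import date

today = date(2026, 2, 25)       # February 25th of a non-leap year
newtonmas = date(2026, 12, 25)

print(newtonmas.timetuple().tm_yday)  # day-of-year of December 25th -> 359
print((newtonmas - today).days)       # shopping days until Newtonmas -> 303
```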

**Why not my usual “AD or CE?”  Because at the time, in England, it was just “anno domini”.

***Named for Julius Caesar, though as far as I know, he had no more to do with actually formulating that calendar than he had with the invention of the 7th month.  As far as we know, he wasn’t even born by the then-existing version of Caesarean section, which was more or less always fatal to the mother, and his mother lived well beyond his birth.

****Named after Pope Gregory XIII, also known (by me) as Pope Gregory Peccary*****.  He did not formulate the newer calendar, but supposedly he at least commissioned the Vatican astronomers to create it when it had become obvious that the Julian calendar was not quite tracking the actual year but was overshooting over a long period of time.  So, the Gregorian calendar is better named than the Julian calendar, or so it seems to me.
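The size of that overshoot is easy to estimate: the Julian calendar’s average year is 365.25 days, the Gregorian’s is 365.2425 (97 leap days per 400 years), and the actual tropical year is about 365.2422 days.  A rough sketch of the accumulated drift:

```python
TROPICAL_YEAR = 365.2422   # mean tropical year in days (approximate)
JULIAN_YEAR = 365.25       # Julian calendar's average year
GREGORIAN_YEAR = 365.2425  # Gregorian average: 365 + 97/400

julian_drift = (JULIAN_YEAR - TROPICAL_YEAR) * 400     # days gained per 400 years
gregorian_drift = (GREGORIAN_YEAR - TROPICAL_YEAR) * 400

print(f"Julian drift:    {julian_drift:.2f} days per 400 years")
print(f"Gregorian drift: {gregorian_drift:.2f} days per 400 years")
```

Roughly three days of slippage every four centuries under the Julian scheme; by the 1500s the accumulated error had grown to about ten days, which is why Gregory’s commission had to drop them.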

*****The nocturnal, gregarious wild swine.

Man overboard

As the real weekends go, it was better than most, to paraphrase The Wreck of the Edmund Fitzgerald.  By this, I’m referring to this last weekend, the two days before this day, of course.

I did not work on Saturday, which is good, because that would have been the third time in a row.  I also got to hang out with my youngest on Saturday, and we watched about four episodes of Doctor Who together, which was good, good fun.  I cannot complain about that in any way.

I have, though, a weird, disquieting, sinking sort of feeling that it may have been the last time I will see my youngest, or maybe anyone else that I love.  It is not one of those reliable sorts of feelings, like those that lead one to new insights in science or mathematics or what have you.  It’s probably more a product of depression and anxiety, the feeling that anything good in my life is sure not to last, if it happens at all, because I do not and cannot possibly be worthy of anything good happening to me.

Is that irrational?  Of course it is irrational.  It cannot be expressed in any sense as the ratio of two whole numbers, no matter how many digits they may have.

Wait, wait, let me think about that.  My thought, my feeling, was expressed above finitely.  That is, of course, a shorthand for what is really happening, but even if one were to codify those processes down to the level of each molecular interaction that affects any neural/hormonal process that contributes to my feeling, we know that must be a finite description (though it could, in principle, be quite large).

Even if we’re taking the full spectrum of quantum mechanics into account when describing my mental state, we know that quantum mechanics demands a minimum resolvable distance and time (the Planck length and the Planck time) below which any differentiation is physically meaningless.

A finite amount of information can describe the events and structures and processes in any given finite region of spacetime.  In fact, the maximum amount of information in any given region of spacetime is proportional to the surface area (in square Planck lengths) of an event horizon that would span exactly that region, as seen from the outside*.
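For the curious, a back-of-the-envelope version of that bound: the Bekenstein–Hawking entropy is the horizon area divided by four square Planck lengths (in natural units), and dividing by ln 2 converts it to bits.  Here’s that calculation for a hypothetical one-meter sphere, nothing more rigorous:

```python
import math

PLANCK_LENGTH = 1.616e-35  # meters

def max_bits(radius_m):
    """Holographic bound: horizon area / (4 l_p^2) in nats, converted to bits."""
    area = 4 * math.pi * radius_m**2
    return area / (4 * PLANCK_LENGTH**2 * math.log(2))

print(f"{max_bits(1.0):.2e} bits")  # a staggeringly large number for a 1 m sphere
```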

Any finite amount of information can be encoded as a finite number of bits, which can of course be “translated” to any other equivalent code or number system.  So, really, though the contents of my mind are, in principle, from a certain point of view, unlimited, they are finite in their actual, instantiated content, and can therefore certainly be expressed as an integer, and thus also as a ratio (since any integer could be considered a ratio of itself over one, or twice itself over two, etc.).
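To make the pun concrete, any finite message‒even a sentence expressing the feeling‒can be turned into one (large) integer and hence into a ratio.  A toy sketch, using an arbitrary phrase as the stand-in “thought”:

```python
from fractions import Fraction

feeling = "anything good in my life is sure not to last"

# Encode the text as bytes, then as one big integer.
as_int = int.from_bytes(feeling.encode("utf-8"), "big")

# Any integer is a ratio: itself over one (or twice itself over two, etc.).
as_ratio = Fraction(2 * as_int, 2)  # Fraction reduces this back to as_int / 1

# The encoding is lossless, so the "thought" is recoverable from the ratio.
n_bytes = (as_int.bit_length() + 7) // 8
decoded = int(as_ratio).to_bytes(n_bytes, "big").decode("utf-8")
print(decoded == feeling)  # -> True
```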

So, in that sense, my thoughts are not irrational.  Neener, neener, neener.

In many other senses—maybe not the literal, original sense, but in the horrified, cannot accept that not all numbers can be expressed as ratios of integers because that makes the universe too inconceivable, sense, among others—I can be quite irrational.

It’s very difficult to fight one’s irrationality from the inside, alone.  Even John Nash didn’t really beat his schizophrenia from within as shown in the movie version of A Beautiful Mind.  Also, his delusions in real life were far more extravagant and bizarre than those which appear in the sanitized version that made a good Hollywood story.

If one escapes from mental illness from within, one has to consider it largely a matter of luck, like a young child who doesn’t know anything about math getting a right answer on a graduate level, high order differential equation problem.  It’s physically possible; heck, if it were a multiple choice question, it might even be relatively common***.  But it’s not a matter of being able to choose to do it right and to know how it was done.

People with severe mental health issues are almost always going to need assistance from outside.  This is not an indictment of them or of their need for help.

Surely, someone who has been swept off the deck of a ship by a rogue wave cannot be faulted for needing help from those still on the ship if they are to survive.  It would certainly seem foolish and almost inevitably fruitless if such a person tried to claw his way up the side of the ship to get back on board when there is no ladder and there are no handholds.  He should certainly not be ashamed that he cannot swim hard enough to launch himself bodily from the water and back onto the surface of the vessel.

One cannot reasonably fault such a person for trying to do the superhuman.  A person might try to do practically anything rather than drown or be eaten alive by some marine predator.  But, of course, barring an astonishing concatenation of events‒such as a time-reversed version of the original splash flinging the person up out of the sea just as he entered it‒such efforts will not succeed.

And though it might be heartening or at least positive for one to receive encouragement from those still on the deck—don’t drown, keep treading water, you can do it, you’ll make people sad if you drown, you deserve to stay afloat, I’m proud of you for treading water yet another day, it’ll get better, this won’t last forever, you’ve made it this far so you know you can keep going, you don’t want the people who know you to feel sad because you drowned, etc.—in the end it might as well come from the seagulls waiting to pick at one’s floating corpse.

Mind you, certain kinds of words can be more useful than others.  Words like, “Hey, around the other side of the ship there’s a built-in ladder; if you can get over there and time things right, you might be able to grab the lowest rung when the waves lift you, and then climb up,” might be useful because they are directions for using real, tangible resources that we know can make a difference.  Also, words like, “Hang on just a bit longer, we’re throwing down a life preserver on a rope so we can haul you up” would be useful, obviously, unless they were mere “comforting” lies.

Alas, though one could reasonably expect such literal assistance if one were washed overboard—the “laws” of the sea are deeply rooted in the hearts of those who work there, and they include a general tendency to help anyone adrift to the best of one’s abilities—when it comes to mental illness, the distress and the problems are difficult for others to discern and easy to ignore.  Calls of distress are often experienced as annoyances, and even treated with contempt, since those hearing them cannot readily perceive that they themselves might be similarly washed overboard at any time.

But, of course, they might be.

I don’t know how I got on this tangent, but I guess I never really do.  I just go where my mind takes me, and my mind is not a reliable driver.  It is, though, a reliable narrator.  It doesn’t matter, anyway.  Nothing does.

Anyway, here we go again into another work week, because that was what we did last week.  I wish I could offer you better reasons, but I’m really only good at breaking things down, destroying things, not at lifting anyone or anything up.  That comes from other regions and is conveyed by other ministers.


*From within an event horizon, the volume could be much larger than the spacetime that seems to be enclosed from the outside, because spacetime inside the horizon is massively curved and stretched.  It’s conceivable (at least to me) that there could be infinite space** within, at least along the dimension(s) of maximum stretch, just as there is infinite surface area to a Gabriel’s Horn, but only finite volume.

**See, mathematically, one can stuff infinite space inside a nutshell.  Hamlet was right.  He often was.

***Perhaps this explains why certain types of mental health problems can respond well to relatively straightforward interventions, and even to more than one kind of intervention with roughly comparable success, e.g., CBT and/or basic antidepressants and such.  These relatively tractable forms of depression are the “multiple choice problem” versions of mental illness.  This does not make them any less important.

This is not an attention-grabbing headline

I’m writing this post on my smartphone, even though I brought my lapcom with me yesterday evening.  I did not use my lapcom for yesterday’s post, such as it was.  I didn’t even write that post in the morning yesterday, or at least, I didn’t write the “first draft” of it then.

By the end of the workday on Wednesday, I didn’t feel like I was going to want to write a blog post on Thursday.  So I went to the site directly and just wrote the “Hello and good morning,” and the “TTFN” and set it to publish later.

I already knew what title I was going to want to use for it.  I wanted to use Polonius’s dithering, meandering jabber about brevity being the soul of wit, as a sort of left-handed self-compliment about my own brevity in that post, and because, in the original form, it would have made the headline longer than the post, which would be ironically funny, in principle.

Then, yesterday morning, I got the urge to put my little “insert here” bracketed bit in the post, the better to convey how disgruntled and disaffected and self-disgusted I (still) felt, as well as how tired.  It did sort of spoil the joke about the headline being longer than the post, of course.  At least the older joke about Polonius still holds water.  Then again, that joke was made by Shakespeare, so we shouldn’t be too surprised if it has serious legs (though this raises the question of how serious legs could possibly hold water).

One thing worth at least assessing this week might be whether there is an aesthetic difference between this post (for instance) and the posts I wrote earlier this week, on the lapcom.  Writing on the lapcom is quite different for me in many ways.

On the lapcom, I generally have to work to stop myself before a post, or whatever, gets too long.  Whereas on the smartphone, that isn’t as frequent a problem.  Not that I can’t yammer on and on even with the smartphone, of course.  Some might say all I ever do is yammer on and on.  But anyway, I can’t write as “effortlessly” on the smartphone as I can on a regular keyboard*.

Sorry, I’m retreading a lot of old ground here, which I guess is better than retreading a lot of old tires. I know how to tread on the ground; indeed, I cannot recall a time when I didn’t know how to do that kind of treading.  Whereas retreading a tire sounds like something that requires special skills and equipment, both of which I lack.

I don’t know, I’ve heard of “retread” tires, but I don’t know if such things still abound, or if they ever did.  It sounds vaguely like a bad idea, like such tires might be more prone to blowouts.  But natural rubber is a finite resource, and synthetic substitutes aren’t perfect for every purpose, so maybe there’s at least some cost/benefit tradeoff (or treadoff?) there.

Ugh.  With that last joke, I probably convinced at least some of my readers that, yes, the world would be better off if I were dead.  Actually, I say that as if it were conditional, but it’s not.  It would be more in line with reality to say “the world will be better off when I am dead”.

There’s a quote by which to be remembered, eh?

I cannot say whether I will be better off when dead.  It’s probably a nonsensical question.  When I am dead, I will not be anything at all, not better, not worse, not uglier.  What happens to virtual particles after they have annihilated?  Nothing, and less than nothing, for they truly no longer exist, and in some senses they never existed.  Indeed, as physics goes, they probably never do exist; they are a shorthand description of what happens in quantum fields when perturbances in the fields have effects that do not rise to the level of actual, true particle production.

Or so I am led to understand.

From another point of view, it is possible for something to improve, at least in a sense, by ending.  I’ve mentioned this before, but if the curve of a function‒perhaps a graph of the “quality of life” or one’s “wellbeing”, to say nothing of happiness‒is persistently negative, then returning to zero is a net gain.  It can be a huge net gain, in fact.  This is related to the origin of my own version of an old saying, which I use with tongue definitively in cheek:  The one who dies with the most debt wins.

Now, of course, the integral, the area “under” that wellbeing curve would not be improved by the curve reverting to zero and stopping.  But at least that integral would not keep getting more and more negative over time.

Some might say, “well, the integral can become less negative over time, and might even become positive”.  This is, in principle, true.  And when one is young enough, it’s relatively easier to tip the curve, and its integral, into positive territory.  But as the curve goes on, having been negative for a longer and longer time, it’s going to become ever harder to bring things to a net, overall positive integral, even if one could reliably make one’s curve positive (which one often simply cannot do).

Of course, the moment to moment experience (which is all the mind really gets) of an ascending curve could be pretty darn good, and might well be worth experiencing, even if it’s not enough to bring the integral into positive territory.  We are straying into the “peak-end” rule here, which was worked out in studies of (among other things) colonoscopies but applies to much else in human experience.

Speaking of peak endings, I’ll mention in passing the curious fact that, no less than twice in the last week, the evening train service has been disrupted by someone either getting hit by or becoming ill next to the train.

Earlier this week, right by the station where I catch the train to go back to the house, there was a man who looked like he was probably homeless and had collapsed next to the train tracks not far from the station.  I saw him brought away, finally, on a stretcher.  He didn’t look physically injured‒certainly not in the ways I would expect someone who had actually been hit by a train to look‒but he did look cachectic, which is why I thought he might be homeless.

Then, last night’s commute was interrupted by what they call a “trespasser strike”, one that did not involve the train I rode but which always slows everything down.  I’m vaguely amused by the euphemism “trespasser strike”.  A “trespasser” here is a non-passenger who doesn’t work for the train company (or whatever) who is in the area adjacent to the tracks.  The “strike” part is probably self-explanatory.

I suppose it’s literally true, at least from a legal point of view, to call the person a trespasser.  But it’s amusing that the train people have to say something derogatory about a person hit by a train‒even if the person deliberately put themselves in harm’s way‒to sort of, I don’t know, assuage the company’s conscience.

But we are all trespassers, in at least some senses.  We are also, in other senses, all owners.  We are all innocent, and we are all, in some other senses, guilty.  “Every cop is a criminal and all the sinners saints.”  Above all, we are all very much just passing through, staying only a very short time.  We are all virtual particles.  Or you might say, we are all Iterations of Zero.

Have a good weekend.  I should not be writing a post tomorrow (in more than one sense).


*I wish I could honestly say that my use of a piano-style keyboard were as effortless, but I am terribly rusty with that, though I started learning it when I was 9, roughly 2 years before I got my first typewriter.

My way of life is blogg’d into the sere, the yellow leaf

Hello and good morning.

TTFN


Ha.  Ha.  Sorry about that.  Just, honestly, I don’t really feel much like writing right now.  There are no other twos here today (at least, I’m not going to be talking about them, except to the extent that saying that I’m not talking about them constitutes talking about them).

Actually, wait.  I will make a relatively fun note that includes the number two, since it just occurred to me that today is the fifth:  If you add (or if anyone else adds) the first two prime numbers together, they give you the third one.  2 + 3 = 5.

This is the only place in all the infinite realm of the prime numbers in which you will be able to add two consecutive primes to get the next prime, because all prime numbers except two are odd, and if you add (or anyone else adds) two odd numbers together, you (or they or he or she) will get an even number.  And the only even prime is two.

Actually, it’s worth noting that one can add two primes that are not consecutive to get a third prime.  If one takes the first member of any pair of twin primes* and adds two (that solitary even prime) to it, one will get the second member of the pair.  This may be possible in infinitely many cases; it’s thought that there are infinitely many twin primes, i.e., that there is no largest pair of twin primes.
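Both facts‒that 2 + 3 = 5 is the only place where consecutive primes sum to the next prime, and that adding two to the first member of a twin-prime pair yields another prime‒are easy to check by brute force.  A throwaway sketch over the primes below a thousand:

```python
def primes_below(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_below(1000)
prime_set = set(primes)

# Consecutive primes whose sum is the *next* prime: only 2 + 3 = 5.
hits = [(p, q) for p, q, r in zip(primes, primes[1:], primes[2:]) if p + q == r]
print(hits)  # -> [(2, 3)]

# Twin primes: pairs (p, p + 2) that are both prime.
twins = [(p, p + 2) for p in primes if p + 2 in prime_set]
print(twins[:5])  # -> [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31)]
```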

However, this has not been proven yet (as far as I know) though work has been done on it and progress has been made.  I won’t get much more into it than this, except to say that apparently a lot of the work has been done by large, decentralized groups of mathematicians (professionals and amateurs) through a site called “polymath”, if my memory is correct.

Now that is an excellent name for a collaborative mathematics website.

Oy, there I go again, talking about trivia about prime numbers and so on.  Maybe it would make sense for me to get into these things if I were truly involved, but I’m a spectator of mathematics (apart from my truly useless invention of the gleeb**, a number which, when multiplied by 0 gives you 1).  So my interest is entirely esoteric and reflected.  I apologize to those of you who find it tiring.  To those of you who like it, I’ll say “You’re welcome”.

You’re welcome.

See, I told you I would say it.  And then I said it.  I guess that’s one point in my favor.

I’m not sure there are any others.  At least, none of them appear to me to be in my favor.  I am all but completely worn out.  I’m running on fumes, or whatever other metaphor one might want to apply that is applicable (since applying inapplicable ones is stupid) and my incessant pain continues to wear me down, adding to my depression, and eroding what little joy I have left.

I really have wanted so often just to hang it up.  I came relatively close yesterday afternoon and considered leaving a “post” that just said, “I don’t think I can do this anymore.”  That would be both the title and the content.

I didn’t do it, of course, which you can tell by looking, if you are so inclined***.  But I came closer than I’ve come before, at least subjectively speaking.  Last week—I think it was—I posted a similar sentence on most of my social media, just the line “I don’t know if I can do all this much longer.”  I’ll embed a screenshot here:

 

So, fair warning is being given, here and elsewhere.  The fire alarm is giving off little warning beeps.  The readout dial is high in the yellow range, perhaps already inching into the red.  Creaking sounds and little wisps of steel and concrete dust are issuing from the support beams of the bridge.  Small tremors and puffs of escaping steam are increasing in frequency near the hitherto dormant volcano.  There’s a red sky in the morning, today****.

But, I appear not to be able to stop yet.  I’m not yet able to escape.  I’m still pushing the stupid boulder up the stupid hill, like the stupid idiot that I am.  I’m even writing this blog post on my lapcom for the first time in two weeks (well, this is the first time at all that I’m writing this blog post, but hopefully you know what I mean), just because I felt mildly nostalgic.

One of these days, though, I’ll be able to end my blog post with just “TT” instead of “TTFN”, and it won’t be over just for now but finally and for good—not just the blog but everything.  And I don’t know if that will be sad or a relief for anyone out there, but I hardly think it will be a tragedy, nor will it be more than little noted, and it will certainly not be long remembered.

But for now, I must needs sign off with the annoyingly non-climactic

TTFN


*Primes that are two apart from each other, such as 29 and 31, or 137 and 139.

**Seriously, I worked out a lot of the algebra that involves it and everything (for instance, it turns out that a gleeb squared is still a gleeb, and 1 over a gleeb equals 0).  I’m sure I discussed it in a previous blog post.  If I can find which one without much trouble, I’ll leave the link here.

***In principle, you can tell by looking even if you are not so inclined, but you simply will not tell because you won’t look.  Should that count, then, as a “can” situation if it’s not physical impossibility but mental disinterest that leads one never to do a thing?  If it simply will not ever happen, can one not just then say that it cannot happen?  Are “cannot” and “shall not” synonymous here, as when Ian McKellen misspoke his most famous line when facing the balrog in The Fellowship of the Ring?

****This may be true somewhere—it probably is, come to think of it—but it’s not true for me, because it’s still fully dark as I write this; the sun is not even lightening the eastern horizon yet.  I’m just being melodramatic.