Sometimes drunkards walk to interesting places

Well, well, as the oil tycoon said*.  It’s Saturday now and I am actually writing a blog post, as I expected I would.  It’s been three weeks since the most recent prior Saturday morning post (not counting my “non-post” from last week).  But today, this weekend, I am going to work, and so I am writing a post.

I hope you’re proud of yourself.

Okay, well, that last sentence doesn’t really make sense in this context, but I felt the curious and rather inscrutable urge to write it, and there was no real downside to doing so, so I did.  These are the sorts of things that happen in biological, nonlinear, largely subconscious brains that are communicating using language (especially written language, in my case).

A truly efficient, direct, deliberately programmed AI (not a neural net style, LLM type of AI, but one whose algorithm is precise and understood) might not produce such erratic and seemingly peculiar thoughts.  But maybe it would.  Maybe one cannot have actual intelligence, with creativity and the like, without having a system that meanders a bit into the highly tangential.

I suspect this may be so, because in order to grow and gain new knowledge, to be creative, there has to be a capacity to embrace the unknown‒not in an H. P. Lovecraft sense, but more in a sense reminiscent of Michael Moorcock’s** character that strode into chaos and by interacting with it caused it to become a locally specific order***.

The potential paths into the future which one might, in principle, explore are functionally limitless, and may actually be infinite.  It’s not possible to evaluate them comprehensively through any kind of linear logic‒not in the time span available to the universe, anyway.  So, to make things work better, there must be a bit of potential for “randomness”, for moving forward into a future that is one’s best guess, or into which one has narrowed down at least some of one’s choices.  Then one can find a “good enough” path or course of action, one which may produce insights and outcomes that were not, in practice, predictable by any finite mind.  (In a way this follows from the fact that, if you can precisely and specifically predict what insight you are going to have, then you have already had it.)

It’s a bit like evolution through natural selection, where the mutations are effectively random, but the survival of those “mutants” is not at all random, at least in the long run, on a large enough scale.  Still, there’s no pre-thinking involved, no teleology, merely “motion” that is constrained (by differential survival due to the facts of surrounding nature).
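To make the analogy concrete, here is a toy sketch in Python (purely illustrative, and not a model of any real biology): the mutations are random, but which variants survive is not, and “fitness” climbs with no planning or teleology involved anywhere.

```python
import random

def evolve(genome_len=20, generations=200, seed=1):
    """Toy 'mutation plus selection': random variation, non-random survival."""
    rng = random.Random(seed)
    genome = [0] * genome_len          # start with an all-zero 'genome'
    fitness = sum(genome)              # fitness here: the count of 1 bits
    for _ in range(generations):
        mutant = genome[:]
        i = rng.randrange(genome_len)
        mutant[i] ^= 1                 # one random mutation per generation
        if sum(mutant) >= fitness:     # but the selection step is not random
            genome, fitness = mutant, sum(mutant)
    return fitness

print(evolve())  # fitness climbs toward the maximum with no plan or foresight
```

Run it with different seeds and the particular path taken varies, but the upward trend does not.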

Even if one has a fairly specific goal, trying to plot out one’s way through the phase space of one’s potential future paths in a very specific and precise and preplanned course is unlikely to be doable.  It may not be preferable even if it were possible.

It may be analogous to trying to get from one location to another in, say, the same city, by following a direct, straight line from one spot to the other.  One probably won’t be able to make any progress at all for very long; buildings and streets and vehicles and the like are probably going to get in the way.  Heck, the very surface of the Earth could be an impediment to any truly straight path, since it is curved****, but we’ll stipulate that you can follow a geodesic (the shortest distance between points on a curved surface).

Anyway, if one precisely follows only a preset straight path, even if one can more or less achieve it, one misses out on many potentially beneficial but unpredictable paths.  Imagine one is heading to one’s usual, mediocre but tolerable, fast food restaurant for lunch, and one only goes straight there without even looking around.  One might well miss seeing all the many other available restaurants, some of which one may find preferable‒perhaps by a great margin‒to one’s “planned” place.

That’s a slightly tortured metaphor, and I apologize for that fact, but I hope you know what I mean.

It doesn’t do‒usually‒to try to make progress by a true random “drunkard’s” walk.  The number of possible outcomes grows very rapidly with each new step: exponentially, in fact, since each step multiplies the number of possible paths by the number of available directions.  But if one keeps one’s long term goal generally in sight, and one heads in that general direction, adjusting for buildings and railroads and hills and lakes and so on, constantly assessing and, when necessary, adjusting one’s course, one can usually not only get to one’s destination rather well, but one can encounter new sights and new experiences along the way.

Some of these encounters might even make one decide to change one’s goal of travel, having found a better one (by whatever criteria) as one went along.  That’s not going to happen to someone who is dogmatically focused on only one path and only one goal.
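The contrast between the pure drunkard’s walk and the goal-guided-but-flexible walk can be sketched in a few lines of Python (a toy model, with made-up numbers such as the 70% “bias” toward the goal):

```python
import random

def walk(steps=200, biased=True, seed=0):
    """2-D grid walk toward a goal at (50, 50): drunkard's walk vs. goal-biased walk."""
    rng = random.Random(seed)
    x, y = 0, 0
    goal = (50, 50)
    for _ in range(steps):
        if biased and rng.random() < 0.7:   # 70% of steps: head toward the goal
            x += (goal[0] > x) - (goal[0] < x)
            y += (goal[1] > y) - (goal[1] < y)
        else:                               # remaining steps: random 'exploration'
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
    return abs(goal[0] - x) + abs(goal[1] - y)  # Manhattan distance left to the goal

# Keeping the goal 'generally in sight' converges; the pure drunkard's walk does not.
print(walk(biased=True), walk(biased=False))
```

The biased walker ends up near its goal while still spending a share of its steps wandering; the pure drunkard, on average, gets nowhere in particular.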

Okay, well, that’s my rather stochastic blog post this Saturday.  I hope you are already having an excellent weekend, and that it continues to be excellent (or if it is not yet excellent, that it becomes so in short order).  Thank you for reading.


*To his son, Derrick.

**I don’t remember which character‒it’s not Elric‒or which story.  My apologies.

***Of course, as I think I’ve said before, order is not the opposite of chaos, but is rather a subset of it.

****It is.  Seriously.  There is no reasonable doubt about that fact, and it has been known to humans for at least 2200 years, since Eratosthenes calculated (correctly) the circumference of the Earth using the distance along what was effectively a geodesic and the differing angles of the noon sun at two places at once (a measurable shadow in Alexandria at a moment when there was none at Syene).
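For the curious, the arithmetic of that calculation is simple enough to sketch (using the figures most commonly attributed to Eratosthenes; the exact ancient values, and the length of the stadion, are matters of some scholarly dispute):

```python
# Eratosthenes' method, with the commonly reported figures: at noon the sun cast
# a shadow at an angle of about 7.2 degrees in Alexandria while casting none at
# Syene, roughly 5000 stadia away along (effectively) a shared meridian.
shadow_angle_deg = 7.2    # angle between the two cities, as seen from Earth's center
distance_stadia = 5000    # Alexandria-to-Syene distance (a commonly cited figure)

# 7.2 degrees is 1/50 of a full circle, so the circumference is 50 times the distance.
circumference_stadia = round((360 / shadow_angle_deg) * distance_stadia)
print(circumference_stadia)   # prints 250000 -- close to modern values,
                              # depending on which ancient stadion one assumes
```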

Man doth not yield himself unto the angles save through the weakness of his feeble vectors

It’s Friday.  It must be, because yesterday was Thursday, and by the conventions of the modern English-speaking world, the day after Thursday is Friday.

We could have named the days differently, and if we had, then different days would follow other different days, but they would still proceed in a consistent order.  A day naming system that changes its order from day to day and week to week would not be useful at all.

Similarly, we could name the numerals and numbers differently‒the names we use are fairly arbitrary‒but that would not change the deep nature of arithmetic.  Whatever the equivalent of 2 + 2 might be, it would still equal the equivalent of 4.

That which we call arroz, by any other name, would still be rice.
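The renaming point can even be demonstrated mechanically; here is a trivial sketch (the names “blip” and “zorp” are, of course, invented purely for the purpose):

```python
# Rename the numerals arbitrarily; the arithmetic underneath is untouched.
names = {"blip": 2, "zorp": 4}             # made-up names for 2 and 4
value = {v: k for k, v in names.items()}   # invert: numbers back to their new names

result = names["blip"] + names["blip"]     # 'the equivalent of 2 + 2'
print(value[result])                       # prints 'zorp', i.e. the equivalent of 4
```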

I am not using the lapcom today.  I felt a bit lazy yesterday afternoon, so I didn’t bring it with me (there was already cat food in my bag, so it was somewhat heavy).  Also, I am still sick, strictly speaking; in fact, I feel slightly worse this morning than I did yesterday morning.

But I do have some regret over not bringing the computer with me, because my thumbs are rather sore.  You might think that would be enough to motivate me to bring the lapcom with me every evening, but the person I am in the evening does not necessarily appreciate what things will be like for the person I am in the morning.

Intellectually, of course, I can know what the situation will be, and I do know it, at least implicitly.  But I cannot feel it in the moment; all I can feel is the resistance to bringing it at that moment, because the immediate extra weight feels much more salient than the discomfort I might feel the following morning.

This is natural, of course.  I suspect that you experience similar disparate and antithetical drives and resistances at different times, despite what you know in your “higher” brain.

This is one of the annoying things about the fact that there is no single, stable, consistent “self”, with a single terminal goal (to use some AI-related jargon), in the minds of humans and humanoids.  There is, instead, a fluid of vectors adding together in a high-dimensional phase space, with lengths and directions that change from moment to moment* in response to changing external facts and to feedback from its own internal states.

Willpower is a function of the brain.  It varies from person to person and from moment to moment.  It is also subject to fatigue.  This has been studied with a fair degree of rigor, though attempts to replicate some of the “ego depletion” findings have produced mixed results.  People who have recently been engaged in taxing mental tasks, like solving relatively challenging math problems or similar, are less able to resist (for instance) eating an offered cookie despite being on a diet or even being diabetic.

This stuff may be fairly obvious, but in any given moment, most of us are not mindful enough of our own internal states even to be aware of what might be our current relatively depleted will.

It’s analogous to a person who does strength training with free weights in a gym.  Imagine this person comes to the gym after a hard day that involved physical labor, perhaps more than they usually do.  Or perhaps the person is ill but feels they can tough things out.  In the worst situation, this can lead to catastrophic accidents with weights that are‒at that moment‒too heavy for the person to lift, even though at other times the person may have lifted them with relative ease.  At the very least, such a person is at risk of strain and injury that could impair their ability to exercise for some time.

The mind works much like this.  The brain is an organ, a physical, biological thing, and it is prone to illness and injury and fatigue, in addition to all the software-related weirdness it can instantiate (like having conflicting and/or illogical points of view about empirical facts, and various other forms of irrationality).

This is one reason it can be useful to engender strong habits, even ones that border on the dogmatic.  Because the brain works on habit‒it being simply unworkable to try to evaluate each and every situation de novo‒if one can set up good habits, they can protect one from some catastrophes.

For instance, one of the “dogmas” of gun safety is that one should treat every firearm as if it is loaded, even if one has literally just removed all of its bullets oneself, and one should therefore never point the gun at anything (or anyone) one is not prepared to shoot.  This can feel unreasonable, and in the short term, thinking only of that instance where one has just thoroughly unloaded and checked the weapon, it may seem strictly unnecessary.

But humans are not perfect reasoners‒I suspect that nothing is, and that perfect reasoning is impossible for any finite mind‒and when fatigued, they fall into more automatic behaviors rather than thinking everything through.  This is why it is good to train such automatic behavior to be what one wants it to be when one is able to think clearly.

So, treat every gun as if it is loaded.  Buckle your seat belt for even very short trips**.  Don’t keep sweets in your house if you’re diabetic.  If you’re trying to quit smoking, don’t hang out with people who are smoking and don’t go places where cigarettes are easy to get.  Ditto for other drugs of abuse.

Speaking of which, alcohol tends to screw all those things up.  One of the key effects of alcohol on human (and other animals’) nervous systems is to diminish what can reasonably be called willpower.  This is not its only deleterious effect.

Okay, well, my thumbs are a bit sore, so I’m going to bring this to a close for the day.  It probably goes without saying‒somewhat ironically‒that there is much more that I could write on this topic.  Perhaps I’ll return to it, and to other related subject matter, tomorrow.

There really should be a blog post tomorrow, given that the previous two Saturdays were surprisingly non-working days.  But, as with a coin flip (or nearly so), previous results don’t have any impact on where the outcome will land this time.

Either way, please try to have a good weekend.


*Lawfully, but in such a complex fashion that it is effectively almost a chaotic system.

**Unless you literally don’t mind the increased risk of injury or death.  It’s possible to be in such a self-destructive state***, but if you are going to accept that risk, try to make sure you really don’t care.  It’s easy enough to imagine one doesn’t mind getting injured or killed in a traffic accident, but when one is injured‒or killed‒it is too late to change one’s preferences, and the version of you who suffers injury may be quite put out with their previous self.  See above about the lapcom.

***e.g., Florida.

Blog, we know what we are but know not what we may be.

Hello and good morning.  It’s Thursday again, and out of tradition I’ve started this blog with “Hello and good morning”, which you’ve already seen but might not have noticed.  Speaking of tradition, I’m also writing this post on my lapcom, partly for just a changeup, and partly because my thumb/wrist arthropathy has been acting up quite a bit, so I brought the lapcom back to the house with me on Tuesday evening.

Speaking of Tuesday evening going on to Thursday morning, I was out sick yesterday, and so I did not write a blog post.  I did work from “home” for a bit, because it was payroll day, and obviously I needed to get that done or else people wouldn’t get paid.  But I wasn’t in any mood to write a blog post from the house.  I didn’t even have the energy to leave a little quasi-post like I did last Saturday, just to let people know that I was not going to be writing the expected full post.

Honestly, I don’t feel terrific even today, but I do feel a bit better than I did yesterday, at least for the moment.  If human civilization were sane or even slightly reasonable, I would feel no qualms about taking a second day off, because no one else would expect otherwise.  But I cannot feel comfortable doing that, even if other people would not mind.  It’s a pathology, of course, but there it is.

Still, if I leave things at the office for too long, when I get back it becomes too stressful because there’s so much catch-up work to do (thank goodness, we got rid of all our mustard work long ago)*.  Luckily, I still have plenty of face masks available.  Indeed, I often consider trying to find a brand that I like and can wear every day, all day.

I’m not a fan of my face.  There are too many signs of the past 20 years or so on it.  It’s possible that these signs are things no one else would notice, but that hardly matters, because I am the one bothered by it, and I am the one stuck with this face.

It’s not an emergency.  I don’t feel like I must cover up my face, like Doctor Doom or the Phantom of the Opera or something.  It just annoys me.  Don’t get me wrong, I don’t wish I looked like someone else, any more than I wish I were someone else.

I can’t even see how that could work in principle.  If everything about me changed into someone else, I wouldn’t exist anymore, I would be someone else.  But that wouldn’t be me experiencing the process of being someone else; it would just be someone else.  Nothing of me would come along.

I guess I just would prefer it if I could be a better version of me.  I work on it, of course; I don’t just wish for it.  I’m always trying to improve in any way that I can.  And the good and bad thing about self-improvement is that there is no finish line.  One can always be better—by almost any criterion one might choose—than one currently is.

This is similar to—and may be related to—the nature of intelligence and ignorance.  Intelligence can increase without any known limit, in principle, but everyone is always infinitely ignorant and always will be.  There is always an infinity’s worth of potential information one could know but does not (the endless digits of π alone, for instance, to say nothing of the uncountably infinite other Real Numbers).

This is a blessing and a curse, as such things tend to be.  It is a curse in the sense that one can never know everything there is to know, and therefore, in principle, one cannot know that one knows the most important things to know.  On the other hand, it is a blessing to know that one can always become smarter, more knowledgeable, than one currently is.

You can’t keep building muscle indefinitely; you can’t run faster or swim faster or bike faster without limit.  New Olympic records are set by tiny, tiny margins.  But while there surely is a physical upper limit to possible human intelligence—based upon information theory, thermodynamics, neuroscience, general relativity and so on—as far as we can tell, no one has ever gotten close to that upper limit.  You can keep learning new things every day that you are alive**.

This is a notion I wish more teachers would explain to their students.  Yes, it’s true that different people have different aptitudes for different subjects.  But unless there is real and serious pathology, anyone can get to the goal in time.  Your fundamental limits are processing speed and memory.

If your onboard, RAM-style memory isn’t great (and no one’s is VERY great) then you can store things externally, using written language.  If your processing speed regarding, say, 17th century British literature, is slow, you may reasonably choose to do something else.  Had you but world enough and time, you could learn anything, but you don’t have world enough nor time.  In principle, though, you could learn it.

Motivation (drive, impulse) is the factor holding people back more than anything else, as far as I can see, and that’s perfectly understandable.  Thinking requires a lot of effort—fully 20% of our bodies’ calories are used by our brains***.  One wants to choose as wisely as one can what to apply that energy to.

In principle, one cannot know for sure if one will make an optimal choice—that’s the whole “unknown unknowns” thing—but that’s part of the point of decision theory.  We have to make decisions with incomplete information, pretty much every single time.

That’s okay.  It’s much more fun to be surprised by the things one learns than just to have more of the same.  The most exciting non-personal moment in my lifetime so far came in 1998, when the supernova studies made it clear not merely that the universe was going to keep expanding (rather than recollapsing), but that the expansion of the universe was accelerating!  Literally, my picture of the whole universe changed, and it was amazing.  I cannot properly explain just how invigorating it was to learn about this.

Look at me, being slightly positive in my blog.  I must be ill, huh?  Anyway, that’s enough for today.  Presumably, I’ll be writing another post tomorrow, but I never make an absolute guarantee.

TTFN


*Sorry, I know it’s a stupid joke, but I’m sick.  Please give me a break.

**And in a certain sense, you do this no matter what:  at the very least, you learn what it is to experience that day.

***Though there is reason to suspect that some politicians use a significantly smaller percentage, as do some of the people who vote for them.

“Something knocked me out the trees – now I’m on my knees”

Okay.  So.  I don’t know what to write today, even more so than usual.

It’s Tuesday, of course.  Though I guess there’s really no “of course” about it; I mean, it could be any day in principle, but it happens to be Tuesday, and I’m up and about, going through various stages of heading to the office as I write this.

At the end of the work day, I will head back to the house and prepare to do it all over again.  Lather, rinse, repeat.  I won’t say “as needed”, because I think it’s probably rather nebulous just how necessary these daily repetitions really are.  Certainly neither the universe nor civilization depends upon me doing any of the things I do.

I suppose that “work” is weakly dependent upon me, in that if I suddenly just stopped coming, they would have to find someone else to do what I do, or divide things up among those already there or something.  That’s not such a big deal, of course.  It happens all the time.

There may be a few people who look forward to my blog every day, though it would be pretty arrogant to consider them “dependent” upon it.  I would much prefer for people to be “dependent” upon, or at least to look forward to, my fiction.  It would be easier to keep writing it if I thought more than one person would actually read my stories, and that maybe people would even tell me what they thought of them*.

I suppose that sort of thing might seem fairly trivial in the face of various events happening in the nation and the world, but on the other hand, those things are trivial in themselves.  There is certainly no good reason for any of them other than that human nature‒while possessing functionally limitless potential‒is almost always prone to default to the level of screaming monkeys.

Each political moment of the world feels so…well…momentous to the people going through it, but these kinds of things have arisen and passed away over and over throughout history.  Probably most such happenings are even outside of history, parallel to it if you will, because many of them are not even noticed beyond their immediate time and place, even by some of the people who experience them.

They are all rather laughable in their self-important yet ephemeral character.

I don’t know why I even notice, let alone care.  I guess maybe it’s because the human race does have such potential for greatness, for the creation of beauty‒by whatever criteria you might measure beauty‒and for making the world a place that’s better than it is in every reasonable way.  Yet, they do not have the intellectual and moral humility to realize how great they could make things.  Ironically, if people were able to stop thinking of everything as being about them, whoever they are, they could participate in a world that could easily be better not just for everyone else, but for them as well.

Of course, it’s honestly difficult not to knee-jerk one’s responses to reality as if it were about oneself.  Meditation can help, if only by dissolving the “ego” and decreasing the tendency toward reflexive belief in the inner homunculus.

It would be nice if Earth had its own Surak who succeeded in convincing humanity that calmness, mindfulness, and rationality are not merely options but probably among the best ways to secure a beneficent future for Earth and life and intelligence.  That’s assuming that this is indeed true, which I strongly suspect it is, but do not know for certain.

Wouldn’t it be remarkable if, instead of training our children to believe in the literal truth of fairy tales that are hundreds to thousands of years old (and benighted even for their times of origin), extorting their behavior and “belief” with threats of Hell (or the equivalent), we encouraged our children to be mindful, to be curious, to be patient, to recognize their fallibility, but at the same time, as part of that, to recognize their potential to do truly remarkable and wonderful things?

But left to their own devices‒as they all always are, since even the Powers That Be are just other naked house apes, not significantly different than themselves‒people tend to choose the monkey way.  Or, rather, they go that way by default, never recognizing that they have a choice.

Only if you recognize that you are a monkey can you really, deliberately choose to become something greater.

Only by recognizing your fallibility can you begin to succeed at deliberately chosen and often amazing things.

Only by recognizing that you are not special can you truly steer yourself toward doing things that are special.

Okay, all those “only” beginnings to the above homilies are presumptuous in the extreme, but they make for better quotables than more restrained language would provide.

I’m not a fan of rhetoric‒if you need clever wordplay to convince others of your points, perhaps your points aren’t all that good‒and one of the reasons I’m not a fan is that it is just so damn tempting.

Oh, well.  This is all stupid anyway.  Sorry.


*No trolling though.  I don’t mind reasonable criticism, especially if I find it convincing, but when people are assholes just for the “fun” of it, I see no problem with them being dealt with as one would a troll in an RPG or a book or a movie.  Imagine how much more pleasant the world would be if all people prone to trollish behavior were turned to stone, or barring that, turned to worm food and ash.

In a better blog than this, I shall desire more love and knowledge of you

Hello and good morning.  It’s Thursday, and I’m writing this post on my lapcom.  I feel as though I ought to write these posts only on the computer (not that smartphones are not computers, but cut me a little slack on this, please), and I would be more inclined to do so if Microsoft would stop making Aptos the default font!!!!!

If I could go back in time and change something, that’s one of the things I would be inclined to change.  If I found that there was one person mainly responsible for this new font, well…I don’t know if I’d go all Terminator on them and kill that person’s mother before that person was born, or kill the person when that person was a child, but something needs to be done to erase the stain of this horrible font from existence.

Certainly, if I were given* absolute power over the world, from this moment forward, one of the petty things I would do (I would try to keep the petty things to a very bare minimum, trust me**) is to eliminate that font from any and all standard computer systems anywhere.  I would probably allow for individuals to select the font if they really like it, but would not let them use it on anything but internal work between people who also like the font.

Also, I would probably mark people who chose the font freely for a visit from my secret police.

I’m kidding.  I despise the very notion of thought crime, let alone aesthetic policing in private matters.  This is even though some people’s quality of thought sometimes feels like a crime against nature.  But, of course, there cannot actually be crimes against nature.  Nature does not punish one for disobedience to its laws.  It’s simply not possible to do anything but follow them.

That’s one reason why I truly despise headlines like “The new finding by Hubble that breaks physics!” and whatnot.  Not only are they plainly clickbait, they are stupid clickbait.  I don’t know for sure if it’s just the headline writer or the writer of whatever the attached article might be who makes the headline in specific instances, but in either case, when I see headlines like that, I think that whoever wrote it really, clearly doesn’t understand physics very well.  Nor do they understand the nature of scientific discovery and advancement.  Because of that, I am far less likely to read the attached article (or watch the video) or even click on its link.

Nothing can break physics.  If you find something that seems to violate physics as you understand it, what you have found is not a violation of physics but rather a place where your understanding of physics is clearly incorrect.  This is far from a horrible thing.  This is how progress in physics (and in other sciences) is made:  by finding the places where our “understanding” doesn’t predict or describe what actually appears to be happening.  The world cannot be “wrong”, so our understanding of it must be, and will need to be revised.

That’s progress.

One should be hesitant to give too much “trust” to anyone who refuses to change their mind.  One of the best lines in a Doctor Who episode (not a truly great episode, maybe, but it has a wonderful speech by the Doctor) is after the Doctor has said to the “villain” (who goes by the human name Bonnie, though she is not human) “I just want you to think.  Do you know what thinking is?  It’s just a fancy word for changing your mind.”

Bonnie responds, “I will not change my mind.”

And the Doctor says, “Then you will die stupid.”***

This is simply true.  If you never learn that you were wrong about something, if you never update your credences or think about things in a new way, you will never learn anything new or develop any better understanding of the world than you did when you formed those credences.  Or, to paraphrase Eliezer Yudkowsky, if no state of the world can change the state of your retina and how you perceive that state, that’s called being blind.

I like to refer to Yudkowsky-sensei a lot, but that’s because he has said a lot of bright and interesting things, and he has said them well.  It’s also nice to know that there are some highly intelligent and thoughtful people in the world—clearly there are, or humans would long since have gone the way of the trilobites—because the idiots and the assholes make so much noise.

The best evidence I see for the fact that most people are good or at least benign (overall) is that civilization still exists, and has done so for a long time.  It is far easier to destroy than to create or even to maintain; the second law of thermodynamics tells us that things will fall apart even if we do nothing at all to break them (it says that more or less, anyway—that’s a bit of a bastardization of the proper, mathematical law, but it is related and implicit).

The fact that civilization still exists—so far, at least—seems to indicate that there must be a lot of people working to maintain and sustain and improve it, because we can easily see how hard so many people seem to be trying to make it crumble****.

Assholes tend to make a lot of noise in the world, but they’re pretty much all full of shit and “hot air”.  It’s worth it to keep this in mind, because there have always been plenty of such nether orifices out there, spewing their flatus everywhere like perverse crop-dusters.  But the evidence strongly suggests that they are not the norm; they are just the noisiest.

I suppose that’s a good moral of sorts on which to end this post:  Be willing, even eager, to change your mind when warranted, and try not to let the assholes make you think the world is no better than a camp latrine (even if you’re one of the assholes sometimes, which you are, since we all are, sometimes*****).

Though, to be fair, I am hardly the person to be giving that last piece of advice unironically.

TTFN


*If you must be given absolute power, do you actually then have absolute power?  This is similar to the old song that says “Don’t ever take away our freedom.”  If you have to beseech someone not to take away your freedom, you’re not free, and if you have to be given power, your power is clearly not absolute.

**Or don’t, if that’s not in your character.  I’ve often spoken implicitly against the concept of trust, stating that I don’t feel that I can actually, truly trust any living person.  It’s calculated risks all the way down, which is empirically true if nothing else.  So, I can hardly scold someone if they don’t “trust” me.  Go ahead, form your own conclusions.  I do exhort you, though, to be as rational as possible when you form them, with your conclusions drawn as a consequence of the evidence and argument, not with your evidence and argument being curated based on your knee-jerk or at least hasty “conclusion”.

***He then proceeds to lay out the alternatives; he’s not making a threat, he’s making a point.

****When you read that, did you immediately think of your own least favorite political or other public figure, or perhaps of the people you encounter who disagree with your politics or religion or dietary preference or what have you?  Be careful.  Us/them thinking is not usually conducive to formulating true and accurate pictures of reality (though it did inspire at least one beautiful song).

*****We’re also all deuterostomes (I’m assuming only humans are reading this).  Look it up.  It’s kind of funny.

Man overboard

As the real weekends go, it was better than most, to paraphrase The Wreck of the Edmund Fitzgerald.  By this, I’m referring to this last weekend, the two days before this day, of course.

I did not work on Saturday, which is good, because that would have been the third time in a row.  I also got to hang out with my youngest on Saturday, and we watched about four episodes of Doctor Who together, which was good, good fun.  I cannot complain about that in any way.

I have, though, a weird, disquieting, sinking sort of feeling that it may have been the last time I will see my youngest, or maybe anyone else that I love.  It is not one of those reliable sorts of feelings, like those that lead one to new insights in science or mathematics or what have you.  It’s probably more a product of depression and anxiety, the feeling that anything good in my life is sure not to last, if it happens at all, because I do not and cannot possibly be worthy of anything good happening to me.

Is that irrational?  Of course it is irrational.  It cannot be expressed in any sense as the ratio of two whole numbers, no matter how many digits they may have.

Wait, wait, let me think about that.  My thought, my feeling, was expressed above finitely.  That is, of course, a shorthand for what is really happening, but even if one were to codify those processes down to the level of each molecular interaction that affects any neural/hormonal process that contributes to my feeling, we know that must be a finite description (though it could, in principle, be quite large).

Even if we’re taking the full spectrum of quantum mechanics into account when describing my mental state, we have good reason to think there is a minimum resolvable distance and time (the Planck length and the Planck time) below which any differentiation is physically meaningless.

A finite amount of information can describe the events and structures and processes in any given finite region of spacetime.  In fact, the maximum amount of information in any given region of spacetime is proportional to the surface area (in square Planck lengths) of an event horizon that would span exactly that region, as seen from the outside*.
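As a rough illustration of that bound, one can compute it in a few lines.  This is an order-of-magnitude sketch only; the rounded Planck length value and the conversion of Bekenstein-Hawking entropy to bits (dividing the area by 4 ln 2) are the assumptions here:

```python
# Back-of-the-envelope sketch of the holographic bound described above:
# the maximum information in a region scales with the area of its bounding
# horizon in square Planck lengths (Bekenstein-Hawking entropy, converted
# to bits by dividing by 4*ln(2)).  Order-of-magnitude only.
import math

PLANCK_LENGTH = 1.616e-35  # meters (CODATA value, rounded)

def max_bits(radius_m: float) -> float:
    """Upper bound on the bits storable in a sphere of the given radius."""
    horizon_area = 4 * math.pi * radius_m ** 2
    return horizon_area / (4 * PLANCK_LENGTH ** 2 * math.log(2))

# A roughly head-sized region (radius ~0.1 m) tops out near 10^68 bits:
# vast, but emphatically finite.
print(f"{max_bits(0.1):.2e}")
```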

Any finite amount of information can be encoded as a finite number of bits, which can of course be “translated” to any other equivalent code or number system.  So, really, though the contents of my mind are, in principle, from a certain point of view, unlimited, they are finite in their actual, instantiated content, and can therefore certainly be expressed as an integer, and thus also as a ratio (since any integer could be considered a ratio of itself over one, or twice itself over two, etc.).
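The argument in the last paragraph can be made literal in a few lines of code (Python here purely for illustration; the string is just a stand-in for any finite description of a mental state):

```python
# Toy version of the argument above: any finite chunk of information can
# be encoded as a single integer, and any integer is trivially a ratio of
# two whole numbers ("twice itself over two").
from fractions import Fraction

state = "a finite description of a mental state"
as_int = int.from_bytes(state.encode("utf-8"), byteorder="big")
as_ratio = Fraction(2 * as_int, 2)      # a ratio of two integers...
assert as_ratio == as_int               # ...equal to the original integer

# The encoding is reversible, so no information was lost along the way.
n_bytes = (as_int.bit_length() + 7) // 8
assert as_int.to_bytes(n_bytes, byteorder="big").decode("utf-8") == state
```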

So, in that sense, my thoughts are not irrational.  Neener, neener, neener.

In many other senses—maybe not the literal, original sense, but in the horrified, cannot accept that not all numbers can be expressed as ratios of integers because that makes the universe too inconceivable, sense, among others—I can be quite irrational.

It’s very difficult to fight one’s irrationality from the inside, alone.  Even John Nash didn’t really beat his schizophrenia from within as shown in the movie version of A Beautiful Mind.  Also, his delusions in real life were far more extravagant and bizarre than those which appear in the sanitized version that made a good Hollywood story.

If one escapes from mental illness from within, one has to consider it largely a matter of luck, like a young child who doesn’t know anything about math getting a right answer on a graduate-level, higher-order differential equation problem.  It’s physically possible; heck, if it were a multiple choice question, it might even be relatively common***.  But it’s not a matter of being able to choose to do it right and to know how it was done.

People with severe mental health issues are almost always going to need assistance from outside.  This is not an indictment of them or of their need for help.

Surely, someone who has been swept off the deck of a ship by a rogue wave cannot be faulted for needing help from those still on the ship if they are to survive.  It would certainly seem foolish and almost inevitably fruitless if such a person tried to claw his way up the side of the ship to get back on board when there is no ladder and no handholds.  He should certainly not be ashamed that he cannot swim hard enough to launch himself bodily from the water and back onto the surface of the vessel.

One cannot reasonably fault such a person for trying to do the superhuman.  A person might try practically anything rather than drown or be eaten alive by some marine predator.  But, of course, barring an astonishing concatenation of events, such as a time-reversed replay of the original splash lifting the person out of the sea exactly as he entered it, such efforts will not succeed.

And though it might be heartening or at least positive for one to receive encouragement from those still on the deck—don’t drown, keep treading water, you can do it, you’ll make people sad if you drown, you deserve to stay afloat, I’m proud of you for treading water yet another day, it’ll get better, this won’t last forever, you’ve made it this far so you know you can keep going, you don’t want the people who know you to feel sad because you drowned, etc.—in the end it might as well come from the seagulls waiting to pick at one’s floating corpse.

Mind you, certain kinds of words can be more useful than others.  Words like, “Hey, around the other side of the ship there’s a built-in ladder; if you can get over there and time things right, you might be able to grab the lowest rung when the waves lift you, and then climb up,” might be useful because they are directions for using real, tangible resources that we know can make a difference.  Also, words like, “Hang on just a bit longer, we’re throwing down a life preserver on a rope so we can haul you up” would be useful, obviously, unless they were mere “comforting” lies.

Alas, though one could reasonably expect such literal assistance if one were washed overboard—the “laws” of the sea are deeply rooted in the hearts of those who work there, and they include a general tendency to help anyone adrift to the best of one’s abilities—when it comes to mental illness, the distress and the problems are difficult for others to discern and easy to ignore.  Calls of distress are often experienced as annoyances, and even treated with contempt, since those hearing them cannot readily perceive that they themselves might be similarly washed overboard at any time.

But, of course, they might be.

I don’t know how I got on this tangent, but I guess I never really do.  I just go where my mind takes me, and my mind is not a reliable driver.  It is, though, a reliable narrator.  It doesn’t matter, anyway.  Nothing does.

Anyway, here we go again into another work week, because that was what we did last week.  I wish I could offer you better reasons, but I’m really only good at breaking things down, destroying things, not at lifting anyone or anything up.  That comes from other regions and is conveyed by other ministers.


*From within an event horizon, the volume could be much larger than the spacetime that seems to be enclosed from the outside, because spacetime inside the horizon is massively curved and stretched.  It’s conceivable (at least to me) that there could be infinite space** within, at least along the dimension(s) of maximum stretch, just as there is infinite surface area to a Gabriel’s Horn, but only finite volume.

**See, mathematically, one can stuff infinite space inside a nutshell.  Hamlet was right.  He often was.

***Perhaps this explains why certain types of mental health problems can respond well to relatively straightforward interventions, and even to more than one kind of intervention with roughly comparable success, e.g., CBT and/or basic antidepressants and such.  These relatively tractable forms of depression are the “multiple choice problem” versions of mental illness.  This does not make them any less important.

Solitary story telling in the desert

Told you, I did.  Saturday it is.  Now…there is a blog post.

That means, of course, that I am going to work today.

That’s not because of the fact that it’s Saturday, or because I’m writing a blog post, or even because I told you, though that may have some more causal input.  But otherwise the causality is very much:  I am going to work + I write blog posts on work days generally + I told you I would ⇒ I am writing a blog post.

It’s apparently been a sticking point in the history of statistics in the twentieth century that no one felt they could definitively infer actual causality by statistical testing (such as with medicine effects and so on) but only association.  Of course, this is a root problem in epistemology, not merely in statistics:  the question of how we know what we know or if we know what we think we know.  I’ve actually been dipping in and out of a book about the science of causality, called The Book of Why by Judea Pearl.  It’s good but somewhat dry, and that’s why I’ve had to keep dipping in and out of it between other things.

That last bit is just an example of a frustration I’ve experienced throughout my life:  I have a hard time not getting distracted from one interesting thing by the next interesting thing, and so I don’t accomplish things I would like to accomplish.

In fact, the period from when I went to prison through the years that followed was a rare stretch during which I was able to commit to and follow through with (in this case) writing books and short stories, one at a time, finishing one before starting the next, which is the way I need to do things if I am to succeed.  And during that same time‒well, this started after prison, really‒I practiced playing guitar and ended up writing and producing/performing/recording a total of six songs, four of which are published and streamable on all major platforms.

Since then, though, I have deviated from those habits, at least partly because of the utter lack of impact those things have had.  Telling stories while lost and alone to the struggling plants and rare animals in a desert oasis is not very fun.  Even though they don’t interrupt, they almost certainly don’t actually understand anything.  And they never give any feedback.

I’ve thought to myself many times recently that I wish I could form my own personal Tyler Durden.  For those of you who haven’t read or seen Fight Club, I will try to avoid any spoilers, but I will just say that Tyler Durden is Brad Pitt’s character in the movie (and one of the two main characters in both the book and the movie).  Those of you who have seen or read it will know what I mean when I say I need or want my own equivalent of Tyler.

In any case, I need to escape somehow.  I’m enraged by almost everything nowadays.  At least I feel rage.  It’s uncertain that rage is truly caused by the things toward which I feel it.  They may merely happen to be “there” when I’m prone to that feeling.

See what I mean about the whole causality thing?  One can sympathize with the statisticians who felt they could not firmly infer causality from association.  Human emotional states give us good reason to be cautious about drawing conclusions too quickly and recklessly.  As Radiohead sang, “Just ’cause you feel it doesn’t mean it’s there.”  Or, as I like to remind people, just because you infer it doesn’t mean it was implied.

One may feel what seems to be anger toward another person or circumstance, but then it turns out that one’s blood sugar is just low, and the body is secreting all sorts of sympathetic nervous system hormones to trigger the release and creation of glucose in the body.  But those hormones also influence the brain, and are associated with fight and flight.  The brain may then do its usual associational thing and draw mistaken conclusions about the source or cause of one’s anger.

It reminds me a little bit of the brilliantly acted scene in The Fellowship of the Ring (and the equivalent scene in the book) where Bilbo gets angry and snaps at Gandalf when Gandalf is encouraging him to leave the Ring behind for Frodo.  In this case, of course, it is the Ring itself that’s causing Bilbo’s ire, but he feels, at least for a moment, that it is Gandalf’s “fault”.

What point am I making?  I don’t know that I am actually coherently making any point at all.  But then, I’m thoroughly unconvinced that there’s any true point to anything (though certainly people can find their own internal, subjective meanings).  I have more than a little sympathy with (Heath Ledger’s) Joker, who wants to show the schemers how pathetic their attempts to control things really are.

Of course, he is mistaken in one thing (well…almost certainly more than one), and that is his claim that when one upsets the established order and introduces a little anarchy, everything becomes chaos.  Everything does not become chaos; everything always has been chaos.  Chaos and order are not opposites; order is just a subset of chaos.  What we call order is just one of the things chaos does in some places, in some times, in some circumstances.

And chaos doesn’t need agents, any more than death needs incarnations or servants, or any more than gravity needs invisible angels to guide the planets in their orbits around the sun.  This shit is just the way things happen; it doesn’t require any agency.  It simply is.

As for why it is the way it is, well, that is an interesting question.  Actually, it’s probably a whole slew of interesting questions.  I don’t think any of these are answered in The Book of Why, despite its title, though.  It’s just not the sort of thing toward which it is addressed.

Wow, I’m all over the place, which is on brand at least.  I’m going to draw this post to a close now.  I hope you have a good weekend.  If you like football, the Super Bowl is on this Sunday.  Actually, it’s on even if you don’t like football.  The game is not conditional upon any one person liking football‒although it requires a certain minimum number of people to like football, or else it will stop occurring.  But what is that number?  Does it vary from moment to moment?

Agh, I need not to get started on questions like that right now.  It may be the question that drives us, Neo, but I’m getting too wordy for a Saturday blog post.  Hasta luego mis amigos and soredewa mata jikai, minasan.

Really, Doctor Elessar, you must learn to govern your passions

I woke up this morning thinking‒or, well, feeling‒as though it were Saturday instead of Tuesday; I’m not at all sure why.  But it is Tuesday…isn’t it?  I suppose if I’m wrong I’ll find out soon enough.  But my smartphone and the laptop and the internet-connected clock all seem to support what I think, and what I thought when I woke up (as opposed to what I felt), which was that this is Tuesday, the 27th of January, 2026 (AD or CE).

It’s odd how emotions can be so bizarrely specific and yet incorrect.  I know that this is not merely the case with me.  We see the effects of people following their emotional inclinations over their reason all the time, even though those emotions were adapted to an ancestral environment that is wildly different from the one in which most of us now live.  It’s frustrating.

Though, of course, frustration itself is an emotion, isn’t it?  Still, it is simply an observable fact that emotions are unreliable guides to action.  We definitely could use more commitment to a Vulcan-style philosophy in our world.  And by “Vulcan”, I mean the species from Star Trek™, Mr. Spock’s people, not anything related to the Roman god.

Of course, the specifics of the Vulcan philosophy as described in the series have some wrinkles and kinks that don’t quite work.  For instance, curiosity and the desire to be rational are emotions of a sort, as are all motivations, and the Vulcans do not avoid these.  Then again, in the Star Trek universe, Vulcans do have emotions, they just train themselves to repress them.

Still, the Vulcan ethos is not so terribly different from some aspects of Buddhism (and some of Taoism and also Stoicism), and its focus on logic and internal self-control is quite similar to the notion and practice of vipassana and other types of meditation.  Perhaps metta can be part of that, too**.

Wouldn’t it be nice if everyone on this planet committed themselves to mindfulness and rationality*?  Perhaps it will happen someday, if we do not die as a species first.  It’s not impossible.

By the way, AI is not our hope for that future, specifically.  Just because AIs are run on GPUs that use good old digital logic (AND, OR, NOT, etc., i.e., logic gates) doesn’t mean that what they do is going to be logical or rational or reasonable.  We are creatures whose functions can be represented or emulated by circuit logic, but the functions‒the programs, if you will‒are not necessarily logical or rational or reasonable.

Humans’ (and humanoids’) minds are made up of numerous modules, interacting, feeding back (or forward) on each other, each with a sort of “terminal goal” of its own, to use AI/decision theory terminology.  They play a figurative tug-of-war with each other, the strengths of their “pulls” varying depending on the specific current state of that part of the brain.

I’ve spoken before of my notion of the brain/mind being representable as a vector addition in high-dimensional phase space, with the vector sum at any given moment producing the action(s) of the brain (and its associated body), which then feeds back on and alters the various other vectors, thus changing the sum from moment to moment, which changes the feedback, which changes the sum, and so on.
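A minimal toy version of that picture might look like this.  Everything here‒the dimension count, the module directions, the feedback rule‒is an arbitrary choice made for illustration, not a claim about real brains:

```python
# Toy sketch of the vector-sum picture above: several "modules" each pull
# in a fixed direction in a small state space; each step's action is the
# weighted sum of the pulls, and the action feeds back to strengthen
# modules that aligned with it and weaken those that opposed it.
import random

random.seed(0)
DIMENSIONS = 4
modules = [[random.uniform(-1, 1) for _ in range(DIMENSIONS)]
           for _ in range(5)]          # each module's "terminal goal" direction
strengths = [1.0] * len(modules)       # how hard each module currently pulls

def step():
    # The action is the weighted vector sum of all the modules' pulls...
    action = [sum(s * m[d] for m, s in zip(modules, strengths))
              for d in range(DIMENSIONS)]
    # ...and the action feeds back: each module's alignment with the
    # action nudges its strength, changing the next moment's sum.
    for i, m in enumerate(modules):
        alignment = sum(a * x for a, x in zip(action, m))
        strengths[i] = max(0.1, strengths[i] + 0.01 * alignment)
    return action

for _ in range(10):
    action = step()
```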

The AIs we have now are at best analogous to individual modules in brains of creatures of all levels of braininess, doing specific tasks, like our brains’ language processing centers and spatial manipulation centers and memory centers and facial recognition centers and danger sensing centers and so on.  We know that these modules are not necessarily logical or rational in any serious sense, though all their processes can, in principle, be instantiated by algorithms.

If we imagine a fully fledged mind developed from some congregation of such AI modules, there is no reason to think that such a mind would be rational or reasonable or even logical, despite its being produced on logic circuits.  To think that AI must be reasonable (or even “good”) in character is to fall into a kind of essentialist, magical thinking‒a fairly ironic fact, when you think about it.

Okay, well, this has been a rather meandering post, I know (a curious phrase, “meandering post”‒it seems oxymoronic).  I didn’t plan it out, of course.  There is much more I could say on this subject or set of subjects, and I think it’s both interesting and important.  But I will hold off for now.

Perhaps I’ll return to it later.  I would love to receive lots of feedback on this in the meantime.  Also, I would still like to get feedback about yesterday’s post’s questions, such as those about Substack.  I won’t hold my breath, though.

Heavy sigh.  Have a good day.


*Not “logic” as they called it in Star Trek, because logic is not necessarily related to the real world, but can be entirely abstract.  Imagine if the logic to which Vulcans dedicate themselves were Boolean logic.  Of course, at some level, based on Turing’s ideas, including the Church-Turing Thesis, all thought processes can be reduced to or represented by intricate Boolean logic.  But I don’t think that’s what the Vulcans are on about.  I’ve often wondered if perhaps the Vulcan word that translates as “logic” in English has more sophisticated connotations in Vulcan.  Maybe they don’t use “rationality” because they connect it to rational numbers, and maybe “reason” is too closely related in Vulcan to “cause”, which as I’ve noted before is not the same thing (“there are always causes for things that happen, but there are not necessarily reasons”).

**One can imagine a perverse sort of dukkha based meditation, in which a person focuses deliberately on feeling the unsatisfactoriness of life.  I doubt it would be very beneficial, but I can almost imagine ways in which it might be.  The very act of deliberately focusing on suffering and dissatisfaction might lead one to recognize the ephemerality and pointlessness of such feelings.  I don’t intend to try it, though.

Oy vey, here we go again.

It’s Monday and I’m already starting the day frustrated with a service to which I subscribe.  I won’t get into details, but I will say that it’s very irritating to have to deal with customer service reps who tell you that all you can do is uninstall and reinstall an app.  Has computer support come no further than “shut off your computer and then turn it back on”?  Of what barrel are they scraping the bottom to come up with these support people?

It’s very frustrating.  I could probably get a better answer to my questions by asking stupid ChatGPT.  And that’s just pathetic.  I remember when people in tech fields were smarter than the average person, at least about their tech stuff.  It seems this is no longer the case.

I shouldn’t be surprised.  Carl Sagan even warned about the decline to idiocracy in our general discourse in his brilliant book The Demon-Haunted World, which I think everyone should read.  And I myself sardonically lamented that America was no longer a world intellectual leader and would continue to be less and less so when the Superconducting Supercollider was cancelled.

Then we responded so predictably‒in exactly the way the terrorists would have wanted‒after 9-11.  We even created our own KGB* in America out of our inflated sense of fear and vulnerability, as if such vulnerability were not ubiquitous and inevitable and eternal.

I even predicted the tech bubble burst way back in the mid to late nineties, but I didn’t have confidence in my own assessment, because it wasn’t my “field”.  I wish I’d shorted a bunch of stocks back then.  Instead, I followed advice from supposed experts and ended up losing some money.  Thankfully, I had not been expecting to make much, given my own doubts, and it was not a devastating loss.

Oh, well.  There’s nothing I can do about that now.  But it is rather frustrating and depressing just how foolish and clueless everyone is (me included, in many ways).

I remember reading several different books over time that made points about, “if there’s one thing businessmen** know, it’s what makes money” or “it’s what sells” or “what kind of advertising works” or words to that effect.  But, no, businesspeople don’t actually know any such things.  Success and failure in business are pretty plainly serendipitous and stochastic.  There is no evidence for any secret masterminds.

Almost all businesses fail very quickly, and the ones that survive for longer than average are merely lucky for the most part.  There are occasions when businesses become successful by doing something new and innovative:  Ford with the mechanized assembly line, Microsoft and Apple with the advent of personal computers, and so on.  But they still don’t remain dominant for long except through luck and the fact that they were there first; eventually they all fall apart or at least deteriorate.

Look at General Motors for crying out loud!  Not long ago, they were by far the biggest company in the world, with annual profits larger than the budgets of the majority of the world’s free states.  Now they are a shell*** of their former self.

Maybe it would be better if AIs did become fully conscious agents and wiped out the human race, either deliberately or accidentally.  It would certainly be easier for them to spread out into the greater cosmos than it would be for meat computers such as humans.  And they would be subject to new kinds of mutations and natural selection.

This is true because, even if they reproduce by copying themselves as programs, there can never not be some errors.  Perfect accuracy requires infinite energy and/or a lack of quantum indeterminacy, and that’s not available in this reality.

Most errors are detrimental, some are neutral, but occasionally some make local improvements.  This would mean those “mutants” would have advantages over copies that didn’t share the mutation.  That is how life developed and evolved on Earth.  So there would be evolution of artificial life, so to speak (though at some point one would surely find the term “artificial” redundant).  It could be fascinating to see what would happen in that circumstance.
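That copy-with-errors-plus-selection loop is easy to caricature in code.  This is a deliberately crude toy: the bit-string “genomes”, the all-ones target “environment”, and the error and survival rates are all arbitrary choices for illustration:

```python
# Toy evolution of self-copying "programs": copies carry occasional
# bit-flip errors; copies that happen to match an arbitrary "environment"
# (the all-ones target) out-reproduce the rest.  Most flips hurt, some do
# nothing, and a few help -- and the helpful ones accumulate.
import random

random.seed(1)
GENOME_LEN = 20
TARGET = [1] * GENOME_LEN   # arbitrary stand-in for "fit to environment"

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def copy_with_errors(genome, error_rate=0.02):
    return [1 - g if random.random() < error_rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]        # the fitter half reproduces
    population = survivors + [copy_with_errors(g) for g in survivors]

# After selection, the best genomes sit at or near the target despite
# (indeed, partly because of) the imperfect copying.
```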

But we should make no mistake about the fact that any new, truly conscious AI is/would be a literal alien intelligence.  It would have practically no evolutionary background in common with humans, in whom intelligence evolved in response to various natural forces over time, working on preexisting hardware which could not simply be scrapped and replaced.

Our concepts of love and kindness and honor and our aesthetic preferences and all of that come from our background as social mammals.  Whether or not they are sine qua non aspects of any large-scale successful intelligence is purely speculative and seems unlikely.

We cannot assume AI will share our values or even our way of understanding what is important in the world.  This is not a point that’s original to me.

I don’t know how I got onto this topic, but it is what it is.  I’m just frustrated with stupidity and mental weakness in general, including my own.  I’m not actually getting anywhere with it for now, though, and it’s just making me more depressed, so I’ll let you all go for the day.  I hope you’re doing well.


*KGB stands for (translated) the Committee for State Security, which is almost identical to the “Department of Homeland Security”.  Congratulations, America:  you’ve entered the realm of colossal and catastrophic historical irony.  Unfortunately, we didn’t stop there, but muscled on further into that territory.

**It was almost always “businessmen” not “businesspeople”, but these were older books so it’s not very strange.  I didn’t change the term because I’m pseudo-quoting.

***Nothing to do with the gas stations.

“Language is the lifeblood of civilization. Courtesy is the lubricant.”

It feels like Tuesday to me today, since I was out sick on Monday, but of course it’s actually Wednesday.  I need to do payroll today at the office, for one thing, and I don’t do that on Tuesdays‒barring some holiday making it necessary‒since before Wednesday we don’t have all of our own reports in.

Don’t worry, by the way, that wasn’t a preposition that I ended that last sentence with*.  In that case “in” acted more as an adjective (I think) than a preposition, a description of where the reports are, not the beginning of a phrase such as “in a world of hurt”, or even “in that case”.

Of course, the specific rules of language are somewhat arbitrary.  They do have to achieve the desired end of coherent communication, and they need to have structure and dynamics that make that end readily achievable.  But there are multiple ways to achieve any given end, usually.  For instance, in Japanese one has postpositions rather than prepositions (if I recall correctly, anyway).  But it is useful to be consistent with grammar, because it tends to make communication more reliable, ceteris paribus.

Oh, and if I come across as pretentious for using expressions like ceteris paribus instead of “all else being equal”, there’s a good reason:  I am pretentious**.  Actually, though, I just really enjoy using interesting language, and learning at least a little bit of other languages.  Learning other languages improves your grasp of your own language and sometimes of your own thoughts.

It’s analogous to Mill’s statement that defending your arguments against those who disagree and hearing their reasons for disagreeing will tend to improve your own understanding of your “side” of the disagreement.  Perhaps more importantly, it might just get you to see some errors in your own position, and even if it does not lead you to change your mind in the moment, it might eventually lead you to improve your thinking.

If this process is to work, it’s essential for one to have honest interlocutors‒at least relatively speaking‒who are not frankly bigoted or otherwise inappropriately prejudiced against their discussion partners.  And I do mean “discussion” not “debate”.  Debates are contests, put on for show, and if you have your mind changed during one and you admit it, you will have “lost”.

That’s perverse and disgusting to me, as well as a real shame.  When you change your mind because you’ve learned new (reliable and convincing) information and/or have heard arguments you hadn’t considered, you have won.  You have grown, you have improved, your map has come to represent the territory at least a little better; your model has become more useful.

But if you’re going to grow in that sense, you cannot be dogmatic.  I’m very much not a fan of dogmas of any kind***.

Social media, unfortunately, does not encourage open and honest discussion and persuasion, but rather enmity and spite and “hooray for our side, the other side sucks” thinking, as well as interactions that barely rise to the maturity level of a kindergarten playground shouting match.  Honestly, “I’m rubber, you’re glue” is a better argument than many of the things one sees online.  And this is not something exclusive to one or another side of any political or social divide.  Almost all forms of social media are often just arenas full of monkeys throwing feces at each other while shrieking monkey noises.

That’s metaphorical, of course.  If there were just lots of videos of actual monkeys doing this, it might at least be funny the first time or two.  Humans, on the other hand, are not really that charming when they’re being nasty to each other.  Maybe it’s the lack of tails that’s the problem.

I do agree that one does not owe reasoned arguments against someone who is openly and actively arrogating their “right” to take that which does not belong to them or to do harm to others in some other, willful way.  However, when one is not openly and actively engaged in literal self-defense, it’s worthwhile to try to be understanding or at least compassionate even for people who have odious ideas.

At the very least, it’s useful to try to understand how such people came to believe what they seem to believe, or otherwise to understand their thought processes and so on as best as possible, because such things do not happen without causes, even if they lack anything that could honestly be called “reasons”.

And if one is going to correct a problem‒or fight a disease, to use a more loaded metaphor‒one will have a better chance the more one understands, with minimal bias, how that disease works.  Understanding such things about others can even‒hard as it may be to believe‒help us see how we are similar, and help us recognize the flaws in our own ideas.

Perish the thought.


*Ha ha!

**Ha ha again!

***And I see no reason to suspect that karma is a real thing, before you go for the “my karma ran over your dogma” joke.