Really, Doctor Elessar, you must learn to govern your passions

I woke up this morning thinking‒or, well, feeling‒as though it were Saturday instead of Tuesday; I’m not at all sure why.  But it is Tuesday…isn’t it?  I suppose if I’m wrong I’ll find out soon enough.  But my smartphone and the laptop and the internet-connected clock all seem to support what I think, and what I thought when I woke up (as opposed to what I felt), which was that this is Tuesday, the 27th of January, 2026 (AD or CE).

It’s odd how emotions can be so bizarrely specific and yet incorrect.  I know that this is not merely the case with me.  We see the effects of people following their emotional inclinations over their reason all the time, even though those emotions were adapted to an ancestral environment that is wildly different from the one in which most of us now live.  It’s frustrating.

Though, of course, frustration itself is an emotion, isn’t it?  Still, it is simply an observable fact that emotions are unreliable guides to action.  We definitely could use more commitment to a Vulcan-style philosophy in our world.  And by “Vulcan”, I mean the species from Star Trek™, Mr. Spock’s people, not anything related to the Roman god.

Of course, the specifics of the Vulcan philosophy as described in the series have some wrinkles and kinks that don’t quite work.  For instance, curiosity and the desire to be rational are emotions of a sort, as are all motivations, and the Vulcans do not avoid these.  Then again, in the Star Trek universe, Vulcans do have emotions; they just train themselves to repress them.

Still, the Vulcan ethos is not so terribly different from some aspects of Buddhism (and some of Taoism, and of Stoicism), and its focus on logic and internal self-control is quite similar to the notion and practice of vipassana and other types of meditation.  Perhaps metta can be part of that, too**.

Wouldn’t it be nice if everyone on this planet committed themselves to mindfulness and rationality*?  Perhaps it will happen someday, if we do not die as a species first.  It’s not impossible.

By the way, AI, specifically, is not our hope for that future.  Just because AIs are run on GPUs that use good old digital logic (AND, OR, NOT, etc., i.e., logic gates) doesn’t mean that what they do is going to be logical or rational or reasonable.  We are creatures whose functions can be represented or emulated by circuit logic, but the functions (the programs, if you will) are not necessarily logical or rational or reasonable.
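To make that concrete, consider a toy example (entirely invented for this post): a little “preference circuit” built from nothing but Boolean gates.  The substrate is pure logic, yet the function it computes is an intransitive preference cycle, which is about as classically irrational as a function can get.

```python
# A toy "preference circuit" built from nothing but Boolean gates.
# The substrate is pure logic, yet the function computed is an
# intransitive ("irrational") preference: A over B, B over C, C over A.

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

# Encode the options as two bits: A = (0,0), B = (0,1), C = (1,0).
def prefers(x, y):
    """Return 1 if the circuit 'prefers' option x over option y."""
    x1, x0 = x
    y1, y0 = y
    a_over_b = AND(AND(NOT(x1), NOT(x0)), AND(NOT(y1), y0))  # x is A, y is B
    b_over_c = AND(AND(NOT(x1), x0), AND(y1, NOT(y0)))       # x is B, y is C
    c_over_a = AND(AND(x1, NOT(x0)), AND(NOT(y1), NOT(y0)))  # x is C, y is A
    return OR(a_over_b, OR(b_over_c, c_over_a))

A, B, C = (0, 0), (0, 1), (1, 0)
print(prefers(A, B), prefers(B, C), prefers(C, A))  # 1 1 1 -- a preference cycle
```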

Humans’ (and humanoids’) minds are made up of numerous modules, interacting, feeding back (or forward) on each other, each with a sort of “terminal goal” of its own, to use AI/decision theory terminology.  They play a figurative tug-of-war with each other, the strengths of their “pulls” varying depending on the specific current state of that part of the brain.

I’ve spoken before of my notion of the brain/mind being representable as a vector addition in high-dimensional phase space, with the vector sum at any given moment producing the action(s) of the brain (and its associated body).  That output then feeds back on and alters the various component vectors, changing the sum from moment to moment, which changes the feedback, which changes the sum, and so on.
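Just to make the picture a bit more concrete, here is a minimal toy version of that feedback loop.  To be clear, the dimensions, coefficients, and update rule here are all invented purely for illustration; this is a sketch of the notion, not a claim about actual neuroscience.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modules, dim = 5, 8                      # five "modules" pulling in an 8-D state space
pulls = rng.normal(size=(n_modules, dim))  # each module's current "pull" vector

state = np.zeros(dim)
for t in range(100):
    action = pulls.sum(axis=0)      # the vector sum at this moment produces the action
    state += 0.1 * action           # the action alters the overall state...
    pulls += 0.01 * np.tanh(state)  # ...which feeds back on every module's pull
```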

The AIs we have now are at best analogous to individual modules in brains of creatures of all levels of braininess, doing specific tasks, like our brains’ language processing centers and spatial manipulation centers and memory centers and facial recognition centers and danger sensing centers and so on.  We know that these modules are not necessarily logical or rational in any serious sense, though all their processes can, in principle, be instantiated by algorithms.

If we imagine a fully fledged mind developed from some congregation of such AI modules, there is no reason to think that such a mind would be rational or reasonable or even logical, despite its being produced on logic circuits.  To think that AI must be reasonable (or even “good”) in character is to fall into a kind of essentialist, magical thinking‒a fairly ironic fact, when you think about it.

Okay, well, this has been a rather meandering post, I know (a curious phrase, “meandering post”‒it seems oxymoronic).  I didn’t plan it out, of course.  There is much more I could say on this subject or set of subjects, and I think it’s both interesting and important.  But I will hold off for now.

Perhaps I’ll return to it later.  I would love to receive lots of feedback on this in the meantime.  Also, I would still like to get feedback about yesterday’s post’s questions, such as those about Substack.  I won’t hold my breath, though.

Heavy sigh.  Have a good day.


*Not “logic” as they called it in Star Trek, because logic is not necessarily related to the real world, but can be entirely abstract.  Imagine if the logic to which Vulcans dedicate themselves were Boolean logic.  Of course, at some level, based on Turing’s ideas, including the Church-Turing Thesis, all thought processes can be reduced to or represented by intricate Boolean logic.  But I don’t think that’s what the Vulcans are on about.  I’ve often wondered if perhaps the Vulcan word that translates as “logic” in English has more sophisticated connotations in Vulcan.  Maybe they don’t use “rationality” because they connect it to rational numbers, and maybe “reason” is too closely related in Vulcan to “cause”, which, as I’ve noted before, is not the same thing (“there are always causes for things that happen, but there are not necessarily reasons”).

**One can imagine a perverse sort of dukkha-based meditation, in which a person focuses deliberately on feeling the unsatisfactoriness of life.  I doubt it would be very beneficial, but I can almost imagine ways in which it might be.  The very act of deliberately focusing on suffering and dissatisfaction might lead one to recognize the ephemerality and pointlessness of such feelings.  I don’t intend to try it, though.

“There are times I almost think I am not sure of what I absolutely know…”

Since yesterday was Monday, the 30th of June, it’s almost inevitable that today would be Tuesday, the 1st of July.  And, in fact, that is the case, unless I am wildly mistaken.

If I were to be wildly mistaken about such a thing, it’s rather interesting to consider just how I could come to be so wildly mistaken about something so prosaic and so reliably consistent.  It is from such speculations that—sometimes—ideas for stories begin.

This is not one of those times, however.  I’m not thinking about any kind of story related to that notion at all, though at times I might consider it an interesting takeoff for some supernatural horror tale.  If any of you find yourselves inspired to write a story—of any kind—based on my opening “question”, you should feel free to do so.  I, at least, will give you no trouble.

These sorts of thoughts also remind me of a post that Eliezer Yudkowsky wrote (“How to Convince Me That 2 + 2 = 3”), which also appeared as a section in his book Rationality: From AI to Zombies.  I won’t try to recapitulate his entire argument, since he makes it quite well, but it was basically a response to someone who had said or written that, while they considered it reasonable to have an open mind, they couldn’t even imagine the sort of argument or situation that could convince them that 2 + 2, for instance, was not 4 but was instead, say, 3.

Yudkowsky, however, said that it was quite straightforward what sort of evidence could make him believe that 2 + 2 = 3; it would be the same kind of evidence that had convinced him that 2 + 2 = 4.  In other words, if whenever he had two of a thing and added two more, the subsequent count always came to three, then, though he might be puzzled at first, and assuming the change and all its consequences were consistent with all other forms of counting, he would eventually just internalize it.  He might wonder how he had been so obviously mistaken for so long with the whole “4” thing, but that would do it.
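Purely as an illustration of that kind of updating (with numbers I’ve invented for the purpose, far tidier than real epistemology would be), here is Bayes’ rule grinding down an almost-certain prior under repeated, consistent counter-evidence:

```python
# Toy Bayesian updating: how repeated, consistent observations could
# overturn even an extreme prior.  All the probabilities are invented.
p4 = 1 - 1e-9            # prior probability that 2 + 2 = 4
p_three_given_4 = 0.001  # chance of counting three anyway (a rare miscount)
p_three_given_3 = 0.99   # chance of counting three if the total really is three

for trial in range(8):   # eight consecutive counts that all come up three
    numerator = p_three_given_4 * p4
    p4 = numerator / (numerator + p_three_given_3 * (1 - p4))
    print(f"after count {trial + 1}: P(2 + 2 = 4) = {p4:.3g}")
```

Within a handful of counts, the posterior collapses.  No belief, in other words, is literally beyond the reach of evidence.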

This argument makes sense, and it raises an important point related to what I said last week about dogmatic thinking.  One should always, at least in principle, be open to reexamining one’s conclusions, and even one’s convictions, if new evidence and/or reasoning comes to bear.

That doesn’t mean that all ideas are equally up for grabs.  As Jefferson pointed out about governments in the Declaration of Independence, things that are well established and which have endured successfully shouldn’t be cast aside for “light and transient causes”.

So, for instance, if you’ve come to the moral conclusion that it’s not right to steal from other people, and you’re pretty comfortable with that conclusion, you don’t need to doubt yourself significantly anytime anyone tries to justify their own personal malfeasance.  Most such justifications will be little more than excuse-making.  However, if you should encounter a new argument or new data or what have you* that really seems to contradict your conclusion, it would be unreasonable not to reexamine that conclusion, at least, and to try to do so rigorously and honestly.

There are certain purely logical conclusions that will be definitively true given the axioms of a particular system, such as “If A = B and B = C then A = C”, and these can be considered reasonably unassailable.  But it still wouldn’t be foolish to give ear if some reasonable and intelligent and appropriately skilled person says they think they have a disproof of even that.  They may be wrong, but as John Stuart Mill pointed out, listening to arguments against your beliefs is a good way to sharpen your own understanding of those beliefs.
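(Incidentally, that particular chain of reasoning is so solid that a proof assistant will verify it mechanically.  Here is a trivial rendering in Lean 4, just as a sketch, with A, B, and C taken to be natural numbers:)

```lean
-- "If A = B and B = C then A = C", checked by Lean's kernel.
example (A B C : Nat) (h₁ : A = B) (h₂ : B = C) : A = C :=
  h₁.trans h₂
```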

For instance, how certain are you that the Earth is round, not flat?  How well do you know why the evidence is so conclusive?  Could you explain why even the ancient Greeks and their contemporaries could already tell that the Earth was round?
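The most famous ancient demonstration is Eratosthenes’ measurement, from around 240 BC: at the summer solstice, the noon Sun cast no shadow at Syene but stood about 7.2 degrees from vertical at Alexandria, roughly 5,000 stadia to the north.  The arithmetic, using the traditional round figures, is almost embarrassingly simple:

```python
# Eratosthenes' estimate of the Earth's circumference, using the
# traditional (approximate) figures handed down from antiquity.
shadow_angle_deg = 7.2   # Sun's angle from vertical at Alexandria (1/50 of a circle)
distance_stadia = 5_000  # Syene to Alexandria
circumference = distance_stadia * (360 / shadow_angle_deg)
print(circumference)     # 250000.0 stadia
```

Depending on which ancient stadion you assume, that figure comes within a few percent of the modern value of about 40,000 km.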

How sure are you that your political “opponents” are incorrect in their ideas and ideals?  Have you considered their points of view in any form other than sound bites and tweets and memes shared on social media, usually by people with whom you already agree?  Can you consider your opponents’ points of view not merely with an eye to puncturing them, but with an eye to understanding them?

Even if there’s no real chance that you’ll agree with them, it’s fair to recognize that almost no one comes to their personal convictions for no reason whatsoever, or purely out of perversity or malice.  At the very least, compassion (which I also wrote a little bit about last week) should dictate trying to recognize and consider why other people think the way they do.

Sometimes, if for no other reason, it is through understanding how someone comes to their personal beliefs that one can best see how to persuade them to change those beliefs (assuming you are not swayed by their point of view).

This is a high bar to set when it comes to public reasonableness, I know, but I think it’s worth seeking that level.  Why aim to be anything less than the best we can strive to be, as individuals and as societies?  We may never quite reach our ideals, but we may at least be able to approach them asymptotically.  It seems worth the effort.

But I could be wrong.


*I don’t have any idea what such an argument or such evidence would be, but that’s part of the point.  Presumably, if I were being intellectually honest, and someone raised such a new argument, I would recognize it for what it was.