I woke up this morning thinking‒or, well, feeling‒as though it were Saturday instead of Tuesday; I’m not at all sure why. But it is Tuesday…isn’t it? I suppose if I’m wrong I’ll find out soon enough. But my smartphone and the laptop and the internet-connected clock all seem to support what I think, and what I thought when I woke up (as opposed to what I felt), which was that this is Tuesday, the 27th of January, 2026 (AD or CE).
It’s odd how emotions can be so bizarrely specific and yet incorrect. I know that this is not merely the case with me. We see the effects of people following their emotional inclinations over their reason all the time, even though those emotions were adapted to an ancestral environment that is wildly different from the one in which most of us now live. It’s frustrating.
Though, of course, frustration itself is an emotion, isn’t it? Still, it is simply an observable fact that emotions are unreliable guides to action. We definitely could use more commitment to a Vulcan-style philosophy in our world. And by “Vulcan”, I mean the species from Star Trek™, Mr. Spock’s people, not anything related to the Roman god.
Of course, the specifics of the Vulcan philosophy as described in the series have some wrinkles and kinks that don’t quite work. For instance, curiosity and the desire to be rational are emotions of a sort, as are all motivations, and the Vulcans do not avoid these. Then again, in the Star Trek universe, Vulcans do have emotions, they just train themselves to repress them.
Still, the Vulcan ethos is not so terribly different from some aspects of Buddhism (and some of Taoism, and also Stoicism), and its focus on logic and internal self-control is quite similar to the notion and practice of vipassana and other types of meditation. Perhaps metta can be part of that, too**.
Wouldn’t it be nice if everyone on this planet committed themselves to mindfulness and rationality*? Perhaps it will happen someday, if we do not die as a species first. It’s not impossible.
By the way, AI specifically is not our hope for that future. Just because AIs run on GPUs that use good old digital logic (AND, OR, NOT, etc., i.e., logic gates) doesn’t mean that what they do is going to be logical or rational or reasonable. We are creatures whose functions can be represented or emulated by circuit logic, but the functions‒the programs, if you will‒are not necessarily logical or rational or reasonable.
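To make that point concrete, here’s a toy sketch, entirely my own invention (the “policy” it encodes is made up for illustration), of how a behavior built out of nothing but Boolean gates can still be something arbitrary rather than rational:

```python
# Pure Boolean gates -- the same primitives a GPU's circuits reduce to.
def AND(a, b): return a and b
def OR(a, b): return a or b
def NOT(a): return not a

def XOR(a, b):
    # XOR composed entirely from AND/OR/NOT: (a OR b) AND NOT (a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

# An arbitrary "policy": flee if and only if exactly one of
# (hungry, scared) holds. Nothing about the gates makes this policy
# reasonable; they just compute whatever function was wired in.
def flee(hungry, scared):
    return XOR(hungry, scared)

for hungry in (False, True):
    for scared in (False, True):
        print(hungry, scared, "->", flee(hungry, scared))
```

The substrate is flawlessly logical; the function it computes is whatever we happened to wire in, sensible or not.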
Humans’ (and humanoids’) minds are made up of numerous modules, interacting, feeding back (or forward) on each other, each with a sort of “terminal goal” of its own, to use AI/decision theory terminology. They play a figurative tug-of-war with each other, the strengths of their “pulls” varying depending on the specific current state of that part of the brain.
I’ve spoken before of my notion of the brain/mind being representable as a vector addition in high-dimensional phase space, with the vector sum at any given moment producing the action(s) of the brain (and its associated body), which then feeds back on and alters the various other vectors, thus then changing the sum from moment to moment, which changes the feedback, which changes the sum, and so on.
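Here is a toy numerical sketch of that picture. The module names, the tiny four-dimensional state space, and the feedback rule are all made up for illustration; this shows the shape of the dynamics, not a model of any actual brain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each "module" pulls in its own direction in a small state space.
modules = {name: rng.normal(size=4)
           for name in ("hunger", "fear", "curiosity", "language")}
strengths = {name: 1.0 for name in modules}

for step in range(5):
    # The action at this moment is the weighted vector sum of the pulls.
    action = sum(strengths[n] * v for n, v in modules.items())
    # The action then feeds back, altering each module's pull strength,
    # which changes the next sum, which changes the next feedback, etc.
    for n, v in modules.items():
        strengths[n] = max(0.1, strengths[n] + 0.1 * float(v @ action))
```

The point of the sketch is just the loop structure: sum, act, feed back, re-sum, forever.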
The AIs we have now are at best analogous to individual modules in the brains of creatures of all levels of braininess, doing specific tasks, like our brains’ language-processing centers, spatial-manipulation centers, memory centers, facial-recognition centers, danger-sensing centers, and so on. We know that these modules are not necessarily logical or rational in any serious sense, though all their processes can, in principle, be instantiated by algorithms.
If we imagine a fully fledged mind developed from some congregation of such AI modules, there is no reason to think that such a mind would be rational or reasonable or even logical, despite its being produced on logic circuits. To think that AI must be reasonable (or even “good”) in character is to fall into a kind of essentialist, magical thinking‒a fairly ironic fact, when you think about it.
Okay, well, this has been a rather meandering post, I know (a curious phrase, “meandering post”‒it seems oxymoronic). I didn’t plan it out, of course. There is much more I could say on this subject or set of subjects, and I think it’s both interesting and important. But I will hold off for now.
Perhaps I’ll return to it later. I would love to receive lots of feedback on this in the meantime. Also, I would still like to get feedback about yesterday’s post’s questions, such as those about Substack. I won’t hold my breath, though.
Heavy sigh. Have a good day.
*Not “logic” as they called it in Star Trek, because logic is not necessarily related to the real world, but can be entirely abstract. Imagine if the logic to which Vulcans dedicate themselves were Boolean logic. Of course, at some level, based on Turing’s ideas, including the Church-Turing Thesis, all thought processes can be reduced to or represented by intricate Boolean logic. But I don’t think that’s what the Vulcans are on about. I’ve often wondered if perhaps the Vulcan word that translates as “logic” in English has more sophisticated connotations in Vulcan. Maybe they don’t use “rationality” because they connect it to rational numbers, and maybe “reason” is too closely related in Vulcan to “cause”, which as I’ve noted before is not the same thing (“there are always causes for things that happen, but there are not necessarily reasons”).
**One can imagine a perverse sort of dukkha-based meditation, in which a person focuses deliberately on feeling the unsatisfactoriness of life. I doubt it would be very beneficial, but I can almost imagine ways in which it might be. The very act of deliberately focusing on suffering and dissatisfaction might lead one to recognize the ephemerality and pointlessness of such feelings. I don’t intend to try it, though.

Well, there is meditation on the body and how unreliable it is, how it gets decrepit, dies, decays, etc., and is ultimately impermanent and unsatisfactory, done by Theravada Buddhist monks. And there are yogis who meditate (and even live) in charnel grounds. But these practices are usually done by advanced practitioners, because they would freak out ordinary people.
Yeah, that makes sense. And I’m still at least cautious and perhaps skeptical about how much good those practices do. But it’s very interesting.
You would only do those advanced practices under the guidance of a teacher, and only when you are ready, or you could literally go nuts. But regular mindfulness, breath meditation, chanting, etc., can be done by anyone.
Yes. I have no doubt that it would be a significantly better world if everyone did it.
Temperance, Doc, temperance!