April, come she has. No contradictions allowed.

Well, it’s the first of April, so‒April Fools!  Except that, given that it is April Fools’ (Fool’s?) Day, to say “April Fools!” about the fact that it is April 1st would be contradictory.  It’s rather like the classic self-referential paradox:  “This sentence is a lie.”  If that sentence is a lie, then what it says is false, which means it is not a lie; but if it is not a lie, then what it says is true, which means it is a lie; and so on, forever.
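
If you enjoy making toys of such things, you can even check the impossibility mechanically.  Here is a throwaway sketch in Python (nothing rigorous, just my own rendering of the sentence as “L asserts that L is false”), trying both possible truth values:

```python
# "This sentence is a lie" rendered as: L asserts that L is False.
# A consistent truth value would require L == (not L).
for L in (True, False):
    print(f"L = {L}: consistent? {L == (not L)}")
# Both assignments print "consistent? False": no truth value works.
```

Neither True nor False satisfies the sentence, which is the paradox in miniature.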

Of course, one can write paradoxical things down any time one wishes.  That doesn’t constrain or harm actual reality in any way whatsoever.  Words‒and written language especially‒are the single greatest human invention, but they are not literally magical.  No matter how much hatred you try to put behind it, or what manner of “wand” you use, shouting Avada Kedavra will never kill anyone or anything*.

And while we can imagine that the world would be much more polite if words could directly cause things to manifest‒including paradoxes‒I think we can all feel pretty glad that people can’t kill us just by telling us to drop dead.

So, make up all the paradoxical sentences that you might like; no actual paradoxes can exist.  If you come to a point of cognitive dissonance, you should probably focus on the fact of that discomfort and try to sort it out.  People can “believe” two or more contradictory things (sometimes before breakfast) but they cannot be right about more than one (though they can be wrong about all of them).

Anyway, enough of that nonsense.  It’s mildly engaging, but not terribly durable as a topic, or so it seems to me at this moment.

I am still (as far as I know) unable to use any of Fuckerberg’s apps, and to be honest, I haven’t even tried since before the last time I wrote about it.  It’s annoying, to some degree, to lose access to some entertainment, but it’s not as though I had any right to their use.  I was not the customer; I was the product, as is the case with all of you, too, if you use social media for free.  Facebook et al. sell advertisers access to, and information about, you.

Now, if I had been kicked off some service for which I had paid and was still paying, then I would have a beef**.

Speaking of paid services, what I really should do‒what I want to crave doing‒is to spend those moments that I would otherwise spend looking at funny reels on Instagram or whatever doing stuff on Brilliant dot org.  I pay for that service, and it is very good.  I also have a lifetime subscription to Babbel, which I obtained to try to encourage myself to learn more languages (duh!).

So, at some level, at the frontal lobe level, I want to use those sites and their services, to hone and increase my skills; otherwise I wouldn’t have subscribed to them.  But in any given moment, the activation energy required to begin using them is higher than that required for other, less beneficial things.

But maybe now that will be a bit different.  Maybe now that differential, that equilibrium, will shift.  I mean, it’s almost certain that it has shifted, or has begun to shift.  It’s all but impossible for one to remove a large factor from a situation that is in dynamic near-equilibrium and to have that near-equilibrium remain unchanged.

I hope that I shall be able to make use of this to improve my mind‒at least to improve my abilities, if not the overall nature of the thing.  It would be good, at the very least, to get some more such use in.

I will miss the sort-of-social-circles one can have and the connection with old friends and distant family members on social media, however tenuous and removed and even occasionally illusory it might be.

I don’t socialize in real life, other than at work during the working day, and that’s a limited thing.  So I feel a little worried about being more disconnected from larger society.  We all know what happened to Melkor when he spent too much time in the Void, away from his brethren, and started to develop thoughts…unlike theirs.

Well, maybe we don’t all know, but read The Silmarillion if you wish to learn more.  It’s really good.

I guess I always have this blog and those who follow it, at least (and that’s no small thing).  I am concerned that some people who only see the blog via Facebook or Threads might not get to interact with it now.  But they are all hereby encouraged to leave a comment or two below.  I welcome them.  Seriously.

That’s all I have to say about that for right now.  I hope you all have an excellent day.


*Unless maybe you swallow a small insect or something similar when you open your mouth.  I don’t think that’s how people imagine “the killing curse” working, however.

**I’ve been aware of and have occasionally used this expression for as long as I can remember, but it does sound very weird if you listen to it as if from an outsider’s perspective.  “Wait.  You have a…beef?  You have a beef?  What the hell are you talking about?”

Really, Doctor Elessar, you must learn to govern your passions

I woke up this morning thinking‒or, well, feeling‒as though it were Saturday instead of Tuesday; I’m not at all sure why.  But it is Tuesday…isn’t it?  I suppose if I’m wrong I’ll find out soon enough.  But my smartphone and the laptop and the internet-connected clock all seem to support what I think, and what I thought when I woke up (as opposed to what I felt), which was that this is Tuesday, the 27th of January, 2026 (AD or CE).

It’s odd how emotions can be so bizarrely specific and yet incorrect.  I know that this is not merely the case with me.  We see the effects of people following their emotional inclinations over their reason all the time, even though those emotions were adapted to an ancestral environment that is wildly different from the one in which most of us now live.  It’s frustrating.

Though, of course, frustration itself is an emotion, isn’t it?  Still, it is simply an observable fact that emotions are unreliable guides to action.  We definitely could use more commitment to a Vulcan-style philosophy in our world.  And by “Vulcan”, I mean the species from Star Trek™, Mr. Spock’s people, not anything related to the Roman god.

Of course, the specifics of the Vulcan philosophy as described in the series have some wrinkles and kinks that don’t quite work.  For instance, curiosity and the desire to be rational are emotions of a sort, as are all motivations, and the Vulcans do not avoid these.  Then again, in the Star Trek universe, Vulcans do have emotions; they just train themselves to repress them.

Still, the Vulcan ethos is not so terribly different from some aspects of Buddhism (and of Taoism and Stoicism), and its focus on logic and internal self-control is quite similar to the notion and practice of vipassana and other types of meditation.  Perhaps metta can be part of that, too**.

Wouldn’t it be nice if everyone on this planet committed themselves to mindfulness and rationality*?  Perhaps it will happen someday, if we do not die as a species first.  It’s not impossible.

By the way, AI is not our hope for that future, specifically.  Just because AIs run on GPUs that use good old digital logic (AND, OR, NOT, and so on‒i.e., logic gates) doesn’t mean that what they do is going to be logical or rational or reasonable.  We are creatures whose functions can be represented or emulated by circuit logic, but the functions‒the programs, if you will‒are not necessarily logical or rational or reasonable.

Humans’ (and humanoids’) minds are made up of numerous modules, interacting, feeding back (or forward) on each other, each with a sort of “terminal goal” of its own, to use AI/decision theory terminology.  They play a figurative tug-of-war with each other, the strengths of their “pulls” varying depending on the specific current state of that part of the brain.

I’ve spoken before of my notion that the brain/mind can be represented as a vector addition in high-dimensional phase space: the vector sum at any given moment produces the action(s) of the brain (and its associated body), and those actions then feed back on and alter the various component vectors, changing the sum from moment to moment, which changes the feedback, which changes the sum, and so on.
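
To make that notion a bit more concrete, here is a toy sketch in Python (entirely illustrative: the dimensions, the module count, and the feedback rule are arbitrary choices of mine for the example, not a model of any actual brain):

```python
import numpy as np

# Toy version of the idea: each "module" contributes a pull vector in a
# small phase space; the vector sum is the moment's action; the action
# then feeds back and alters every module's pull, so the next sum
# differs, which changes the next feedback, and so on.

rng = np.random.default_rng(seed=42)

DIM = 8          # dimensions of the (vastly simplified) phase space
N_MODULES = 5    # stand-ins for language, memory, danger-sensing, etc.
FEEDBACK = 0.05  # how strongly each action reshapes the pulls

pulls = rng.normal(size=(N_MODULES, DIM))      # one pull vector per module
sensitivity = rng.normal(size=(N_MODULES, 1))  # each module reacts differently

for moment in range(10):
    action = pulls.sum(axis=0)                 # the vector sum: this moment's action
    pulls += FEEDBACK * sensitivity * action   # the action feeds back on the parts
    print(f"moment {moment}: |action| = {np.linalg.norm(action):.3f}")
```

Run it and the action drifts from moment to moment: the sum changes the parts, and the parts change the sum, which is the whole point.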

The AIs we have now are at best analogous to individual modules in brains of creatures of all levels of braininess, doing specific tasks, like our brains’ language processing centers and spatial manipulation centers and memory centers and facial recognition centers and danger sensing centers and so on.  We know that these modules are not necessarily logical or rational in any serious sense, though all their processes can, in principle, be instantiated by algorithms.

If we imagine a fully fledged mind developed from some congregation of such AI modules, there is no reason to think that such a mind would be rational or reasonable or even logical, despite its being produced on logic circuits.  To think that AI must be reasonable (or even “good”) in character is to fall into a kind of essentialist, magical thinking‒a fairly ironic fact, when you think about it.

Okay, well, this has been a rather meandering post, I know (a curious phrase, “meandering post”‒it seems oxymoronic).  I didn’t plan it out, of course.  There is much more I could say on this subject or set of subjects, and I think it’s both interesting and important.  But I will hold off for now.

Perhaps I’ll return to it later.  I would love to receive lots of feedback on this in the meantime.  Also, I would still like to get feedback about yesterday’s post’s questions, such as those about Substack.  I won’t hold my breath, though.

Heavy sigh.  Have a good day.


*Not “logic” as they called it in Star Trek, because logic is not necessarily related to the real world, but can be entirely abstract.  Imagine if the logic to which Vulcans dedicate themselves were Boolean logic.  Of course, at some level, based on Turing’s ideas, including the Church-Turing Thesis, all thought processes can be reduced to or represented by intricate Boolean logic.  But I don’t think that’s what the Vulcans are on about.  I’ve often wondered if perhaps the Vulcan word that translates as “logic” in English has more sophisticated connotations in Vulcan.  Maybe they don’t use “rationality” because they connect it to rational numbers, and maybe “reason” is too closely related in Vulcan to “cause”, which, as I’ve noted before, is not the same thing (“there are always causes for things that happen, but there are not necessarily reasons”).

**One can imagine a perverse sort of dukkha-based meditation, in which a person focuses deliberately on feeling the unsatisfactoriness of life.  I doubt it would be very beneficial, but I can almost imagine ways in which it might be.  The very act of deliberately focusing on suffering and dissatisfaction might lead one to recognize the ephemerality and pointlessness of such feelings.  I don’t intend to try it, though.