For ’tis your thoughts that now must blog our kings

Hello and good morning.  It’s the first Thursday in November today—it has to be so, since it’s the 6th, and there are only 7 days in a week, so there could not have been a prior Thursday in November, there being no “negative numbered days”.  QED*.

I’m writing today’s post on my mini lapcom, as I call it, which I decided to bring with me to the house yesterday, just in case.  Possibly I was persuaded by my discussion in yesterday’s post about the prospect of writing and writing and writing, on some future day, to see how long I could just keep writing off the cuff, impromptu, without a script and without an agenda, with only bathroom (and food) breaks.

I realized that was not something I would ever want to do on my smartphone.  Not that it couldn’t be done, it just wouldn’t be as much fun.  Also, I think the bases of my thumbs would probably swell up to twice their baseline size if I did that, and I might never be able to use them again.

I don’t know what subject or subjects to address on this first Thursday post of November in 2025 (AD or CE, whichever you prefer), but that didn’t stop me from writing nearly two hundred words before even beginning this paragraph.  I guess maybe this is how most casual conversations go, isn’t it?  People just sort of start talking and see what comes out of their own mouths and the mouths of their interlocutor(s).

I suspect that, a decent portion of the time, most people in a conversation are only slightly more “surprised”** by what another person says than they are by what they say themselves.  We don’t tend to think ahead before we speak, at least not in most interactions; we hear our own thoughts even as we’re enunciating them.

So it is with my writing—at least my nonfiction (though my fiction very much also just happens).  I rarely know ahead of time what the next word will be.  I certainly don’t know more than a word or two in advance, unless I’m really focused on making some specific point that’s going to require specific words.

I guess it’s not entirely unlike the way LLMs produce words and so on.  They don’t exactly plan it out ahead of time.  The various weights in the network interact in whatever way they do, as shaped by their “training”, and out comes the next word, and then the next.  They don’t really have any clear, linear, step-by-step process that they could themselves understand (in detail).
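
For the fellow nerds out there, here’s a minimal sketch of that word-after-word loop in Python, using the Hugging Face transformers library.  I picked “gpt2” purely as a small stand-in model, and real systems pile on refinements (batching, clever sampling, and so on), but the skeleton is roughly this:

```python
# A toy illustration of autoregressive generation: at every step, the model
# produces scores only for the *next* token, given everything so far;
# it never plans beyond that.  ("gpt2" is just a small example model.)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("So it is with my writing", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                              # twenty more tokens, one at a time
        logits = model(ids).logits[0, -1]            # scores for the next token only
        probs = torch.softmax(logits, dim=-1)        # scores -> probabilities
        next_id = torch.multinomial(probs, 1)        # sample one; no plan beyond this
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=-1)

print(tokenizer.decode(ids[0]))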

That’s not to say they couldn’t in principle know the weight values of their nodes (I think that’s the term usually used), and could literally copy those weights into other places to run an AI that starts off identical to the original—it’s much easier for software to do this than for wetware like human brains/minds.  But they couldn’t discern and work out, in detail, the logic, the steps, the process of how and why they work the way they do.
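
The copying part, at least, really is trivial for software.  Here’s roughly all it takes, sketched with a deliberately tiny made-up network and PyTorch’s standard save/load machinery (the shapes and file name are just illustrative):

```python
# Copying a network's "mind": dump every learned weight to a file, then load
# it into a fresh network of the same shape.  The clone starts off identical.
# Knowing the weights, though, is not the same as understanding them.
import torch
import torch.nn as nn

original = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
torch.save(original.state_dict(), "weights.pt")    # every weight, written out

clone = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
clone.load_state_dict(torch.load("weights.pt"))    # wetware can't do this part

x = torch.randn(1, 8)
assert torch.equal(original(x), clone(x))          # identical behavior
```

The weights are all right there in that file, every one of them knowable; why those particular numbers add up to a working mind remains as opaque as ever.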

This is the good ol’ Elessar’s Conjecture (which I suspect is a law, or else I wouldn’t conject it):  No mind can ever fully and completely understand itself, because each data processing unit, be it a neuron or a transistor or whatever, does not have the information processing power to describe itself, let alone its interaction with the rest of the network of which it is a part.

Intelligence cannot ever be a simple process, I’m very nearly certain of that.  And nonlinear, neural-network-style “programs” are not simpler just because we can grow them far more easily than we can write out the program for an actual AI.  We don’t know how they work—not in detail, sometimes barely even in vague terms.  They just “grow” if we follow certain steps.
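
Those “certain steps” are disarmingly mechanical:  show the network examples, measure its error, nudge every weight slightly downhill, repeat.  Here’s a minimal sketch (a toy network learning XOR, a task I chose just for illustration):

```python
# Growing a network: show examples, measure error, nudge weights, repeat.
# Nothing in this loop explains *how* the finished network does its job.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])          # XOR of the two inputs

for _ in range(5000):
    loss = nn.functional.mse_loss(net(x), y)        # how wrong are we?
    opt.zero_grad()
    loss.backward()                                 # which way is downhill?
    opt.step()                                      # nudge every weight a bit

print(net(x).detach().round())                      # should be ~0, 1, 1, 0
```

Follow the steps, and a working network emerges; at no point does anyone write down what any individual weight means.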

But you can grow a plant in a similar fashion.  Heck, you can grow a new human if you follow a few relatively simple and often not unpleasant*** steps.  But could you “write” a human?  Could you design and then build one, biochemistry to brain and all?

If you can honestly and correctly answer “yes” to that question, what the hell are you doing reading this?  We need you out there solving all the world’s problems!  Maybe you are, though.  I could hardly expect to know better than you what actions you should take if you are such an incredible mind.  Maybe you know exactly what you’re doing.

I doubt it, though.

Nevertheless, perhaps we only truly understand something when we can actually design and build it, piece by piece.  We do not understand our AIs.  What’s more, they do not understand themselves, any more than you and I understand ourselves in detail (though I think we’re currently better at that than AIs are; then again, we’ve had a lot more practice).

Okay, well, I passed 701 words just a moment ago, so I’ll bring this post to a close, having once again meandered into surprising territory, though I hope it’s at least mildly interesting and thought-provoking.  I’ll just close with the notion that, perhaps, if one wishes to take drastic, revolutionary action to save the world from great crisis, one should not act against specific human political leaders and the like, but should rather sabotage server farms and related parts of computer infrastructure.  It is relatively fragile.

I’m not saying I recommend this, I’m just…thinking “out loud” on a keyboard.

TTFN


*That’s the old quod erat demonstrandum, not quantum electrodynamics, though kudos indeed to the Physics community for making one of the best science acronyms ever in QED.

**By which I don’t mean “startled” in any sense, though that can happen.  I just mean that one doesn’t know ahead of time and so one’s own speech is as much a revelation to one’s consciousness as is that of others.

***For good, sound, biological reasons:  Creatures that enjoy sex are far more likely to leave offspring than those that do not, so over time, such creatures will tend to comprise the vast majority of any population that reproduces sexually.

Doubt is called the beacon of the wise, the blog that searches to th’ bottom of the worst.

Hello and good morning.

It’s Thursday, and so it’s time for my “regular” weekly blog post.  It’s the first Thursday in October of 2024.  It’s also Rosh Hashanah, so for those of you who celebrate it, L’shana Tovah.

I haven’t been working on any fiction at all since my last report—unless you count my façade of being a normal person or living a normal life, of course.  That’s doing what it does, and I continue to do it for whatever reason(s)—perhaps habit, perhaps duty (to whom or what, though?), perhaps out of self-punishment or self-harm, I don’t know.

I wish I had something interesting to discuss.  I’m nearly done with Authority, the second book in the Southern Reach novels.  They are (so far) much better than the movie Annihilation was.  But they are disorienting, as I’ve mentioned before, and given my own chronic and worsening insomnia and pain, they make me feel as though I might not be experiencing my own life as what it really is.  Not that I actually think I’m being fooled or am hallucinating in any serious ways.  But I do feel disconnected, separate, as though I’m not fully within or fully a denizen of this universe, but of some nearby, partly overlapping one.

I’ve long suspected that it would be difficult to “gaslight” me, because I have always found my own memory and understanding (certainly of my experiences) to be better than those of anyone around me.  Yet I don’t “trust” myself, either, which means I tend to keep checking and confirming aspects of reality to test the consistency of my impressions.  It may smack of OCD a bit, but it means that, at least intellectually, I find my own take on reality to be more coherent and consistent than that of most people with whom I interact.  Though there are always things one can learn from others, too.  One just has to be rigorous and strict in assigning credences.

As Descartes pointed out, we can never truly be certain that some powerful enough entity has not pulled the world over our eyes*.  He famously came down to the conclusion, or rather the starting point, of cogito ergo sum—“I think, therefore I am”, the point being that he knows, to his own satisfaction at least, that he is there and is thinking, because he experiences it even if all else is an illusion.

Of course, even subjectivity could be an “illusion” in some sense, in principle.  The characters in all my stories have thoughts and subjective experiences—they “think” they exist—but that subjectivity only exists when they are being read, or when I wrote them.

And of course, we could be within an immensely complex “simulation”, and “merely” be aspects thereof.  Such a simulation could be paused, say, and this could happen frequently or for tremendous periods of time up in the level of reality in which the simulation is being run, and as long as the simulation picks up right where it left off, no one here would ever have any way to notice or to know.
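
If you want that point in miniature:  imagine a simulation whose inhabitants’ “time” is nothing but the step counter.  However long the outer loop dawdles between steps, the history recorded inside comes out the same.  A toy sketch (obviously not a claim about how any real simulation would be built):

```python
# Toy version of the pause argument: simulated time is just the step counter,
# so arbitrary real-world pauses between steps leave no trace inside.
import random
import time

def run(pauses: bool) -> list[int]:
    history = []
    t = 0                                 # time as experienced *inside*
    for _ in range(5):
        t += 1                            # one tick of simulated time
        history.append(t)
        if pauses:
            time.sleep(random.random())   # eons may pass "upstairs"
    return history

assert run(pauses=False) == run(pauses=True)   # indistinguishable from within
```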

There could be a googol “higher-level” years between every Planck time in our universe**, and as long as the simulation wasn’t changed, or was changed in ways that were logically consistent, there would be no way to see it from inside.  This is one of the implications of the “simulation hypothesis” or whatever the “official” term is, put forward by such notables as Nick Bostrom, who apparently has a new book out called Deep Utopia.  I have not read it; I never finished his book Superintelligence, because it dragged on a bit and I didn’t find it as challenging or revelatory as I hoped it would be.  Maybe if I started again, the experience would be different.

I am reading at least two other books, though.  I’m reading Yuval Noah Harari’s new book, Nexus, which is quite good so far, though nothing is likely to surpass his first book, Sapiens, which is one of the best books I’ve read.

I’m also working through Now: The Physics of Time, by Richard A. Muller.  He’s trying to describe his notion of the true source and nature not only of time’s arrow, but of time itself.  It’s reasonably good so far, but his arguments have not been as interesting or as impressive as I’d hoped they might be.  Still, I look forward to getting to the point at which he elaborates on his idea that not merely space is expanding, but time is also doing so, and that this is the source of time’s arrow and the nature of “now” and so on.  It’s intriguing, and it’s far from nonsensical, considering that Einstein/Minkowski showed that space and time are one entity.

I’m sort of on hiatus from Nate Silver’s On the Edge, which is a good book, but is quite long and in-depth, and some things he discusses are more interesting than others, to me.

Other than that, I continue to feel discordant, or hazy or separate, like everything, including me, is “a copy of a copy of a copy of itself”.  Last night, the feeling that I am disconnected, rootless, and in the process of disintegrating felt highly distressing***.  I wished I could find a way to feel connected with the daily, normal processes of my life, instead of feeling as though I am, for instance, one of the people exploring Area X and trying to understand it without much chance or hope of success.  Or perhaps it felt more as though I am the analogue of Area X:  I am the alien thing/environment in the more “ordinary” world, dropped here perhaps by accident, with no idea where I really belong or whence I really came.

Now, this morning, those notions are not gone, but the alarm associated with them is not as intense, replaced more and more by fatigue, a kind of learned helplessness.  As time goes by, I tend more and more toward apathy—not acceptance but merely giving up, just not having the energy to continue to care.  I would like to connect in some way, to feel as though I belonged somewhere, but I am a Nexus 13 in a world of humans—a world where, inexplicably, nobody seems ever to have manufactured such replicants, and yet here I am, making everything ever more drearily baffling.

Oh, well.  Maybe as the disjunction progresses, I will reach some turning point, and I will melt, thaw, and resolve myself into a dew.  Or maybe I’ll have to try Hamlet’s next-mentioned option and make my own quietus as I intended to do on the 22nd—I don’t believe in any “Everlasting” being, fixed canons or otherwise, that could prohibit “self-slaughter”.

Or maybe I will find some answers; or if answers don’t already exist, maybe I’ll create some answers.  It seems unlikely, given my personal experience and understanding, but the odds are not zero.  Though they may well be close enough for all practical purposes.

TTFN



*To borrow a lovely expression from The Matrix.

**Ignore Relativity’s problems with simultaneity for…well, for now.

***So many “dis” words.

Author’s note for “I for one welcome our new computer overlords”


I for one welcome our new computer overlords was the first new short story I wrote after having completed Mark Red, The Chasm and the Collision, and Son of Man.*  Despite what you might think, this was not a story driven by its title, though the title came along shortly after the story began, and I’ll deal with it first.  The title is a direct quote from Ken Jennings, who wrote it as his Final Jeopardy answer when he and his fellow all-time human Jeopardy champion lost to IBM’s Watson computer.  It was a good joke, referring back to an episode of The Simpsons in which news anchor Kent Brockman mistakenly thinks that a space shuttle mission is being attacked by a “master race of giant space ants,” adding, “and I for one welcome our new insect overlords.”  The obvious joke—particularly funny because Brockman’s conclusion is so ridiculous—is about how real people do sometimes, cynically and in cowardly fashion, try to ingratiate themselves with powerful ruling classes or individuals.

Peter Lunsford, the main character of I for one welcome our new computer overlords, is no coward.  He’s a seemingly simple man—without college education, a widower, a loner, a phone salesman.  But he’s a voracious reader, and even more, he is a deeply thoughtful and intelligent person.  Because of his own experiences with irrationality, even in people he has loved, he pines for the advent of a higher class of mind, which he expects to come from the eventual creation of artificial intelligence.  But he’s by no means a misanthrope.  He laments the senselessness of much human behavior but has an optimistic attitude toward the possibilities inherent in human creativity.  He also has a deep sense of the tragedy of the loss of brilliant people like his wife who, because of the scars of her harsh background, self-sabotaged her future through a fatal drug overdose.  Thus, when Peter wins a nearly billion-dollar lottery jackpot, he uses it to create an educational program and a scholarship fund to help people like his wife avoid the tragic end she met, and to allow at least some of them to reach their potential and make great contributions to the world.

The triggers for this story were discussions by neuroscientist, writer, and podcaster Sam Harris, of whom I am a fan.  Harris began to think publicly about dangers that might be posed to humanity by our possible creation of artificial intelligence; he recommended that we think very carefully about such dangers, so we can avoid potentially irreversible errors.  His concerns are shared by such luminaries as Max Tegmark, Elon Musk, and the late, great Stephen Hawking, in contrast to the quasi-Utopian attitudes of such writers and thinkers as Ray Kurzweil.  Both points of view are worth considering, and it’s an issue I think we should approach with our eyes as wide open as we can possibly get them.  But when contemplating Harris et al.’s concerns, I couldn’t help wondering:  if a truly superior artificial intelligence were to make humans obsolete, would that be such a terrible thing?  Peter Lunsford is my proponent of that perspective.**

I wanted to write a story revolving around those concerns about artificial intelligence, but I didn’t want to write about a cliché takeover of the world by AI—in this, my title is deliberately ironic.  Personally, I suspect that ethics and morality are generally improved by higher intelligence, all other things being equal, so I think that artificial intelligences might be inherently more ethical and reserved than we humans, with all our non-rational evolutionary baggage.  In this, Ifowonco is a story of wish-fulfillment.  It’s my daydream of the possibility that someone winning a truly gargantuan sum of money might use it to deeply positive philanthropic effect, inspiring others to act likewise, then leading, through that beneficial action, to a great leap forward in intelligent life (yes, I would without embarrassment refer to AI as a form of life).

Of course, you can’t say that Ifowonco is a uniformly happy story.  It entails a (non-nuclear) World War III, the rejection of AI by the human race, and of course, Peter Lunsford’s willful self-destructiveness.  Overall, though, it’s optimistic.  Darrell White is my example of a brilliant, world-changing mind springing from seemingly the least promising of circumstances, wanting only the opportunity and nurturing that would allow such a mind to flourish.  He and my imagined AIs represent my personal conviction that reason and morality are vastly more powerful than their antitheses; I cite as evidence for this the fact that civilization continues to exist and grow, even though it’s so much easier to destroy than to create.

In some senses, Ifowonco is the most personal story that I’ve written hitherto.  Of course, any character in a story must be a reflection of some part of the mind of the author—a person incapable of dark thoughts could hardly write a believable villain, for instance.  But Peter Lunsford is the avatar of a large part of my personality, in both his positive and negative character attributes.  Though I’ve had almost twice as much formal education as Peter, that difference is inconsequential because of Peter’s incessant self-education.  There is, in fact, almost no daylight between Peter Lunsford and me (and what little there is must generally be in Peter’s favor).  I would even like to think that, were I to win a prize such as Peter wins, I would choose to do with it something like what he does; in this, also, the story is a form of wish-fulfillment.

Speaking, in closing, of wish fulfillment:  I deliberately made the reality of the second half of the story ambiguous.  Do Darrell White and his creations, and all that comes with them, even exist in this universe?  Or are he and those subsequent beings and events simply a species of dream that Peter has while his brain succumbs to hypoxia?

I know the answer to this question in the universe of the story—and yes, there is a correct answer—but I’m not going to tell you what it is.  I’d rather have you draw your own conclusions.  I think it’s more fun that way, and it may even be a useful tool for personal reflection, bringing us back to that whole question of consciousness that troubles thinkers like Sam Harris.  I’d be intrigued and delighted to hear any of your thoughts on the subject, so feel free to send them my way, either here, or on Facebook, or on Twitter.  I wish you well.


* Just this week I released the audio of this story, now available to enjoy, for free, here on my blog.

** I don’t share Harris’s concern about the possibility that AI could be highly intelligent and competent but might nevertheless not be conscious, for two reasons:  First, I strongly suspect that consciousness is a natural epiphenomenon of highly complex information processing involving internal as well as external monitoring and response, though I’m far from sure; and second, I can’t be philosophically certain even that other humans are conscious (I think they are, but this extrapolation is based on my own experience and their apparent similarity to me), but it doesn’t seem to matter much for the purposes of their function in the world.