Hello and good morning. It’s the first Thursday in November today—it has to be so, since it’s the 6th, and there are only 7 days in a week, so there could not have been a prior Thursday in November, there being no “negative numbered days”. QED*.
I’m writing today’s post on my mini lapcom, as I call it, which I decided to bring with me to the house yesterday, just in case. Possibly I was persuaded by my discussion in yesterday’s post about the prospect of writing and writing and writing, on some future day, to see how long I could just keep writing off the cuff, impromptu, without a script and without an agenda, with only bathroom (and food) breaks.
I realized that was not something I would ever want to do on my smartphone. Not that it couldn’t be done, it just wouldn’t be as much fun. Also, I think the bases of my thumbs would probably swell up to twice their baseline size if I did that, and I might never be able to use them again.
I don’t know what subject or subjects to address on this first Thursday post of November in 2025 (AD or CE, whichever you prefer), but that didn’t stop me from writing nearly two hundred words before even beginning this paragraph. I guess maybe this is how most casual conversations go, isn’t it? People just sort of start talking and see what comes out of their own mouths and the mouths of their interlocutor(s).
I suspect that, a decent portion of the time, most people in a conversation are only slightly more “surprised”** by what another person says than by what they say themselves. We don’t tend to think ahead before we speak, at least not in most interactions; we hear our own thoughts even as we’re enunciating them.
So it is with my writing—at least my nonfiction (though my fiction very much also just happens). I rarely know ahead of time what the next word will be. I certainly don’t know more than a word or two in advance, unless I’m really focused on making some specific point that’s going to require specific words.
I guess it’s not entirely unlike the way LLMs produce words. They don’t exactly plan it all out ahead of time. The various weights in the network interact in whatever way they do, in ways shaped by their “training”, and out comes the next word, and then the next. There’s no clear, linear, step-by-step process that they could understand (in detail) themselves.
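If you want to see that one-word-at-a-time business in miniature, here’s a toy sketch in Python. To be clear, this is nothing like a real LLM: the “weights” here are a little lookup table I made up purely for illustration. But the shape of the loop is the point. Pick a next word, append it, repeat; there’s no planning ahead.

```python
import random

# Made-up "weights" for illustration: for each word, the relative
# likelihood of each possible next word. (A real LLM learns billions
# of numbers; it does not store a little table like this.)
weights = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_words=5):
    words = [start]
    for _ in range(max_words):
        options = weights.get(words[-1])
        if not options:
            break
        # Choose the next word in proportion to its weight.
        # There is no lookahead: only the current word matters here.
        next_word = random.choices(list(options), list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Run it a few times and you’ll get different little sentences from the same starting word, which is loosely why the same prompt doesn’t always produce the same reply.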
That’s not to say they couldn’t in principle know the weight values of their networks (the “weights”, I believe, live on the connections between nodes) and literally copy those weights into other places to run an AI that starts off identical to the original; it’s much easier for software to do this than for wetware like human brains/minds. But they couldn’t discern and work out the logic, the steps, the detailed process of how and why they work the way they do.
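The copy-the-weights point, at least, is easy to demonstrate. Here’s a minimal sketch, again in Python, with a plain dictionary standing in for a network’s weights; real models have billions of them, but the principle is the same:

```python
import copy

# A stand-in "network": a few named weights in a dictionary.
original = {"layer1.w": [0.12, -0.98], "layer1.b": [0.05]}

# Software can duplicate itself exactly; wetware can't.
clone = copy.deepcopy(original)
assert clone == original           # identical to the original at "birth"...

clone["layer1.w"][0] += 0.01       # ...but free to diverge from then on
assert clone != original
```

Knowing all the numbers, though, is not the same as knowing why those particular numbers add up to something that talks.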
This is the good ol’ Elessar’s Conjecture (which I suspect is a law, or else I wouldn’t conjecture it): No mind can ever fully and completely understand itself, because each data-processing unit, be it a neuron or a transistor or whatever, does not have the information-processing power to describe itself, let alone its interactions with the rest of the network of which it is a part.
Intelligence cannot ever be a simple process, I’m very nearly certain of that. And nonlinear, neural network style “programs” are not simpler just because we can grow them far more easily than we can write out the program for an actual AI. We don’t know how they work—not in detail, sometimes barely even in vague terms. They just “grow” if we follow certain steps.
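And “certain steps” really is all it is. Here’s the recipe in miniature, one last Python toy: a single weight, a handful of data points, and the nudge-toward-less-error loop that training amounts to. We write the growing procedure; we don’t write the final weight.

```python
# Toy "training": learn y = 2x from examples by nudging one weight.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0  # the weight we "grow"

for _ in range(100):
    for x, y in data:
        error = w * x - y
        w -= 0.01 * error * x  # step downhill on the squared error

print(round(w, 2))  # ~2.0, a value nobody typed in
```

With one weight you can still see exactly why it ends up near 2. With billions of weights, following the very same kind of steps, you can’t, and that’s the sense in which we grow these things rather than write them.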
But you can grow a plant in a similar fashion. Heck, you can grow a new human if you follow a few relatively simple and often not unpleasant*** steps. But could you “write” a human? Could you design and then build one, biochemistry to brain and all?
If you can honestly and correctly answer “yes” to that question, what the hell are you doing reading this? We need you out there solving all the world’s problems! Maybe you are, though. I could hardly expect to know better than you what actions you should take if you are such an incredible mind. Maybe you know exactly what you’re doing.
I doubt it, though.
Nevertheless, perhaps we only truly understand something when we can actually design and build it, piece by piece. We do not understand our AIs. What’s more, they do not understand themselves, any more than you and I understand ourselves in detail (though I think we’re currently better at that than AIs, but we’ve had a lot more practice).
Okay, well, I passed 701 words just a moment ago, so I’ll bring this post to a close, having once again meandered into surprising territory, though I hope it’s at least mildly interesting and thought-provoking. I’ll just close with the notion that, perhaps, if one wishes to take drastic, revolutionary action to save the world from some great crisis, one should not act against specific human political leaders and the like, but should rather sabotage server farms and related parts of computer infrastructure. It is all relatively fragile.
I’m not saying I recommend this, I’m just…thinking “out loud” on a keyboard.
TTFN
*That’s the old quod erat demonstrandum, not quantum electrodynamics, though kudos indeed to the Physics community for making one of the best science acronyms ever in QED.
**By which I don’t mean “startled” in any sense, though that can happen. I just mean that one doesn’t know ahead of time and so one’s own speech is as much a revelation to one’s consciousness as is that of others.
***For good, sound, biological reasons: Creatures that enjoy sex are far more likely to leave offspring than those that do not, so over time, such creatures will tend to comprise the vast majority of any population that reproduces sexually.


