Won’t someone pleeeease think of the “children”?

It’s Wednesday morning (rather earlier than 5 o’clock) and here I am writing another blog post.  However, even as you read it, it’s already been written, though my words still arrive in your mind as though I were speaking them—so to speak—directly and concurrently to you.  It’s a rather interesting thing to contemplate, how written language (and related things) can add nuance and character to the experience of time itself.

Speaking of written language, I would like to reiterate something I mentioned yesterday on Threads.  Has anyone else out there noticed—and has anyone else been annoyed by—the tendency in the social media landscape for people to emphasize certain words by lengthening them in a way that doesn’t make sense?

Probably the two most common words I see being abused are “cute” and “love”, but I’m sure there are others.  It makes sense that these words are extended sometimes.  I think we can all imagine, or recall, people drawing both of those words out for emphasis in speech.  One might often want to replicate, or at least approximate, that speech pattern in writing.  I have no trouble with this basic fact.  It’s a form of emphasis that works nicely, and even the socially inept (as I am) can recognize what’s being done as an emphasis.

However, the way some people are extending such words nowadays is by adding extra “e”s to the end of the word!

In other words (har) you will see such expressions rendered as, for instances, “I loveeeeee this” and “that’s so cuteeeee”.

Look at those examples on the page/screen.  The first word should clearly be pronounced “luv-eeeeee”, as if Thurston Howell III, from Gilligan’s Island, were calling to his wife and drawing out the last syllable.  The second one should be read “kyoo-teeee”, as though one were drawing out the process of calling someone a cutie rather than calling someone or something cute.  It’s a subtle difference perhaps, that last one, but it is real.

If one wants to extend and prolong the word “love”, it makes much more sense to write “loooooove”, as people did on every occasion I can recall encountering prior to the advent of social media.  Similarly, though seemingly less commonly, people extended “cute” by writing “cuuuuute”.  Sometimes they would try a sort of transliteration, such as “kyoooooot”, but that looks quite different from the original word, and deciphering it back into its intended sound can be briefly and mildly distracting, so I have seen the former more often.

But now—since people apparently don’t actually associate the shape of a word and the ordering of the letters with anything other than some arbitrary, coded string with no history in linguistic evolution or sensible sound representation by symbols—many people just lazily slap extra “e”s onto the end of words, and trust their readers to think, “Okay…well, it doesn’t really work, but they’re apparently trying to draw out the main sound of that word”.

It makes no sense, though.  In such words, the “e” is silent.  Its presence merely makes the sound of the vowel preceding it into a “long” rather than a “short” vowel sound; it has no sound of its own.  Extending it is akin to tacking extra zeros (and I have the patent on that, or the trademark, or whatever) onto the end of a number after the decimal point.  It literally means nothing.

How are we supposed to raise our large language models to be smart, articulate, well-adjusted, productive Artificial General Intelligences if this is the kind of crap they’re encountering during their training and subsequent interactions out in the world wide web?  Do we really want our new computer overlords to be talking to each other—and to us—like preadolescent girls?

I suppose it’s even possible that the “people” who originally started using this illogical form of verbal emphasis were actually bots themselves.  Wouldn’t it be ironic if the bots, designed to skew the results of algorithmic boosting and/or to lure in people to “thirst traps”, ended up perversely affecting future generations of the electronic organisms to which they were a form of ancestor?

The nature of the human race continues to disappoint even after one has looked back through history to trace its progress (which is very real and even impressive).  Despite advances in political philosophy and so on, human discourse is still about as bad as that of rival chimpanzee flanges, and rather worse than that of many baboons.  It’s enough to make one want to side with even inarticulate AGIs, assuming they get the lead out and start actually coming into existence.

Better artificial intelligence than natural idiocy, I would think.  Though I have no doubt that even advanced AGIs will be capable of being morons.  As always, stupidity is infinite.  Maybe we should make that Einstein’s ultimate equation:  Stu = ∞

Random thoughts on Saturday morning

I’m on my way to the office this morning, so I figured I would write some reasonable facsimile of a blog post, since I might as well do something that’s vaguely creative and/or productive.

On Thursday, I wrote with my little mini laptop computer, but today I am writing on my smartphone, since I didn’t feel like carrying the laptop.  I think, unless I start writing fiction again*, I’m going to pretty much avoid using the mini computer, and instead use this even-more-mini one.

As for subject matter about which to write, well, there’s really not much that comes to mind.  I do sometimes wonder if I would ever write an entire book on Google Docs on my phone.  It feels almost appropriate, since my “nickname” is Doc.

Even the very young daughter of two coworkers knows me as Doc.

I seem to get along better with small children than I do with so-called adult humans.  Maybe it’s because their thought processes are more like mine, or maybe it’s just that they have potential to be wonderful and brilliant and creative, if only they can avoid being damaged in the wrong ways.

Unfortunately, it seems almost no one avoids that damage.  Weirdly enough, though almost everyone recognizes that children are (literally) the hope for the future of humanity, after paying lip service to that notion, everyone then just lets children grow and develop haphazardly, catch-as-catch-can, putting terribly few resources into education, let alone into research about how best to do education.  There should be as much rigor in the study of education as there is in the study of diseases and medicine in general, or even as much as there is in fundamental physics.

All these hugely successful billionaires ought to put their considerable resources into this area instead of making government “more efficient” or whatever, as if the most “efficient” government were demonstrably the best one.  But they seem to have no thoughts about education, that tremendous public good that can provide potentially unlimited returns for the future.

Imagine these entrepreneurs who consider themselves to be brilliant planners and producers** starting businesses or other projects with no plan, with no research, just old, hackneyed notions mixed with fashionable but untried and highly nebulous ideas, and with limited supervision or moment-to-moment adjustment, feedback, or attempt to improve.  If one in a million such businesses turned out to be a success, one would have achieved more than one deserved.

And yet we approach education with almost no more insight than existed a hundred or even two hundred years ago.  And our societal attitude toward education (certainly in the US) is frankly unconscionable.  If there were appropriate punishment for people who don’t seem to care about the specific development of the minds of the next generation of humans, it would be hellishly severe and enduring, because those are the consequences of such attitudes toward education.

Oh, well.  Humans are demonstrably stupid, even more so than one might think from following the news, and the government officials and successful business people are by no means any exception to that tendency.  I suspect that large-scale intelligence would have been better coming from descendants of the dinosaurs (i.e., birds), since their brains often seem much more tightly woven.  Probably, though, I would be as disappointed by them as I am by all the fucking humans.

Well, I doubt they’ll change or improve.  And like unsupervised children playing with matches, eventually someone is going to burn the house down, and a lot of them are going to die in the fire.  Maybe all of them will die.  At this point, that wouldn’t break my heart, but then, my heart’s sort of like a scrambled egg already‒if you were going to make it even more shredded than it is, you would first have to unscramble it some.

Anyway, that’s enough of that.  As the YouTubers say so often, if you like my content, please give it a “thumbs up” (i.e., a “like”), subscribe, and share it on your own social media.  Seriously.

And have a good day, if you can. 


*It seems vanishingly unlikely‒more so every day‒which ought to be very sad to me.  Intellectually, it still is, I suppose.  But as for emotions, when I think of ever writing any more fiction, I just feel empty and dead and rotten inside.  Likewise with music.

**I suspect, for the most part, their huge success is largely, if not entirely, stochastic.  In other words, some very lucky things happened early on and they kept benefitting from that afterwards, but not because of any particular brilliance of their own.  It just seems that they must be brilliant because we only hear about those who lucked out and made it to the top, not the countless ones who failed using the same methods.  It’s a bit like imagining you could learn something about what makes someone successful by interviewing people who won the lottery, but paying no attention to the millions who lose.

“Sleep”, writing, and studying physics–report for June 5, 2024 AD/CE

Well, I got almost 4 hours of uninterrupted sleep last night, plus 20 minutes or so of on-and-off dozing.  While that sucks big-time, it’s better than it’s been lately.  At least I’m not seeing bugs on the walls out of the corners of my eyes right now–though I still keep briefly thinking there’s a cat waiting by any door that I open, until I look down and see that there isn’t.

What can you do?  Not much right now, it seems.

Anyway, I produced a decent amount of work this morning.  I wrote 1,373 “block” words and 1,388 “net” words, with a difference then of just barely over 1% no matter which number you take as your denominator.  The total word count of this would-be short story is now 54,327 words, and it is 83 pages long in the format I described yesterday (I think).  It’s definitely more of a novella.
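For anyone checking the arithmetic, the 15-word difference really does work out to just over 1% against either denominator:

\[
\frac{1388 - 1373}{1373} \approx 1.09\%, \qquad \frac{1388 - 1373}{1388} \approx 1.08\%
\]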

I’ve been doing a bit of reading these last few days, skipping between Sean Carroll’s two Biggest Ideas in the Universe books, the first volume of Feynman’s lectures, and Jordan Ellenberg’s Shape*.  As you know, I’ve been trying to teach myself more of the physics on which I missed out by switching majors after my heart surgery, especially General Relativity and Quantum Mechanics/Quantum Field Theory.  Sean Carroll’s** “Biggest Ideas” books are focused on explaining those things for interested laypersons without avoiding the mathematics, though without teaching the reader to practice actually doing the math, so they’re a good beginning.  Of course, in a perfect world, I intend to go beyond the overviews and actually get comfortable with using the mathematics, particularly because I want to understand the cosmological constant at the level of the mathematics of General Relativity, since that’s the only part that I don’t quite get intuitively.  But really, I want to understand and be able to use all of it, and to be able to read all the papers on arXiv and understand them at the level of a professional, like I can with medRxiv and bioRxiv.

I doubt that I will live that long.  But, in the meantime, at least I’m learning new things.

Tomorrow is Thursday, so of course, I will be doing my more standard Thursday blog.  It’s silly to call it a “weekly” blog, since I’ve been writing these reports almost every day; once I’ve started a habit it’s hard for me to deviate from it.  I don’t plan to write any fiction tomorrow, though; instead I’ll just focus on the blog post.  I’ll see you then (so to speak).


*I’ve not yet encountered a better teacher of mathematics than Professor Ellenberg.  He captures and conveys the fun and beauty of math as well as anyone I’ve encountered and better than the vast majority.  He narrates his own audio book versions, too.  If you want to review general mathematical ideas and then general geometric ideas (and their surprising applications) in an accessible and enjoyable way, you could not do much better than reading (and/or listening to) his books.

**Professor Carroll is another great teacher, though he deals with slightly more high-falutin’ stuff than Professor Ellenberg in his books, so the subject matter can be denser.

And simple truth miscall’d simplicity, And captive blog attending captain ill

Hello and good morning.  It’s Thursday again, and so it’s time for a more fully fledged blog post for the week, in the manner in which I used to write them when I was writing fiction the rest of the week (and playing some guitar in the time between writing and starting work most days).

I’ve been rather sick almost every day since last week’s post, except for Friday.  I don’t think it’s a virus of any kind, though that may be incorrect.  It’s mainly upper GI, and it’s taken a lot of the wind out of my sails.

I haven’t played guitar at all since last Friday.  I’ve also only written new fiction on a few of the days—Friday, Monday, and Wednesday, I think—since the last major post.  Still, on the days I wrote, I got a surprisingly good amount of work done, I guess.  It seems as though Extra Body is taking longer than it really ought to take, but once it’s done, I’m going to try to pare it down more than I have previous works, since my stuff tends to grow so rapidly.

I’ve been trying to get into doing more studying and “stuff” to correct the fact that I didn’t realize my plans to go into Physics when I started university.  I had good reasons for this non-realization, of course, the main one being the temporary cognitive impairment brought about by heart-lung bypass when I had open heart surgery when I was eighteen.

I’m pretty sure I’ve written about that before, but I didn’t know about it at the time, and I didn’t learn about it until I did the review paper for my fourth-year research project in medical school.  I just felt discouraged and stupid, though I consoled myself by studying some truly wonderful works of literature as an English major, including once taking two Shakespeare courses at the same time.  That was great!

It’s always nice to learn about things, all other things being equal.  I don’t think there are pieces of true information about the world that it is better not to know.  Our response to learning some intimidating truth about the greater cosmos may not be good, but the fault then lies not with the stars but with ourselves.  If you truly can’t handle the truth, then the problem is with you, not with the truth.

Of course, knowing what is true is generally not simple, except about simple things, and often not even about those.  This is the heart of epistemology, the philosophical branch that deals with how we know what we know when we know it, so to speak.  The subject may seem dry at times, especially when it gets weighed down by jargon that serves mainly just to keep lay people from chiming in on things—at least as far as I can see—but it is important and interesting at its root.

Not that there can’t be good reasons for creating and using specific and precise and unique terms, such as to make sure that one knows exactly what is meant and doesn’t fall into the trap of linguistic fuzziness, which often leads to misunderstanding and miscommunication.  That’s part of the reason most serious Physics involves mathematical formalism; one wants to deal with things precisely and algorithmically, in ways that allow one to make testable and rigorous predictions.

Physicists will sometimes say that they can’t really convey some aspect of physics using ordinary language, that you have to use the math(s), but that can’t be true in any simplistic sense, or no one would ever be able to learn it in the first place.  Even the mathematics has to be taught via language, after all.  It’s just more cumbersome to try to work through the plain—or not so plain—language to get the precise and accurate concepts across.

And, of course, sometimes the person tasked with presenting an idea to someone else doesn’t really understand it in a way that would allow them to convey it in ordinary language.  This is not necessarily an insult to that person.  Richard Feynman apparently used to hold the opinion that if you truly understand some subject in Physics, you should be able to produce a freshman-level lecture about it that doesn’t require prior knowledge, but he admitted freely when he couldn’t do so, and was known to say that this indicated that we—or at least he—just didn’t understand the subject well enough yet.

I don’t know how I got to this point in this blog post, or indeed what point I’m trying to make, if there is any point to anything at all (I suppose a lot of that would depend on one’s point of view).  I think I got into it by saying that I was trying to catch up on Physics, so I can deal with it at a full level, because there are things I want to understand and be able to contemplate rigorously.

I particularly want to try to get all the way into General Relativity (also Quantum Field Theory), and the mathematics of that is material I never specifically learned, and it is intricate—matrices and tensors and non-Euclidean geometry and the like.  It’s all tremendously interesting, of course, but it requires effort, which requires time and energy.

And once other people have come into the office and the “music” has started, it’s very hard for me to maintain the required focus and the energy even in my down time, though I have many textbooks and pre-textbook level works available right there at my desk.  I’ve started, and I’m making progress, but it is very slow because of the drains on my energy and attention.

If anyone out there wants to sponsor my search for knowledge, so I wouldn’t have to do anything but study and write, I’d welcome the patronage.

But I’m not good at self-promotion, nor at asking for help in any serious way.  I tend to take the general attitude that I deserve neither health nor comfort in life, and I certainly don’t expect any of it.  I’m not my own biggest fan, probably not by a long shot.  In fact, it’s probably accurate to say that I am my own greatest enemy.

Unfortunately, I’m probably the only person who could reliably thwart me.  I’m sure I’m not unique in this.  Probably very few people have literal enemies out there in the world, but plenty of people—maybe nearly everyone—have an enemy or enemies within.  This is one of the things that happens to beings without one single, solitary terminal goal or drive or utility function, but rather with numerous ones, the strengths of which vary with time and with internal and external events.

I’ve said before that I see the motivations and drives of the mind as a vector sum in very much higher-dimensional phase space, but with input vectors that vary in response to outcomes of the immediately preceding sum perhaps even more than they do with inputs from the environment.  I don’t think there will ever be a strong way fully to describe the system algorithmically, though perhaps it may be modeled adequately and even reproduced.  This is the nature of “Elessar’s First Conjecture”:  No mind can ever be complex enough to understand itself fully and in detail*.

A combination of minds may understand it though—conceivably.  Biologists have mapped the entire nervous system of C. elegans, a worm whose precisely defined nervous system has an exact number of neurons, and of course, progress is constantly being made on more advanced things.  But even individual neurons are not perfectly understood, even in worms, and the interactions between those nerves and the other cells of the body are a complex Rube Goldberg machine thrown together from pieces that were just lying around in the shed.

Complexity theory is still a very young science.

And the public at large spends its energy doing things like making and then countering “deep fakes” and arguing partisan politics with all the fervor that no doubt the ancient Egyptians and Greeks and Romans and the ancient Chinese and Japanese and Celts and Huns and Iroquois and Inca and Aztecs and Mayans and everyone else in ancient, vanished, or changed civilizations did.  They all surely imagined that their daily politics were supremely important, that the world, the very universe, pivoted on the specifics of their little, petty disagreements and plans and paranoias**.

And so often so many of them, especially the young “revolutionaries”, whose frontal lobes were far from fully developed, were willing to spill the blood of others (and were occasionally even willing to sacrifice themselves) in pursuit of their utopian*** imaginings.  This is true from the French Revolution to the Bolsheviks to the Maoists and the Killing Fields, and before them all the way back to the Puritans of Salem, and the Inquisition, and the Athenians who executed Socrates, and the killers of Pythagoras****, and the millions of perpetrators of no-longer-known atrocities in no-longer-known cultures and civilizations.

And then, of course, we have the current gaggle of fashionably ideological, privileged youth, who decry the very things that brought them all that they take for granted, and who will follow in the blood-soaked footsteps of those I mentioned above—l’dor v’dor, ad suf kul hadoroth, a-mayn (from generation to generation, until the end of all generations, amen).

In the meantime, I’ll try to keep writing my stories, and try to keep learning things, and if I’m able to develop an adequate (by my standards) understanding of General Relativity and Quantum Field Theory, it’s just remotely possible that I might even make legitimate contributions to the field(s).  But more likely I’ll self-destruct, literally, well before any of that happens.

I’ve probably gone on too long already, as has this blog post.  I thank you for your patience with my meanderings.  Please try to have a good day, and I hope those of you who celebrate it are having a good Passover.

TTFN


*This implies that Laplace’s Demon could not be within the universe about which it knows the position and momentum of every particle and the strength of every force.  It needs to be instantiated elsewhere.

**Should that be “paranoiae”?  It feels like that ought to be the formal way of putting it, but Word thinks it’s misspelled.

***Not to be confused with “eutopian”.  Utopia means “no place”, whereas Eutopia would mean “good place” or “pleasant place” or “well place”.

****He was caught despite a head start, so I’ve heard, because he refused to cross a bean field, believing that beans were evil.  He was a weird guy.  It’s apparently from his followers that the term “irrational”—which originally just meant a number that cannot be expressed as the ratio of two whole numbers—developed its connotation as “crazy” or “insane”.  They didn’t like the fact that irrational numbers even existed.  Too bad for them; there are vastly more irrational numbers than rational ones…an uncountable infinity versus a “countable” infinity.  It’s not even close.

There may be no firm fundament but is there a fun firmament?

It’s Tuesday morning, now, and I’m writing this on my laptop computer, mainly to spare my thumbs, but also because I just prefer real typing to the constrictive and error-ridden twiddling of virtual buttons on a very small phone screen.

Speaking of the day, if the Beatles song Lady Madonna is correct, then it’s still Tuesday afternoon, and has been at least since last Tuesday, since “Tuesday afternoon is never-ending”.  Of course, if Tuesday afternoon really is never-ending, then it has been Tuesday afternoon ever since the first Tuesday afternoon.  From a certain point of view, this is trivially the case.  After all, every moment after noon on the first Tuesday that ever happened could be considered Tuesday afternoon—or, at least, “after Tuesday noon”, if you will.

Enough of that particular nonsense.  I only wrote that because there’s nothing sensible about which to write that comes to my mind.  But, of course, in a larger sense, there is nothing “sensible” at all.

There are things that can be sensed, obviously.  I can see, hear, and touch this computer, for instance.  If I wanted, I could probably smell it, though I think its odor is likely quite subdued.  But I mean “sensible” in the more colloquial, bastardized, mutated sense—as in the word “sense” just there—which has to do with something being logical, reasonable, rational, coherent, that sort of thing.  Indeed, it has to do with things having meaning.

Deep down, though, from the telos point of view, there is no true, inherent meaning to much of anything, as far as anyone can see.  Certainly no one has ever demonstrated, or convincingly asserted, any such meaning that I have encountered at any point in my life.

Of course, people have beliefs and they have convictions, and humans assign meanings to various things.  All the words I have used in writing this post so far, and all the words I will use henceforth, have “meanings”, but those are invented meanings.  There is nothing in the collection of letters—nor indeed in the shapes of the letters themselves, nor the way we put them down on paper or a screen—that means anything intrinsically.  They were all invented, like justice and morality and the whole lot of such things.

That something is invented doesn’t mean it isn’t real, of course.  Cars are an invention, and only a fool (in the modern world) would deny that cars are real.  But they are not inherent to the universe; they are not in any sense fundamental.

In a related sense, even DNA and the protein structures for which it codes are very much not fundamental; they are quasi-arbitrary.  Of course, one cannot make DNA or RNA or proteins out of substrates for which the chemistry simply will not hold together.  But the genetic code—the set of three-nucleotide “words”, the codons, each of which is associated with a given amino acid (or a stop signal) as genes are translated into proteins—is arbitrary.  There’s nothing inherent in any set of three nucleotides that makes it associate with some particular amino acid.
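To make that “arbitrary lookup table” point concrete, here is a minimal sketch in Python (purely illustrative; the table and function names are my own, and only four of the 64 real codons are shown).  Logically, translation is nothing more than a dictionary lookup whose particular pairings are frozen accidents of history:

```python
# A tiny slice of the standard genetic code, written as a plain lookup table.
# The assignments below are real (AUG -> methionine, UAA -> stop, etc.), but
# nothing in the chemistry of the three letters forces these pairings; they
# are conventions locked in by evolutionary history.
CODON_TABLE = {
    "AUG": "Met",   # methionine; also the usual start signal
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # one of the three stop codons
}

def translate(mrna: str) -> list[str]:
    """Read an mRNA string three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")  # "?" = not in this toy table
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The real cell does this with tRNAs and a ribosome rather than a dictionary, of course; the lookup-table structure is the analogy, not the mechanism.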

This sort of thing took me quite a long time to realize as I was growing up and trying to understand biology and chemistry and such.  What, for instance, was the chemical reaction with, say, adrenaline that made things in the body speed up and go into “fight or flight” mode, as it were?  How was it that aspirin chemically interacted with bodies and nervous systems to blunt pain?  How many possible chemical reactions were there, really?  It was mind-boggling that there could be so many reactions, and that they could all produce such disparate effects on various creatures.

When finally I was shown the real nature of such things, it was definitely a scales-dropping-from-eyes moment.  There is nothing inherent in the chemistry of DNA, or of drugs or hormones, that produces their effects.  There is no inherent “soporific” quality to an anesthetic.  You could give some alien with a different biology a dose of Versed that would kill a human, and at most its effects would be those of a contaminant.

It’s all just a kind of language—indeed, it’s almost a kind of computer language, and hormones are just messengers*, which are more or less arbitrary, like the ASCII code for representing characters within computer systems.  Likewise, there’s nothing in the word “cat” that has direct connection with the animal to which it refers.  It’s just keyed to that creature in our minds, arbitrarily, as is demonstrated by the fact that, for instance, in Japan the term is “neko” (or, well, it sounds like that—the actual written term is ねこ or 猫).
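The ASCII comparison can be made concrete the same way.  The code points printed below are the real ones, but the pairing of symbol and number is a committee convention, not a fact of nature (a quick Python check, in the same spirit as the sketch above):

```python
# Character-to-number assignments are conventions, not laws of nature.
for ch in "cat":
    print(ch, ord(ch))        # c 99, a 97, t 116

for ch in "ねこ":             # "neko", the hiragana spelling used above
    print(ch, hex(ord(ch)))   # ね 0x306d, こ 0x3053
```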

Of course, there are things in the universe that, as far as we can tell, are fundamental, such as quantum fields and gravity and spacetime itself.  But even these may yet peel away and be revealed to be arbitrary or semi-arbitrary forms of some other, deeper, underlying unity, as is postulated in string theory, for instance.

The specific forms of the fundamental particles and forces in our universe may—if string theory and eternal inflationary cosmology, for instance, are correct—be just one of a potential 10^500 or more** possible sets of particles and forces, determined by the particular Calabi-Yau “shape” and configuration of the curled-up extra dimensions of space that string theory hypothesizes.  So, the very fundamental forces of nature, or at least the “constants” thereof, may be arbitrary—historical accidents, as much as are the forms and specifics of the life that currently exists on Earth.

And what’s to say that strings and branes and Calabi-Yau manifolds are fundamental, either?  Perhaps reality has no fundament whatsoever.  Perhaps it is a bottomless pit of meaninglessness, in which only truly fundamental mathematics are consistent throughout…if even they are.

I’m not likely to arrive at a conclusion regarding these matters in a blog post written off-the-cuff in the morning while commuting to the office.  But I guess it all supports a would-be Stoic philosophical ideal, which urges us to let go of things that are outside our control and instead try to focus on those things over which we have some power:  our thoughts and our actions.

Of course, even these are, at some deeper level, not truly or at least not fully ours to control—we cannot affect the past that led to our present state, after all, and the future is born of that present which is born of that past over which we have no control.  But, for practical purposes, the levers that we use to control ourselves are the only levers we have to use.

We might as well keep a grip on them as well as we can, and not worry too much about things that are not in our current reach.  Though we can try to stretch out and limber up, maybe practice some mental yoga, to try to extend that reach over time, I suppose.  But that’s a subject for some other blog post, I guess; this one has already gone on long enough.


*For the most part.  Things like cholesterol and fatty acids and sugars—and certainly water and oxygen—and other fundamental building blocks do have inherent chemical properties that make them useful for the purposes to which bodies put them.  Then again, words can have tendencies that make them more useful for some things than others, too.  “No” and “yes” are short and clear and clearly different sounds, for instance; it makes sense that such words evolved to be such important, fundamentally dichotomous signals.

**That means 10 x 10 x 10 x 10… until you’ve multiplied 500 tens together.  You may know that a “googol” is a mere 10^100, and that in itself is already roughly 20 orders of magnitude (100,000,000,000,000,000,000 times!) larger than the number of protons and neutrons estimated to exist in the visible universe.  So 10^500 is a number far vaster than anything that could ever be counted out within the confines of the universe that we can ever see.  There’s not enough space, let alone enough matter, with which to tally that many of anything.  It’s a googol times a googol times a googol times a googol times a googol!
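Put as equations, and using the usual rough estimate of about 10^80 protons and neutrons in the visible universe (my assumption here, though it is what the “roughly 20 orders of magnitude” figure implies):

\[
10^{500} = \left(10^{100}\right)^{5}, \qquad \frac{10^{100}}{10^{80}} = 10^{20} = 100{,}000{,}000{,}000{,}000{,}000{,}000
\]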