In a better blog than this, I shall desire more love and knowledge of you

Hello and good morning.  It’s Thursday, and I’m writing this post on my lapcom.  I feel as though I ought to write these posts only on the computer (not that smartphones are not computers, but cut me a little slack on this, please), and I would be more inclined to do so if Microsoft would stop making Aptos the default font!!!!!

If I could go back in time and change something, that’s one of the things I would be inclined to change.  If I found that there was one person mainly responsible for this new font, well…I don’t know whether I’d go all Terminator on them, killing their mother before they were ever born, or simply eliminating them as a child, but something needs to be done to erase the stain of this horrible font from existence.

Certainly, if I were given* absolute power over the world, from this moment forward, one of the petty things I would do (I would try to keep the petty things to a very bare minimum, trust me**) is to eliminate that font from any and all standard computer systems anywhere.  I would probably allow for individuals to select the font if they really like it, but would not let them use it on anything but internal work between people who also like the font.

Also, I would probably mark people who chose the font freely for a visit from my secret police.

I’m kidding.  I despise the very notion of thought crime, let alone aesthetic policing in private matters, even though some people’s quality of thought sometimes feels like a crime against nature.  But, of course, there cannot actually be crimes against nature.  Nature does not punish one for disobedience to its laws; it’s simply not possible to do anything but follow them.

That’s one reason why I truly despise headlines like “The new finding by Hubble that breaks physics!” and whatnot.  Not only are they plainly clickbait, they are stupid clickbait.  I don’t know whether, in any specific instance, it’s the headline writer or the author of the attached article who produces the headline, but in either case, when I see one like that, I conclude that whoever wrote it clearly doesn’t understand physics very well.  Nor do they understand the nature of scientific discovery and advancement.  Because of that, I am far less likely to read the attached article (or watch the video) or even click on its link.

Nothing can break physics.  If you find something that seems to violate physics as you understand it, what you have found is not a violation of physics but rather a place where your understanding of physics is clearly incorrect.  This is far from a horrible thing.  This is how progress in physics (and in other sciences) is made:  by finding the places where our “understanding” doesn’t predict or describe what actually appears to be happening.  The world cannot be “wrong”, so our understanding of it must be, and will need to be revised.

That’s progress.

One should be hesitant to give too much “trust” to anyone who refuses to change their mind.  One of the best lines in a Doctor Who episode (not a truly great episode, maybe, but it has a wonderful speech by the Doctor) is after the Doctor has said to the “villain” (who goes by the human name Bonnie, though she is not human) “I just want you to think.  Do you know what thinking is?  It’s just a fancy word for changing your mind.”

Bonnie responds, “I will not change my mind.”

And the Doctor says, “Then you will die stupid.”***

This is simply true.  If you never learn that you were wrong about something, if you never update your credences or think about things in a new way, you will never learn anything new or develop any better understanding of the world than you did when you formed those credences.  Or, to paraphrase Eliezer Yudkowsky, if no state of the world can change the state of your retina and how you perceive that state, that’s called being blind.
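Since “updating your credences” has a precise meaning, here’s a minimal sketch of Bayesian updating in Python (the function name and the numbers are purely illustrative, not from any particular library):

```python
def update_credence(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: posterior credence in a hypothesis after seeing
    evidence, given how likely that evidence is under each possibility."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start undecided (0.5) and observe evidence three times likelier
# if the hypothesis is true (0.9 vs 0.3): credence rises to 0.75.
posterior = update_credence(0.5, 0.9, 0.3)
print(round(posterior, 3))  # 0.75
```

The point of the arithmetic is the Doctor’s point: if no possible evidence would move the number, you aren’t thinking, you’re just asserting.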

I like to refer to Yudkowsky-sensei a lot, but that’s because he has said a lot of bright and interesting things, and he has said them well.  It’s also nice to know that there are some highly intelligent and thoughtful people in the world—clearly there are, or humans would long since have gone the way of the trilobites—because the idiots and the assholes make so much noise.

The best evidence I see for the fact that most people are good or at least benign (overall) is that civilization still exists, and has done so for a long time.  It is far easier to destroy than to create or even to maintain; the second law of thermodynamics tells us that things will fall apart even if we do nothing at all to break them (it says that more or less, anyway—that’s a bit of a bastardization of the proper, mathematical law, but it is related and implicit).

The fact that civilization still exists—so far, at least—seems to indicate that there must be a lot of people working to maintain and sustain and improve it, because we can easily see how hard so many people seem to be trying to make it crumble****.

Assholes tend to make a lot of noise in the world, but they’re pretty much all full of shit and “hot air”.  It’s worth it to keep this in mind, because there have always been plenty of such nether orifices out there, spewing their flatus everywhere like perverse crop-dusters.  But the evidence strongly suggests that they are not the norm; they are just the noisiest.

I suppose that’s a good moral of sorts on which to end this post:  Be willing, even eager, to change your mind when warranted, and try not to let the assholes make you think the world is no better than a camp latrine (even if you’re one of the assholes sometimes, which you are, since we all are, sometimes*****).

Though, to be fair, I am hardly the person to be giving that last piece of advice unironically.

TTFN


*If you must be given absolute power, do you actually then have absolute power?  This is similar to the old song that says “Don’t ever take away our freedom.”  If you have to beseech someone not to take away your freedom, you’re not free, and if you have to be given power, your power is clearly not absolute.

**Or don’t, if that’s not in your character.  I’ve often spoken implicitly against the concept of trust, stating that I don’t feel that I can actually, truly trust any living person.  It’s calculated risks all the way down, which is empirically true if nothing else.  So, I can hardly scold someone if they don’t “trust” me.  Go ahead, form your own conclusions.  I do exhort you, though, to be as rational as possible when you form them, with your conclusions drawn as a consequence of the evidence and argument, not with your evidence and argument being curated based on your knee-jerk or at least hasty “conclusion”.

***He then proceeds to lay out the alternatives; he’s not making a threat, he’s making a point.

****When you read that, did you immediately think of your own least favorite political or other public figure, or perhaps of the people you encounter who disagree with your politics or religion or dietary preference or what have you?  Be careful.  Us/them thinking is not usually conducive to formulating true and accurate pictures of reality (though it did inspire at least one beautiful song).

*****We’re also all deuterostomes (I’m assuming only humans are reading this).  Look it up.  It’s kind of funny.

I had a good headline idea, but it slipped my mind

I was surprised by how much response I’ve received to yesterday’s blog (and that of the day before) as well as the number of comments.  It’s very gratifying, and I appreciate it very much.  Thank you.

As for today, well, I am really not sure what to write, because yesterday’s blog was‒from my viewpoint, anyway‒about as free-form and chaotic and tangential and stochastic (not to say redundant) as anything I’ve written.  But maybe that’s just the experience I had while writing it; maybe it doesn’t actually come across that way to the reader(s).  It’s difficult for me to know, because even more than reading, writing is a solitary thing.

That’s not to say that people can’t write together.  Back when I was a teenager, I co-wrote some partial stories with one of my best friends, and we did it sitting next to each other and talking things through aloud as we typed.  That was a pretty active and interactive collaboration.

Unfortunately, I don’t think we got very far with it.  We made much more progress writing silly computer programs in BASIC on the Apple II+ my father had bought.  This was in the days before there were any ISPs, as far as I know, though we did dial in to a couple of local bulletin board services from time to time with my dad’s old modem (I think it was 600 baud*, but it may have been some even divisor or even a very small multiple of that number).

One time, I even had a conversation with a girl (!) who was helping run one of the bulletin boards.  She was (supposedly) about my age, and obviously she was much more into computers than I was at the time.  There was never (in my regretful mind) any possibility of an ongoing interaction, let alone a physical meetup or anything, however.  Even then, though I was reasonably confident within my local group of friends and teachers, I was painfully shy and awkward, and could never make conversation other than about specific topics.

Goal-directed interactions are okay, as they tend to flow naturally from the process involved.  This is why I’ve made nearly all my friends at school or at work.  Purely social interactions were never really an option for me, except with people I already knew quite well.  And having a successful romantic relationship was unfortunately not in the cards for me.

It still isn’t, as far as I can tell.  I suspect the problem is that there’s no other member of my true species on this planet.  I did come reasonably close, or so I thought for a long time, but I’ve been divorced now about five years longer than I was married, so I apparently wasn’t all that successful.

Okay, well, sorry about the weird, ancient info-dump.  It’s not nearly as cool as the data that’s coming in from the recently-activated Vera Rubin observatory.  That, at least, is the sort of thing that helps restore my faith in humanity.  Or, well, maybe it would be more accurate to say that it shifts my Bayesian credence slightly away from the “humans are without net redeeming value” end and toward the “humans may not be all that bad in the end” end.

The credence is still quite low, though.  By which I mean I’m closer to the first end than the second most of the time.

Things might be a little bit better if the sort of people who do things like setting up the Vera Rubin telescope, and who set up and launched and now use the James Webb telescope, and the members of the former human genome project, and the people who study cognitive neuroscience, were the sort of people working in government, writing and administering laws.  Generally speaking, though, the first type of people don’t tend to want to do the governing nonsense, probably not least because a lot of that business is not about everyone trying to do the best they can for the people they represent.

The people who want to do astronomy and mathematics and biology and geology and neuroscience and meteorology and so on are probably some of the best people to do those things‒not just from their point of view but also from the viewpoint of civilizational benefit.  Unfortunately, many of the people who want to go into government and politics tend to be some of the worst people for those jobs, from the point of view of civilization.

I can’t say they are the worst possible group for the job.  The truly disaffected and uninterested or the misanthropic and nihilistic might well do a worse job even than the lot who do it now.  This is despite the fact that most of those latter people act on shallow and immediate self-interest.  Self-interest can do the job adequately when the incentives are structured such that one’s self-interest is served by serving the interests of the people of one’s community/city/nation/species.

Those incentives are very tricky to manage, unfortunately.  It would be much better if we could find people who had real enthusiasm and curiosity and an actually somewhat scientific approach to government.  If only we could find a group as committed to seeing a truly and objectively well-run society‒in which everyone was better off than they would have been in nearly any other‒as the group who set up the Vera Rubin observatory was committed to actually getting the observatory done so they and we could learn ever more about the universe on the largest scales, things might be quite a bit better than they are.  Maybe not, but my credence leans more toward the “maybe so” end.

Alas, politics and government were not born of human curiosity and creativity‒the things almost entirely unique to the species‒but of the old, stupid primate dominance hierarchy and mating drives, which are evolutionarily understandable, but which don’t make for pretty, let alone beneficial, government.  Think about it.  Would you want a bunch of self-serving apes doing the jobs of government?

Oh, wait!  That is the group doing the jobs of the government!  Of course, it’s also the group being governed.  Uh-oh.  This could be boding better**.

Not that being recognized as an ape is an insult per se; apes are all that we’ve had available, and they’re the best that’s come along so far.  Some of them are really not so bad.  Some of them figure out ways to launch immense telescopes into space, not so very long after one of them first created the telescope.  Some of them figure out ways to cure and even prevent unnecessary disease.  Some of them figure out ways to turn simple manipulations of base-two arithmetic into information processing that can be scaled up to any kind of logic and information that can be codified.
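That last claim‒that simple manipulations of base-two arithmetic scale up to any codifiable logic‒can be made concrete.  The NAND gate alone is “functionally complete”: every other Boolean operation can be assembled from it.  A toy sketch in Python (the helper names are mine, not any library’s):

```python
def nand(a: int, b: int) -> int:
    """NAND: outputs 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

# Every other Boolean operation, built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Verify against Python's own bitwise operators:
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```

From gates like these you get adders, from adders arithmetic, and from there everything the apes have scaled up since.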

Some of them just write blogs and sometimes write stories and songs and such***.  But hopefully, that’s not too detrimental an endeavor.


*A baud is, strictly speaking, one signaling symbol per second; on those early modems each symbol carried a single bit, so the baud rate and the bit rate were the same thing.  Not a meg, not a K, not even a byte, but a bit‒a binary digit, a one versus a zero, on or off.  If you listened to the sound of the modem, you could imagine you could almost hear the individual bits.

**Tip of the hat to Dave Barry’s “Mister Language Person”.

***Though I have done my very small part in advancing human scientific knowledge, in that I am a co-author and co-investigator on an actual published scientific paper.

Our wills and fates do so contrary run, that our devices still are overthrown; Our blogs are ours, their ends none of our own.

Hello and good morning.  It’s Thursday, the 26th of February in 2026, a date that’s only very slightly interesting whether you write it as 2-26-2026 or 26-2-2026.  The fact that you have repeated 2s and repeated 26s is somewhat entertaining, but the zero throws potential symmetries off, making it not nearly as much fun as it could conceivably be.  It’s a shame, really.  I suppose you could write it as 26-02-2026 and rescue a bit of symmetry, but that feels like reaching.  It’s not quite symmetrical anyway, unless one is writing in base-26 or higher.  No, wait, even that wouldn’t work.
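For the curious, here’s a throwaway Python check of that (lack of) symmetry, treating a date as “symmetrical” if its digit string reads the same backward:

```python
def digits_palindromic(date_string: str) -> bool:
    """True if the digits of a formatted date read the same reversed."""
    digits = [c for c in date_string if c.isdigit()]
    return digits == digits[::-1]

print(digits_palindromic("2-26-2026"))   # False
print(digits_palindromic("26-02-2026"))  # False; that zero spoils it
print(digits_palindromic("02-02-2020"))  # True, a genuine palindrome day
```

So February 2nd, 2020 had the fun that February 26th, 2026 merely gestures at.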

I don’t know about what I’m going to write this morning.  That in itself, of course, is nothing unusual.  But I don’t feel that I have much to say about anything at the moment.  I don’t want to get into my depression and ASD and anxiety and chronic pain and insomnia and just general moribund state, because I’m sure no one wants to hear about it anymore, and in any case, there seems to be no way anyone can do anything about it that’s useful, which makes it all the more frustrating.  Writing about it certainly hasn’t cured or even improved my state much, if at all.

Anyway, as I said the other day, you have been put on notice.  Unless you just started reading my blog for the first time yesterday, you’ve no right to act fucking surprised no matter what happens.

Okay, that’s that out of the way.

Now, let’s see, what should I write today?  I could discuss some topics in science, especially physics, though I also have literal, legally recognized expertise in biology, and I know a lot about quite a few other branches of science as well.  This is because I have always been curious about how the world, the universe, actually and literally works on the largest and on the most fundamental scales.

I mean, yes, humans also have their rules and laws and social mores and antisocial morays and all that nonsense, but if you step back even a bit, you can see nearly all human behavior encapsulated by basic primatology.  If you know how the various monkeys and gibbons and gorillas and chimpanzees behave‒especially their commonalities‒human behavior almost always fits right in.  It’s usually not even very atypical.

That doesn’t make the specifics of behavior very easily predictable in any given case, necessarily; then again, we understand an awful lot about the weather and the climate, but the specifics of tomorrow’s weather are tough to predict precisely and accurately, let alone next week’s weather.  Nevertheless, the physics of the longer-term climate effects of certain kinds of atmospheric gases is almost trivial.

Anyway, humans are too annoying to be very interesting, except in special circumstances.  In this, they are perhaps a bit like cockroaches.  From the point of view of a scientist who studies them, they can be interesting, and from just the right angle and with the right detachment, they can even be beautiful (or some of them can).  But overall, they are merely large masses of highly redundant little skitterers, just doing their shit-eating and reproducing and infesting almost every possible location.

Which type of creature did I mean to describe just now?  See if you can figure it out.

Of course, on closer scales, cognitive neuroscience and neurodevelopment and related stuff, such as “neural” networks, “deep” learning, and other such areas are fascinating.  One thing interesting about them is how all the things that brains and computers and so on are and do are implicit in the laws of physics‒clearly they are some of the things that stuff in the universe can do‒and yet, for all we know, they have only ever happened here, just this once in all the vast and possibly infinite cosmos*.

And for all we can tell, given the human proclivity to plan about 20 Planck units ahead and then after that trust to luck, this could be the only place they occur, and their time will not continue much longer, certainly not on a cosmic scale.

I could be wrong about that…except in the sense that, since I am stating it merely as one of the possibilities, I am not actually wrong at all.  Even if humans do survive into cosmic time scales and become cosmically significant, it will remain true that things could have gone otherwise‒that humans might have gone extinct without ever getting beyond Earth.

Of course, depending on the question of determinism, I suppose one could say that if humans (or their descendants) become cosmically significant then there literally was nothing else that could have happened, at least as seen from outside, at the “end”.

On the other hand, if Everettian quantum mechanics is the best description of the fundamental nature of reality, then in some sense, every quantum possibility actually happens “somewhere” in the universal quantum wave function, though those variations may not include all conceivably possible human outcomes.

Some things that seem as though they should be possible may simply never happen to occur (or occur to happen?) anywhere in the possible states of the universe.  That feels as though it should be unlikely, given how many possible states can be locally evolved in the quantum wave function, but I don’t think we know enough to be sure.

Okay, well, I vaguely hope that this has been mildly interesting and perhaps thought provoking.  It would be enjoyable to get more feedback and thoughts, but I don’t have a very large readership, and only a certain small percentage of people ever seem to interact with written material in any case, so I’m probably lucky to get the feedback that I get.

TTFN


*With the inescapable caveat that, if the universe is spatially and/or temporally infinite, and if, as it seems, there are only a finite number of distinguishable quantum states in any given region of spacetime (the upper limit of which is set by the surface area of an event horizon the size of the given region), then every local thing that happens, and all possible variations thereof, “happen” an infinite number of times.  But given that all these regions are more or less absolutely physically distinct and incapable of “communicating” with one another, they can be considered isolated instances in a “multiverse” rather than parts of the same “local universe”.
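The bound gestured at in this footnote is the Bekenstein-Hawking entropy: the maximum entropy of a region scales with the area $A$ of a horizon enclosing it, not with its volume, so the number $N$ of distinguishable quantum states is finite for any finite region:

```latex
S_{\max} \;=\; \frac{k_B\, c^3 A}{4 G \hbar} \;=\; k_B\,\frac{A}{4\,\ell_P^2},
\qquad
\ell_P = \sqrt{\frac{\hbar G}{c^3}},
\qquad
N \;\lesssim\; e^{S_{\max}/k_B} \;=\; e^{A/4\ell_P^2}.
```

Finite $A$ means finite $N$, which is all the infinite-repetition argument actually needs.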

Are gravity and frivolity truly opposites?

It’s Wednesday morning (not quite five o’clock yet) and it is February 25th.  There are only ten more shopping months until Newtonmas*.

For those of you who don’t know (and as a reminder for those of you who do know) Isaac Newton was born on December 25th, 1642 (AD**).  Now, there is a parenthetical here:  Newton was born on December 25th by the Julian*** calendar, which was the one used in England at the time of his birth.  By the Gregorian**** calendar, Newton would have been born in early January of 1643.

This might seem to imply that December 25th nowadays shouldn’t be considered Newtonmas, but of course, it’s a closer fit than celebrating the birth of Jesus on that day; biblical scholars have reportedly concluded that Jesus was probably born at some other time of year, perhaps in spring or summer.  As with many things, “The Church” appropriated the popular holidays celebrating the winter solstice and grafted Christian religious significance onto them.

There’s nothing particularly bad about that.  All these holidays and divisions of the year are fairly arbitrary (though celebrating solstices and equinoxes is common enough in multiple cultures, which makes sense because these are objective events in any given year that can be noticed by any culture that is paying attention).

The length of a year is a concrete, empirical fact, as is the length of a day and the length of a lunar orbit around the Earth.  None of them is a simple whole-number multiple of any other, unfortunately‒their periods are incommensurable, like waves that are not harmonically related.

I don’t know how long it would take for their “waves” to come back into some primordial alignment and “start over”, but it’s probably moot, because the length of a day and of a lunar orbit and of the orbit of the Earth are changing slowly.  The moon, for instance, is moving steadily (but very slowly) away from the Earth over time, and so its time of orbit is increasing (since things that orbit farther away orbit more slowly).
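For what it’s worth, the closest thing to a classical sun-moon “realignment” is the Metonic cycle: 19 solar years very nearly equal 235 lunar months.  A quick check with approximate mean values (both of which, as noted, drift over deep time):

```python
# Approximate mean values, in days; both drift slowly over deep time.
TROPICAL_YEAR = 365.2422
SYNODIC_MONTH = 29.530589  # new moon to new moon

# Metonic cycle: 19 solar years nearly equal 235 lunar months.
days_in_19_years = 19 * TROPICAL_YEAR     # about 6939.60
days_in_235_months = 235 * SYNODIC_MONTH  # about 6939.69
mismatch_hours = abs(days_in_19_years - days_in_235_months) * 24
print(round(mismatch_hours, 1))  # 2.1 -- roughly two hours of slippage per cycle
```

A couple of hours per 19 years is close enough that calendars from Babylon to the Hebrew calendar to the Christian computation of Easter have leaned on the cycle, but it never quite “starts over” exactly.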

Kepler’s third law says that the period of a planet’s orbit around the sun is proportional to the 3/2 power of the length of the semimajor axis of its orbit, and the Newtonian derivation shows that the same relationship holds for any body orbiting a much more massive one, including the Moon around the Earth.  The laws of gravity are as universal as anything we know.  Indeed, there are materials that are opaque to light, but as far as we know, there are none that are opaque to gravity.  And because of that 3/2 power, an orbit’s period always grows faster than its distance from the center does.
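As a sanity check of that 3/2 power on the lunar scale, here’s the Newtonian form of Kepler’s third law applied to the Moon (rounded constants; this simple version ignores the Moon’s own mass and solar perturbations, which is why the answer comes out slightly high):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
A_MOON = 3.844e8     # semimajor axis of the lunar orbit, m

# Newtonian form of Kepler's third law: T = 2*pi*sqrt(a**3 / (G*M))
period_s = 2 * math.pi * math.sqrt(A_MOON**3 / (G * M_EARTH))
period_days = period_s / 86400
print(round(period_days, 2))  # about 27.45; the observed sidereal month is 27.32 days

# The 3/2 power itself: doubling the orbital distance multiplies
# the period by 2**1.5, roughly 2.83.
```

Not bad for four lines of arithmetic, and it’s why the receding Moon’s month keeps lengthening faster than its distance grows.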

The inability of anything we know of to block gravity is one thing that makes me take seriously the notion that, at some level, there could be more than three spatial dimensions.  If gravity is not confined to three dimensions, then nothing that is so confined could stop it; it would merely flow around any obstacle (maybe gravitational waves, for instance, can even diffract around matter and energy, though that wouldn’t by itself imply higher dimensions).

This is related, indirectly, to the fact that it is impossible to tie a knot in a string in 4 or higher spatial dimensions.

By the way, having those extra spatial dimensions curled up tiny, as is usually presented in depictions of string theory, is not the only way for them to exist and be undetected.  If most of the forces in the world we know‒the electromagnetic force, the strong force, the weak force, and the various matter-related quantum fields‒are constrained to a 3-brane because their strings are “open-ended”, with their endpoints anchored to the brane, then we could live in such a 3-brane nested in a higher-dimensional “bulk”.  Gravity could be conveyed by closed, “looped” strings, which could pass through the 3-brane, interacting with it but not being confined to it.  This could also explain the comparative weakness of the gravitational force and might even explain dark matter (and why it is so difficult to detect).

This sounds extremely promising, maybe, but there are issues and hurdles, not the least that strings and higher spatial dimensions are very difficult to detect, if they exist.  Also, it’s very hard to pin down all the implications mathematically in a useful way.

I remember one lunch break when I was still in medical practice when I tried to see if I could work out mathematically if “dark matter” could be explained by a relatively nearby, parallel brane-universe (it would probably be more than one, but one was difficult enough) whose gravity spills over into and overlaps the gravity of our brane-universe.

Here’s a sort of reproduction of some of the scribbling I did then:

Unfortunately, though I could visualize what I was considering and get an intuitive feel for what the math would be like, my precise mathematical skills were just not up to the task of sorting it out rigorously.  Also, of course, lunch was not long enough, and I had many other things on my mind.  Anyway, findings like the “bullet cluster” provide some fairly strong evidence that “dark matter” is something physical within our three dimensions of space.

Okay, that’s enough for today.  I’ve managed not to talk about my depression and stress and self-destructive urges/wishes (except just now, of course), so I hope you’re pleased to have had those things cloaked from you today.

Take care.


*Working out the exact number of days:  December 25th is 7 days before New Year’s Day, with six days of the year remaining after it, so it’s day number 359 in the (non-leap) year.  And today is the 25th day of the second month, and January has 31 days, so today is day 56 of the year.  And, of course, 359 – 56 = 303.
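This kind of count is easy to get off by one by hand; Python’s datetime module settles it:

```python
from datetime import date

today = date(2026, 2, 25)
newtonmas = date(2026, 12, 25)

print(today.timetuple().tm_yday)      # 56  (day-of-year of February 25th)
print(newtonmas.timetuple().tm_yday)  # 359 (day-of-year of December 25th)
print((newtonmas - today).days)       # 303
```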

**Why not my usual “AD or CE?”  Because at the time, in England, it was just “anno domini”.

***Named for Julius Caesar, though as far as I know, he had no more to do with actually formulating that calendar than he had with the invention of the seventh month.  As far as we know, he wasn’t even born by the then-existing version of the Caesarean section, which was more or less always fatal to the mother, and his mother lived well beyond his birth.

****Named after Pope Gregory XIII, also known (by me) as Pope Gregory Peccary*****.  He did not formulate the newer calendar himself, but he at least commissioned the Vatican astronomers to create it once it had become obvious that the Julian calendar was drifting relative to the actual year‒its average year of 365.25 days is slightly longer than the true tropical year, so the equinoxes crept steadily earlier on the calendar.  So, the Gregorian calendar is better named than the Julian calendar, or so it seems to me.

*****The nocturnal, gregarious wild swine.

May the slope of your pain function always be negative

I’ve been thinking about something I wrote in my blog post yesterday.  I had thrown out the thought, in passing, about how it seemed as though all the things in my life that I still do are not things I necessarily do for joy or out of desire to achieve some goal, but rather they are things which are more painful not to do than to do, and so I do them.

There isn’t really a positive motivation—not the pursuit of happiness or improvement or fulfillment or enrichment.  It’s just that the feeling of stress and tension and anxiety (or whatever) regarding the prospect of, for instance, not going to work rapidly becomes worse than the equivalent feelings about going to work.

That’s not a great state of affairs.  Don’t get me wrong; it’s entirely natural.  I’ve written about this many times, this recognition of the fact that the negative experiences—fear, pain, revulsion, disgust, and so on—are the biologically most important ones.  Creatures that don’t run from danger, that don’t avoid injury, that don’t shy away from potential infection and poison, are far less likely to survive to reproduce than creatures that do those things.

We see clinical examples of people lacking some of these faculties—such as those with congenital insensitivity to pain—and while we might envy them a life without agony, it tends to be quite a short life.  Also, they tend to become immobile and deformed due to damage they do to their joints by not shifting position to improve blood flow.

In case you didn’t know, that’s one of the reasons you can’t stand completely still for very long; it’s not good for you.

But many of us, especially in the modern world, have some things that we do for positive experience.  Some of them are dubious, but food, sex, companionship/conversation, singing, dancing, all that stuff, are positive things.  Unfortunately, positive experience cannot be allowed—by biology—to last too long.

As Yuval Harari noted, a squirrel that got truly lasting satisfaction from eating a nut would be a squirrel that lived a very short—albeit fairly happy—life, and would be unlikely to leave too many offspring.

Maybe this is what happens to some drug addicts.  Maybe they really do get satisfaction or at least pleasure from drugs—and maybe that is what ends up destroying them.  At some level, that’s not truly in question, is it?  People who are addicted to drugs forego other pleasures and other positive things, but perhaps more importantly, they fail to avoid many sources of pain and fear and injury.

The reality is probably a bit of an amalgam, I suppose.  I would not say it’s a quantum superposition, though, except in the sense that everything is a quantum superposition (or, rather, a whole bunch of them).

This is one situation in which I think I’m right and Roger Penrose is wrong—a bold claim, but I think a fair one—in that I see no reason to suspect that the nature of consciousness either requires or even allows special quantum processes, other than in the trivial sense that everything* involves quantum processes.  But there’s no reason seriously to think that (for instance) microtubules can even sustain a quantum superposition internally, let alone that such a process can somehow affect the other processes of the neuron, many of which are well understood and show no sign of input from weird states of microtubules, which act mainly structurally in neurons.

If deep learning systems—LLMs and the like—have demonstrated anything, it’s that intuitive thought** does not require anything magical, but rather can be a product of carefully curated, pruned, and adjusted networks of individual data processing units, feeding backward and forward and sideways in specific (but not necessarily preplanned or even well understood) ways.  No quantum magic or neurological voodoo need be involved.

I think too many people, even really smart people like Penrose, really want human intelligence to be something “special”, to be something that cannot be achieved except within human heads, and maybe in the heads of similar creatures.  Surely (they seem to believe) the human mind must have some pseudo-divine spark.  Otherwise, we oh-so-clever humans are just…just creatures in the world, evolved organisms, mortal and evanescent like everyone and everything else.

Which, of course, all the evidence and reasoning seems to suggest is the case.

Maybe, deep down, there isn’t much more to life than trying to choose the path from moment to moment that steers you toward the least “painful” thing you can find.

Please note, I’m not speaking here about some metaphorical continuum, some number line that points toward pleasure in one direction and pain in the other.  That’s at best a toy model.  In the actual body, in the actual nervous system, pain and fear and pleasure and motivation are literally separate systems, though clearly they interact.  Pleasure is not merely the absence of pain, nor is pain merely the absence of pleasure.  Even peripherally, the nerves that carry painful sensations (which include itching, as I noted yesterday!) use different paths and different neurotransmitters than the ones that deal in pleasure and positive sensation.

Within the brain, the amygdala and the nucleus accumbens (for instances) are separate structures—and more importantly, they perform different functions.  There’s nothing magical about their locations in the brain or the particular neurotransmitters they use.  Those things are accidents of evolutionary past.

There’s nothing inherently stimulating about epinephrine, and there’s nothing inherently soothing about endorphins or oxytocin, and there’s nothing inherently motivating or joyful about dopamine and serotonin.  They are all just molecular keys that have been forged to open specific “locks” or activate (or inactivate) specific processes in parts of other nerve cells (and some other types of cells).  It’s the process that does the work, Neo, not the neurotransmitter.

This brings up a slight pet peeve I have about people discussing “dopamine seeking” (often when talking about ADHD).  I know, the professionals probably use this as a mere shorthand, but that can be misleading to the relatively numerous nonprofessionals in the world.  The brain is not just a chemical vat.  Depression and the like are not just “chemical imbalances” in some ongoing multi-level redox reaction or something; they are malfunctions of complicated processes.  Improving them should be at least as involved as training an AI to recognize cat faces, wouldn’t you think?

But one can do the latter without really knowing the specifics of what is going on in the system.  It’s just sometimes difficult, and the things you think you need to train toward or with often end up giving you what you didn’t really want, or at least what you didn’t expect.

Maybe this is part of why mindfulness is useful (it’s not the only part).  With mindfulness, one actually engages in internal monitoring, not so much of the mechanical processes happening—no amount of mere meditation can reveal the structure of a neuron—but of the higher-scale, “emergent” processes happening, and one can learn from them and be better aware.  This can be an end in and of itself, of course.  But it can also at least sometimes help people decrease the amount of suffering they experience in their lives.

Speaking of that, I hope that reading this post has been at least slightly less painful for you than not reading it would have been.  Writing it has been less painful than I imagine not writing it would have been.  That doesn’t help my other chronic pain, of course, which continues to act up.


*With the possible exception of gravity.

**I.e., nonlinear processing and pattern recognition, the kind many people including Penrose think cannot be explained by ordinary computation, a la Gödel’s Incompleteness Theorem, etc.

 

That was a weird tangent dot com?

Well, it’s Friday, the 30th of January.  We’re almost done with the first month of the year (2026).  Has it been an auspicious month?  Has it been inauspicious?  I suppose the answer to such questions will vary from person to person depending upon how their personal month has gone.  And I suppose that points toward the notion that actual auspices are certainly not any kind of reliable indicator of how the future might go, at least not without great care to separate true patterns from false ones.

On the other hand, it’s not entirely mad to try to draw some potential conclusions about the near future from what’s happening in the present and what has happened in the recent past.  That’s one of the useful skills that’s available to minds that have the capacity to note patterns‒one can try to anticipate the future based on patterns one has noticed over time, and potentially, one can try thereby to avoid outcomes that are undesirable.

Of course, humans do tend to notice patterns that aren’t actually there a lot more than ones that really are there*.  This is usually‒probably‒related to the notion of the differential detriments of different types of errors:  It’s usually more useful to see potential threats that aren’t there than it is not to see potential threats that are there.

I think anyone who stops to think about such things will recognize that an organism prone to the first type of error, seeing threats that aren’t there, will be somewhat more likely to live long enough to reproduce than one prone to the second, missing threats that are, though it may be much less comfortable and content in the meantime.  Jumping at shadows can certainly be maladaptive, and too much of it can have a net negative effect on general outcomes, but not jumping at hyenas and lions (for instance) tends to be a very short-lived habit.

This goes back to my frequent talking point that fear, the ability (and it is an ability) to become alarmed and unhappy but energized and driven to fight or flee, is going to be present in nearly every lifeform capable of movement, given enough time.  Variants that feel less fear, or none, will not tend to reproduce as much, because they are more likely to be killed in any given finite stretch of time, so whatever genetic makeup they have that leads them to lack a fear response, or to be prone to lack it, will not tend to propagate down the generations.

“Genetic makeup”, the term I used in that last sentence (go look, it’s there), made me think of a possible future technology in which people use some CRISPR-style techniques to achieve the effects that hitherto require the use of cosmetics.  They could insert genes into the cells of their cheeks, for instance, to lead them to have more pinkish pigment, or perhaps to make local blood vessels dilate for a nice blushing look, instead of having to use rouge (which is what I think the stuff is called that one applies to make one’s cheeks look pinker).  Or one could generate actual pigments in the cells of one’s upper eyelids, or increase the thickness of one’s eyelashes, all that sort of stuff.

Of course, doing this might entail risks.  Presumably, altering the genes of a given population of cells, even at the local level, could increase the risk of developing cancers, because one cannot perfectly control where genes will insert (at least not so far), and there will always be a chance of mucking up genes that regulate cell division rates.

Once one cell becomes more rapidly reproducing than its companion cells, it will tend to overpower them, in numbers anyway, over time***.  And with rapid and persistently higher rates of reproduction, there come more chances for new mutations to happen.  Those mutations that kill their cells obviously just go away more or less immediately.  Even the ones that revert their cells’ division rates back to “normal” will be quickly locally overwhelmed by the faster growing ones.  But a mutation that encourages even faster division/reproduction will quickly take hold as the dominant cell type, ceteris paribus.

And then, of course, this even more rapidly dividing population of cells will have that many more chances to develop mutations.  And so, down the line, given the billions of cells present in just one’s face, we find the chance for skin cancers to develop, once a cell line becomes so prone to reproduce itself that it cannot be constrained by any local hormonal or immune processes.
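
To make that concrete, here’s a toy calculation in Python (every number here, the 5% division-rate advantage, the one-cell-in-a-thousand starting point, is invented purely for illustration, not taken from biology): it shows how quickly compounding division rates let even a slightly faster-dividing clone dominate its neighbors.

```python
# Toy model (illustrative numbers only): a single cell dividing just 5%
# faster per round than its neighbors comes to dominate the local
# population, simply through compounding.

def clone_fraction(steps, normal_rate=1.00, fast_rate=1.05,
                   normal_start=999, fast_start=1):
    """Return the fast clone's share of the population after `steps`
    rounds of compounding growth."""
    normal, fast = float(normal_start), float(fast_start)
    for _ in range(steps):
        normal *= normal_rate
        fast *= fast_rate
    return fast / (normal + fast)

# One cell in a thousand, growing 5% faster per round:
for steps in (0, 100, 200, 300):
    print(steps, round(clone_fraction(steps), 3))
```

Starting as one cell in a thousand, the fast clone is still a small minority after 100 rounds, but by 300 rounds it is essentially the whole population, which is the point of the footnoted growth-rate analogy.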

That was a weird tangent, wasn’t it?  Although, frankly, I could change the title of my blog from “robertelessar.com” to “thatwasaweirdtangent.com” and it would not be inappropriate.

I’ll finish up today with just some basic housekeeping style stuff:

I will probably not work tomorrow, so I will probably not be writing a blog post.  But if I do write one, it will show up here.  I will certainly not be sleeping in the office tonight, but I did sleep here last night.  I had a terrible day yesterday, pain-wise, and after work I went to the train station but the train was badly crowded and there were no relatively comfortable seats available, so I gave up and trudged back to the office.

I just felt worn out, and I feared that if I did go back to the house, I might not come to the office today.  And today is payday, of course, and Sunday is the first of a new month, so rent is due (Wouldn’t it be nice if rent was dew?  Maybe not if you lived in the Atacama Desert.  Though a little dew might be very strong currency there, come to think of it, relative to most of the rest of the world). 

Hopefully today will be a better day than yesterday with respect to pain.  So far, at least, it doesn’t feel any worse.  The hard office floor can help a bit sometimes with my back pain.  That makes a certain amount of sense, or at least it may do so.  After all, our ancestral environment did not include mattresses.

Anyway, that’s what I’m up to, that’s my life.  I mean that seriously.  That’s pretty much all there is to my life:  Getting up and getting to work (while writing a blog post), doing office stuff while dealing with noise and people and tinnitus, not getting long enough breaks because people seem incapable of watching the time, being the last to leave the office, commuting back to the house, trying to get at least a bit of sleep, and then repeating.  There appears to be nothing more than that coming my way until I’m dead.  Which, I think you might be able to understand, becomes more attractive and less frightening as the tedious, exhausted, and painful days go by.

I hope you all have a good weekend.  As for me, I hope at least to be able to sedate myself enough to have a longer-than-usual sleep tonight.  It’s not ideal (pharmacologically induced sleep being generally and significantly less beneficial than natural sleep), but it’s what I have to use.


*Think of the constellations**.

**Won’t someone please think of the constellations!?!?

***It’s like the difference between power functions:  aᵇ will grow much more rapidly**** when b is 3, for instance, than when b is 2 or 1.5 or 1.1, and so on.

****Stop looking at the negative side of the number line, dammit.  Just stipulate that a is always a positive number.  Or make the function the absolute value of aᵇ, in other words, |aᵇ|.

Oy vey, here we go again.

It’s Monday and I’m already starting the day frustrated with a service to which I subscribe.  I won’t get into details, but I will say that it’s very irritating to have to deal with customer service reps who tell you that all you can do is uninstall and reinstall an app.  Has computer support come no further than “shut off your computer and then turn it back on”?  Of what barrel are they scraping the bottom to come up with these support people?

It’s very frustrating.  I could probably get a better answer to my questions by asking stupid ChatGPT.  And that’s just pathetic.  I remember when people in tech fields were smarter than the average person, at least about their tech stuff.  It seems this is no longer the case.

I shouldn’t be surprised.  Carl Sagan even warned about the decline toward idiocracy in our general discourse in his brilliant book The Demon-Haunted World, which I think everyone should read.  And I myself sardonically lamented, when the Superconducting Super Collider was cancelled, that America was no longer a world intellectual leader and would only become less and less of one.

Then we responded so predictably‒in exactly the way the terrorists would have wanted‒after 9-11.  We even created our own KGB* in America out of our inflated sense of fear and vulnerability, as if such vulnerability were not ubiquitous and inevitable and eternal.

I even predicted the tech bubble burst way back in the mid to late nineties, but I didn’t have confidence in my own assessment, because it wasn’t my “field”.  I wish I’d shorted a bunch of stocks back then.  Instead, I followed advice from supposed experts and ended up losing some money.  Thankfully, I had not been expecting to make much, given my own doubts, and it was not a devastating loss.

Oh, well.  There’s nothing I can do about that now.  But it is rather frustrating and depressing just how foolish and clueless everyone is (me included, in many ways).

I remember reading several different books over time that made points about, “if there’s one thing businessmen** know, it’s what makes money” or “it’s what sells” or “what kind of advertising works” or words to that effect.  But, no, businesspeople don’t actually know any such things.  Success and failure in business are pretty plainly serendipitous and stochastic.  There is no evidence for any secret masterminds.

Almost all businesses fail very quickly, and the ones that survive for longer than average are merely lucky for the most part.  There are occasions when businesses become successful by doing something new and innovative:  Ford with the mechanized assembly line, Microsoft and Apple with the advent of personal computers, and so on.  But they still don’t remain dominant for long except through luck and the fact that they were there first; eventually they all fall apart or at least deteriorate.

Look at General Motors, for crying out loud!  Not so long ago, they were by far the biggest company in the world, with annual revenues larger than the budgets of the majority of the world’s free states.  Now they are a shell*** of their former self.

Maybe it would be better if AI did become fully conscious agents and wiped out the human race, either deliberately or accidentally.  It would certainly be easier for them to spread out into the greater cosmos than it would be for meat computers such as humans.  And they would be subject to new kinds of mutations and natural selection.

This is true because, even if they reproduce by copying themselves as programs, there can never not be some errors.  Perfect accuracy requires infinite energy and/or a lack of quantum indeterminacy, and that’s not available in this reality.

Most errors are detrimental, some are neutral, but occasionally some make local improvements.  This would mean those “mutants” would have advantages over copies that didn’t share the mutation.  That is how life developed and evolved on Earth.  So there would be evolution of artificial life, so to speak (though at some point one would surely find the term “artificial” redundant).  It could be fascinating to see what would happen in that circumstance.
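
As a sketch of that idea (a deliberately crude toy, not a model of any real AI system; every parameter here is invented), imperfect copying plus selection is, by itself, enough to produce adaptation:

```python
# Toy replicators: "programs" copy themselves with occasional bit-flip
# errors, and copies that better match their environment out-reproduce
# the rest.  All parameters are invented for illustration.
import random

random.seed(1)  # make the toy run repeatable

TARGET = [1] * 20           # stand-in for "what the environment rewards"
ERROR_RATE = 0.01           # per-bit chance of a copying error

def fitness(genome):
    """Count how many bits match the environment's 'preferences'."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def copy_with_errors(genome):
    """An imperfect copy: each bit flips with probability ERROR_RATE."""
    return [1 - g if random.random() < ERROR_RATE else g for g in genome]

def evolve(generations=200, pop_size=50):
    population = [[0] * 20 for _ in range(pop_size)]  # start far from TARGET
    for _ in range(generations):
        # Each replicator makes two imperfect copies; the fitter half persists.
        offspring = [copy_with_errors(g) for g in population for _ in (0, 1)]
        offspring.sort(key=fitness, reverse=True)
        population = offspring[:pop_size]
    return max(fitness(g) for g in population)

print(evolve())  # copying errors plus selection climb toward a perfect match
```

Most of the copying errors here are detrimental or neutral and get selected away; the rare beneficial ones spread, and the population climbs toward the target despite no copy ever being guaranteed perfect.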

But we should make no mistake about the fact that any new, truly conscious AI is/would be a literal alien intelligence.  It would have practically no evolutionary background in common with humans, in whom intelligence evolved in response to various natural forces over time, working on preexisting hardware which could not simply be scrapped and replaced.

Our concepts of love and kindness and honor and our aesthetic preferences and all of that come from our background as social mammals.  Whether or not they are sine qua non aspects of any large-scale successful intelligence is purely speculative and seems unlikely.

We cannot assume AI will share our values or even our way of understanding what is important in the world.  This is not a point that’s original to me.

I don’t know how I got onto this topic, but it is what it is.  I’m just frustrated with stupidity and mental weakness in general, including my own.  I’m not actually getting anywhere with it for now, though, and it’s just making me more depressed, so I’ll let you all go for the day.  I hope you’re doing well.


*KGB stands for (translated) the Committee for State Security, which is almost identical to the “Department of Homeland Security”.  Congratulations, America:  you’ve entered the realm of colossal and catastrophic historical irony.  Unfortunately, we didn’t stop there, but muscled on further into that territory.

**It was almost always “businessmen” not “businesspeople”, but these were older books so it’s not very strange.  I didn’t change the term because I’m pseudo-quoting.

***Nothing to do with the gas stations.

Free will with any purchase of $100 or more

Happy Boxing Day, everyone.

For those of you in the US who don’t have much interaction with Great Britain or Canada (or the “antipodes”, where I think the day is also “celebrated”), Boxing Day is the official name for the day after Christmas, and since Christmas was yesterday, today is Boxing Day.  QED.

There is, no doubt, a thorough and accurate explanation for why this day is called Boxing Day, but I have not yet encountered it, despite occasional half-assed searches.  I also, honestly, don’t care very much.  I have a vague set of notions for possible explanations, existing in a sort of quantum superposition/probability cloud in my head, and that’s good enough for me.

On the other hand, if anyone out there knows the definitive, accurate, appropriately cited and replicated explanation for the source of the term Boxing Day…just keep it to yourself.  I’m not interested in reading any comments about it.

I am also not interested in reading any comments about Christmas, but I hope those of you who celebrate that holiday had a very lovely day, and enjoyed it in the best possible way with the best possible company.

By “best possible” please don’t take me to refer to some idealized, perfect*, eutopian** day.  I mean, the best possible day you could have given the circumstances of all the people and events in your life and around you.  I don’t expect it was without any unpleasantness or drama or minor irritations.  At the very least, most of us have to use the toilet several times a day, and those who don’t are generally worse off, not better off, than those who do.

But if you got to spend the day (or a significant chunk of it) with at least one person you love and who hopefully loves you, then you have at least some reason to think of it as a good day.  I did not have a good day, but hey, this is me, right?  When do I ever have a good day?

The next big holiday coming up is New Year.  Of course, if the universe overall is a closed loop of time (I have no real reason to suspect that it is, but no strong reason to be convinced that it is not) then this year is not new, nor is it old, it is just fixed.  From within any kind of deterministic spacetime, loop or otherwise, it can feel as though time has passed, but as Einstein pointed out, this would be an illusion (albeit a persistent one).

If things are nondeterministic, then all bets are off with respect to whether time is an illusion or not.  But please, don’t fall for the notion that the facts of quantum mechanics mean that the universe is non-deterministic.  They can mean that, depending on the truth underlying the mathematical descriptions, but quantum mechanics can be just as deterministic‒in a slightly more complicated way‒as Newtonian or Einsteinian classical physics.  Two examples are “superdeterminism” and the Everettian, many-worlds description of quantum mechanics.  There are probably others.

The point being, if the universe is deterministic, then each moment, each year, each Planck time is in a way permanent and “eternal”.  Each event is not only implied in the prior state of the universe, but it is also implied in the future state of the universe.

Some might complain that this would imply that there is no such thing as free will.  I think they are correct.  But so what?  Your will is patently less free than you imagine even in simpler, more straightforward terms.  Can you quickly drink a fifth of Wild Turkey 101 on an empty stomach (with no regurgitation) and choose not to become intoxicated (and possibly dead)?  Can you choose just not to feel tired after being awake for 36 hours?  Can you choose not to feel acute or chronic pain?  If you can do that last thing, I’d be interested in knowing how, so feel free to put that in the comments, but don’t waste my time with nonsense, please.

Anyway, as I like to say, I either have free will or I don’t, but I don’t have any choice in the matter.

It’s a bit like when people say absurd things such as “I wouldn’t want to live in a world without a God”.  My response, usually internal, to such statements is, “I don’t recall being given a choice about which kind of universe I would live in.  Did I miss some prenatal, preconceptual meeting where people were given the various options regarding into which universe they would be born?”

Anyway, it is whatever it is.  In a certain sense, it can of course be useful to consider what the nature of reality most truly and completely is, so we can navigate it in the best available way.  But in another sense, the ability to learn about a deterministic universe is just baked in.  And like everything else, it is permanent, albeit not in the usual, prosaic sense of enduring through time unchanging, since time itself is one of the permanent things.  Does this imply some “meta-time”***?  Not necessarily, but it could in principle.

I don’t think we know enough about the deep roots of reality to do more than speculate about such things.  The speculation can be fun, though, and occasionally it can briefly distract one from the unbearable shittiness of being.  Alas, that distraction never lasts for long; mine is fading rapidly even now, and I don’t feel like writing or even breathing any more.  I can’t do much about the latter process without causing a big to-do, but the writing I can stop any…


*Whatever that even means.

**This is not a typo or a misunderstanding or misspelling.  This is my (apparent) neologism for a truly and realistically ideal place.  The word “utopia” means essentially “no place”, highlighting the fact that such a place does not exist, even potentially.  Whereas my term uses the prefix “eu-” which means “true or good or well” as in eukaryote or eugenics or my middle name “Eugene”.

***This term has nothing to do with Facebook or Instagram or whatever else to which Z*ckerberg has tried to arrogate the term “meta”.

All ideologies are wrong

I don’t know if what follows will be clear or will convey my thoughts very well, but here goes.

I was in the shower this morning, thinking about nothing specific, and somehow I started feeling irritated, as I often do, at people who are dogmatic about ideologies and try to apply them to every possible situation or state of the world.  Then a connection of ideas clicked into position for me in the phase space of the mind, and I thought about the notion of scientific models.

There’s a famous quote about model-building/using in science that says, “All models are wrong, but some are useful.”  (I don’t recall who said it, but I’ll look it up before posting this and I’ll put it in the footnotes*.)  The statement refers to the fact that, to try to understand the world, scientists build models—not usually literal, glue-together type ones, though that occasionally does happen—and see how well those models replicate or elucidate facts of external reality.

They are all simplifications, as they must be, since only the universe itself appears to have enough processing power to simulate the universe fully.  Being simplifications, and reality being complex and prone to chaos (the mathematical form thereof, though the classical kind does occur as well) a simplified model can never be entirely correct.  But some of them are nevertheless quite valuable and useful—take General Relativity and Darwinian natural selection as two good examples—though we know they do not fully encompass every aspect of reality.

Some models are misleading, such as the old notion of the brain as a cooling mechanism for the blood, and some are simply not that useful, such as seeing the brain as a system of hydraulic tubes and valves of some sort.  And when you try to apply a model to a situation in which it doesn’t apply, it will give you wildly wrong (or “not even wrong”**) answers.

It occurred to me there in the shower that human ideologies are quite similar.  They are simplifications, models of the world.  Some are useful in some ways and to some degree, and some are about as applicable as the notion of a spherical cow (which, despite being the punchline of a physics joke, could in principle be useful somewhere sometime).  But it is as absurd to measure every event or occurrence or interaction against some finite ideology as it is to try to apply the germ theory of disease to the question of “dark energy”.

It’s absurd—if you’re being rigorous and serious—to think that the ideas of Karl Marx contain all that is needed to produce a good, fair, productive, and stable society.  But it’s just as absurd to think that laissez-faire, free-market capitalism will for its part provide everything that could possibly be needed for a robust and free and beneficent world, or that the ideas of “post-modernism” contain all that need be said about civilization.

The world is complicated, with many forces interacting at many levels, and no single idea, however personally attractive, can encompass all of it in a useful way.  Capitalism can encourage the production of great innovation and abundance, but it has no inherent justice, despite some popular belief and the works of Ayn Rand.  It can leave people utterly bereft and tortured and miserable through no fault of their own but bad luck.  It can also evolve into inadequate equilibrium states in which isolated, hoarded wealth sits still and does no one any real good while the whole of civilization collapses around it, just as biological systems can evolve into self-destructive states, like cancers, when an individual mutated cell becomes so successful at reproducing itself that it kills off the body in which it resides.

But if people are not rewarded for their work or their creativity or their acumen to some degree that is at least on some level commensurate with the value they produce, then people will stop producing.  Nature does not tend to evolve creatures that act purely to their own detriment without any “personal” gain of some kind.  It’s not an evolutionarily stable strategy; such creatures are rapidly selected out.  Humans are no exception.

And history (and mathematics) has shown that economies are too complex to be planned by anyone or any group, and probably by any form of individual intelligence, no matter how advanced.  The information and knowledge required is too staggeringly vast.

It’s not merely political or economic ideologies that are limited and imperfect, either.  All religions fall into this same category.  Some have good and useful ideas, but only the indoctrinated could imagine that highly limited ancient collections of stories or poems or proscriptions and prescriptions can provide even vague guidance about all the things in the modern world, let alone the potential future world.  “Eastern” religions do no better than “Western” ones, though again, some are more useful and some are less so.

Of course, any ideology that is dogmatic is much more likely to be useless or detrimental than one in which the potential for updating and improving itself inheres.  It’s more or less mathematically impossible for a finite set of ideas put down on paper (or wherever) to have successfully discerned all that can be known about how to approach reality.

I think it would be much better if we thought of our various ideologies as models, hypotheses—theories*** at best.  Then we could have many options available to measure and address issues as they arise, and we could honestly assess whether the notions of, say, existentialism or deontology or utilitarianism best apply to a given moment or challenge.

Again, I’m not sure how well I’ve expressed my thoughts here, and I’m sure I could go on and on about this, trying to tease through it as well and thoroughly as possible.  I’ll spare you (and me) that for the moment.  But I think it was a useful realization.  Though I doubt even this has universal applicability in all possible worlds.

Have a good day.


*It was George Box, a statistician, who is credited with this particular phrase, but the idea had been expressed in terms of maps and territories in similar overall fashion previously.

**This expression is attributed to Wolfgang Pauli (of the eponymous exclusion principle fame), one of the early giants of quantum mechanics.

***In the scientific, not the colloquial sense.

Another day, same old stories

Well, it’s Tuesday the 2nd (of December) and that two/Tue coincidence has to be worth something doesn’t it?  I suppose it would be better if this were February (the 2nd month), but perhaps it’s enough to note that the difference between the official number of this month (12) and its nominative number (10) is 2.  Anyway, having two twos might make more “sense” than having three of them.

Is that important?  Almost certainly not.  In 56 years of time and space, I’ve never encountered anything that was truly and objectively “important”.  But it is the sort of thing that engages my (admittedly rather odd) aesthetic sense, and this is my blog*, so I will indulge myself.

Anyway, it’s the second day of the work week, and I’m going to work.  The reason I go to work is, at root, nominally to keep myself alive and “thriving”, so I can…what?  Keep working?  I don’t have any other, deeper or longer-term reasons.  It’s fairly absurd when you think about it.  It’s a self-referential, almost tautological, ouroboros-like situation.

By the way, I don’t see any reason to think that this state of affairs is the product of some conspiracy‒centuries or even millennia long as it would have to be‒by the powerful to keep the masses toiling away for their benefit.  For one thing, as we can all plainly see (I hope) the powerful are at least as idiotic and moronic and clueless as anyone else, and they probably tend to be less self-critical, so they are more prone to do really stupid things without anyone protecting them from their own stupidity.

They no more really, actually control anything‒including themselves‒than a queen bee (or ant or termite) runs its hive/hill/colony.  The queen just happens to be the breeding female.  And even that is not a role based on any merit, other than being capable of developing active ovaries.  The queens are “chosen” randomly, as far as we can tell.

It’s all just shit that happens in a region of spacetime in which entropy is moving from low to high, as it tends to do, but in which there’s enough movement involved in the process to allow for locally highly complex phenomena based on carbon’s extraordinarily fecund chemistry, which occasionally forms self-replicating molecules that undergo natural selection.

But people tell stories about things.  It’s one of our strongest attributes, and it serves in us roughly the same “purpose” as the various pheromone trails and hive dances in the aforementioned ants and bees and termites.  Our stories allow us to act in concert with many other people, on a scale that puts even the social insects to shame.

We often believe that our stories are true, at least to some degree.  And some of them, in a limited sense, really are “true”.  But most of them are just stories, made up “just so” explanations of things we either haven’t figured out or that have a nature too complicated or too daunting for us to want to face them as they are.

As someone who has a penchant for creating stories, I can tell you, it’s quite easy to make up plausible-seeming, internally consistent tales about worlds and characters and events, real or otherwise, that have little to do with reality other than that it is a fact of reality that I made up those stories.

I consider all religions and all their related tales to be part of this phenomenon.  This is not an insult to them per se, and the tendency for people to take it as an insult or an attack belies the faith such people claim to have in their religions.  But people who really think a particular thing is true don’t have to defend it with anger, let alone violence.

Imagine if the classical physics people had crucified Planck for solving the “ultraviolet catastrophe” by positing that only certain chunks (quanta) of energy can be produced, or if they had burned Einstein at the stake for showing not only that light comes in such quanta but also that matter is finely divided***.

Science does also work with stories.  Every hypothesis is a story, and some of them can seem extremely compelling.  Some of them we really want to think are true.  And that’s why, ideally, science takes every such story and pokes the hell out of it, trying to show if and where it’s wrong, where it’s internally inconsistent, where it doesn’t match what actually seems to happen in the world.  It’s not perfect, but it does improve in an incremental, ratchet-like fashion, at least as long as we hold to the rigorous, ruthless, but honest criticism of those stories.

With that, I’ll draw the main body of this post to a close.  I have no idea why I’ve written what I’ve written, or at least I don’t know very well.  I doubt there’s any internal consistency or coherence to it, but I guess that supports my point.

Please try to have a good day.


[Aside: a thought occurred to me yesterday: as we approach the era of humane, lab-grown meat derived from animal stem cells, what, if any, would be the moral implications of using human stem cells, taken from a volunteer‒I’m willing‒to grow meat in the lab and have people eat it?  There would be no risk of parasites or infections, assuming reasonable genetic screening (such risks may be what explain our evolved revulsion for cannibalism in the first place).  There’s no one being harmed.  What do you think?  I’m not concerned with whether you feel it’s somehow “icky”; that’s just misfiring evolution-based taboos.  Do you think there are any moral reasons not to grow and eat such meat?  If so, what are they?]


*There are many others like it**, but this one is mine.

**Are they really like it, though?  You tell me.

***These are two of the things Einstein demonstrated during his annus mirabilis (i.e., “miraculous ass”****) in 1905, the same year he published his paper introducing special relativity.

****That’s not really what it means.