“No more work to-night; Christmas Eve, Dick! Christmas, Ebenezer!”


Okay, well, it’s Friday at last, and it’s “Christmas Eve eve” as I sometimes say.  It turns out that the office apparently isn’t going to be open tomorrow, which surprises me‒as is obvious, I guess.  I still could find out otherwise, I suppose, but I doubt it.

I’m writing this on my phone again, and I have been doing so most days this week.  I think I used my laptop on one of the days, perhaps Tuesday, but not on Wednesday, when I wrote my long and rather irritating post full of self-congratulation for deeds of the past that have no relevance to my current life.  That long-winded blather was from my phone, if you can believe it!

I actually slept comparatively well last night; I only finally woke up at about 3:50 this morning, which for me is about a two-hour lie-in.  I’m not even waiting for the first train of the day; I’m waiting for the second one!

I’m surprised that I slept quite so well last night, because I had an unusually bad day for pain‒or perhaps it would be better to say it was a good day for pain and thus a bad day for me.  The pain was focused in my right lower back, down through my hip to the ankle and the arch and ball of my foot, but it also spread up through the upper back to the shoulder blade and arm, and nothing that I did or took seemed to make more than a transient difference.

I was walking around the office like Richard the Third most of the day, when I was up.  We did get some very lovely cookies from my sister for the office‒she sends such packages often and they are beloved by all, and justly so‒but I couldn’t enjoy them as much as I wish I could have, because I was in a lot of pain and severely grumpy.

They were/are amazingly good, though.

I am still in a bit of elevated pain this morning, but then, I’m basically always in pain.  It’s not yet as bad as yesterday, at least, so keep your fingers crossed, please.  Or don’t, if you’d rather not; I hardly think it actually has any effect on any outcome other than the configuration of your fingers.

I suppose it’s just a way for me to express my anxious hope mixed with fear and tension, and to invite some kind of shared emotional support from readers.  Though, of course, for that, it doesn’t make all that much sense, since how would I even know if any of you are crossing your fingers?  I suppose you could leave a comment saying that you are, but the very act of typing a comment must make it at least slightly less likely that your fingers are actually crossed, certainly while typing.

Anyway, I hope that my pain today is less than it was yesterday.  But even I will not be crossing my fingers, since I don’t think the gesture has any magical powers, so you shouldn’t feel obliged to do it yourself, either.

Come to think of it, I don’t think anything has any magical powers.  My first thought about that is “more’s the pity”, but really, what would magical powers even be?  If they existed, they would be actual phenomena of nature, and would have some lawful underpinning and explanation.

That’s one thing I’ve always kind of been disappointed about in the Harry Potter books.  They take place in a school, and have genius characters like Dumbledore and Tom Riddle and Hermione, who surely would have curiosity toward the hows and wherefores of magic, yet there’s not even a hint of an explanation for how it works, why it works, what it actually is, or anything.  I think some touching upon that subject would have been very fun.

I mean, for instance, how does Apparition work?  It involves a sensation of squeezing through something, but is that some form of hyperspace, or a wormhole, or what?  How do wands enhance or channel magical power from individuals gifted in magic?  How was that figured out for the first time?  Clearly people can do some magic without wands‒so, how necessary are they?

When did people begin to be able to do magic?  Clearly people haven’t always been able to do magic; there haven’t even always been people!  Was the ability to use magic some new, isolated mutation, like blue eyes, that spread through the population (as it surely would)?  Clearly it’s not some complex mutation, as it arises de novo in the human population, leading muggle-born witches and wizards to arise with some regularity.

Perhaps there is a complex of genes that, only when all present together (perhaps even only when homozygous), instantiates the ability to do magic.  Maybe most humans have some large fraction of the necessary genes‒after all, as I noted, the ability to use magic seems likely to have been a significant evolutionary advantage‒but it’s so easy to lose some necessary part of the biological (neurological?) machinery through random mutation that most people are mutated slightly away from the complete set and so become muggles*.  Or, if born to witches and wizards, they are given the derogatory term “squibs”.
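Just for fun, here’s how that toy model cashes out numerically; a minimal sketch assuming simple Hardy-Weinberg genetics, with the allele frequency and the number of loci both completely made up:

```python
# Pure speculation to match the speculation above: suppose magic requires
# being homozygous for the "functional" allele at every one of k loci.
p = 0.8    # assumed frequency of the functional allele at each locus
k = 10     # assumed number of required loci
prob_magic = (p ** 2) ** k
print(f"fraction of muggle-muggle births with magic: {prob_magic:.1%}")  # ~1.2%
```

On those made-up numbers, muggle-born witches and wizards would indeed arise with some regularity, which fits the books.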

I don’t recall how I got on this topic, but it is interesting, and I wish Rowling would at least have hinted at some studies or explanation, at least when discussing the Department of Mysteries.

Alas.

Anyway, since I apparently won’t be writing a post tomorrow, I would like to wish all of you who celebrate it‒in the words of the late, great hero, Dobby the house elf‒a very Harry Christmas**.  Maybe take a moment to read the Christmas scenes in the various Harry Potter novels.  Christmas at Hogwarts, for the students who stayed over the holidays, seems always to have been an interesting occasion, albeit not as fun as Halloween.  Halloween at Hogwarts would have been quite the thing to experience. The only close contender that readily comes to mind is Halloween with the Addamses.  That would be interesting!

I guess I’ll be back on Monday, then, though it is at least slightly possible that I could be wrong about tomorrow.  If I am, I’ll be writing a post, and it may be quite a grumpy one, though maybe not.  After all, what do I have to do with my time other than go to the office?  Not much, honestly.

Oh, well.



*This raises the odd thought for me about what might happen if a cancer developed that, by chance, had a complete set of magical genes, in a muggle whose genome had been almost complete.  Could one have a “magic tumor”?  I guess probably not, since it seems magic would be a collective function of many aspects of the nervous system, not a property of every individual cell.  Perhaps this is one reason why wizards can’t just fix visual impairment‒Harry Potter wears glasses, and no one ever even suggests that magic might be able to cure his vision.  But the eyes are, quite literally, extensions of the central nervous system‒though the lenses aren’t, come to think of it‒and maybe tampering with the eyes through magic is particularly dangerous, or perhaps the nervous system always rejects such attempts.

**As an aside, I have to tell someone that, in the song Have Yourself a Merry Little Christmas, I’ve always tended to hear the line, “Faithful friends who are dear to us gather near to us once more,” as if they are singing, “…gather near to us one s’more”, and I think, “How are they going to share one s’more between a group of people?  I mean, it’s ‘friends’ who are dear to ‘us’, which to me implies at least four people, total.  How can you split one s’more between four people?  Also, it would make a mess, with graham cracker crumbs and melted chocolate all over various hands and the floor and all that.”  Anyway, I know that’s not what they’re saying, but every time I hear it, those thoughts go through my head.

Then there’s hope a great man’s memory may outlive his blog half a year.

Hello and good morning.  It’s Thursday, the day of the week on which I wrote my blog post even when I was writing fiction every other day of the week—well, apart from Sundays and the Saturdays when I didn’t work.  I have not been writing any fiction recently.

I toyed with the idea the other day, but there doesn’t seem to be much enthusiasm for the notion, which I suppose is mirrored by my own lack of energy, or perhaps has its source in my lack of energy.  Or maybe they come from disparate but merely coincidentally parallel sources.  I don’t know, and though it’s mildly interesting, I don’t have energy or interest enough to try to figure it out.

I did work a bit on a new song yesterday, the one for which I had jotted down some lyrics a while back.  I have lost utterly the original tune, but I worked out a new one of sorts, and it seems okay.  I then worked out some chords for the first stanza, including some relatively sophisticated major sevenths and then major sixths of a minor chord that sounded nice, and which made me at least feel that I really have learned a little bit about guitar chords.  Then I figured out at least the chords I want for the chorus, which, among other things, throw a little dissonance in briefly, which is nice to up the tension.

I don’t know if I’ll get any further with it or not; I may just stop and let it lie.  It’s only perhaps the third time I’ve even picked up the guitar in months.  I was at least able to show myself that I can still play Julia, and Wish You Were Here, and Pigs on the Wing.  I had to fiddle a little to remind myself how to play Blackbird, but after a brief time I was able to bring it back, too.

So, it’s not all atrophied.  And I can still play the opening riff to my own song, Catechism, which I think is my best stand-alone riff.  My other guitar solos are mainly just recapitulations of the melody of the verse or chorus in their respective songs, but the one for Catechism is a separate little melody.

Actually, it occurs to me that I initially did a voice recording of the lyrics to the newish song as I thought of them, and when I did, I probably sang a bit of the tune that had come to my head.  Maybe I should listen to that and see if I like that melody better than the new one I came up with.  That would be a bit funny, if after the effort from yesterday to do a melody and chords I remembered the old one and just threw the new one away.

I suppose it really doesn’t matter much.  Even if I were to work out and record the song, and do accompanying parts and all that stuff, and publish it, I don’t think anyone is likely ever to listen to it much.  Maybe someday in the distant future, some equivalent of an archaeologist who unearths things lost in the web and internet will find the lost traces of my books or music or something, and they’ll be catalogued in some future equivalent of a virtual museum, among trillions of other collections of data that are recorded online, but which will never be seen by anyone for whom they might mean anything at all.

People sometimes say things like “what happens online is forever”, but as I’ve discussed before (I think), even if it’s true that things stored online remain and avoid simple deterioration of data thanks to the redundancy in the system, it doesn’t matter.  In principle, the sound of every tree falling in every wood has left its trace in the vibrational patterns of the world, and according to quantum mechanics, quantum information is never permanently lost, even if things fall into black holes*.

But of course, all that is irrelevant in practice, and comes back to collide with the nature of entropy and the degree to which most large-scale descriptions of a system are indistinguishable.  That picture of you with a funny face at that event years ago, which you tried to have a friend take down, but which had already been shared to a few other people, may in principle always be out there in the archives of Facebook or Twitter or whatever, but it doesn’t matter.  No one will ever notice it or probably even see it among the deluge of petabytes of data whipping around cyberspace every second.  You might as well worry about people being able to reconstruct the sound waves from when you sang Happy Birthday out of tune at your nephew’s fifth birthday party from the information left over in the state of all the atoms and molecules upon which the sound waves impinged.

It’s one of those seemingly paradoxical situations, rather like being in Manhattan.  There are very few places in New York City, and particularly in Manhattan, where one can actually be alone—even most apartments are tiny, and have windows that look out into dozens to hundreds of other people’s windows.  And yet, in a way, you are more or less always alone in Manhattan, or at least you are unobserved, because you are just one of an incomprehensible mass of indistinguishable humans.

Even “celebrities” and political figures, so-called leaders and statespeople, will all fade from memory with astonishing rapidity.  When was the last time you thought about Tip O’Neill?  And yet, for a while, he was prominent in the news more or less every day.  Do you remember where you were when William McKinley was assassinated?  No, because you were nowhere.  None of you existed in any sense when that happened, let alone when, for instance, Julius Caesar was murdered.

And what of the many millions of other people in the world at the time of McKinley or Caesar or Cyrus the Great or Ramses II?  We know nothing whatsoever of them as individuals.  Even the famous names I’m mentioning are really just names for most people.  There’s no real awareness of identity or contributions, especially for the ones who existed before any current people were born.

Last Thursday, I wrote “RIP John Lennon” and put a picture of him up on the board on which we post ongoing sales and the like.  The youngest member of our group, who is in his twenties, asked, “Who is John Lennon?”

He was not joking.

If John Lennon can be unknown to members of a generation less than fifty years after his death, what are the odds that anything any of us does will ever be remembered?

Kansas (the group, not the state) had it right:  “All we are is dust in the wind.  Everything is dust in the wind.”  The only bit they missed was that even the Earth will not last forever, and as for the sky…well, that depends on what you mean by the sky, I suppose.  The blue sky of the Earth, made so by light scattering off nitrogen and oxygen molecules, will not outlast the Earth, though there may be other blue skies on other planets.  But planets will not always exist.

As for the black night sky of space, well, that may well last “forever”, for what it’s worth.  But it will not contain anything worth seeing.

TTFN

[Image: Tip O’Neill]


*Leonard Susskind famously argued against Stephen Hawking on this point, and Hawking eventually conceded; Susskind wrote a whole book about the dispute, The Black Hole War.  The formal bet on the question, if I recall correctly, was made by Hawking and Kip Thorne, one of the masters of General Relativity, against John Preskill, and Hawking conceded it in 2004.

Some thoughts (on an article) about Alzheimer’s

I woke up very early today‒way too early, really.  At least I was able to go to bed relatively early last night, having taken half a Benadryl to make sure I fell asleep.  But I’m writing this on my phone because I had to leave the office late yesterday, thanks to the hijinks of the usual individual who delays things on numerous occasions after everyone else has gone for the day.  I was too tired and frustrated to deal with carrying my laptop around with me when I left the office, so I didn’t.

I’m not going to get into too much depth on the subject, but I found an interesting article or two yesterday regarding Alzheimer’s disease.  As you may know, one of the big risk factors for Alzheimer’s is ApoE4, a particular variant of the apolipoprotein E gene (the more common, lower-risk version is ApoE3).  People with one copy of the ApoE4 variant have a single-digit multiple of the baseline, overall risk rate for the disease, and people with two copies have a many-fold (around eighty-fold) increase in risk.

It’s important to note that these are multiples of a “baseline risk” that is relatively small.  This point is often neglected when the relative risks conferred by particular risk factors are conveyed to the general public.  If the baseline risk for a disease were one in a billion (or less), then a four-times risk and an eighty-times risk might be roughly equivalent in the degree of concern they should raise.  Eighty out of a billion is still less than a one in ten million chance of the disease; some other process would be much more likely to cause one’s deterioration and demise than the entity in question.

However, if the baseline risk were 1%‒a small but still real concern‒then a fourfold multiplier would increase the risk to one in 25.  This is still fairly improbable, but certainly worth noting.  An eighty-fold increase in risk would make the disease far more likely than not, and might well make it the single most important concern of the individual’s life.
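To make the arithmetic concrete, here’s a minimal sketch; the baselines and multipliers are the hypothetical ones above, not real epidemiological figures:

```python
# Illustrative arithmetic only: absolute risk = baseline risk x multiplier.
for baseline in (1e-9, 0.01):
    for multiplier in (4, 80):
        risk = baseline * multiplier
        note = f"~1 in {round(1 / risk):,}" if risk < 0.5 else "more likely than not"
        print(f"baseline {baseline:g} x {multiplier:>2} -> {risk:.3g} ({note})")
```

The same eighty-fold multiplier yields roughly one in twelve and a half million in the first case and eighty percent in the second, which is the whole point.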

Alzheimer’s risk in the general population lies between these two extremes, of course, and that baseline varies in different populations of people.  Some of that variation itself may well be due to the varying frequency of the ApoE4 variant and related risk factors in the largely untested population, so it’s tricky to define these baselines, and it can even be misleading, giving rise to false security in some cases and inordinate fear in others.  This is one example of how complex such diseases are from an epidemiological point of view, and it highlights just how much we have yet to learn about Alzheimer’s specifically and about the development and function of the nervous system in general.

Still, the article in question (I don’t have the link, I’m sorry to say) concerned one of the functions of the ApoE gene (or rather, of its products), which involves cholesterol transport in and around nerve cells.  Cholesterol is a key component of cell membranes in animals, which is particularly pertinent here because the myelin in nerves is formed from the sort of “wrapped up” membranes of a type of neural support cell*.

[Image: myelin in the central nervous system]

This particular study found that the cells of those with ApoE4 produced less or poorer myelin around nerve cells in the brain, presumably because of that faulty cholesterol transport, and that the myelin also deteriorated over time.

Now, the function of myelin is to allow the rapid progression of nerve impulses along relatively long axons, with impulses sort of jumping from one gap in the myelin sheath (a “node of Ranvier”) to the next, rather than having to travel continuously down the whole length of the nerve, which is a much slower process, seen mostly in autonomic nerves in the periphery.  When normally myelinated nerves lose their myelin, transmission of impulses is not merely slowed down, but becomes erratic and often effectively non-existent.
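Just to give a sense of the difference in scale, a rough sketch using standard textbook conduction velocities (on the order of 100 m/s for fast myelinated fibers and roughly 1 m/s for unmyelinated ones):

```python
# Order-of-magnitude comparison of conduction times over one meter of axon.
distance_m = 1.0   # roughly the length of a long peripheral axon
for fiber, velocity_m_per_s in (("myelinated", 100.0), ("unmyelinated", 1.0)):
    t_ms = distance_m / velocity_m_per_s * 1000
    print(f"{fiber}: ~{t_ms:g} ms to cover {distance_m} m")
# -> ~10 ms with myelin, ~1000 ms (a full second) without
```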

[Image: myelin in general]

The researchers found that a particular pharmaceutical can correct for at least some of the faulty cholesterol transport and can thereby support better myelin survival.  Though this does not necessarily point toward a cure or even a serious disease-altering treatment over the long term, it’s certainly interesting and encouraging.

But of course, we know Alzheimer’s to be a complex disease, and it may ultimately entail many processes.  For instance, it’s unclear (to me at least) how this finding relates to the deposition of amyloid plaques, which are also related to ApoE, and are extracellular findings in Alzheimer’s.  Are these plaques the degradation products of imperfect myelin, making them more a sign than a cause of dysfunction, or are they part of the process in and of themselves?

Also, it doesn’t address the question of neurofibrillary tangles, which are defects found within the nerve cells, and appear to be formed from aggregates of microtubule-associated proteins (tau proteins) that are atypically folded and, in consequence, tend to aggregate, to stop functioning, and to interfere with other cellular processes, making them somewhat similar to prions**.  It’s not entirely clear (again, at least to me) which is primary, the plaques or the tangles, or if they are both a consequence of other underlying pathology, but they both seem to contribute to the dysfunction that is Alzheimer’s disease.

So, although the potential for a treatment that improves cholesterol transport and supports the ongoing health of the myelin in the central nervous systems of those at risk for Alzheimer’s is certainly promising, it does not yet presage a possible cure (or a perfect prevention) for the disease.  More research needs to be done, at all levels.

Of course, that research is being undertaken, in many places around the world.  But there is little doubt that, if more resources were to be put into the study and research of such diseases, understanding and progress would proceed much more quickly.

The AIDS epidemic that started in the 1980s was a demonstration of the fact that, when society is strongly motivated to put resources into a problem, thus bringing many minds and much money to the work, progress can occur at an astonishing rate.  The Apollo moon landings were another example of such rapid progress.  Such cases of relative success can lead one to wonder just how much farther, how much faster, and how much better our understanding of the universe‒that which is outside us and that which is within us‒could advance if we were able to evoke the motivation that people have to put their resources into, for instance, the World Cup or fast food or celebrity gossip.

I suppose it’s a lot to expect from a large aggregate of upright, largely fur-less apes only one step away from hunting and gathering around sub-Saharan Africa that they collectively allocate resources into things that would, in short order, make life better and more satisfying for the vast majority of them.  All creatures‒and indeed, all entities, down to the level of subatomic particles and up to the level of galaxies‒act in response to local forces.  It’s hard to get humans to see beyond the momentary impulses that drive them, and this shouldn’t be surprising.  But it is disheartening.  That, however, is a subject for other blog posts.

I’ll try to have more to say about Alzheimer’s as I encounter more information.  Just as an example, in closing, another article I found on the same day dealt with the inflammatory cells and mediators in the central nervous system, and how they can initially protect against and later worsen the problem.  We should not be too surprised, I suppose, that a disease that leads to the insidious degeneration of the most complex system in the known universe‒the human brain‒should be complicated and multifactorial in its causation and in its expression.  This should not discourage us too much, though.  The most complicated puzzles are, all else being equal, the most satisfying ones to solve.


*The cell type that creates myelin in the peripheral nervous system (Schwann cells) is different from the type that makes it in the central nervous system (oligodendrocytes), and this may be part of why Alzheimer’s affects mainly the central nervous system, whereas diseases like ALS (aka Lou Gehrig’s Disease), for instance, primarily affect the nervous system outside the brain.

**The overall shape of a protein in the body is a product of the ordering of its amino acids and how their side chains interact with the cellular environment‒how acidic or basic, how aqueous or fatty, how many of what ions, etc.‒and with other parts of the protein itself.  Some proteins can fold in more than one possible way, and indeed this variability is crucial to the function of proteins as catalysts for highly specific chemical reactions in a cell.  However, some proteins can fold into more than one relatively stable form, one of which is nonfunctional.  In some cases, these non-functional proteins interact with other proteins of their type (or others) to encourage other copies of the protein to fold likewise into the non-functional shape, and can form polymers of the protein, which can aggregate within the cell and resist breakdown, sometimes forming large conglomerations.  These are the types of proteins that cause prion diseases such as “mad cow disease”, and they appear also to be the source of neurofibrillary tangles in people with Alzheimer’s disease.

The sweetest honey is loathsome in its own deliciousness. And in the taste destroys the appetite. Therefore, blog moderately.

Hello and good morning.  It’s Thursday again, so I return to my traditional weekly blog post, after having taken off last Thursday for Thanksgiving.  I’m still mildly under the weather, but I’m steadily improving.  It’s nothing like a major flu or Covid or anything along those lines, just a typical upper respiratory infection, of which there are oodles.  Most are comparatively benign, especially the ones that have been around for a while, because being not-too-severe is an evolutionarily stable strategy for an infectious agent.

An infection that makes its host too ill will keep that host from moving about and make itself less likely to be spread, to say nothing of an infection that tends to kill its host quickly.  Smart parasites (so to speak) keep their hosts alive and sharing for a looong time.  Of course, “smart” here doesn’t say anything about the parasite itself; viruses are only smart in the sense that they achieve their survival and reproduction well, but they didn’t figure out how to be that way—nature just selected for the ones that survived and reproduced most successfully.  It’s almost tautological, but then again, the very universe itself could be tautological from a certain point of view.

It’s an interesting point, to me anyway, to note that today, December 1st, is precisely one week after Thanksgiving.  Of course, New Year’s Day (January 1st, in case you didn’t know) is always exactly one week after Christmas.  But it’s unusual for Thanksgiving to precede the first of December by exactly a week, because the specific date of Thanksgiving varies from year to year; December 1st is one week after Thanksgiving only in years when Thanksgiving falls on November 24th.  It’s an amusing coincidence; there’s no real significance to it, obviously, but I notice such things.
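If you want to check which years line up this way, a quick sketch with Python’s standard datetime module does it (Thanksgiving being the fourth Thursday of November):

```python
import datetime as dt

def thanksgiving(year):
    """US Thanksgiving: the fourth Thursday of November."""
    first = dt.date(year, 11, 1)
    first_thursday = 1 + (3 - first.weekday()) % 7   # Monday is 0, Thursday is 3
    return dt.date(year, 11, first_thursday + 21)

for year in range(2020, 2035):
    if (dt.date(year, 12, 1) - thanksgiving(year)).days == 7:
        print(year, thanksgiving(year))   # 2022 is one such year
```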

Anyway.

My sister asked me to write something about the vicissitudes of sugar (not her words), and though I don’t mean to finish the topic here today, I guess I’ll get started.  Apologies to those who are waiting for me to finish the neurology post, but that requires a bit more prep and care, and I’m not ready for it quite yet.  Life keeps getting in the way, as life does, which is one of the reasons I think life is overrated.

It’s hard to know where to start with sugar.  Of course, the term itself refers to a somewhat broad class of molecules, all of which contain comparatively short chains of carbon atoms, to which are bonded hydrogen and hydroxyl* moieties.

Most sugars are not so much actual free chains as they are wrapped up in rings.  The main form of sugar used by the human body is glucose, which forms a six-membered ring and has the chemical formula C6H12O6.

[Image: glucose]

This is the sugar that every cell in the body is keyed to use as one of its easy-access energy sources, the one insulin tells the cells to take up when everything is working properly.  Interestingly enough, of course, though glucose is the “ready-to-use” energy source, it only provides about 4 kilocalories** per gram to the body, as compared to 9 kilocalories per gram for fats.
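As a back-of-envelope illustration of those energy densities, here’s a minimal sketch using the usual rounded figures (about 4 kcal/g for carbohydrate and protein, about 9 kcal/g for fat); the snack itself is made up:

```python
# Energy content from macronutrient grams, using standard rounded values.
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}

def kcal(macros):
    return sum(KCAL_PER_GRAM[name] * grams for name, grams in macros.items())

# A hypothetical snack: 20 g carbohydrate, 3 g protein, 10 g fat.
print(kcal({"carbohydrate": 20, "protein": 3, "fat": 10}))   # -> 182
```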

But the sugar we get in our diets is not, generally speaking, simple glucose.  It tends to be in the form of disaccharides, or sugars made of two combined individual sugars.  Sucrose, or table sugar, is a dimer of glucose and fructose, joined by an oxygen atom.

[Image: sucrose]

Okay, I’m going to have to pick this up tomorrow.  I’ve gotten distracted and diverted by a conversation a few seats ahead of me.

There are two guys talking to each other at the end of this train car, each seated next to a window on opposite sides of the train, so they’re basically yelling across the aisle at each other.  Their conversation is perfectly civil, and though they’re revealing a certain amount of ignorance about some matters, they are mainly displaying a clear interest in and exposure to interesting topics, from history to geography and so on.

At one point, one of the men started speaking of the pyramids and how remarkable their construction was, and I feared the invocation of ancient aliens…but then he followed up to say that, obviously, there were really smart people in ancient Egypt, just like we have smart people today who design and build airplanes and rockets and the like.  Kudos to him!

These men are not morons by any means.  They clearly respect the intellectual achievements of the past and present, and that’s actually quite heartening, because I think it’s obvious that neither one is extensively college-educated, if at all.

But why do they have their conversation from opposite sides of the train, so that everyone nearby has to hear it?  It’s thrown me off my course.

I’ll close just by saying that yesterday I finished rereading The Chasm and the Collision, and I want to note that I really think it’s a good book, and to encourage anyone who might be interested to read it.  The paperback is going for I think less than five dollars on Amazon, and the Kindle edition is cheaper still.  If you like the Harry Potter books, or the Chronicles of Narnia, or maybe the Percy Jackson books, I think you would probably like CatC.

[Image: The Chasm and the Collision paperback cover]

I’d love to think that there might be parents out there who would read the book to their kids.  Not kids who are too young—there are a few scary places in the story, and some fairly big and potentially scary ideas (but what good fairy tale doesn’t meet that description?).  It’s a fantasy adventure starring three middle-school students, though I’ll say again that, technically, it’s science fiction, but that doesn’t really matter for the experience of the story.

Most of my other stuff is not suitable for young children in any way—certainly not those below teenage years—and Unanimity and some of my short stories are appallingly dark (though I think still enjoyable).  If you’re old enough and brave enough, I certainly can recommend them; I don’t think I’m wrong to be reasonably proud of them.  But The Chasm and the Collision can be enjoyed by pretty much the whole family.  You certainly don’t have to be a kid to like it, or so I believe.

With that, I’ll let you go for now.  I’ll try to pick up more thoroughly and sensibly on the sugar thing tomorrow, with apologies for effectively just teasing it today.  I’m still not at my sharpest from my cold, and the world is distracting.  But I will do my best—which is all I can do, since anything I do is the only thing I could do in any circumstance, certainly once it’s done, and thus is the best I could do.

Please, all of you do your best, individually and collectively, to take care of yourselves and those you love and those who love you, and have a good month of December.

TTFN


*Hydroxyl groups are just (-OH) groups, meaning an oxygen atom and a hydrogen atom bonded together, like a water molecule that has lost one of its hydrogens.  This points back toward the fact that plants (and, long before them, cyanobacteria) make sugar molecules from the raw building blocks of carbon dioxide (a source for the carbon atoms and some of the oxygen) and water (hydrogen and oxygen), using sunlight as their source of power and releasing oxygen as a waste product.  Free oxygen was among the first environmental pollutants on the Earth, and it had catastrophic and transformative effects on not just the biosphere of the Earth but even on the geology.  The fact that the iron in our mines, for instance, is mainly in the form of rust is largely because of this photosynthesis-born presence of free oxygen in the atmosphere.
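For reference, the net reaction described in that footnote balances out as:

6 CO2 + 6 H2O + light energy → C6H12O6 + 6 O2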

**A kilocalorie is defined as the amount of energy needed to heat a kilogram of water by one degree centigrade.  We often shorten this term just to “calorie”, but a calorie proper is only the amount of heat needed to raise a gram of water one degree centigrade (or 9/5 of a degree Fahrenheit).  It’s worth being at least aware of the fact that what we tend to call calories are actually kilocalories.
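In joules, for anyone who wants the conversion spelled out (a quick sketch using the standard figure of about 4.184 J per small-c calorie):

```python
# One calorie (~4.184 J) heats one gram of water by 1 degree C, so the
# dietary kilocalorie works out to roughly 4184 J.
JOULES_PER_CALORIE = 4.184
grams_of_water = 1000     # one kilogram
degrees_c = 1
print(grams_of_water * degrees_c * JOULES_PER_CALORIE)   # -> 4184.0
```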

You’ve got some nerve!

It’s Saturday, the 19th of November in 2022, and I’m going in to the office today, so I’m writing a blog post as well.  I’m using my laptop to do it, and that’s nice—it lets me write a bit faster and with less pain at the base of my right thumb, which has some degenerative joint disease, mainly from years of writing a lot using pen and paper.

The other day I started responding to StephenB’s question about the next big medical cure I might expect, and he offered the three examples of cancer, Alzheimer’s and Parkinson’s Disease.  I addressed cancer—more or less—in that first blog post, which ended up being very long.  So, today I’d like to at least start addressing the latter two diseases.

I’ll group them together because they are both diseases of the central nervous system, but they are certainly quite different in character and nature.  This discussion can also be used to address some of what I think is a dearth of public understanding of the nature of the nervous system and just how difficult it can be to treat, let alone cure, the diseases from which it can suffer.

A quick disclaimer at the beginning:  I haven’t been closely reading the literature on either disease for quite some time, though I do notice related stories in reliable science-reporting sites, and I’ll try to do a quick review of any subjects about which I have important uncertainties.  But if I’m out of date on anything specific, feel free to correct me, and try to be patient.

First, a quick rundown of the two disorders.

Alzheimer’s is a degenerative disease of the structure and function of mainly the higher central nervous system.  It primarily affects the nerve cells themselves, in contrast to neurologic diseases that interfere with supporting cells in the brain*.  It is still, I believe, the number one cause of dementia** among older adults, certainly in America.  It’s still unclear what the precise cause of Alzheimer’s is, but it is associated with extracellular deposits called “amyloid plaques” and with “neurofibrillary tangles” within the cell bodies of neurons, both of which seem to interfere with normal cellular function.  To the best of my knowledge, we do not know for certain whether the plaques and tangles are what directly and primarily cause most of the disease’s signs and symptoms, or if they are just one part of the disease.  Alzheimer’s is associated with gradual and steadily worsening loss of memory and cognitive ability, potentially leading to loss of one’s ability to function and care for oneself, loss of personal identity, and even inability to recognize one’s closest loved ones.  It is degenerative and progressive; there is no known cure, and there are few effective treatments that are not primarily supportive.

Parkinson’s Disease (the “formal” disease, as opposed to “Parkinsonism”, which can have many causes, perhaps most notably the long-term treatment of psychiatric disorders with certain anti-psychotic medicines) is a disorder in which there is loss/destruction of cells in the substantia nigra***, a region in the “basal ganglia” in the lower part of the brain, near the takeoff point of the brainstem and spinal cord.  That region is dense with the bodies of dopaminergic neurons, which seem there to modulate and refine motor control of the body.  The loss of these nerve cells over time is associated with gradual but progressive movement disorders, including the classic “pill-rolling” tremor, shuffling gait, blank, mask-like facial expression, and incoordination, with a tendency to lose one’s balance.  There are more subtle and diffuse problems associated with it as well, including dementia and depression, and like Alzheimer’s it is generally progressive and degenerative, and there is no known “cure”, though there are treatments.

Let me take a bit of a side-track now and address something that has been a pet peeve of mine, and which contributes to a general misunderstanding of how the nervous system and neurotransmitters work, and how complex the nature and treatment of diseases of the central nervous system can be.  This may end up causing this blog post to require at least two parts, but I think it’s worth the diversion.

I mentioned above that the cells of the substantia nigra are mainly dopaminergic cells.  This means that they are nerve cells that transmit their “messages” to other cells mainly (or entirely) using the neurotransmitter dopamine.  The term “dopaminergic” is a combination word, its root obviously enough being “dopamine” and its latter half, “-ergic”, relating to the Greek word “ergon”, meaning “work”.  So “dopaminergic” means those cells do their work using dopamine, and—for instance—“serotonergic” refers to cells that do their work using serotonin.  That’s simple enough.

But the general public seems to have been badly educated about what neurotransmitters are and do; what nerve impulses are and do; and what the nature of disorders, like for instance depression, that involve so-called “chemical imbalances” really entails.

I personally hate the term chemical imbalance.  It seems to imply that the brain is some kind of vat of solution, perhaps undergoing some large and complex chemical reaction that acquires some mythical state of equilibrium when it’s working properly, but that doesn’t function correctly when, say, some particular reactant or catalyst is present in too great or too small a quantity.  This is a thoroughly misleading notion.  The brain is an incredibly complex “machine”, with hundreds of billions of cells interacting in extremely complicated and sophisticated ways, not a chemical reaction with too many or too few moles on one side or another.

People have generally heard of dopamine, serotonin, epinephrine, norepinephrine, and the like, and I think many people think of them as related to specific brain functions—for instance, serotonin is seen as a sort of “feel good” neurotransmitter, dopamine as a “reward” neurotransmitter, epinephrine and norepinephrine as “fight or flight” neurotransmitters, and so on.

I want to try to make it very clear:  there’s nothing inherently “feel good” about serotonin, there’s nothing inherently “rewarding” about dopamine, and—even though epinephrine is a hormone as well as a neurotransmitter, and so can have more global effects—there’s nothing inherently “fight or flight” about the “catecholamines” epinephrine and norepinephrine.

All neurotransmitters—and hormones, for that matter—are just complex molecules that have particular shapes and configurations and chemical side chains that make them better or worse fits for receptors on or in certain cells of the body.  The receptors are basically proteins, often combined with special types of “sugars” and “fats”.  They have sites in their structures into which certain neurotransmitters will tend to bind—thus triggering the receptor to carry out some function—and to which other neurotransmitters don’t bind, though, as you may be able to guess from looking at their somewhat similar structures, there can be some overlap.

[Image: dopamine molecular structure]

[Image: serotonin molecular structure]

[Image: epinephrine molecular structure]

Neurotransmitters are effectively rather like keys, and their functions—what they do in the nervous system—are not in any way inherent in the neurotransmitter itself, but in the types of processes that get activated when they bind to receptors.

There is nothing inherently “rewarding” about dopamine, any more than there is anything inherently “front door-ish” to the key you use to unlock the front door of your house, or “car-ish” to the keys that one uses to open and turn on cars.  It’s not the key or the lock that has inherent nature, it’s whatever function is initiated when that key is put into that lock, and that function depends entirely on the nature of the target.  The same key used to open your door or start your car could, in principle, be used to turn on the Christmas lights in Rockefeller Center or to launch a nuclear missile.

Dopamine is associated with areas of the nervous system that function to reward—or more precisely, to motivate—certain behaviors, but it is not special to that function.  As we see in Parkinson’s Disease, it is also used in regions of the nervous system involved in modulating motor control of the body.  The substantia nigra doesn’t originate the impulses for muscles to move, but it acts as a sort of damper or fine tuner on those motor impulses.
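Here’s a toy way to picture that, a dispatch table entirely of my own invention (the tissue names and effects are loose illustrations, not real receptor pharmacology):

```python
# The "meaning" lives in the receiving side (the lock), not in the molecule
# (the key): the same ligand triggers whatever its local receptor wires up.
receptors = {
    ("dopamine", "reward circuitry"): "motivate/reinforce the behavior",
    ("dopamine", "striatum"):         "fine-tune motor output",
    ("serotonin", "gut"):             "modulate gut motility",
}

def bind(ligand, tissue):
    return receptors.get((ligand, tissue), "no matching receptor; nothing happens")

print(bind("dopamine", "reward circuitry"))  # same key...
print(bind("dopamine", "striatum"))          # ...different lock, different effect
```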

Neurotransmitters work within the nervous system by being released into very narrow and tightly closed spaces between two nerve cells (a synapse), in amounts regulated by the rate of impulses arriving at the bulb of the axon.  Contrary to popular descriptions, these impulses are not literally “electrical signals” but are pulses of depolarization and repolarization of the nerve cell membrane, involving “voltage-triggered gates****” and the control of the concentration of potassium and sodium ions inside and outside the cell.

[Image: a highly stylized synapse]

The receptors then either increase or decrease the activity of the receiving neuron (or other cell) depending on what their local function is.  It’s possible, in principle, for any given neurotransmitter to have any given action, depending on what functions the receptors trigger in the receiving cell and what those receiving cells then do.  However, there is a fairly well-conserved and demarcated association between particular neurotransmitters and general classes of functions of the nervous system, due largely to accidents of evolutionary history, so it’s understandable that people come to think of particular neurotransmitters as having that nature in and of themselves…but it is not accurate.

Okay, well, I’ve really gone off on my tangents and haven’t gotten much into the pathology, the pathophysiology, or the potential (and already existing) treatments either for Parkinson’s or Alzheimer’s.  I apologize if it was tedious, but I think it’s best to understand things in a non-misleading way if one is to grasp why it can be so difficult to treat and/or cure disorders of the nervous system.  It’s a different kind of problem from the difficulties treating cancer, but it is at least as complex.

This should come as no surprise, given that human nervous systems (well…some of them, anyway) are the most complicated things we know of in the universe.  There are roughly as many nerve cells in a typical human brain as there are stars in the Milky Way galaxy, and each one connects with a thousand to ten thousand others (when everything is functioning optimally, anyway).  So, the number of nerve connections in a human brain can be on the order of a hundred trillion to a quadrillion—and these are not simple switching elements, like the AND, OR, NOT, NAND, and NOR gates for bits in a digital computer, but are in many ways continuously and complexly variable even at the single synapse level.
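The arithmetic behind that estimate, spelled out:

```python
neurons = 1e11                     # roughly a hundred billion nerve cells
for per_neuron in (1e3, 1e4):      # ~1,000 to ~10,000 connections each
    print(f"{neurons * per_neuron:.0e} total connections")
# -> 1e+14 (a hundred trillion) up to 1e+15 (a quadrillion)
```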

When you have a hundred trillion to a quadrillion more or less analog switching elements, connecting cells each of which is an extraordinarily complex machine, it shouldn’t be surprising that many things can go wrong, and that figuring out what exactly is going wrong and how to fix it can be extremely difficult.

It may be (and I strongly suspect it is the case) that no functioning brain of any nature can ever be complex enough to understand itself completely, since the complexity required for such understanding increases the amount and difficulty of what needs to be understood*****.  But that’s okay; it’s useful enough to understand the principles as well as we can, and many minds can work together to understand the workings of one single mind completely—though of course the conglomeration of many minds likewise will become something so complex as likely to be beyond full understanding by that conglomeration.  That just means there will always be more to learn and more to know, and more reasons to try to get smarter and smarter.  That’s a positive thing for those who like to learn and to understand.

Anyway, I’m going to have to continue this discussion in my next blog post, since this one is already over 2100 words long.  Sorry for first the delay and then the length of this post, but I hope it will be worth your while.  Have a good weekend.


*For instance, Multiple Sclerosis attacks white matter in the brain, which is mainly long tracts of myelinated axons—myelin being the cellular wraparound material that greatly speeds up transmission of impulses in nerve cells with longish axons.  The destruction of myelin effectively arrests nerve transmission through those previously myelinated tracts.

**“Dementia” is not just some vague term for being “crazy” as one might think from popular use of the word.  It is a technical term referring to the loss (de-) of one’s previously existing mental capacity (-mentia), particularly one’s cognitive faculties, including memory and reasoning.

***Literally, black substance.

****These are proteins similar to the receptors for neurotransmitters in a way, but triggered by local voltage gradients in the cell membrane to open or close, allowing sodium and/or potassium ions to flow into and out of the cell, thereby generating more voltage gradients that trigger more gates to open, in a wave that flows down the length of the axon, initially triggered usually at the body of the nerve cell.  They are not really in any way analogous to an electric current in a wire.

*****You can call that Elessar’s Conjecture if you want (or Elessar’s Theorem if you want to get ahead of yourself), I won’t complain.

Some discussion of cancer–not the zodiac sign

Yesterday, reader StephenB suggested that I write about what I thought might be the next big medical cure coming our way—he suggested cancer, Alzheimer’s, and Parkinson’s diseases as possible contenders—and what I thought the “shape” of such a cure might be.  I thought this was an interesting point of departure for a discussion blog, and I appreciate the response to my request for topics.

[I’ll give a quick “disclaimer” at the beginning:  I’ve had another poor night.  Either from the stress of Monday night or something I ate yesterday (or both, or something else entirely) I was up a lot of last night with reflux, nausea, and vomiting.  So I hope I’m reasonably coherent as I write, and I apologize if my skills suffer.]

One hears often of the notion of a “cure for cancer”, for understandable reasons; cancer is a terrifying and horrible thing, and most people would like to see it gone.  However, my prediction is that there will never be “a” cure for cancer, except perhaps if we develop nanotechnology of sufficient complexity and reliability that we are able to program nanomachines unerringly to tell the difference between malignant and non-malignant cells, then destroy the malignant ones and remove their remains neatly from the body without causing local complications.  That’s a tall order, but it’s really the only “one” way to target and cure, in principle, all cancers.

Though “cancer” is one word, and there are commonalities in the diseases that word represents, most people know that there are many types of cancers—e.g., skin, colon, lung, breast, brain, liver, pancreatic, and so on—and at least some people know that, even within the broader categories there are numerous subtypes.  But every case of cancer is literally a different disease in a very real sense, and indeed, within one person, a single cancer can become, effectively, more than one disease.

We each* start out as a single fertilized egg cell, but by adulthood, our bodies have tens of trillions of cells, a clear demonstration of the power of exponential expansion.  Even as adults, of course, we do not have a static population of cells; there is ongoing growth, cell division/reproduction, and of course, cell death.  This varies from tissue to tissue, from moment to moment, from cell type to cell type, under the influence of various local and distant messengers, ultimately controlled by the body’s DNA.
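To show just how powerful that exponential expansion is, here is a quick sketch; the target figure is a commonly cited rough estimate, and the doubling is idealized, since real development involves cell death and wildly varying division rates:

```python
import math

# How many synchronized doublings take one fertilized egg to the tens of
# trillions of cells in an adult body?
target_cells = 4e13
print(math.ceil(math.log2(target_cells)))   # -> 46
```

Fewer than fifty doublings, in the idealized case; exponential growth gets big fast.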

Whenever a cell replicates, it makes a copy of its DNA, and one copy is sent into each daughter cell.  There are billions of base pairs in the human genome, so there are lots of opportunities for copying errors.  Thankfully, the cell’s proofreading “technology” is amazingly good, and errors are few and far between.  But they are not nonexistent.  Cosmic rays, toxins, other forms of radiation, prolonged inflammation, and simple chance can all lead to errors in the replication of a precursor cell’s DNA, giving rise to a daughter cell with mutations, and when there are trillions of cells dividing, there are bound to be a number of such errors.

The consequences of such errors are highly variable.  Many of them do absolutely nothing, since they happen in portions of the genome that are not active in that daughter cell’s tissue type, or are in areas of “junk” DNA in the cell, or in some other way are inconsequential to the subsequent population of cells.  Others, if in just the wrong location, can be rapidly lethal to a daughter cell.  Most, though, are somewhere in between these two extremes.

The rate of cell division/reproduction in the body is intricately controlled: by the proteins and receptors in each cell, by the genes that code for them, and by genes that code for factors that influence other portions of the genome of a given cell and that make it sensitive or insensitive to hormonal or other factors that promote or inhibit cell division.  If a mutation arises in one of the regions of the genome involved in this regulatory process—either increasing the tendency to grow and divide or diminishing the sensitivity to signals that inhibit division—the cell can become prone to grow and divide more rapidly than would be ideal or normal for that tissue.  Any given error is likely to have a relatively minor effect, but it doesn’t take much of an effect to lead eventually to a significant increase in the number of cells of a given cell type—again, this is the power of exponential processes.

A cell line that is reproducing more rapidly will have more opportunities for errors in the DNA reproduction of its many daughter cells.  These new errors are no more likely to be positive, negative, or neutral generally than any other replication errors anywhere else in the body, but increased rate of growth means more opportunities** for mistakes.

If a second mistake in one of the potentially millions (or more) of daughter cells of the initial cell makes it yet more prone to divide rapidly than even the first population of mutated cells, then that population will grow and outpace the parent cells.  There can be more than one such daughter population of cells.  And as the rate of replication/growth/division increases in a given population of cells, we have an increased chance of more errors occurring.  Those that become too deleterious will be weeded out.  Those that are neutral will not change anything in the short term (though some can make subsequent mutations more prone to cause increased growth rates).  But the ones that increase the rate of growth and division will rapidly come to dominate.
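A minimal sketch of that takeover dynamic, with completely made-up numbers:

```python
# One slightly "faster" mutant cell among a million well-behaved ones.
normal, mutant = 1_000_000.0, 1.0
for _ in range(60):    # sixty division cycles
    mutant *= 1.3      # the mutant line nets 30% growth per cycle
    # normal cells just replace themselves on net, so they stay constant
print(f"mutant fraction: {mutant / (mutant + normal):.1%}")   # ~87%
```

Starting from one cell in a million, the mutant line dominates within a few dozen cycles; that is what “rapidly come to dominate” cashes out to.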

This is very much a microcosm of evolution by natural selection, and is a demonstration of the fact that such evolution is blind to the future.  In a sense, the mutated, rapidly dividing cells are more successful than their more well-behaved, non-mutated—non-malignant—sister cells.  They outcompete “healthy” cells for resources*** in many cases, and when they gather into large enough masses, they can cause direct physical impairments to the normal function of an organism.  They can also produce hormones and proteins themselves, and can thus cause dysregulation of the body in which they reside in many ways.

Because they tend to accumulate more and more errors, they tend to become more dysfunctional over time.  And, of course, any new mutations in a subset of tumor cells that makes it more prone to divide unchecked, or that makes it more prone to break loose from its place of origin and spread through the blood and/or lymph of the body will rapidly become overrepresented.

This is the general story of the occurrence of a cancer.  The body is not without its defenses against malignant cells—the immune system will attack and destroy mutated cells if it recognizes them as such—but they are not perfect, nor would it behoove evolution (on the large scale) to select for such a strictly effective immune system, since all resources are always finite, and overactive immunity can cause disease in its own right.

But the specific nature of any given cancer is unique in many ways.  First of all, cancers arise in the body and genes of a human being, each of which is thoroughly unique in its specific genotype from every other human who has ever lived (other than identical twins).  Then, of course, more changes develop as more mutations occur in daughter cells.  Each tumor, each cancer, is truly a singular, unique disease in all the history of life.  Of course, tumors from specific tissues will have characteristics born of those tissues, at least at the start.  Leukemias tend to present quite differently from a glioblastoma or a hepatoma.

Because of these differences, the best treatments for specific cancers, even for classes of cancers, are different.  The fundamental difficulty in treating cancer is that you are trying to stop the growth and division of—to kill—cells that are more or less just altered human cells, not all that different from their source cells.  So any chemical or other intervention that is toxic to a cancer cell is likely to be toxic to many other cells in the body.  This is why chemotherapy, and radiation therapy, and other therapies are often so debilitating, and can be life-threatening in their own right.  Of course, if one finds a tumor early enough, when it is quite localized, before any cells have broken loose—“metastasized”—to the rest of the body, then surgical removal can be literally curative.

Other than in such circumstances, the treatment of cancer is perilous, though not treating it is usually more so.  Everything from toxic chemicals to immune boosters, to blockers of hormones to which some cancers are responsive, to local radiation are used, but it is difficult to target mutated cells without harming the native cells to at least some degree.

In certain cases of leukemia, one can literally give a lethal dose of chemo and/or radiation that kills the bone marrow of a person whose system has been overwhelmed by malignant white blood cells, then give a “bone marrow transplant”, which nowadays can sometimes come from purified bone marrow from the patient—thus avoiding graft-versus-host disease—and there can be cures.  But it is obviously still a traumatic process, and is not without risk, even with auto-grafts.

So, as I said at the beginning, there is not likely to be any one “cure” for cancer, ever, or at least until we have developed technology that can, more or less inerrantly, recognize and directly remove malignant cells.  This is probably still quite a long way off, though progress can occasionally be surprising.

One useful thing cancer does is give us an object lesson, on a single-body scale, that it is entirely possible for cell lines—and for organisms—to evolve, via apparent extreme success, completely into extinction.  It’s worth pondering, because it happens often, in untreated cancers, and it has happened on the scale of species at various times in natural history.  Evolution doesn’t think ahead, either at the cellular level, the organismal level, or the species/ecosystem level.  Humans, on the other hand, can think ahead, and would be well served to take a cue from the tragedy of cancer that human continuation is not guaranteed merely because the species has been so successful so far.

Anyway, that’s a long enough post for today.  I won’t address matters of Parkinson’s Disease or Alzheimer’s now, though they are interesting, and quite different sorts of diseases than cancers are.  I may discuss them tomorrow, though I might skip to Friday.  But I am again thankful to StephenB for the suggestion/request, and I encourage others to share their recommendations and curiosities.  Topics don’t have to be about medicine or biology, though those are my areas of greatest professional expertise.  I’m pretty well versed in many areas of physics, and some areas of mathematics, and I enjoy some philosophy and psychology, and—of course—the reading and writing of fiction.

Thanks again.


*I’m excluding the vanishingly rare, and possibly apocryphal, cases of fused fraternal twins.

**There are also people who have, at baseline, certain genes that make them more prone to such rapid replication, or to errors in DNA replication, or to increased sensitivity to growth factors of various kinds, and so on.  These are people who have higher risks of various kinds of cancer, but even in them, it is not an absolute matter.

***Most tissues in the body have the inherent capacity and tendency to stimulate the development of blood vessels to provide their nutrients and take away their wastes.  Cancer cells are no exception, or rather, the ones that are exceptions do not tend to survive.  Again, it is a case of natural selection for those cell lines that are most prone to multiply and grow and gain local resources.

A call for topics

It’s Monday morning yet again, despite my best efforts‒the beginning of yet another pointless work week in the dreary tail end of the year, when the sun sets at 5:31 pm local time thanks to the recent end of the outmoded “daylight saving time”, making people like me, who are already dysthymic/depressive and also subject to some seasonal affective problems, that much more unstable.  Spread the word: the clock-changing regime causes significant morbidity and mortality* and does no one much, if any, good.

I’m writing this on my cell phone again, or “smartphone” if you will (though dumbphone seems like a better term given the way most humans use theirs).  I deliberately didn’t bring my laptop to the house with me over the weekend.  It’s not as though I’m writing stories anymore; I’m just writing this ridiculous blog.  So there’s no particular impetus to make the writing process easier for me, as using the laptop does.  I might as well use the smaller, lighter device when I don’t feel like carrying the heavier one.

I had a reasonably boring weekend, which I guess is a good thing.  I watched a few movies, and I went on some comparatively long walks‒I think I totaled about 12 miles over the course of the two days.  I also spoke with my sister on the phone on Sunday, and that was good.

That’s about it.  That’s the extent of my non-work life.  It’s the best I have to offer, and it’s as like as not just to get worse as time passes.  But I was able to force myself to get almost eight hours of sleep on Friday night and Saturday night, thanks to Benadryl and melatonin.  Oh, and of course, I did my laundry on Sunday, as I always do.

Sorry, I know this is really boring so far.  I don’t know what to tell you.  I didn’t really have any subject in mind for today, other than my brief diatribe about daylight savings time and depression/seasonal affective disorder.  Obviously, it’s a topic that affects me significantly (no pun intended), but there’s otherwise not much for me to say about it.

Eliezer Yudkowsky gives an interesting bit of insight into it as an illustrative case in his excellent book Inadequate Equilibria, which deals with, among other things, the reasons no one has done research on much stronger light-based treatments for SAD.  But you can’t expect depressed people to take the initiative to do remarkable things to help themselves, since a major part of the problem with depressive disorders is a comparative inability to take positive action.

If anyone out there has any requests for subjects or topics for me to discuss in a blog post, I’d be more than willing to consider them, though if it’s not a subject about which I have any expertise, I may not be able to do anything worthwhile with it.  Still, I have a fairly broad knowledge base regarding general science, especially biology and physics.  I like mathematics, though I’m not that deeply knowledgeable about esoterica thereof‒a regretted failure of my youthful imagination when I was in college.  Similar things could be said about the deep aspects of computer science; I wish I had known how interesting the subjects were back then and so had pursued them more than I did.

Of course, I have a fair amount of personal knowledge in the reading and writing of fantasy/science fiction/horror, though I haven’t read any new stuff in a while.  I haven’t even read any of my own books in a long time.  I think the most recent horror I’ve read was Revival by Stephen King, which was pretty good.  I haven’t read much if anything in the way of new fantasy since Harry Potter and the Deathly Hallows.  I’m reasonably well versed in slightly older comic book lore, especially Marvel.  And of course, The Silmarillion, The Hobbit, and The Lord of the Rings are among my favorite books.

I enjoy Shakespeare, but I don’t consider myself any kind of scholar of the Bard.  I like his works and his words in a fairly straightforward fashion.  I also like Poe quite a lot, as you might have guessed from my recitation videos of some of his poems.

Anyway, that’s a quick summary of some of the subjects upon which I might at least feel justified in opining.  So, if anyone has any suggestions or requests in these or even other, tangentially related subjects, I would appreciate them.  I like to feel useful or productive in at least some way, so I can justify my existence to myself.  It isn’t easy.  I’m a much harsher judge of my usefulness or worth than Scrooge at his worst, and I expect no ghosts of past, present, and/or future to visit me to give me some epiphany that changes my character.

It would be nice if some rescue mission were to happen to save my soul, but I don’t see it as plausible, and I don’t think anyone thinks it’s in their interest‒or anyone else’s‒to save me, in any case.  So in the meantime I’m just stumbling along like a wind-up robot that’s been forgotten by the child who wound it up, legs moving and shifting until the mechanism breaks or the spring finishes unwinding.  And damn, that’s an annoyingly efficient spring.


*I don’t have the data for this, but I strongly suspect that, if the sun set at least a little later‒say an hour later, even‒things would be slightly easier for people with SAD.  It might be difficult to tease out the statistics, but SAD doesn’t just kill by increasing rates of suicide, though I’m pretty sure it does that.  People experiencing exacerbations of depression have higher rates of numerous other illnesses and accidents beyond the obvious. 

Never seem to find the time…

It’s Tuesday now, and I’m writing this post on my smartphone, because I couldn’t be arsed to bring my laptop back from work with me last night.  Perhaps this entry will therefore be more concise than usual, but I wouldn’t lay heavy money on it.  It’s more likely than winning the $1.9 billion Powerball Con Game, but that’s not saying much.  Getting struck by lightning during a shark attack is probably more likely than that.

There’s a total lunar eclipse in progress as I write this, and the umbra has covered about half of the moon.  I took a snap with my smartphone as I left the house and then more when I got here to the train platform.  I’ll share some of them below.  They are not of very good quality‒and the first one is just streaks of light, because apparently I was too excited to keep my phone still while taking the picture‒but then again, in the days before smartphones, I wouldn’t have been able to take such a picture at all.

The last time I recall watching a lunar eclipse with any degree of attention was back when I was in either junior high or high school, when I had a very cheap telescope on our back deck (this was quite a bit later than the reminiscence I described yesterday).  I have to say, the one happening now is quite a bit more impressive than the one I remember.  The shadowed portion of the moon is almost completely black, and the encroaching edge of the Earth’s shadow is quite, quite different from the usual arc of the moon’s own phases.  It’s fascinating.

I forgot again to work on editing my audio recording of thoughts about time yesterday.  I feel like I want to make some excuse, and there are surely reasons, or at least causes, but it doesn’t really matter what they are.  So, now I’m going to try to rehash those verbal thoughts to give you all either a preview or‒more likely‒a replacement for the posting of those spoken words.

I was triggered to talk about time while watching a science video in which someone pointed out, as people often do, that we are able to travel rather freely in any of the three dimensions of space, but that our movement through time seems entirely one-dimensional, with no apparent way to choose our direction or speed through it.  This is a slightly misleading characterization of the situation, I thought.  That thought is neither entirely original nor unique to me, but this is my way of thinking about it.

It’s true that, if we were in deep space, especially in one of the gargantuan intergalactic voids (where light from all the surrounding galaxies would be too faint to be visible), there would literally be nothing to differentiate up from down, left from right, forward from backward, or indeed, any of these axes of motion from the others.  But that’s not the situation in which we find ourselves.  We are on the surface of a planet, in the presence of a rather strong gravity “well”, and that very much changes the way we experience the three dimensions of space.

Ignoring the facts of terrain, and thinking back to before we had modern technology, it’s clear that, while we are basically free to move forward and back and left and right‒and indeed, we can swap those axes out arbitrarily‒we are not free to move up and down at will.

Even birds and insects and bats cannot freely move through the up-down dimension, not in the way they can move along the curved plane of the surface of the Earth.  It requires great effort for them to change their height, and they are limited by that effort and by the density of the air through which they swim.  Because we are near a source of strong gravity, there is a clear directionality to one of the dimensions of space.  The only reason we don’t keep falling down is that there’s a planet in the way; if it weren’t there, there would be nothing pulling us in any one direction at all.

In a somewhat analogous sense, the only reason there seems to be a directionality to time is that we are near (in time) a region of very low entropy:  the Big Bang.  Since that event, about 13.8 billion years ago, entropy has been steadily increasing, as is its tendency, for fairly simple mathematical reasons that make the 2nd law of thermodynamics among the most unassailable of all principles of physics.
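
To gesture at those simple reasons‒this is just the standard statistical-mechanics sketch, nothing original to me‒entropy is, at bottom, a count of microscopic arrangements, and disordered macrostates correspond to overwhelmingly more arrangements than ordered ones:

```latex
% Boltzmann's entropy: S grows with the number of microstates W
% compatible with a given macrostate.
S = k_B \ln W
% A system wandering among its microstates is overwhelmingly likely
% to drift toward macrostates with larger W, i.e., higher S.
% That statistical near-certainty is the Second Law.
```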

All the processes that cause us to experience a directionality to time are driven by the tendency for entropy to increase, and that includes the clumping of matter under gravity, the growth of biological organisms, the accumulation of memory, and the development of technology.  Increasing entropy‒on the largest scales‒is all that allows temporary decreases of entropy locally.  Put poetically, it is only the inevitability of death that allows life to exist at all.

But of course, in the future, as entropy increases, life and local order will be no more possible than they would be in intergalactic space.  Once entropy increases enough‒and the vast majority of the existence of our universe will be in such a state, just as most of space is not near the surface of a planet‒there will be no way even to know which direction of time would have corresponded to what we now think of as past and future, because the laws of physics are locally time-reversible.  Time in that epoch would be no more uni-directional than space is in the vastness of an intergalactic void.

What’s more, it’s clear, based on special and general relativity, that time is not purely one-dimensional.  Time and space bleed into each other depending on relative motion and local spacetime curvature.  That which can curve is not, strictly speaking, entirely one-dimensional in a Euclidean sense.
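
To make that “bleeding” concrete‒this is textbook special relativity, nothing exotic‒an observer moving at speed v along the x axis mixes time and space coordinates, while all observers agree on a combined spacetime interval:

```latex
% Lorentz boost along x: the moving observer's time mixes t with x.
t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad
x' = \gamma \left( x - v t \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% What every observer agrees on is the spacetime interval:
s^2 = -c^2 t^2 + x^2 + y^2 + z^2
```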

All this makes me wonder if, perhaps, the Big Bang era is not strictly a “plane” orthogonal to the time dimension, but might in fact be the surface of a sphere…or, well, some manner of hypersphere in spacetime, the surface of which is all at one “moment”, just as the surface of a planet is all‒more or less‒at the same distance from its center.

If so, then the Big Bang need not have happened merely in one direction in time.  Others have toyed with ideas like this*, with the thought that there might be a sort of mirror image universe to ours, extending the other direction in time from us, its future analogous to our past.  I’ve even occasionally wondered if the (very slight) relative abundance of matter over antimatter in our direction of time would be mirrored by a relative abundance of antimatter in that universe**.

But on further thought, I’m led to wonder whether there need be merely two mirror universes, delineated by the Big Bang and heading in opposite directions.  Perhaps there is a continuum of such directions, just as there is a continuum of “up” directions from the surface of the Earth.  Perhaps our expanding universe has more in common with the growing spheres concentric around the Earth’s center, which get larger and larger as one moves outward.  And perhaps the Big Bang is not so much a beginning of time or of the universe as it is a local region of low entropy in time, allowing the existence of phenomena‒including life‒near its surface, phenomena that experience a local difference between past and future only because they exist in an entropy gradient.

Perhaps, far out in the “future” of the universe, there might exist other local entropy minima, in any direction in time from us‒directly ahead or even at right angles in time to us, or any combination thereof.  Of course, “reaching” them would be harder than traveling out into intergalactic space, given that they would probably exist across unguessable gulfs of timeless “time”***.

How would we even measure or pass through time in a region in which entropy was near-maximal and time was without any inherent direction?  Perhaps if it were possible to accelerate continuously to near enough the speed of light that one’s personal time slowed ever more and more, one could survive to arrive at a place where entropy would begin to decrease.  But what would that even be like?  Would one enter such a realm as if a traveler from its future, moving‒to any local residents‒backward through time?
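
As a toy illustration of just how far that “personal time slowing” can stretch‒using the standard constant-acceleration (“relativistic rocket”) equations; the scenario itself is pure fancy‒here is a minimal sketch:

```python
import math

C = 299_792_458.0          # speed of light, m/s
G_ACC = 9.81               # a comfortable 1 g of proper acceleration, m/s^2
YEAR = 365.25 * 24 * 3600  # seconds per year
LY = C * YEAR              # meters per light-year

def rocket(tau_years: float):
    """Coordinate time elapsed and distance covered for a ship under
    constant proper acceleration G_ACC, after tau_years of the
    traveler's own (proper) time.  Standard special-relativity results:
    t = (c/a) sinh(a*tau/c),  x = (c^2/a) (cosh(a*tau/c) - 1)."""
    tau = tau_years * YEAR
    t = (C / G_ACC) * math.sinh(G_ACC * tau / C)
    x = (C**2 / G_ACC) * (math.cosh(G_ACC * tau / C) - 1)
    return t / YEAR, x / LY

for tau in (1, 10, 20):
    t_yr, x_ly = rocket(tau)
    print(f"{tau:>2} yr aboard -> {t_yr:.3g} yr outside, {x_ly:.3g} ly traveled")
```

Twenty years of shipboard time at a steady 1 g works out to hundreds of millions of years‒and light-years‒in the outside world, which is the sort of leap across epochs I have in mind.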

I could go on and on about these ideas, and maybe I’ll explore them more in future (ha) posts, but for now, I’ve taken enough time (ha ha).  This was certainly not a concise blog post, but I hope it was at least intriguing.  I’d be interested to hear your own thoughts on such matters.

In closing, I’ll just ask the following thoroughly fanciful question that just popped into my head:  What would happen to a werewolf during a lunar eclipse?


*For instance, The Janus Point, by Julian Barbour, deals with some similar concepts.  I haven’t finished reading the book, but I thought of the ideas I’m discussing before I’d encountered it, and my ideas are somewhat different, though far less expert than his.

**Though they would surely switch the terms, calling our antimatter their matter.

***And reaching the portion of our universe that heads in the opposite direction in time would seem to require exceeding the speed of light, which appears to be impossible‒though perhaps wormholes might lead to such places, if they in fact exist.

If “November” is the 11th month, then is the “second” day number 4?

It’s 4:35 on Wednesday morning, November 2nd, 2022, and it’s already 80 degrees (Fahrenheit) at the Hollywood train station and very muggy.  I’m dripping with sweat just from walking as far as the bench to wait for the earliest morning train*.  It’s ridiculous.  For this reason and others, I wish I had never moved to Florida.  In my opinion, it’s overall a “nice place to visit, but you wouldn’t want to live there”.  Or as many locals say: “Come on vacation, leave on probation”.  It really is a shame, because there is a tremendous amount of natural beauty here, but much of even that has been ruined by invasive species, the main one being Homo sapiens.

It’s on hot and muggy early November mornings such as this that I truly miss being in Michigan, where I grew up.  Say what you will about the Detroit area, at least there are fewer humans there now than there were in the past.  It can be somewhat depressing to see that, but boy, in Autumn all the trees along the side streets in my hometown looked spectacular, and you could walk from your door to the street without sweating.

If the Detroit area is too sad for you‒or too flat‒then you could go to upstate New York, where I went to university.  That was amazing in the Fall.  Walking back to the dorm down Libe Slope after class at this time of year was like seeing a fifty-mile-wide fireworks display happening in slow motion, spread out over many weeks.  Of course, Winter was quite cold, bitter, and snowy there, but if you were adventurous, you could take a tray from the dining hall and “tray” down Libe Slope.  I never did that, myself; there was a road right at the bottom of the hill, and though it was not busy, it was hard not to think about careening uncontrollably into some passing salt truck.

Actually, they really did an amazing job keeping the roads clear in Ithaca in the winter.  They had to keep them clear.  There were many slopes in town that could have served as ski jumps if you’d put an upcurve at the bottom, so these had to be cleared pretty much as fast as the snow could fall.

Of course, while I have my complaints about Florida, I did come here of my own free will**, and have had many good times and good life events here, the most outstanding of which was the birth of my daughter.  I can’t ever complain about that.

My son was born in New York (not Ithaca) but we left before he was old enough really to remember it.  Both of my children are Florida kids, effectively.  I wonder how they would feel if someday they moved up North and experienced Autumn there for the first time, beginning to end.  Would they be as wowed by its beauty as I always have been, or would they feel a homesickness for the heat of the Sunshine State as the weather cooled and the days shortened?

Of course, the days don’t literally shorten, just the daylight hours.  There are subtle variations and even occasional tiny diminutions of the day, as happened recently, but overall, the rotation rate of the Earth is, barring other inputs, going to slow very steadily and gradually, so days will become longer.  If nothing else, since the planet’s mass is not perfectly symmetrical, it must, as it turns, radiate some minuscule amount of energy away in the form of gravitational waves, and the Moon/Earth orbital pairing will radiate some, too.

When I say “minuscule”, I’m guilty of severe (and ironic) understatement.  The sun will surely long since have gone through red giant and on to white dwarf status before there is any appreciable loss of rotational energy from gravitational waves alone.  I can’t give you the exact numbers‒if anyone out there can, please share‒but it’s tiny, it’s wee, it’s verging on infinitesimal.
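
For what it’s worth, here is a back-of-the-envelope stab at part of it, using the standard quadrupole formula for two point masses in a circular orbit, applied to the Earth–Moon pair (my input figures are rounded, so treat the output as order-of-magnitude only):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

# Rounded Earth-Moon figures; order-of-magnitude inputs only.
M_EARTH = 5.97e24  # kg
M_MOON = 7.35e22   # kg
R_ORBIT = 3.84e8   # mean Earth-Moon distance, m

def gw_power(m1: float, m2: float, r: float) -> float:
    """Gravitational-wave power from two point masses in a circular
    orbit, via the standard quadrupole formula:
    P = (32/5) G^4 (m1*m2)^2 (m1+m2) / (c^5 r^5)."""
    return (32 / 5) * G**4 * (m1 * m2)**2 * (m1 + m2) / (C**5 * r**5)

print(f"Earth-Moon GW power: {gw_power(M_EARTH, M_MOON, R_ORBIT):.1e} W")
# Comes out around 7e-6 W: a few microwatts, next to the terawatts of
# ordinary tidal friction that actually dominate the slowing.
```

A few microwatts, radiated by the entire Earth–Moon system: “verging on infinitesimal” indeed.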

Speaking of small things and their opposites, yesterday’s post ended up being unusually long and exceptionally dreary***, so I’ll bring this one to a close now.  Thank you for your patience, thank you for reading, and if you have any comments about your reactions to autumn, or to major changes of local climate due to moves over the course of life, I would be interested to know about them.  No pressure.


*Yes, I came for the 4:45 am train, but only because there wasn’t an earlier one.  I couldn’t sleep.

**So to speak.  I’m provisionally convinced that there is no such thing as free will.  I could be wrong, of course, but it doesn’t really matter all that much.  As I like to say, I either have free will or I don’t, but it’s not like I have any choice in the matter.

***But nonetheless true.  I can’t pretend that it was an exaggeration, nor that my mental health is really just fine.  It is not.  It’s horrible.

Can we do better than recycling?

Well, I forgot to bring my little laptop back to the house with me yesterday, so I’m writing this blog post on Google Docs via Google Drive on my phone.  It’s very handy, obviously, but it’s not as good a word processor as MS Word, though it has its own relative advantages.  Also, it’s just easier to write using a full, true keyboard than with the simulated keyboard on a smartphone.

It’s not a good sign that I’ve forgotten my laptop.  Before these recent weeks, it had been years since I’d last forgotten it, but now I’ve done so twice within about a month.  I am mentally quite foggy, it seems.  You all can probably tell that already, but it’s harder to recognize one’s own deterioration from within, since that with which one does the recognizing is that which is deteriorating.

How troublesome.

Despite not being at my best, I did have a somewhat interesting idea yesterday‒not for the first time, though it’s become a bit more coherent with each iteration, as such thoughts seem to tend to do.  I was bringing some boxes out to the big dumpster that is reserved solely for cardboard, when it occurred to me‒again, not for the first time‒that we should not be recycling cardboard or paper.  Neither should we be sending it to landfills.  In landfills, of course, paper decays and decomposes, thereby releasing methane and carbon dioxide, so that’s not good.  But the process of recycling is wasteful and inefficient, producing pollution and releasing “greenhouse gases” in its own right.

New paper and cardboard are made from trees grown on tree farms, or such is my understanding.  In other words, old growth forests don’t get cut down to make paper*; rather, new trees are planted and grown, capturing CO2 from the atmosphere as they grow, though that process is slow and rather inefficient.  But paper and other such things can probably be made from other, faster-growing and even more robust alternatives.

One frequently hears of hemp being touted as a fast-growing source of cellulose and the like, and though I suspect that some of its touted miraculous attributes may be exaggerated, this one seems fairly straightforward.  It’s a rapidly growing plant, the fiber of which has been known to be useful for centuries.  It shouldn’t be too hard to use it for paper and cardboard, and in the meantime, fast-ish growing trees can continue to be planted and take some of the CO2 from the air.

Okay, so, if we don’t recycle it, what do we do with the paper and the cardboard?  We do what some carbon capture technologies are already doing with the carbon they remove from the air: we bury it deep in the earth, preferably in a way that prevents it from decomposing and releasing its carbon back into the atmosphere.  There are ways to do this, in principle, that should be rather cheap.  I would imagine that vacuum packing before deep burying might do the trick.
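
To put a rough number on the payoff‒a back-of-the-envelope sketch only; the ~45% carbon content of cardboard is my assumption (cellulose is around 44% carbon by mass), so adjust to taste:

```python
# Rough CO2-equivalent locked away per tonne of buried cardboard.
# ASSUMPTION: cardboard is ~45% carbon by dry mass (a ballpark for
# cellulose-based material).  The 44/12 factor is exact chemistry:
# it converts carbon mass into the CO2 mass it would otherwise become.
CARBON_FRACTION = 0.45
CO2_PER_C = 44.0 / 12.0  # molar-mass ratio of CO2 to C

def co2_sequestered(tonnes_cardboard: float) -> float:
    """Tonnes of CO2 kept out of the atmosphere if the cardboard is
    buried intact rather than left to decompose or be burned."""
    return tonnes_cardboard * CARBON_FRACTION * CO2_PER_C

print(f"1 t cardboard -> {co2_sequestered(1):.2f} t CO2 equivalent")
# ~1.65 t of CO2 per tonne of cardboard, under these assumptions.
```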

The ideal place to dispose of it‒indeed it would be a good way of disposing of much of our carbonaceous wastes, including our own bodies, when we die‒would be near a deep ocean subduction zone, where it would eventually be carried back into the mantle of the Earth to remain sequestered and redistributed for millions of years.  Of course, one would probably have to do such deep ocean “burials” on large scales to avoid it being a net detriment, carbon-wise.

Cremation certainly doesn’t make sense when it comes to atmospheric carbon, though it may be better for space considerations. It’s probably worse than burial for the overall environment.  But humans are superstitious about their bodies and the bodies of their relatives and whatnot, so convincing them to do something sensible with them might be a serious uphill battle.

Even plastic should probably not be recycled, except where recycling can produce something for a particular use more cheaply, more efficiently, and at lower atmospheric cost than making new plastic, without subsidizing the process.  Better to do the deep-burial thing with it as well.  Plastic can be an excellent carbon sink, and instead of recycling it, we can put more effort into producing neo-plastics from plants rather than petroleum, again removing carbon from the atmosphere.

It’s interesting how feel-good ideas of the past (and the present) can sometimes turn out to be more detrimental than beneficial.  But that’s why one must always assess and reassess every situation as it goes along, testing all knowledge against the unforgiving surface of reality, and not being afraid to rethink things.  At the very least, it can be fun.

I used to think it would be a great idea to breed and/or engineer bacteria or fungi that can digest plastics, but now I realize that this would release a vast quantity of new carbon dioxide and methane and the like into the atmosphere.  Better to have algae that trap carbon and then are converted into plastics, or fuel, or something similar.  At least for now.

Because solving one problem, assuming that even happens, will always lead to new, unforeseeable problems and questions that must be addressed.  But each new question faced and each new problem solved makes the knowledge and capacity of civilization greater.  There is no upper limit on how much can be known‒or if there is, it’s so far beyond what we do know that we cannot even contemplate it sensibly.  There is, however, a definite lower limit of knowledge (not counting “anti-knowledge” or stupidity, which is another point of exploration entirely), and that is zero‒a return to a state with no life, no mind, no information.

Some of us find that state enticing for ourselves, but when I’m feeling unusually generous, I think it would be a shame for civilization to come to naught.  There’s nothing in the laws of nature preventing it from happening, though, any more than there’s anything preventing a reckless teenage driver from being killed in a car accident, no matter how immortal he feels.  It’s never too early to try to learn discipline and responsibility, to become more self-aware and aware of the universe…but it can be too late.

Anyway, that’s enough for the day.  I hope I didn’t bore you.  Have a good day.


*More often, it seems, this is done to create new farmland, which is a separate issue.