Wandering through fields of deer

I work in a city in Florida called Deerfield Beach.  People often refer to it simply as “Deerfield”.  Being who I am, I can almost never hear or see that word without thinking something along the lines of “What kind of field is a deer field?”. Then I usually begin some lighthearted speculations on the matter.

I will now share some of these with you, because why should I be the only one to suffer from such stupidity?

I often speculate to myself that perhaps the deer field is a recently discovered quantum field, along the lines of the electron field and the gluon field and all the rest.  If that is the case, what we see as “deer” would be, fundamentally, just local disturbances or vibrations in the “deer field”.

Obviously the deer field interacts with the Higgs field, because although deer can be quite speedy, they never move anything close to the speed of light, and they can even be at rest; they clearly have a rest mass.  As everyone knows, “massless” particles, the ones that don’t interact with the Higgs, always travel at the speed of light*, which is just another term for the speed of causality.

Speaking of which, of course, an individual deer is very massive for a fundamental particle.  The median mass of a deer is around 50 kg.  Putting that in terms more typical of particle physics, it’s roughly 3 x 10^37 eV**.

To give you some perspective, the most massive of the quarks, the top quark, which is (I think) the most massive previously recognized fundamental particle, comes in at about 170 GeV (giga-electron-volts).  That’s 170 billion eV, or 170 x 10^9 eV, or 1.7 x 10^11 eV.  That would make a typical deer particle roughly 1.6 x 10^26 times as massive as a top quark.  Writing that out in terms that might hit home more powerfully, that’s about 160,000,000,000,000,000,000,000,000 times as massive.
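For anyone who wants to check the figuring, here’s a quick back-of-the-envelope sketch (in Python, just because).  The only inputs are the assumed 50 kg deer and a top quark of roughly 173 GeV; everything else is E = mc² and unit conversion.

```python
# Back-of-the-envelope check of the deer-particle figures.
# Assumptions: a 50 kg deer, and a top quark mass of roughly 173 GeV.
C = 2.998e8                  # speed of light, m/s
EV = 1.602e-19               # joules per electron volt

deer_mass_kg = 50.0
deer_rest_energy_eV = deer_mass_kg * C**2 / EV     # E = m c^2, converted to eV

top_quark_eV = 173e9
print(f"Deer rest energy: {deer_rest_energy_eV:.2e} eV")                          # ~2.8e37 eV
print(f"Deer-to-top-quark mass ratio: {deer_rest_energy_eV / top_quark_eV:.2e}")  # ~1.6e26
```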

No wonder it’s never been produced in any of our particle accelerators!

Yet the deer field must have very weak coupling with other fields, because individual deer particles are extremely stable.  We can feel reasonably confident that not one single deer particle has decayed spontaneously into other, less massive particles in all of human history, because if it did, the energy released would dwarf the largest nuclear explosion ever set off by humans.

Recall that the explosive force of the original atom bombs at Trinity, Hiroshima, and Nagasaki was produced by the conversion of less than a gram of matter into radiant energy, yielding a blast equivalent to the explosion of about 20 thousand tons (aka 20 kilotons) of TNT.  The energy released by the “decay” of a single deer particle would be more than 50,000 times as great, if my figuring is right, or roughly a gigaton.  I’m sure you’re all aware that the Tsar Bomba, the largest ever nuclear explosion set off by humans, was “only” about fifty megatons, or about one twentieth as large.
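Again, just a rough sketch of the figuring, assuming the entire 50 kg rest mass goes into the blast and using the usual convention of 4.184 x 10^9 joules per ton of TNT:

```python
# TNT equivalent of one deer particle "decaying" completely into energy.
# Assumptions: 50 kg fully converted; Trinity at ~20 kilotons; Tsar Bomba at ~50 megatons.
C = 2.998e8
TNT_TON_J = 4.184e9                           # joules per ton of TNT

deer_energy_J = 50.0 * C**2                   # about 4.5e18 J
yield_tons = deer_energy_J / TNT_TON_J

print(f"Yield: {yield_tons / 1e9:.2f} gigatons of TNT")       # about 1.1
print(f"Trinity equivalents: {yield_tons / 20e3:,.0f}")       # about 54,000
print(f"Tsar Bomba equivalents: {yield_tons / 50e6:.1f}")     # about 21
```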

So, don’t stand too close to a decaying deer…and “too close” would probably be “within a few hundred kilometers”.

All this leads me to speculate, given their mass and stability, that perhaps the deer is one of the theorized “supersymmetric” particles, thought to be paired with each of the more “typical” particles of the Standard Model, but which have not yet been detected in any particle accelerators‒again, given the rest mass of a deer, we should not be surprised.

I don’t know whether deer are fermions or bosons; my initial thought is that they would be spin-zero, since I’m not aware of deer showing, for instance, any tendency to align with magnetic fields.  Then again, maybe they’re too massive for spin-related magnetic alignment to be detectable.  They certainly appear to be electrically neutral, though again, if they had a charge comparable to that of an electron or proton, its effects might hardly be noticeable given their mass.

I would hope that particle physicists would flock‒or perhaps “herd” would be a better term‒to the places where these amazingly stable particles are plentiful, the better to study their characteristics.  Ironically, although I work in Deerfield, I have never seen a single deer particle there, but up north‒particularly in New Jersey‒I’ve seen many.

What is it about New Jersey and similar locales that leads to the local aggregation of so many of these ultra-massive “particles”, which seem likely to be primordial remnants of the big bang***?  Is it perhaps that they interact somewhat strongly with the prominent local corn fields?

Wait a minute!  Corn field?  What’s the nature of that quantum field and particle?!?!?

Anyway, this is the sort of shit that goes through my mind almost every time I see or hear the word “Deerfield”, and it’s only one example of that sort of thing.  There are countless others.

Just in case you ever wonder why I’m always so depressed.

[Image: deer in a field]


*The two most well-known such “massless” particles are the photon and the graviton.  Of course, the graviton has never been detected as an individual particle, but it has been confirmed‒as expected‒by LIGO, Virgo, et al. that gravitational waves travel at the speed of light, and so are massless.  I can’t help but think that’s a good thing, because if gravitons had/have mass, there would be what I would assume to be some quite complicated self-interactions‒gravitons would themselves interact strongly with the gravitational field‒that would make their theoretical characteristics and so on quite messy.  The very fact that they carry energy means they must self-interact at some level, since energy interacts with gravity, but they are expected individually to have very low energy, gravity being far weaker than the other “forces” of nature.  Of course, gravity is in some ways not quite like the other forces in character, but don’t get me started on that.

**Short for electron volts, defined as the amount of energy gained by an electron from being accelerated through a potential difference of one volt.  It’s a measure of energy, and it’s used as a measure of mass as well, because in the realm of fundamental particles, E = mc² really comes into its own.

***It’s hard to imagine any subsequent processes generating such particles, though perhaps supernovae could occasionally create a few.

Cycles both vicious and viscous

It’s Monday again, the start of a new work week.  I guess this must be the 4th week of the year, since Saturday was January 21st, and 21 is 3 times 7, and this year and month started on a Sunday.  I’m at the bus stop again, writing this on my phone again while waiting for the first bus.  It’s generally better, for me at least, to wait somewhere to which I’ve already traveled, rather than waiting before I travel.  That way I can just sit still until the next stage of my journey.

Unfortunately, this bus stop has a strong smell of human urine this morning.  I don’t know if that’s because the weekend just passed, and people get drunk and pee in inappropriate places on the weekend sometimes, or if that homeless person spent more time here than expected and had to pee during that time.  I’ve not noticed the smell before, so it doesn’t seem to be a frequent thing.  I suppose if it had rained there would probably not be any residual odor, but it’s not the rainy part of the year down here in south Florida.

I had thought to myself that, if the homeless person had been lying out at the bus stop again, I would go to the other nearby stop that I had (internally) recommended to her a few days ago.  That’s where I usually get off the bus at the end of the day, so it wouldn’t be a strange one for me to use.

It is curious‒I don’t know if other people do this or notice it or what have you, but I often take slightly different routes when going to and from a place.  Some of that is probably just a byproduct of perception, in that certain paths look or seem easier from one angle compared to another.  They can even be easier to see from one direction compared to another.

Sometimes it’s a matter of lighting and timing, such as the fact that, on my way back to the train after work, I take a parallel stretch of the route (which in the morning just goes on down the main road) because there’s a nice, quieter, tree-lined block behind the regional courthouse, and in the evening, when there’s light and I’m done with the work day, it’s more pleasant to walk there.  It also goes directly to the side of the tracks where I catch the train in the evening, whereas when I’m getting off the train, it would require a significant detour.

All this is trivia, but my point is that having these different routes when going one direction compared to another seems to be ubiquitous, at least for me, and I suspect I’m not alone in this.  This means, of course, that the routes become a kind of circle, rather than simply a reversible, oscillating process.

Of course no macroscopic processes of that sort are actually reversible, anyway, because of friction and the creation of increasing entropy, but even if one could eliminate such things, a to-and-from trip that takes different routes could have a net gain or loss*‒I think loss would be most likely‒and this loss could be perpetual and steady.

It’s a bit like that economics or game theory or decision theory idea whereby if someone prefers place A to place B, and prefers place B to place C, but prefers place C to place A, one could effectively be induced to pay to go in an endless cycle, from A to C to B to A to C to B, etc.  Of course, it would be profoundly irrational for someone to do such a thing, but people get caught in even stupider cycles all the time, which are even more costly, but because they rarely pay attention to the nature of their actions as if from the outside, they often don’t even realize they’re doing something thoroughly irrational.
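If it helps to see the cycle spelled out, here’s a toy sketch of that “money pump”; the one-dollar fee and the number of trips are made up purely for illustration.

```python
# A toy version of the "money pump": with cyclic preferences A > B, B > C,
# and C > A, anyone willing to pay a small fee to move somewhere they prefer
# can be marched around the loop forever. The $1 fee and the nine trips are
# made-up numbers, purely for illustration.
preferred_move = {"A": "C", "C": "B", "B": "A"}    # from each place, the move you'd pay for
fee = 1.00

location, total_paid = "A", 0.0
for _ in range(9):
    location = preferred_move[location]            # always an "improvement"...
    total_paid += fee

print(location, total_paid)                        # back at "A", nine dollars poorer
```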

I return again to my musings on the myth of Sisyphus‒the actual myth, not the book by Camus, though I still haven’t answered his main question to my own satisfaction‒and how horrifying it is that Sisyphus is the one doing his own punishing.

Say what you will about the horrors of Prometheus’s fate, at least he was the passive, chained victim of it**.  That may not make it better, and it may indeed be worse, but it is different.  Sisyphus’s very mind has been changed, so that he feels an irresistible urge, or drive, to push his boulder, despite the fact that he never gets it to the top of the hill (or mountain or whatever) without it rolling back down again.

But, of course, we all do very similar things all the time.  We eat to stay alive, and that eating gives us some pleasure, but the pleasure is transitory (as it must be) so soon we feel the urge to seek food again, and continue the cycle, which just spirals its way from bassinet to coffin, with the only certain outcome being that entropy in the universe will have been increased as part of the process.

Of course, the very universe itself may well be Sisyphean in nature‒see for instance my musing on Conformal Cyclic Cosmology, though even Inflationary cosmology can produce endless recurrences and infinite repetition.  Heck, even the old-school Boltzmann type of heat death of a universe implicitly produced endless cycles as, eventually, entropy would occasionally dip low enough to regenerate all the “stuff” in a universe, before making its way back up again.

And, of course, if the universe were “closed”, which it doesn’t seem to be, it could expand, collapse, “bounce”, reexpand, etc.  And if some of the “braneworld” scenarios in M Theory are right, there’s a cycle of brane-universes smacking into one another, restarting the hot Big Bang conditions over and over as they do***.

I don’t know where I’m going with this discussion, but in a way, that demonstrates my point.  I write my blog post every workday, for no particular reason, but because various confluent and complex drives in my nervous system lead me to do it.  Lather, rinse, repeat as needed.

Except, it’s not really “needed” in any deep sense.  It’s just an urge.  Even life itself is just a habit.  And it’s not always a good one, is it?


*Of course, one’s potential energy returning to its original value in a reversible system means that no net “work” has been done, no matter what path has been followed, but I’m leaving aside such idealized systems…though at the tiniest level they may be more accurate representations of reality than any more “realistic” macroscopic analogy.

**Who else thought of The Big Lebowski when reading that line?

***This is the sort of “collision” to which the title of The Chasm and the Collision refers.

Another restless wind inside a letter box

Okay, well, it’s Saturday, and I’m now, more or less, at the bus stop, waiting for the bus.

It’s mildly interesting that the Saturday schedule for my first bus of the day is the same as its weekday schedule.  That will get me to the Tri Rail station in time for the second train of the day‒they run on a reduced schedule on Saturdays‒which will board only about 20 minutes later than the one I’ve been catching during the week.  So that’s rather nice.  I don’t even really have to change my commuting schedule, even though it’s Saturday.

I appreciate not having to change my routine.

Speaking of not having to change my routine‒and of being “more or less” at the bus stop‒I’m not sitting down as I write this, because someone is using the bus stop bench as a place to lie down, or at least to recline.  I think it might be that shouty lady from earlier this week.

I’m quite frustrated that anyone is using a public spot, paid for to at least some degree by the people who ride the bus, as a place to lie out, but when I calm myself down, I can sympathize with the fact that she doesn’t have anyplace to go.  Still, why lie out at the bus stop at an intersection that’s busy even on Saturday mornings?

The main road is six lanes wide here, and though the crossroad is not as big, it’s still a pretty busy road.  I would think it would be preferable to go someplace where there was greater peace and quiet.

I suppose one might be more vulnerable in more secluded places, but one could pick a spot with relative care, and I would think it would be more pleasant.  Heck, just on the other side of the crossroad, there’s another stop with a bigger bench and a better shelter, where one would still be close to the intersection and protected by the relatively high traffic from at least any unobserved crime.

Sigh.  It’s so wonderful to have worked hard all one’s life and tried to do the right thing and be very highly educated and to have striven to be a benefit to the world and then be stuck at age 53 not being able to sit at the bus stop early Saturday morning because a homeless person is using it to recline…and to muse about the ins and outs and safety concerns for such a homeless person, because it’s not completely impossible one might be such oneself (I have been in close to that situation, sleeping in the back of a rental vehicle for which I was not paying on a few nights while out on bail).

I know that the universe promises us one thing and one thing only, and it certainly doesn’t make bargains or special deals with anyone.  But it’s still frustrating.  I feel like I’ve wasted so much time and effort.  I feel like I’m still wasting time and effort.

Of course, all time is wasted in some sense; in any case, it passes‒or we pass through it, or whatever‒no matter what we do in it.  And, of course, even the nature of time itself is unclear.  It certainly isn’t one vast, monolithic, singular thing that is the same for everyone in the universe.  As I’ve speculated before, it may even have more than one past-future orientation, just as up-down changes depending on where you are on the surface of the Earth.

It’s partly because of that fact of time’s locality that one can actually model a universe that begins at a finite place‒say, the isolated collapse of a hypothetical inflaton field‒and yet becomes an infinite space to those within that bubble.  Because time is local and causality only proceeds at the speed of light, at least in our part of the universe, it can all depend on one’s point of view.

Of course, it’s by no means certain that inflationary cosmology describes the way our universe came to be, though it is internally consistent.  Other possible models include Roger Penrose’s Conformal Cyclic Cosmology‒which I like a lot, aesthetically*‒in which the accelerating expansion in a universe, leading to eventual increase of entropy to where nothing can really exist any longer, leads to or simply becomes the highly uniform, comparatively low entropy state of the next universe, just on locally small scales.  Entropy, after all, is not necessarily on a fixed, absolute measure, nor is space itself.  Entropy can be small in a tiny region that then expands to become a much larger one, still with low local entropy.

It’s a bit analogous, I think, to taking a number line and multiplying everything in it by two, so that the space between any two previously chosen points on the line is doubled, but the number line itself is just as infinite as it was before.

The nature of the real numbers being what it is, there’s an uncountable infinity of numbers between any two points on the real number line, and so there’s room to grow a universe of any size you might like from the space between any two locations on a number line‒or in a 4-D spacetime.

Penrose has posited that it would be conceivable for the residents of such a universe, if they knew and understood the kind of universe they were in, to leave behind messages in the very fabric of mass and energy arrangement in their universe for the people in the next universe‒nothing very complex, I would guess, but maybe just enough to make it clear that they had existed.

I’m not sure why people who were approaching the heat death of their particular universal iteration would bother with doing that, but maybe they would.  A bigger question to me would be, how would they target it?  If spacetime were expanding exponentially, as it seems to be doing even now, then every future “observable” universe would lie only within a tiny tiny tiny chunk of what was left of the previous universe.  So how would a previous universe’s intelligent life choose where to leave the message?  Would they try to encode it in every possible tiny region of their spacetime?  That would require engineering on a cosmic (but highly detailed) scale, and if you can do that, why not alter the expansion of the universe in the first place?

Of course, that’s not relevant to whether the notion of CCC is correct, just to the question of if such messages would be possible and how they might be carried out.  My more itchy question is, whence would the energy and particles of each new iteration of the ever-expanding universe arise?

In the Inflationary model of cosmology, all the immense energy that suffused our early universe was “created” when the hypothetical inflaton field underwent a phase transition and dropped to a lower energy state, so the local inflaton particles quickly decayed into all the particles of our more familiar quantum fields.

Inflation is not universally (ha ha) accepted, but certain aspects of it are certainly plausible and are supported by at least some data.  For instance, our universe is currently inflating, based on our best data and understanding.  That’s the Dark Energy stuff about which you’ve probably heard.

Exponential expansion is exponential expansion.  The doubling rate can change, but it still blows up at ever-increasing speeds.  If you compress or stretch your time axis, all exponential growth curves look the same.  It’s a little like that Conformal Cyclic Cosmology notion.
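A quick numerical check of that claim, if you want one: e^(at) and e^(bt) trace out the same curve once you rescale the time axis by a/b.  The growth rates below are arbitrary.

```python
# Quick check that exponential growth curves differ only by a stretch of the
# time axis: e^(a*t) equals e^(b*t') whenever t' = (a/b) * t.
import math

a, b = 0.5, 2.0
for t in (0.0, 1.0, 2.0, 5.0):
    t_rescaled = (a / b) * t
    assert math.isclose(math.exp(a * t), math.exp(b * t_rescaled))

print("Same curve, just a different clock.")
```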

Anyway, as far as the source of the “reheating” of the universe in CCC as opposed to inflation, I doubt that Sir Roger Penrose has overlooked or missed that question.  He’s frikking brilliant‒even when he’s wrong he’s smarter than most of us are when we’re as right as we ever get**.  I just need to read a little more deeply into his model to figure out where that comes from.

Perhaps that will also allay my puzzlement about the “leaving a message” notion.  I simply haven’t finished his book on the subject.  It didn’t help that, as of last check, it wasn’t available in e-book format, and so I only have the paperback.  Not that there’s anything wrong with paperbacks, but it’s less convenient to carry 400+ of them around with you at any given time than Kindle format books, and so you’re less likely to have any one of them with you on any given day.

Oh, well.  I’ll see what I can do about learning more.  That’s rarely a waste of time, at least.

Wow, this post has really meandered from one thought to another, going truly across the universe‒and beyond, depending on how you define the word “universe”.  Perhaps it would be best to use “Omniverse” when describing the totality of all possible realities, as the wizard does in DFandD.

Speaking thereof, if any of you have read it and would like to make any comments about it, I’d be delighted to receive them, either here or on the blog post proper that entails my sharing of that story (so far).

In the meantime, my train should be here in 5 minutes (I rode the bus in between these two times).  My estimate of the schedule was correct, as is usually the case when I bother to check and when people and organizations keep to their own, voluntarily chosen schedules, on which numerous people act in reliance.  Don’t get me started on that topic.  I’ve already written way more than I would have expected from such inauspicious beginnings.

Have a nice weekend, all.  I won’t be posting tomorrow, barring the unforeseen, but I will be back on Monday‒again, barring the unforeseen.  Those unknown unknowns can strike at any time.  Take care, and be as prepared as you can reasonably be.

[Image: Penrose by any other name]


*This is no reason to think it’s more likely to be correct than any less aesthetically pleasing model, but it keeps it fun.

**He also looks rather a lot like my former Uncle Barney.  That’s neither here nor there, but I wanted to make sure I said it at some point.  So, there, now I have.

One Stone to bring them all and in Dark Energy bind them

Well, well, as the oil baron said, it’s Tuesday again, the 10th of January.  And two times five makes ten, so I guess this day has something to do with prime numbers other than just the year (the last 2 digits, anyway) and my age.

Of course, all numbers have to do with prime numbers, in a sense.  I’ve heard mathematicians say that prime numbers are the “elements” of the numbers (or of the whole numbers, at least, I suppose), comparable in a way to the entries in the periodic table.  But 1 (the number of this month, as it were, and surely the most fundamental building block of all the whole numbers) is not considered a prime, because if it were, then every number’s prime factorization could stretch to as long as you like, since any number times one, no matter how often you multiply it, is still the number with which you started.
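Just to make the “elements of the numbers” idea concrete, here’s a minimal factorization sketch; note that if 1 were allowed in, you could pad the output with 1s forever and the factorization would no longer be unique.

```python
# The primes as the "elements" of a whole number. If 1 were admitted as a
# prime, any factorization could be padded with 1s indefinitely, so it would
# no longer be unique; excluding 1 keeps the factorization unique.
def prime_factors(n: int) -> list:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(2023))   # [7, 17, 17]
print(prime_factors(360))    # [2, 2, 2, 3, 3, 5]
```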

Mentioning the elements/the periodic table reminds me of a joke that I sometimes see on shirts or mugs or similar that really irritates me every time I encounter it.  It might have been appropriate way back when someone first came up with it, but now it’s just too incorrect, given what we know, to be funny.  That joke is any version of the line, “Never trust an atom/element…they make up everything.”

It’s a silly little play on words, obviously enough, but the fact is, we know now that the elements/atoms don’t even come close to making up everything, so the joke doesn’t even work as a pseudo-nerdy pun.  Atoms, indeed all so-called baryonic matter (which to us might be thought of as “ordinary” matter*) make up only around 5% of the total mass/energy of the universe, according to the latest best estimates.

Another 25% (all these figures are rounded off a bit) of the universe’s mass/energy is so-called Dark Matter (which is dark only in the sense that the Ringwraiths are dark, being invisible, i.e. not interacting at all with light, nor with the strong force, nor (except neutrinos, if you’re counting them) the weak force, as far as anyone can tell).  Dark Matter only definitely interacts with gravity.  And, of course, according to General Relativity, gravity isn’t technically a force, it’s just the shape of spacetime**.

Speaking of spacetime, the remaining 70% of the mass/energy of the universe is what is called Dark Energy, though really that’s just a name that’s kind of sexy-cool, and it’s only “dark” in that it seems to have nothing to do with the electromagnetic fields (aka light).  This stuff, whatever it is, has characteristics consistent with the “cosmological constant” that Einstein supposedly considered his “greatest blunder”, though as it turns out, he was apparently right, albeit for the wrong reasons.

Yes, when you’re Einstein (you’re not, though) even your mistakes are remarkably fruitful, and eightyish years later they can end up being legitimate descriptions of the universe’s large-scale structure, function, and evolution***.

Of course, whether the Dark Energy is really that uniform energy of spacetime itself that creates a negative pressure throughout its reach and thus repulsive gravity, or if it’s some other process with roughly the same overall effect, we know it’s not what scientists had tried to describe using quantum field contributions, because that was too big by (if I remember correctly) about 123 orders of magnitude.  That’s a factor of 10 to the 123rd power, or a 1 followed by 123 zeroes.  That’s a number so big that if you set it down next to a googol in a form visible to the human eye, you wouldn’t even be able to see the googol.  It would be too vanishingly tiny.  So that’s not the right answer.

Anyway, that’s why I don’t like that joke about atoms or the elements.  It’s just too wrong to be funny.  And now that you know why it’s so wrong, you may be able to stop thinking it’s funny, too.  Am I not generous?  Are you not entertained?  I hope you’re not entertained by that joke, anyway.  People only tell that joke (or so I suspect) to try to make themselves look vaguely scientifically knowledgeable.  But in fact, they do the opposite.

Oh, well, I guess if they’re enjoying themselves…they’re not really doing too much harm…other than spreading misinformation regarding the structure and nature of matter and the cosmos, of course!

Ugh.  Why do I care?  What’s wrong with me?

Well, I know some of the answers to that last question, but knowing doesn’t help much.

I’m currently on the bus, by the way, approaching the train station.  It’s just another day.  Obviously, my recent setback has not resolved itself, and indeed, it may never do so to anyone’s satisfaction.  But I am at least just about done with this blog post in time for the train, which is now 5 minutes away.

I don’t think I’m going to be writing fiction again after this; I still haven’t even figured out how to check the results of the poll I put up (I haven’t tried, to be fair to me).  Oh, well.  Life is either so tragic that it’s comical or so comical that it’s tragic.  But then, at least, it’s over.

Of course, if the universe is infinite in space or in time (or both) at some level, any given life will just start over again, somewhere, somewhen, somehow, and no matter how big the distance between the two iterations, the individual won’t notice the passage of time.  Or it may be that our lives are fixed phenomena in a spacetime block universe as implied at least to some degree by General Relativity, and the instant our lives end, we may just start over again at the beginning, like a DVD (or Blu-ray) played on a loop, never doing anything different, never changing, never learning anything new we hadn’t learned the last time around.  It’s possible, in principle.  We don’t know if it’s true, though quantum mechanics suggests, at least, that it’s not the full picture.

Like the fella said, ain’t that a kick in the head?

[Image: Einstein sticking out his tongue, 1951]


*As you can see, it’s hard to justify calling something that makes up only around a twentieth of the matter and energy in the universe “ordinary”.  You could be forgiven for calling it “familiar” matter, I would say.  That might be better.

**Maybe M. Night Shyamalan can make that movie.

***It’s a bit like the paper he did with Podolsky and Rosen that was intended to demonstrate that quantum mechanics was incomplete, i.e. that there must be “hidden variables” beneath the seeming randomness, using descriptions of what must happen to two particles produced by the same event but which head off in their usual opposite directions, and whose characteristics, due to conservation of charge, momentum, spin, etc. must be complementary.  Years later, J. S. Bell devised a famous theorem, a test by which one could ascertain whether Einstein was right in that there were hidden variables, or whether the states of a particle truly happened randomly but nevertheless the state of one constrained the state of the other of the pair, however distant.  And just last year, Alain Aspect et al got the Nobel Prize (it took a while) for their experiments confirming, using polarization of photon pairs produced by single quantum events, via Bell’s theorem, that Einstein was wrong: there are no hidden variables in the sense he suspected.  But Einstein’s (and Podolsky’s and Rosen’s) quite legitimate question set into motion the concept of quantum entanglement, a truly important idea in quantum mechanics, just as he had pioneered the early field of quantum mechanics itself in 1905 with his (Nobel Prize winning) paper demonstrating that light comes in what we call photons, the energy of each individual one being given by Planck’s relation: h times the frequency.  One of his other papers from that year used Brownian motion to demonstrate that atoms and molecules‒you know, those things that “make up everything”‒really must exist.  He also did a few somewhat interesting papers on the nature of the speed of light and how it relates to time and length and distance, and something about the equivalence of mass and energy****.  As Sabine Hossenfelder would put it…”Yeah, that guy again.”

****But of course, the paper “On the electrodynamics of moving bodies” didn’t win a Nobel prize, nor did its follow-up containing a certain formula relating “rest mass” to energy via the speed of light squared.  So those papers couldn’t have been that important.  Right?

“No more work to-night; Christmas Eve, Dick! Christmas, Ebenezer!”

[Image: critic]

 

Okay, well, it’s Friday at last, and it’s “Christmas Eve eve” as I sometimes say.  It turns out that the office apparently isn’t going to be open tomorrow, which surprises me‒as is obvious, I guess.  I still could find out otherwise, I suppose, but I doubt it.

I’m writing this on my phone again, and I have been doing so most days this week.  I think I used my laptop on one of the days, perhaps Tuesday, but not on Wednesday, when I wrote my long and rather irritating post full of self-congratulation for deeds of the past that have no relevance to my current life.  That long-winded blather was from my phone, if you can believe it!

I actually slept comparatively well last night; I only finally woke up at about 3:50 this morning, which for me is about a two-hour lie-in.  I’m not even waiting for the first train of the day; I’m waiting for the second one!

I’m surprised that I slept quite so well yesterday, because I had an unusually bad day for pain‒or perhaps it would be better to say it was a good day for pain and thus a bad day for me.  The pain was focused in my right lower back down through my hip to the ankle and the arch and ball of my foot, but spreading up through to the upper back and shoulder blade and arm, and nothing that I did or took seemed to make more than a transient difference.

I was walking around the office like Richard the Third most of the day, when I was up.  We did get some very lovely cookies from my sister for the office‒she sends such packages often and they are beloved by all, and justly so‒but I couldn’t enjoy them as much as I wish I could have, because I was in a lot of pain and severely grumpy.

They were/are amazingly good, though.

I am still in a bit of accelerated pain this morning, but then I’m basically always in pain.  It’s not yet as bad as yesterday, at least, so keep your fingers crossed, please.  Or don’t if you’d rather not; I hardly think it actually has any effect on any outcome other than the configuration of your fingers.

I suppose it’s just a way for me to express my anxious hope mixed with fear and tension, and to invite some kind of shared emotional support from readers.  Though, of course, for that, it doesn’t make all that much sense, since how would I even know if any of you are crossing your fingers?  I suppose you could leave a comment saying that you are, but the very act of typing a comment must make it at least slightly less likely that your fingers are actually crossed, certainly while typing.

Anyway, I hope that my pain today is less than it was yesterday.  But even I personally will not be crossing my fingers, since I don’t think that gesture has any magical powers, so you shouldn’t feel obliged to do it, yourself, either.

Come to think of it, I don’t think anything has any magical powers.  My first thought about that is “more’s the pity”, but really, what would magical powers even be?  If they existed, they would be actual phenomena of nature, and would have some lawful underpinning and explanation.

That’s one thing I’ve always kind of been disappointed about in the Harry Potter books.  They take place in a school, and have genius characters like Dumbledore and Tom Riddle and Hermione, who surely would have curiosity toward the hows and wherefores of magic, yet there’s not even a hint of an explanation for how it works, why it works, what it actually is, or anything.  I think some touching upon that subject would have been very fun.

I mean, for instance, how does Apparition work?  It involves a sensation of squeezing through something, but is that some form of hyperspace, or a wormhole, or what?  How do wands enhance or channel magical power from individuals gifted in magic?  How was that figured out for the first time?  Clearly people can do some magic without wands‒so, how necessary are they?

When did people begin to be able to do magic?  Clearly people haven’t always been able to do magic; there haven’t even always been people!  Was the ability to use magic some new, isolated mutation, like blue eyes, that spread through the population (as it surely would)?  Clearly it’s not some complex mutation, as it arises de novo in the human population, leading muggle-born witches and wizards to arise with some regularity.

Perhaps there is a complex of genes that, only when all present together (perhaps even only when homozygous) instantiate the ability to do magic.  Maybe most humans have some large fraction of the necessary genes‒after all, as I noted, the ability to use magic seems likely to have been a significant evolutionary advantage‒but it’s so easy to lose some necessary part of the biological (neurological?) machinery necessary through random mutation that most people are mutated slightly away from the complete set and so become muggles*.  Or, if born to witches and wizards they are given the derogatory term “squibs”.
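Purely as a toy illustration of that “complex of genes” idea (the twelve loci and the 70% per-locus frequency below are numbers I made up, not anything from Rowling):

```python
# Toy model of the multi-locus idea: suppose magic needs a working copy at
# every one of several loci. The 12 loci and the 70% per-locus frequency are
# invented numbers, just to show how quickly the complete set becomes rare.
n_loci = 12
p_working = 0.70                              # chance any given locus is intact (made up)

p_full_set = p_working ** n_loci              # one working copy at every locus
p_homozygous = (p_working ** 2) ** n_loci     # both copies working at every locus

print(f"Fraction with the full set: {p_full_set:.3f}")        # about 0.014
print(f"If homozygosity is required: {p_homozygous:.5f}")     # about 0.00019
```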

I don’t recall how I got on this topic, but it is interesting, and I wish Rowling would at least have hinted at some studies or explanation, at least when discussing the Department of Mysteries.

Alas.

Anyway, since I apparently won’t be writing a post tomorrow, I would like to wish all of you who celebrate it‒in the words of the late, great hero, Dobby the house elf‒a very Harry Christmas**.  Maybe take a moment to read the Christmas scenes in the various Harry Potter novels.  Christmas at Hogwarts, for the students who stayed over the holidays, seems always to have been an interesting occasion, albeit not as fun as Halloween.  Halloween at Hogwarts would have been quite the thing to experience. The only close contender that readily comes to mind is Halloween with the Addamses.  That would be interesting!

I guess I’ll be back on Monday, then, though it is at least slightly possible that I could be wrong about tomorrow.  If I am, I’ll be writing a post, and it may be quite a grumpy one, though maybe not.  After all, what do I have to do with my time other than go to the office?  Not much, honestly.

Oh, well.

[Image: Santa]

 


*This raises the odd thought for me about what might happen if a cancer developed that, by chance, had a complete set of magical genes, in a muggle who had been almost complete.  Could one have a “magic tumor”?  I guess probably not, since it seems magic would be a collective function of many aspects of the nervous system, not a property of every individual cell.  Perhaps this is one reason why wizards can’t just fix visual impairment‒Harry Potter wears glasses, and no one ever even suggests that magic might be able to cure his vision.  But the eyes are, quite literally, extensions of the central nervous system‒though the lenses aren’t, come to think of it‒and maybe tampering with the eyes through magic is particularly dangerous, or perhaps the nervous system always rejects such attempts.

**As an aside, I have to tell someone that, in the song Have Yourself a Merry Little Christmas, I’ve always tended to hear the line, “Faithful friends who are dear to us gather near to us once more,” as if they are singing, “…gather near to us one s’more”, and I think, “How are they going to share one s’more between a group of people?  I mean, it’s “friends” who are dear to “us”, which to me implies at least four people, total.  How can you split one s’more between four people?  Also, it would make a mess, with graham cracker crumbs and melted chocolate all over various hands and the floor and all that.”  Anyway, I know that’s not what they’re saying, but every time I hear it, those thoughts go through my head.

Then there’s hope a great man’s memory may outlive his blog half a year.

Hello and good morning.  It’s Thursday, the day of the week on which I wrote my blog post even when I was writing fiction every other day of the week—well, apart from Sundays and the Saturdays when I  didn’t work.  I have not been writing any fiction recently.

I toyed with the idea the other day, but there doesn’t seem to be much enthusiasm for the notion, which I suppose is mirrored by my own lack of energy, or perhaps has its source in my lack of energy.  Or maybe they come from disparate but merely coincidentally parallel sources.  I don’t know, and though it’s mildly interesting, I don’t have energy or interest enough to try to figure it out.

I did work a bit on a new song yesterday, the one for which I had jotted down some lyrics a while back.  I have lost utterly the original tune, but I worked out a new one of sorts, and it seems okay.  I then worked out some chords for the first stanza, including some relatively sophisticated major sevenths and then major sixths of a minor chord that sounded nice, and which made me at least feel that I really have learned a little bit about guitar chords.  Then I figured out at least the chords I want for the chorus, which, among other things, throw a little dissonance in briefly, which is nice to up the tension.

I don’t know if I’ll get any further with it or not; I may just stop and let it lie.  It’s only perhaps the third time I’ve even picked up the guitar in months.  I was at least able to show myself that I can still play Julia, and Wish You Were Here, and Pigs on the Wing.  I had to fiddle a little to remind myself how to play Blackbird, but after a brief time I was able to bring it back, too.

So, it’s not all atrophied.  And I can still play the opening riff to my own song, Catechism, which I think is my best stand-alone riff.  My other guitar solos are mainly just recapitulations of the melody of the verse or chorus in their respective songs, but the one for Catechism is a separate little melody.

Actually, it occurs to me that I initially did a voice recording of the lyrics to the newish song as I thought of them, and when I did, I probably sang a bit of the tune that had come to my head.  Maybe I should listen to that and see if I like that melody better than the new one I came up with.  That would be a bit funny, if after the effort from yesterday to do a melody and chords I remembered the old one and just threw the new one away.

I suppose it really doesn’t matter much.  Even if I were to work out and record the song, and do accompanying parts and all that stuff, and publish it, I don’t think anyone is likely ever to listen to it much.  Maybe someday in the distant future, some equivalent of an archaeologist who unearths things lost in the web and internet will find the lost traces of my books or music or something, and they’ll be catalogued in some future equivalent of a virtual museum, among trillions of other collections of data that are recorded online, but which will never be seen by anyone for whom they might mean anything at all.

People sometimes say things like “what happens online is forever”, but as I’ve discussed before (I think), even if it’s true that things stored online remain and avoid simple deterioration of data thanks to the redundancy in the system, it doesn’t matter.  In principle, the sound of every tree falling in every wood has left its trace in the vibrational patterns of the world, and according to quantum mechanics, quantum information is never permanently lost, even if things fall into black holes*.

But of course, all that is irrelevant in practice, and comes back to collide with the nature of entropy and the degree to which most large-scale descriptions of a system are indistinguishable.  That picture of you with a funny face at that event years ago, which you tried to have a friend take down, but which had already been shared to a few other people, may in principle always be out there in the archives of Facebook or Twitter or whatever, but it doesn’t matter.  No one will ever notice it or probably even see it among the deluge of petabytes of data whipping around cyberspace every second.  You might as well worry about people being able to reconstruct the sound waves from when you sang Happy Birthday out of tune at your nephew’s fifth birthday party from the information left over in the state of all the atoms and molecules upon which the sound waves impinged.

It’s one of those seemingly paradoxical situations, rather like being in Manhattan.  There are very few places in New York City, and particularly in Manhattan, where one can actually be alone—even most apartments are tiny, and have windows that look out into dozens to hundreds of other people’s windows.  And yet, in a way, you are more or less always alone in Manhattan, or at least you are unobserved, because you are just one of an incomprehensible mass of indistinguishable humans.

Even “celebrities” and political figures, so-called leaders and statespeople, will all fade from memory with astonishing rapidity.  When was the last time you thought about Tip O’Neill?  And yet, for a while, he was prominent in the news more or less every day.  Do you remember where you were when William McKinley was assassinated?  No, because you were nowhere.  None of you existed in any sense when that happened, let alone when, for instance, Julius Caesar was murdered.

And what of the many millions of other people in the world at the time of McKinley or Caesar or Cyrus the Great or Ramses II?  We know nothing whatsoever of them as individuals.  Even the famous names I’m mentioning are really just names for most people.  There’s no real awareness of identity or contributions, especially for the ones who existed before any current people were born.

Last Thursday, I wrote “RIP John Lennon” and put a picture of him up on the board on which we post ongoing sales and the like.  The youngest member of our group, who is in his twenties, asked, “Who is John Lennon?”

He was not joking.

If John Lennon can be unknown to members of a generation less than fifty years after his death, what are the odds that anything any of us does will ever be remembered?

Kansas (the group, not the state) had it right:  “All we are is dust in the wind.  Everything is dust in the wind.”  The only bit they missed was that even the Earth will not last forever, and as for the sky…well, that depends on what you mean by the sky, I suppose.  The blue sky of the Earth, made so by light scattering off nitrogen and oxygen molecules, will not outlast the Earth, though there may be other blue skies on other planets.  But planets will not always exist.

As for the black night sky of space, well, that may well last “forever”, for what it’s worth.  But it will not contain anything worth seeing.

TTFN

[Image: Tip O’Neill]


*Leonard Susskind famously convinced Stephen Hawking that this was the case—and even won a bet in the process—though other luminaries were of course involved, including Kip Thorne, I believe, one of the masters of General Relativity.

Some thoughts (on an article) about Alzheimer’s

I woke up very early today‒way too early, really.  At least I was able to go to bed relatively early last night, having taken half a Benadryl to make sure I fell asleep.  But I’m writing this on my phone because I had to leave the office late yesterday, thanks to the hijinks of the usual individual who delays things on numerous occasions after everyone else has gone for the day.  I was too tired and frustrated to deal with carrying my laptop around with me when I left the office, so I didn’t.

I’m not going to get into too much depth on the subject, but I found an interesting article or two yesterday regarding Alzheimer’s disease.  As you may know, one of the big risk factors for Alzheimer’s is the gene for ApoE4, a particular variant of the apolipoprotein E gene (the healthier version is ApoE3).  People with one copy of the ApoE4 gene have a single-digit multiple of the baseline, overall risk rate for the disease, and people with 2 copies have a many-fold (around 80 times) increased risk.

It’s important to note that these are multiples of a “baseline risk” that is relatively small.  This is a point often neglected when discussing the relative risks of a disease affected by particular risk factors when such information is conveyed to the general public.  If the baseline risk for a disease were one in a billion (or less), then a four-times risk and an eighty-times risk might be roughly equivalent in the degree of concern they should raise.  Eighty out of a billion is still less than a one in ten million chance for a disease; some other process would be much more likely to cause one’s deterioration and demise rather than the entity in question.

However, if the baseline risk were 1%‒a small but still real concern‒then a fourfold multiplier would increase the risk to one in 25.  This is still fairly improbable, but certainly worth noting.  An eighty-fold increase in risk would make the disease far more likely than not, and might well make it the single most important concern of the individual’s life.
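Here is the same arithmetic laid out explicitly; the baselines and multipliers are just the hypothetical figures from the paragraphs above, not real Alzheimer’s statistics.

```python
# The arithmetic behind the comparison above. The baselines (one in a billion
# and 1%) and the 4x and 80x multipliers are the hypothetical figures from the
# text, not real Alzheimer's statistics.
scenarios = {"one-in-a-billion baseline": 1e-9, "1% baseline": 0.01}

for label, baseline in scenarios.items():
    for multiplier in (4, 80):
        absolute_risk = baseline * multiplier
        print(f"{label}, {multiplier}x relative risk -> absolute risk {absolute_risk:.2e}")
```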

Alzheimer’s risk in the general population lies between these two extremes, of course, and that baseline varies in different populations of people.  Some of that variation itself may well be due to the varying frequency of the ApoE4 gene and related risk factors in the largely untested population, so it’s tricky to define these baselines, and it can even be misleading, giving rise to false security in some cases and inordinate fear in others.  This is one example of how complex such diseases are from an epidemiological point of view, and it highlights just how much we have yet to learn about Alzheimer’s specifically and the development and function of the nervous system in general.

Still, the article in question (I don’t have the link, I’m sorry to say) concerned one of the functions of the ApoE gene (or rather, its products) in general, which involve cholesterol transport in and around nerve cells.  Cholesterol is a key component of cell membranes in animals, and this is particularly pertinent in this case because the myelin in nerves is formed from the sort of “wrapped up” membranes of a type of neural support cell*.

[Image: CNS myelin]

This particular study found that the cells of those with ApoE4 produced less or poorer myelin around nerve cells in the brain, presumably because of that faulty cholesterol transport, and that the myelin also deteriorated over time.

Now, the function of myelin is to allow the rapid progression of nerve impulses along relatively long axons, with impulses effectively jumping from one gap in the myelin sheath (a “node of Ranvier”) to the next, rather than having to travel continuously along the whole length of the nerve fiber, which is a much slower process, seen mostly in autonomic nerves in the periphery.  When normally myelinated nerves lose their myelin, transmission of impulses is not merely slowed down, but becomes erratic and often effectively non-existent.
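To put rough numbers on that (the speeds below are ballpark textbook orders of magnitude, roughly 1 m/s for an unmyelinated fiber versus about 100 m/s for a well-myelinated one):

```python
# Rough illustration of why losing myelin matters so much: saltatory conduction
# along a myelinated axon is on the order of 100 times faster than continuous
# conduction along an unmyelinated one. Speeds are ballpark textbook values.
axon_length_m = 1.0                          # say, roughly hip to foot
speeds_m_per_s = {
    "unmyelinated (~1 m/s)": 1.0,
    "myelinated (~100 m/s)": 100.0,
}

for label, v in speeds_m_per_s.items():
    travel_ms = axon_length_m / v * 1000
    print(f"{label}: about {travel_ms:.0f} ms to cover {axon_length_m} m")
```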

[Image: myelin in general]

The researchers found that a particular pharmaceutical can correct for at least some of the faulty cholesterol transport and can thereby support better myelin survival.  Though this does not necessarily point toward a cure or even a serious disease-altering treatment over the long term, it’s certainly interesting and encouraging.

But of course, we know Alzheimer’s to be a complex disease, and it may ultimately entail many processes.  For instance, it’s unclear (to me at least) how this finding relates to the deposition of amyloid plaques, which are also related to ApoE, and are extracellular findings in Alzheimer’s.  Are these plaques the degradation products of imperfect myelin, making them more a sign than a cause of dysfunction, or are they part of the process in and of themselves?

Also, it doesn’t address the question of neurofibrillary tangles, which are defects found within the nerve cells, and appear to be formed from aggregates of microtubule-associated proteins (called tau protein) that are atypically folded and, in consequence, tend to aggregate, to stop functioning, and to interfere with other cellular processes, making them somewhat similar to prions**.  It’s not entirely clear (again, at least to me) which is primary, the plaques or the tangles, or if they are both a consequence of other underlying pathology, but they both seem to contribute to the dysfunction that is Alzheimer’s disease.

So, although the potential for a treatment that improves cholesterol transport and supports the ongoing health of the myelin in the central nervous systems of those at risk for Alzheimer’s is certainly promising, it does not yet presage a possible cure (or a perfect prevention) for the disease.  More research needs to be done, at all levels.

Of course, that research is being undertaken, in many places around the world.  But there is little doubt that, if more resources were to be put into the study and research of such diseases, understanding and progress would proceed much more quickly.

The AIDS epidemic that started in the 1980s was a demonstration of the fact that, when society is strongly motivated to put resources into a problem, thus bringing many minds and much money to the work, progress can occur at an astonishing rate.  The Apollo moon landings were another example of such rapid progress.  Such cases of relative success can lead one to wonder just how much farther, how much faster, and how much better our understanding of the universe‒that which is outside us and that which is within us‒could advance if we were able to evoke the motivation that people have to put their resources into, for instance, the World Cup or fast food or celebrity gossip.

I suppose it’s a lot to expect from a large aggregate of upright, largely fur-less apes only one step away from hunting and gathering around sub-Saharan Africa that they collectively allocate resources into things that would, in short order, make life better and more satisfying for the vast majority of them.  All creatures‒and indeed, all entities, down to the level of subatomic particles and up to the level of galaxies‒act in response to local forces.  It’s hard to get humans to see beyond the momentary impulses that drive them, and this shouldn’t be surprising.  But it is disheartening.  That, however, is a subject for other blog posts.

I’ll try to have more to say about Alzheimer’s as I encounter more information.  Just as an example, in closing, another article I found on the same day dealt with the inflammatory cells and mediators in the central nervous system, and how they can initially protect against and later worsen the problem.  We should not be too surprised, I suppose, that a disease that leads to the insidious degeneration of the most complex system in the known universe‒the human brain‒should be complicated and multifactorial in its causation and in its expression.  This should not discourage us too much, though.  The most complicated puzzles are, all else being equal, the most satisfying ones to solve.


*The cell type that creates myelin in the peripheral nervous system (called Schwann cells) is different from the type that makes it in the central nervous system (oligodendrocytes), and this may be part of why Alzheimer’s affects the central nervous system mainly, whereas demyelinating diseases of the peripheral nerves, such as Guillain-Barré syndrome, primarily affect the nervous system outside the brain and spinal cord.

**The overall shape of a protein in the body is a product of the ordering of its amino acids and how their side chains interact with the cellular environment‒how acidic or basic, how aqueous or fatty, how many of what ions, etc.‒and with other parts of the protein itself.  Some proteins can fold in more than one possible way, and indeed this variability is crucial to the function of proteins as catalysts for highly specific chemical reactions in a cell.  However, some proteins can fold into more than one, relatively stable form, one of which is nonfunctional.  In some cases, these non-functional proteins interact with other proteins of their type (or others) to encourage other copies of the protein to likewise fold into the non-functional shape, and can form polymers of the protein, which can aggregate within the cell and resist breakdown, sometimes forming large conglomerations.  These are the types of proteins that cause prion diseases such as “mad cow disease”, and they appear also to be the source of neurofibrillary tangles in people with Alzheimer’s disease.

The sweetest honey is loathsome in its own deliciousness. And in the taste destroys the appetite. Therefore, blog moderately.

Hello and good morning.  It’s Thursday again, so I return to my traditional weekly blog post, after having taken off last Thursday for Thanksgiving.  I’m still mildly under the weather, but I’m steadily improving.  It’s nothing like a major flu or Covid or anything along those lines, just a typical upper respiratory infection, of which there are oodles.  Most are comparatively benign, especially the ones that have been around for a while, because being not-too-severe is an evolutionarily stable strategy for an infectious agent.

An infection that makes its host too ill will keep that host from moving about and make itself less likely to be spread, to say nothing of an infection that tends to kill its host quickly.  Smart parasites (so to speak) keep their hosts alive and sharing for a looong time.  Of course, “smart” here doesn’t say anything about the parasite itself; viruses are only smart in the sense that they achieve their survival and reproduction well, but they didn’t figure out how to be that way—nature just selected for the ones that survived and reproduced most successfully.  It’s almost tautological, but then again, the very universe itself could be tautological from a certain point of view.

It’s an interesting point, to me anyway, to note that today, December 1st, is precisely one week after Thanksgiving.  Of course, New Year’s Day (January 1st, in case you didn’t know) is always exactly 1 week after Christmas.  It’s unusual for Thanksgiving to precede the first of December by a week, because the specific date of Thanksgiving varies from year to year (and, of course, if Thanksgiving were to fall on the 25th of November, December 1st would not be exactly one week later).  It’s an amusing coincidence; there’s no real significance to it, obviously, but I notice such things.
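For the calendar-curious, here’s a little sketch that checks when this happens: December 1st falls exactly one week after Thanksgiving only in years when the fourth Thursday of November is the 24th.

```python
# Checking the calendar quirk: December 1st is exactly one week after
# Thanksgiving only when the fourth Thursday of November lands on the 24th.
from datetime import date, timedelta

def thanksgiving(year: int) -> date:
    nov1 = date(year, 11, 1)
    first_thursday = nov1 + timedelta(days=(3 - nov1.weekday()) % 7)  # Thursday is weekday 3
    return first_thursday + timedelta(weeks=3)

for year in range(2022, 2035):
    if date(year, 12, 1) - thanksgiving(year) == timedelta(days=7):
        print(year, thanksgiving(year))   # e.g. 2022 -> 2022-11-24
```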

Anyway.

My sister asked me to write something about the vicissitudes of sugar (not her words), and though I don’t mean to finish the topic here today, I guess I’ll get started.  Apologies to those who are waiting for me to finish the neurology post, but that requires a bit more prep and care, and I’m not ready for it quite yet.  Life keeps getting in the way, as life does, which is one of the reasons I think life is overrated.

It’s hard to know where to start with sugar.  Of course, the term itself refers to a somewhat broad class of molecules, all of which contain comparatively short chains of carbon atoms, to which are bonded hydrogen and hydroxyl* moieties.

Most sugars are not actually free chains so much as chains wrapped up into rings.  The main form of sugar used by the human body is glucose, a six-membered ring with the chemical formula C6H12O6.

[Image: the ring structure of glucose]

This is the sugar that every cell in the body is keyed to use as one of its easy-access energy sources, the one insulin tells the cells to take up when everything is working properly.  Interestingly enough, of course, though glucose is the “ready-to-use” energy source, it only provides about 4 kilocalories** per gram to the body, as compared to 9 kilocalories per gram for fats.
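
Just to put numbers next to those figures, here is a quick back-of-the-envelope in Python.  The 4 and 9 kcal-per-gram values are the standard rough food-energy approximations, the molar mass falls straight out of the formula above, and the “teaspoon is about 4 grams” figure is just a convenient round number.

```python
# Rough food-energy arithmetic (standard approximations: ~4 kcal per gram
# of sugar/carbohydrate, ~9 kcal per gram of fat).
KCAL_PER_G_SUGAR = 4
KCAL_PER_G_FAT = 9

# Molar mass of glucose, C6H12O6, from rounded atomic masses (g/mol).
molar_mass_glucose = 6 * 12.011 + 12 * 1.008 + 6 * 15.999   # about 180 g/mol

teaspoon_g = 4   # a teaspoon of table sugar is roughly 4 grams
print(f"glucose molar mass: {molar_mass_glucose:.1f} g/mol")
print(f"a teaspoon of sugar: about {teaspoon_g * KCAL_PER_G_SUGAR} kcal")
print(f"the same mass of fat: about {teaspoon_g * KCAL_PER_G_FAT} kcal")
```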

But the sugar we get in our diets is not, generally speaking, simple glucose.  It tends to be in the form of disaccharides, or sugars made of two combined individual sugars.  Sucrose, or table sugar, is a dimer of glucose and fructose, joined by an oxygen atom.

[Image: sucrose]

Okay, I’m going to have to pick this up tomorrow.  I’ve gotten distracted and diverted by a conversation a few seats ahead of me.

There are two guys talking to each other at the end of this train car, and they are each seated next to a window on the opposite side of the train, so they’re basically yelling across the aisle to each other.  Their conversation is perfectly civil, and though they’re revealing a certain amount of ignorance about some matters, they are mainly displaying a clear interest in and exposure to interesting topics, from history to geography and so on.

At one point, one of the men started speaking of the pyramids and how remarkable their construction was, and I feared the invocation of ancient aliens…but then he followed up to say that, obviously, there were really smart people in ancient Egypt, just like we have smart people today who design and build airplanes and rockets and the like.  Kudos to him!

These men are not morons by any means.  They clearly respect the intellectual achievements of the past and present, and that’s actually quite heartening, because I think it’s obvious that neither one is extensively college-educated, if at all.

But why do they have their conversation from opposite sides of the train, so that everyone nearby has to hear it?  It’s thrown me off my course.

I’ll close just by saying that yesterday I finished rereading The Chasm and the Collision, and I want to note that I really think it’s a good book, and to encourage anyone who might be interested to read it.  The paperback is going for I think less than five dollars on Amazon, and the Kindle edition is cheaper still.  If you like the Harry Potter books, or the Chronicles of Narnia, or maybe the Percy Jackson books, I think you would probably like CatC.

[Image: The Chasm and the Collision paperback cover]

I’d love to think that there might be parents out there who would read the book to their kids.  Not kids who are too young—there are a few scary places in the story, and some fairly big and potentially scary ideas (but what good fairy tale doesn’t meet that description?).  It’s a fantasy adventure starring three middle-school students, though I’ll say again that, technically, it’s science fiction, but that doesn’t really matter for the experience of the story.

Most of my other stuff is not suitable for young children in any way—certainly not those below teenage years—and Unanimity and some of my short stories are appallingly dark (though I think still enjoyable).  If you’re old enough and brave enough, I certainly can recommend them; I don’t think I’m wrong to be reasonably proud of them.  But The Chasm and the Collision can be enjoyed by pretty much the whole family.  You certainly don’t have to be a kid to like it, or so I believe.

With that, I’ll let you go for now.  I’ll try to pick up more thoroughly and sensibly on the sugar thing tomorrow, with apologies for effectively just teasing it today.  I’m still not at my sharpest from my cold, and the world is distracting.  But I will do my best—which is all I can do, since anything I do is the only thing I could do in any circumstance, certainly once it’s done, and thus is the best I could do.

Please, all of you do your best, individually and collectively, to take care of yourselves and those you love and those who love you, and have a good month of December.

TTFN


*Hydroxyl groups are just (-OH) groups, meaning an oxygen atom and a hydrogen atom bonded together, like a water molecule that has lost one of its hydrogens.  This points back toward the fact that plants make sugar molecules from the raw building blocks of carbon dioxide (a source for the carbon atoms and some of the oxygen) and water (hydrogen and oxygen), using sunlight as their source of power and releasing oxygen as a waste product.  Free oxygen was among the first environmental pollutants on the Earth, and it had catastrophic and transformative effects not just on the biosphere of the Earth but even on its geology.  The fact that the iron in our mines, for instance, is mainly in the form of rust is largely because of this plant-borne free oxygen in the atmosphere.

**A kilocalorie is defined as the amount of energy needed to heat a kilogram of water by one degree centigrade.  We often shorten this term to just “calorie”, but a calorie proper is only the amount of heat needed to raise a gram of water by one degree centigrade (or 9/5 of a degree Fahrenheit).  It’s worth being at least aware that what we tend to call calories on food labels are actually kilocalories.
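
In code, the unit bookkeeping is just multiplication; the only outside number is the standard conversion of roughly 4.184 joules per (small) calorie.

```python
# Food-label "Calories" are really kilocalories: 1000 small calories each.
CAL_PER_KCAL = 1000
JOULES_PER_CAL = 4.184     # standard conversion factor

label_calories = 250       # a snack labeled "250 Calories"
small_cal = label_calories * CAL_PER_KCAL
kilojoules = small_cal * JOULES_PER_CAL / 1000
print(f"{label_calories} food Calories = {small_cal:,} calories, about {kilojoules:.0f} kJ")
```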

You’ve got some nerve!

It’s Saturday, the 19th of November in 2022, and I’m going in to the office today, so I’m writing a blog post as well.  I’m using my laptop to do it, and that’s nice—it lets me write a bit faster and with less pain at the base of my right thumb, which has some degenerative joint disease, mainly from years of writing a lot using pen and paper.

The other day I started responding to StephenB’s question about the next big medical cure I might expect, and he offered the three examples of cancer, Alzheimer’s and Parkinson’s Disease.  I addressed cancer—more or less—in that first blog post, which ended up being very long.  So, today I’d like to at least start addressing the latter two diseases.

I’ll group them together because they are both diseases of the central nervous system, but they are certainly quite different in character and nature.  This discussion can also help address what I think is a dearth of public understanding of the nature of the nervous system and of just how difficult it can be to treat, let alone cure, the diseases from which it can suffer.

A quick disclaimer at the beginning:  I haven’t been closely reading the literature on either disease for quite some time, though I do notice related stories in reliable science-reporting sites, and I’ll try to do a quick review of any subjects about which I have important uncertainties.  But if I’m out of date on anything specific, feel free to correct me, and try to be patient.

First a quick rundown of the two disorders.

Alzheimer’s is a degenerative disease of the structure and function of mainly the higher central nervous system.  It primarily affects the nerve cells themselves, in contrast to neurologic diseases that interfere with supporting cells in the brain*.  It is still, I believe, the number one cause of dementia** among older adults, certainly in America.  It’s still unclear what the precise cause of Alzheimer’s is, but it is associated with the development of cellular atypia, including what are called “neurofibrillary tangles” within the cell bodies of neurons and amyloid “plaques” outside them, and these seem to interfere with normal cellular function.  To the best of my knowledge, we do not know for certain whether the tangles and plaques directly and primarily cause most of the disease’s signs and symptoms, or whether they are just one part of the disease.  Alzheimer’s is associated with gradual and steadily worsening loss of memory and cognitive ability, potentially leading to loss of one’s ability to function and care for oneself, loss of personal identity, and even inability to recognize one’s closest loved ones.  It is degenerative and progressive; there is no known cure, and the few effective treatments available are primarily supportive.

Parkinson’s Disease (the “formal” disease, as opposed to “Parkinsonism”, which can have many causes, perhaps most notably long-term treatment of psychiatric disorders with certain anti-psychotic medicines) is a disorder in which there is loss or destruction of cells in the substantia nigra***, a region of the “basal ganglia” in the lower part of the brain, near the takeoff point of the brainstem and spinal cord.  That region is dense with the cell bodies of dopaminergic neurons, which in that location seem to modulate and refine motor control of the body.  The loss of these nerve cells over time is associated with gradual but progressive movement disorders, including the classic “pill-rolling” tremor, a shuffling gait, a blank, mask-like facial expression, and incoordination with a tendency to lose one’s balance.  There are more subtle and diffuse problems associated with it as well, including dementia and depression, and like Alzheimer’s it is generally progressive and degenerative; there is no known “cure”, though there are treatments.

Let me take a bit of a side-track now and address something that has been a pet peeve of mine, and which contributes to a general misunderstanding of how the nervous system and neurotransmitters work, and how complex the nature and treatment of diseases of the central nervous system can be.  This may end up causing this blog post to require at least two parts, but I think it’s worth the diversion.

I mentioned above that the cells of the substantia nigra are mainly dopaminergic cells.  This means that they are nerve cells that transmit their “messages” to other cells mainly (or entirely) using the neurotransmitter dopamine.  The term “dopaminergic” is a combination word: its root, obviously enough, is “dopamine”, and its ending, “-ergic”, comes from the Greek “ergon”, meaning “work”.  So “dopaminergic” cells do their work using dopamine, and, for instance, “serotonergic” cells do their work using serotonin.  That’s simple enough.

But the general public seems to have been badly educated about what neurotransmitters are and do; what nerve impulses are and do; and what the nature of disorders, like for instance depression, that involve so-called “chemical imbalances” really entails.

I personally hate the term chemical imbalance.  It seems to imply that the brain is some kind of vat of solution, perhaps undergoing some large and complex chemical reaction that achieves some mythical state of equilibrium when it’s working properly, but which malfunctions when some particular reactant or catalyst is present in too great or too small a quantity.  This is a thoroughly misleading notion.  The brain is an incredibly complex “machine” with hundreds of billions of cells interacting in extremely complicated and sophisticated ways, not a chemical reaction with too many or too few moles on one side or another.

People have generally heard of dopamine, serotonin, epinephrine, norepinephrine, and the like, and I think many people think of them as related to specific brain functions—for instance, serotonin is seen as a sort of “feel good” neurotransmitter, dopamine as a “reward” neurotransmitter, epinephrine and norepinephrine as “fight or flight” neurotransmitters, and so on.

I want to try to make it very clear:  there’s nothing inherently “feel good” about serotonin, there’s nothing inherently “rewarding” about dopamine, and—even though epinephrine is a hormone as well as a neurotransmitter, and so can have more global effects—there’s nothing inherently “fight or flight” about the “catecholamines” epinephrine and norepinephrine.

All neurotransmitters—and hormones, for that matter—are just complex molecules that have particular shapes and configurations and chemical side chains that make them better or worse fits for receptors on or in certain cells of the body.  The receptors are basically proteins, often combined with special types of “sugars” and “fats”.  They have sites in their structures into which certain neurotransmitters will tend to bind—thus triggering the receptor to carry out some function—and to which other neurotransmitters don’t bind, though, as you may be able to guess from looking at their somewhat similar structures, there can be some overlap.

[Image: the chemical structure of dopamine]

[Image: the chemical structure of serotonin]

[Image: the chemical structure of epinephrine]

Neurotransmitters are effectively rather like keys, and their functions—what they do in the nervous system—are not in any way inherent in the neurotransmitter itself, but in the types of processes that get activated when they bind to receptors.

There is nothing inherently “rewarding” about dopamine, any more than there is anything inherently “front door-ish” about the key you use to unlock the front door of your house, or “car-ish” about the key you use to open and start your car.  It’s not the key or the lock that has an inherent nature; it’s whatever function is initiated when that key is put into that lock, and that function depends entirely on the nature of the target.  The same key used to open your door or start your car could, in principle, be used to turn on the Christmas lights in Rockefeller Center or to launch a nuclear missile.

Dopamine is associated with areas of the nervous system that function to reward—or more precisely, to motivate—certain behaviors, but it is not special to that function.  As we see in Parkinson’s Disease, it is also used in regions of the nervous system involved in modulating motor control of the body.  The substantia nigra doesn’t originate the impulses for muscles to move, but it acts as a sort of damper or fine tuner on those motor impulses.
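
To make the key-and-lock point concrete, here is a deliberately cartoonish sketch in Python.  The circuit names are real enough, but the one-phrase “effects” are crude stand-ins I’ve chosen for illustration, not a catalogue of actual receptor pharmacology; the point is only that the same transmitter maps to different outcomes depending on what the receiving circuit does with the signal.

```python
# Cartoon of the key-and-lock idea: the "effect" belongs to the receiving
# circuit, not to the transmitter molecule.  The one-phrase effects below
# are illustrative stand-ins, not real pharmacology.
TOY_EFFECTS = {
    ("dopamine", "mesolimbic reward circuit"): "reinforce/motivate the preceding behavior",
    ("dopamine", "substantia nigra -> striatum"): "smooth and fine-tune motor commands",
    ("serotonin", "enteric neurons of the gut"): "adjust gut motility",
    ("serotonin", "raphe projections to cortex"): "modulate mood and arousal",
}

def toy_effect(transmitter: str, circuit: str) -> str:
    # Same key, different locks: the outcome depends entirely on the target.
    return TOY_EFFECTS.get((transmitter, circuit), "no entry in this toy table")

for circuit in ("mesolimbic reward circuit", "substantia nigra -> striatum"):
    print(f"dopamine at {circuit}: {toy_effect('dopamine', circuit)}")
```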

Neurotransmitters work within the nervous system by being released into very narrow and tightly closed spaces between two nerve cells (a synapse), in amounts regulated by the rate of impulses arriving at the bulb of the axon.  Contrary to popular descriptions, these impulses are not literally “electrical signals” but are pulses of depolarization and repolarization of the nerve cell membrane, involving “voltage-triggered gates****” and the control of the concentration of potassium and sodium ions inside and outside the cell.

[Image: a highly stylized synapse]
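
To get a feel for what a traveling wave of depolarization and repolarization means, here is a deliberately crude cellular-automaton cartoon in Python.  It has no real voltages or ion concentrations (so it is not a biophysical model); it just shows a disturbance in membrane state propagating down a chain of patches, with nothing flowing along the axon the way current flows in a wire.

```python
# Crude cartoon of an impulse travelling along an axon.  Each membrane
# patch is resting (.), depolarized (^), or refractory (-).  A patch
# depolarizes when its upstream neighbor is depolarized, then repolarizes
# and is briefly refractory.  No real voltages or ion concentrations here.
RESTING, DEPOLARIZED, REFRACTORY = ".", "^", "-"

axon = [DEPOLARIZED] + [RESTING] * 19   # impulse triggered at the cell-body end

for _ in range(20):
    print("".join(axon))
    nxt = list(axon)
    for i, state in enumerate(axon):
        if state == DEPOLARIZED:
            nxt[i] = REFRACTORY                 # gates close, patch repolarizes
        elif state == REFRACTORY:
            nxt[i] = RESTING                    # back to rest, ready to fire again
        elif i > 0 and axon[i - 1] == DEPOLARIZED:
            nxt[i] = DEPOLARIZED                # neighbor's spike opens this patch's gates
    axon = nxt
```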

The receptors then either increase or decrease the activity of the receiving neuron (or other cell) depending on what their local function is.  It’s possible, in principle, for any given neurotransmitter to have any given action, depending on what functions the receptors trigger in the receiving cell and what those receiving cells then do.  However, there is a fairly well-conserved and demarcated association between particular neurotransmitters and general classes of functions of the nervous system, due largely to accidents of evolutionary history, so it’s understandable that people come to think of particular neurotransmitters as having that nature in and of themselves…but it is not accurate.

Okay, well, I’ve really gone off on my tangents and haven’t gotten much into the pathology, the pathophysiology, or the potential (and already existing) treatments either for Parkinson’s or Alzheimer’s.  I apologize if it was tedious, but I think it’s best to understand things in a non-misleading way if one is to grasp why it can be so difficult to treat and/or cure disorders of the nervous system.  It’s a different kind of problem from the difficulties treating cancer, but it is at least as complex.

This should come as no surprise, given that human nervous systems (well…some of them, anyway) are the most complicated things we know of in the universe.  There are roughly as many nerve cells in a typical human brain as there are stars in the Milky Way galaxy, and each one connects with a thousand to ten thousand others (when everything is functioning optimally, anyway).  So, the number of nerve connections in a human brain can be on the order of a hundred trillion to a quadrillion—and these are not simple switching elements, like the AND, OR, NOT, NAND, and NOR gates for bits in a digital computer, but are in many ways continuously and complexly variable even at the single synapse level.
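
The arithmetic behind those connection counts is simple enough to check in a couple of lines, using the same round figures quoted above.

```python
# Order-of-magnitude check on the synapse count quoted above.
neurons = 100e9            # roughly a hundred billion nerve cells
low, high = 1_000, 10_000  # connections per neuron

print(f"low estimate:  {neurons * low:.0e} connections")   # ~1e14, a hundred trillion
print(f"high estimate: {neurons * high:.0e} connections")  # ~1e15, a quadrillion
```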

When you have a hundred trillion to a quadrillion more or less analog switching elements, connecting cells each of which is an extraordinarily complex machine, it shouldn’t be surprising that many things can go wrong, and that figuring out what exactly is going wrong and how to fix it can be extremely difficult.

It may be (and I strongly suspect it is the case) that no functioning brain of any nature can ever be complex enough to understand itself completely, since the complexity required for such understanding increases the amount and difficulty of what needs to be understood*****.  But that’s okay; it’s useful enough to understand the principles as well as we can, and many minds can work together to understand the workings of one single mind completely—though of course the conglomeration of many minds likewise will become something so complex as likely to be beyond full understanding by that conglomeration.  That just means there will always be more to learn and more to know, and more reasons to try to get smarter and smarter.  That’s a positive thing for those who like to learn and to understand.

Anyway, I’m going to have to continue this discussion in my next blog post, since this one is already over 2100 words long.  Sorry for first the delay and then the length of this post, but I hope it will be worth your while.  Have a good weekend.


*For instance, Multiple Sclerosis attacks white matter in the brain, which is mainly long tracts of myelinated axons—myelin being the cellular wraparound material that greatly speeds up transmission of impulses in nerve cells with longish axons.  The destruction of myelin effectively arrests nerve transmission through those previously myelinated tracts.

**“Dementia” is not just some vague term for being “crazy” as one might think from popular use of the word.  It is a technical term referring to the loss (de-) of one’s previously existing mental capacity (-mentia), particularly one’s cognitive faculties, including memory and reasoning.

***Literally, black substance.

****These are proteins similar to the receptors for neurotransmitters in a way, but triggered by local voltage gradients in the cell membrane to open or close, allowing sodium and/or potassium ions to flow into and out of the cell, thereby generating more voltage gradients that trigger more gates to open, in a wave that flows down the length of the axon, initially triggered usually at the body of the nerve cell.  They are not really in any way analogous to an electric current in a wire.

*****You can call that Elessar’s Conjecture if you want (or Elessar’s Theorem if you want to get ahead of yourself), I won’t complain.

Some discussion of cancer–not the zodiac sign

Yesterday, reader StephenB suggested that I write about what I thought might be the next big medical cure coming our way—he suggested cancer, Alzheimer’s, and Parkinson’s diseases as possible contenders—and what I thought the “shape” of such a cure might be.  I thought this was an interesting point of departure for a discussion blog, and I appreciate the response to my request for topics.

[I’ll give a quick “disclaimer” at the beginning:  I’ve had another poor night.  Either from the stress of Monday night or something I ate yesterday (or both, or something else entirely) I was up a lot of last night with reflux, nausea, and vomiting.  So I hope I’m reasonably coherent as I write, and I apologize if my skills suffer.]

One hears often of the notion of a “cure for cancer”, for understandable reasons; cancer is a terrifying and horrible thing, and most people would like to see it gone.  However, my prediction is that there will never be “a” cure for cancer, except perhaps if we develop nanotechnology of sufficient complexity and reliability that we are able to program nanomachines unerringly to tell the difference between malignant and non-malignant cells, then destroy the malignant ones and remove their remains neatly from the body without causing local complications.  That’s a tall order, but it’s really the only “one” way to target and cure, in principle, all cancers.

Though “cancer” is one word, and there are commonalities among the diseases that word represents, most people know that there are many types of cancers—e.g., skin, colon, lung, breast, brain, liver, pancreatic, and so on—and at least some people know that, even within the broader categories, there are numerous subtypes.  But every case of cancer is, in a very real sense, a different disease, and indeed, within one person, a single cancer can become, effectively, more than one disease.

We each* start out as a single fertilized egg cell, but by adulthood, our bodies have tens of trillions of cells, a clear demonstration of the power of exponential expansion.  Even as adults, of course, we do not have a static population of cells; there is ongoing growth, cell division/reproduction, and of course, cell death.  This varies from tissue to tissue, from moment to moment, from cell type to cell type, under the influence of various local and distant messengers, ultimately controlled by the body’s DNA.
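
As a rough illustration of that exponential power, take the commonly cited ballpark of about 37 trillion cells in an adult human body and pretend, purely for arithmetic’s sake, that growth were nothing but synchronized doubling (it certainly is not):

```python
import math

# If growth were nothing but synchronized doubling (it isn't), how many
# doublings separate one fertilized egg from an adult's worth of cells?
adult_cells = 37e12                  # commonly cited ballpark: ~37 trillion cells
doublings = math.log2(adult_cells)
print(f"about {doublings:.0f} doublings")   # roughly 45
```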

Whenever a cell replicates, it makes a copy of its DNA, and each daughter cell receives one copy.  There are billions of base pairs in the human genome, so there are lots of opportunities for copying errors.  Thankfully, the cell’s proofreading “technology” is amazingly good, and errors are few and far between.  But they are not nonexistent.  Cosmic rays, toxins, other forms of radiation, prolonged inflammation, and simple chance can all lead to errors in the replication of a precursor cell’s DNA, giving rise to a daughter cell with mutations, and when there are trillions of cells dividing, there are bound to be a number of them.
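
To see why “few and far between” still adds up, here is some purely illustrative arithmetic; the per-base error rate and the daily-division figure below are round, made-up, “about that order” numbers rather than measured values.

```python
# Purely illustrative numbers: the point is only that a tiny error rate,
# multiplied by billions of bases and enormous numbers of divisions,
# still yields plenty of mutations.
bases_per_genome = 3e9        # ~3 billion base pairs
error_rate = 1e-9             # illustrative: one uncaught error per billion bases copied
divisions_per_day = 1e11      # illustrative: order of the body's daily cell turnover

per_division = bases_per_genome * error_rate
print(f"~{per_division:.0f} new mutations per cell division")
print(f"~{per_division * divisions_per_day:.0e} new mutations per day, body-wide")
```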

The consequences of such errors are highly variable.  Many of them do absolutely nothing, since they happen in portions of the genome that are not active in that daughter cell’s tissue type, or are in areas of “junk” DNA in the cell, or in some other way are inconsequential to the subsequent population of cells.  Others, if in just the wrong location, can be rapidly lethal to a daughter cell.  Most, though, are somewhere in between these two extremes.

The rate of cell division/reproduction in the body is intricately controlled: by the proteins and receptors in each cell, by the genes that code for them, and by genes coding for factors that influence other portions of a given cell’s genome and make it more or less sensitive to hormonal and other signals that promote or inhibit cell division.  If a mutation arises in one of the regions of the genome involved in this regulatory process—either increasing the tendency to grow and divide or diminishing the sensitivity to signals that inhibit division—the cell can become prone to grow and divide more rapidly than would be ideal or normal for that tissue.  Any given error is likely to have a relatively minor effect, but it doesn’t take much of an effect to lead, eventually, to a significant increase in the number of cells of a given cell type—again, this is the power of exponential processes.

A cell line that is reproducing more rapidly will have more opportunities for errors in the DNA reproduction of its many daughter cells.  These new errors are no more likely to be positive, negative, or neutral generally than any other replication errors anywhere else in the body, but increased rate of growth means more opportunities** for mistakes.

If a second mistake in one of the potentially millions (or more) of daughter cells of the initial cell makes that cell even more prone to rapid division than the first population of mutated cells, then its descendants will grow and outpace the parent cells.  There can be more than one such daughter population of cells.  And as the rate of replication/growth/division increases in a given population of cells, there is an increased chance of more errors occurring.  Those that become too deleterious will be weeded out.  Those that are neutral will not change anything in the short term (though some can make subsequent mutations more likely to cause increased growth rates).  But the ones that increase the rate of growth and division will rapidly come to dominate.
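
Here is a small, deterministic Python sketch of that takeover.  The starting counts and per-tick growth factors are invented, and real tumors grow under resource limits, immune pressure, and a great deal of randomness; the sketch is only meant to show how a clone that starts as a single cell but divides a bit faster eventually swamps everything else.

```python
# Toy clonal-competition sketch with invented numbers: each clone grows
# exponentially at its own rate, and the fastest grower eventually dominates
# even though it starts as a single cell.
clones = {
    "well-behaved parent line":      (1_000.0, 1.10),  # (starting cells, growth per tick)
    "mutant, divides a bit faster":  (10.0,    1.13),
    "second-hit mutant, faster yet": (1.0,     1.18),
}

for tick in (0, 50, 100, 150, 200):
    sizes = {name: n0 * rate ** tick for name, (n0, rate) in clones.items()}
    total = sum(sizes.values())
    shares = ", ".join(f"{name}: {100 * size / total:.1f}%" for name, size in sizes.items())
    print(f"after {tick:3d} ticks -> {shares}")
```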

This is very much a microcosm of evolution by natural selection, and a demonstration of the fact that such evolution is blind to the future.  In a sense, the mutated, rapidly dividing cells are more successful than their better-behaved, non-mutated—non-malignant—sister cells.  In many cases they outcompete “healthy” cells for resources***, and when they gather into large enough masses, they can directly and physically impair the normal function of the organism.  They can also produce hormones and proteins themselves, and can thus cause dysregulation of the body in which they reside in many ways.

Because they tend to accumulate more and more errors, they tend to become more dysfunctional over time.  And, of course, any new mutation in a subset of tumor cells that makes those cells more prone to divide unchecked, or more prone to break loose from their place of origin and spread through the blood and/or lymph of the body, will rapidly become overrepresented.

This is the general story of the occurrence of a cancer.  The body is not without its defenses against malignant cells—the immune system will attack and destroy mutated cells if it recognizes them as such—but they are not perfect, nor would it behoove evolution (on the large scale) to select for such a strictly effective immune system, since all resources are always finite, and overactive immunity can cause disease in its own right.

But the specific nature of any given cancer is unique in many ways.  First of all, cancers arise in the body and genes of a human being, each of whom is thoroughly unique in specific genotype from every other human who has ever lived (other than identical twins).  Then, of course, more changes develop as more mutations occur in daughter cells.  Each tumor, each cancer, is truly a singular, unique disease in all the history of life.  Of course, tumors from specific tissues will have characteristics born of those tissues, at least at the start.  A leukemia tends to present quite differently from a glioblastoma or a hepatoma.

Because of these differences, the best treatments for specific cancers, or even for classes of cancers, differ.  The fundamental difficulty in treating cancer is that you are trying to stop the growth and division of—to kill—cells that are more or less just altered human cells, not all that different from their source cells.  So any chemical or other intervention that is toxic to a cancer cell is likely to be toxic to many other cells in the body.  This is why chemotherapy, radiation therapy, and other treatments are often so debilitating, and can be life-threatening in their own right.  Of course, if one finds a tumor early enough, when it is quite localized, before any cells have broken loose—“metastasized”—to the rest of the body, then surgical removal can be literally curative.

Other than in such circumstances, the treatment of cancer is perilous, though not treating it is usually more so.  Everything from toxic chemicals to immune boosters, to blockers of hormones to which some cancers are responsive, to local radiation is used, but it is difficult to target mutated cells without harming native cells to at least some degree.

In certain cases of leukemia, one can literally give a lethal dose of chemo and/or radiation that kills the bone marrow of a person whose system has been overwhelmed by malignant white blood cells, then give a “bone marrow transplant”, which nowadays can sometimes come from purified bone marrow taken from the patient beforehand—thus avoiding graft-versus-host disease—and there can be cures.  But it is obviously still a traumatic process, and is not without risk, even with auto-grafts.

So, as I said at the beginning, there is not likely to be any one “cure” for cancer, ever, or at least until we have developed technology that can, more or less inerrantly, recognize and directly remove malignant cells.  This is probably still quite a long way off, though progress can occasionally be surprising.

One useful thing cancer does is give us an object lesson, on a single-body scale, that it is entirely possible for cell lines—and for organisms—to evolve, via apparent extreme success, completely into extinction.  It’s worth pondering, because it happens often, in untreated cancers, and it has happened on the scale of species at various times in natural history.  Evolution doesn’t think ahead, either at the cellular level, the organismal level, or the species/ecosystem level.  Humans, on the other hand, can think ahead, and would be well served to take a cue from the tragedy of cancer that human continuation is not guaranteed merely because the species has been so successful so far.

Anyway, that’s a long enough post for today.  I won’t address matters of Parkinson’s Disease or Alzheimer’s now, though they are interesting, and quite different sorts of diseases than cancers are.  I may discuss them tomorrow, though I might skip to Friday.  But I am again thankful to StephenB for the suggestion/request, and I encourage others to share their recommendations and curiosities.  Topics don’t have to be about medicine or biology, though those are my areas of greatest professional expertise.  I’m pretty well versed in many areas of physics, and some areas of mathematics, and I enjoy some philosophy and psychology, and—of course—the reading and writing of fiction.

Thanks again.


*I’m excluding the vanishingly rare, and possibly apocryphal, cases of fused fraternal twins.

**There are also people who have, at baseline, certain genes that make them more prone to such rapid replication, or to errors in DNA replication, or to increased sensitivity to growth factors of various kinds, and so on.  These are people who have higher risks of various kinds of cancer, but even in them, it is not an absolute matter.

***Most tissues in the body have the inherent capacity and tendency to stimulate the development of blood vessels to provide their nutrients and take away their wastes.  Cancer cells are no exception—or rather, the ones that are exceptions do not tend to survive.  Again, it is a case of natural selection for the cell lines most prone to multiply, grow, and gain local resources.