Author’s note for “I for one welcome our new computer overlords”

“I for one welcome our new computer overlords” was the first new short story I wrote after having completed Mark Red, The Chasm and the Collision, and Son of Man.*  Despite what you might think, this was not a story driven by its title, though the title came along shortly after the story began, and I’ll deal with it first.  The title is a direct quote from Ken Jennings, who appended it to his Final Jeopardy response when he and his fellow all-time human Jeopardy champion lost to IBM’s Watson computer.  It was a good joke, referring back to an episode of The Simpsons in which news anchor Kent Brockman mistakenly thinks that a space shuttle mission is being attacked by a “master race of giant space ants,” adding, “and I for one welcome our new insect overlords.”  The obvious joke—particularly funny because Brockman’s conclusion is so ridiculous—is about how real people do sometimes, cynically and in cowardly fashion, try to ingratiate themselves with powerful ruling classes or individuals.

Peter Lunsford, the main character of “I for one welcome our new computer overlords,” is no coward.  He’s a seemingly simple man—a widower, a loner, a phone salesman without a college education.  But he’s a voracious reader, and even more, he is a deeply thoughtful and intelligent person.  Because of his own experiences with irrationality, even in people he has loved, he pines for the advent of a higher class of mind, which he expects to come from the eventual creation of artificial intelligence.  But he’s by no means a misanthrope.  He laments the senselessness of much human behavior but has an optimistic attitude toward the possibilities inherent in human creativity.  He also has a deep sense of the tragedy of losing brilliant people like his wife, who, because of the scars of her harsh background, self-sabotaged her future through a fatal drug overdose.  Thus, when Peter wins a nearly billion-dollar lottery jackpot, he uses it to create an educational program and a scholarship fund to help people like his wife avoid the tragic end she met, and to allow at least some of them to reach their potential and make great contributions to the world.

The triggers for this story were discussions by neuroscientist, writer, and podcaster Sam Harris, of whom I am a fan.  Harris began to think publicly about the dangers that might be posed to humanity by our possible creation of artificial intelligence; he recommended that we think very carefully about such dangers, so that we can avoid potentially irreversible errors.  His concerns are shared by such luminaries as Max Tegmark, Elon Musk, and the late, great Stephen Hawking, in contrast to the quasi-utopian attitudes of writers and thinkers such as Ray Kurzweil.  Both points of view are worth considering, and it’s an issue I think we should approach with our eyes as wide open as we can possibly get them.  But when contemplating the concerns of Harris and company, I couldn’t help wondering: if a truly superior artificial intelligence were to make humans obsolete, would that be such a terrible thing?  Peter Lunsford is my proponent of that perspective.**

I wanted to write a story revolving around those concerns about artificial intelligence, but I didn’t want to write about a clichéd takeover of the world by AI—in this, my title is deliberately ironic.  Personally, I suspect that ethics and morality are generally improved by higher intelligence, all other things being equal, so I think that artificial intelligences might be inherently more ethical and reserved than we humans are, with all our non-rational evolutionary baggage.  In this, Ifowonco is a story of wish-fulfillment.  It’s my daydream of the possibility that someone winning a truly gargantuan sum of money might use it to deeply positive philanthropic effect, inspiring others to act likewise, and thereby leading, through that beneficial action, to a great leap forward in intelligent life (yes, I would without embarrassment refer to AI as a form of life).

Of course, you can’t say that Ifowonco is a uniformly happy story.  It entails a (non-nuclear) World War III, the rejection of AI by the human race, and, of course, Peter Lunsford’s willful self-destructiveness.  Overall, though, it’s optimistic.  Darrell White is my example of a brilliant, world-changing mind springing from seemingly the least promising of circumstances, wanting only the opportunity and nurturing that would allow such a mind to flourish.  He and my imagined AIs represent my personal conviction that reason and morality are vastly more powerful than their antitheses; I cite as evidence for this the fact that civilization continues to exist and grow, even though it’s so much easier to destroy than to create.

In some senses, Ifowonco is the most personal story that I’ve written hitherto.  Of course, any character in a story must be a reflection of some part of the mind of the author—a person incapable of dark thoughts could hardly write a believable villain, for instance.  But Peter Lunsford is the avatar of a large part of my personality, in both his positive and negative character attributes.  Though I’ve had almost twice as much formal education as Peter, that difference is inconsequential because of Peter’s incessant self-education.  There is, in fact, almost no daylight between Peter Lunsford and me (and what little there is must generally be in Peter’s favor).  I would even like to think that, were I to win a prize such as Peter wins, I would choose to do with it something like what he does; in this, also, the story is a form of wish-fulfillment.

Speaking, in closing, of wish-fulfillment:  I deliberately made the reality of the second half of the story ambiguous.  Do Darrell White and his creations, and all that comes with them, even exist in this universe?  Or are he and those subsequent beings and events simply a species of dream that Peter has while his brain succumbs to hypoxia?

I know the answer to this question in the universe of the story—and yes, there is a correct answer—but I’m not going to tell you what it is.  I’d rather have you draw your own conclusions.  I think it’s more fun that way, and it may even be a useful tool for personal reflection, bringing us back to that whole question of consciousness that troubles thinkers like Sam Harris.  I’d be intrigued and delighted to hear any of your thoughts on the subject, so feel free to send them my way, either here, or on Facebook, or on Twitter.  I wish you well.


* Just this week I released the audio of this story, now available to enjoy, for free, here on my blog.

** I don’t share Harris’s concern about the possibility that AI could be highly intelligent and competent but might nevertheless not be conscious, for two reasons:  First, I strongly suspect that consciousness is a natural epiphenomenon of highly complex information processing involving internal as well as external monitoring and response, though I’m far from sure; and second, I can’t be philosophically certain even that other humans are conscious (I think they are, but this extrapolation is based on my own experience and their apparent similarity to me), but it doesn’t seem to matter much for the purposes of their function in the world.

