Curing the Poison of “Rankism”

I got a close look at the poison of “rankism” at the age of seven, when my classmate Arlene was sent to the hall for the whole school day. Arlene lived on a farm and wore the same dress to school each day. When she spoke, it was in a whisper. Our teacher, Miss Belcher, began every day with an inspection of our fingernails. One day she told Arlene to go to the hall and stay there until her fingernails were clean. I wondered how she could clean her nails out there, without soap or water. If there was no remedy in the hall, then the reason for sending Arlene out there must be to embarrass her and scare the rest of us.

Later, filing out to the playground, we snuck glances at her. She must have heard the snickering as we passed – hiding her face against the wall as I remember it, and trying to make herself look small. I told my mother what had happened to Arlene, and, as I must have hoped, she made sure the same thing didn’t happen to me.

Other kids whom my classmates regarded as safe targets for abuse included Frank, who was shamed as a “faggot”; Jimmy, who had Down’s syndrome and was ridiculed as “retarded”; and Tommie and Trudy, who were teased about their weight. The N-word was used only warily, typically from the safety of the bus that carried our all-white basketball team home in the wake of defeat to a school that fielded players who were black.

Not belonging to any of the groups that were targeted for abuse, I was spared – until I got to college. There I realized that higher education was less about the pursuit of truth than about establishing another pecking order. I found myself caught up in games of one-upmanship, and was reminded of my classmates once again.

The toxic relationships described above are all based on traits that mark people out for abuse, whether in terms of class, sexuality, disability, body shape, color or academic standing. And even if you fall on the privileged side of these traits you can still be treated as a nobody by people who want to make themselves feel superior. I call this “rankism”, and it’s the cancer that’s eating away at all our relationships.

Emily Dickinson spoke about this problem in her “nobody” poem:

I’m nobody! Who are you?
Are you nobody, too?
Then there’s a pair of us – don’t tell!
They’d banish us, you know!

As she notes, nobodies look for allies, and stand on constant guard against potential banishment. For social animals like us, banishment has long been tantamount to a death sentence. It’s no wonder we’re sensitive to even the slightest of indignities.

Dignity matters because it shields us from exclusion. It assures us that we belong, that there’s a place for us, that we’re not in danger of being ostracized or exiled. Dignity is the social counterpart of love.

In a seminal work of the modern women’s movement, Betty Friedan wrote of “the problem without a name.” A few years later the problem had indeed acquired a name – it was “sexism” – and from then on women knew both what they were for (equal dignity and equal rights) and what they were against (indignity and inequality). That’s why pinning a name on any behavior that poisons relationships is the first step towards delegitimizing it.

As president of Oberlin College in Ohio during the early 1970s, I saw a non-stop parade of “nobodied” groups find their voices and lay claim to equal dignity: African Americans, Asian Americans, Native Americans, women, homosexuals, and people with disabilities. In every case, the inferior social rank that had been assigned to these groups was challenged and came to be seen as groundless, though clearly discrimination of all these kinds remains widespread. Our view of human nature doesn’t change overnight, but it does evolve over generations. The process typically begins with martyrdom and culminates in legislation. In between come years of nitty-gritty organizing. But once enough people stand up for their dignity it’s not long until they become a force to be reckoned with.

The task confronting us today is to delegitimize “rankist” behaviors just as we are doing with other forms of oppression. That means all of us – you and me – giving up our claims to superiority. It means no more putting down of other individuals, groups or countries. It means affirming the dignity of others as if it were our own. Sound familiar? It’s the “golden rule” of dignity, which rules out degrading anybody else. When denigrating behaviors are condoned, potential targets (and who isn’t one at some point?) must devote their energy to protecting their own dignity. A culture of indignity takes a toll on health, creativity and productivity, so organizations and societies that tolerate rankism handicap themselves.

The cancer of rankism persists as a residue of our predatory past. But, for two reasons, the predatory strategy isn’t working any more. First, the weak are not as weak as they used to be, so picking on them is no longer a safe bet. Using weapons of mass disruption, the disenfranchised can bring modern life to a stop. Humiliation is more dangerous than plutonium.

Second, the power that “dignitarian” groups can marshal exceeds that of groups that are driven by brute force and fear. When everyone has a place that is respected, everyone can work for the group as well as for themselves. “Dignity for all” is a winning strategy because it facilitates cooperation. Recognition and dignity are not just nice things to have, they are a formula for group success, and their opposites are a recipe for infighting, dysfunctionality and failure. If we can put the spotlight on rankism and purge our relationships of this poison, then not only will we spare people from humiliation, we’ll also increase our own creativity and that of our communities.

Part of Lady Gaga’s appeal to her fans is that she’s a leader of the dignity movement. The kid who protests when a classmate is “nobodied” is another such leader, all the more so if he or she is able to do so in a way that protects the dignity of the perpetrator. When victims of rankism respond in kind to their abusers, they’re unwittingly perpetuating a vicious cycle. The only way to end such cycles is to respect the dignity of the perpetrators while leaving no doubt that their behaviors are unacceptable.

In a dignitarian society, no one is taken for a nobody. Acting superior – putting others down – is regarded as pompous and self-aggrandizing. Rankism, in all its guises, is uncool.

Our age-old survival strategy of opportunistic predation has reached its sell-by date. A vital part of our defense against this strategy is not to give offense in the first place. Going forward, the only thing as important as how we treat the Earth is how we treat each other.


Robert W. Fuller is an author and independent scholar from Berkeley, CA. His recent novel The Rowan Tree is now available as an audiobook at Amazon, iTunes, and audible.com. The Rowan Tree is also available in paperback as well as Kindle and other ebook formats.

Ducking Death; Surviving Superannuation

[This is the sixth and final post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.]

We must believe in free will. We have no choice.
– Isaac Bashevis Singer

What Kind of Computer Is the Brain?

Computers can’t do everything humans do—not yet, anyway—but they’re gaining on us. Some believe that, within this century, human intelligence will be seen as a remarkable, but nonetheless primitive, form of machine intelligence. Put the other way round, it’s likely that we will learn how to build machines that do everything we do—even create and emote. As computer pioneer Danny Hillis famously put it, “I want to build a machine who is proud of me.”

The revolutions wrought by the Copernican and Darwinian models shook us because they were seen as an attack on our status. Without proper preparation, the general public may experience the advent of sophisticated thinking machines as an insult to human pride and throw a tantrum that dwarfs all prior reactionary behavior.

At the present time, there are many candidate models of brain function, but none is so accurate and complete as to subsume all the others. Until the brain is understood as well as the other organs that sustain life, a new sense of self will co-exist with the old.

The computer pioneer John von Neumann expressed the difference between the machines we build and the brains we’ve got by dubbing them “serial” and “parallel” computers, respectively. The principal difference between serial and parallel computers is that the former carry out one command after another, sequentially, while in the latter thousands of processes go on at once, side by side, influencing one another. Every interaction—whether with the world, with other individuals, or with parts of itself—rewires the menome. The brain that responds to the next input differs, at least slightly, from the one that responded to the last one. When we understand how brains work well enough to build better ones, the changes to our sense of self will swamp those of prior intellectual revolutions.
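
To make the contrast concrete, here is a toy Python sketch of my own devising (an illustration only, not von Neumann’s formalism or a model of an actual brain): a handful of interconnected units updated one at a time, in sequence, versus all at once from the same snapshot of activity.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(5, 5))   # how strongly each unit influences the others
    state = rng.normal(size=5)          # current activity of five toy "units"

    # Serial style: visit the units one after another; each update sees the
    # results of the updates that came before it.
    serial = state.copy()
    for i in range(len(serial)):
        serial[i] = np.tanh(weights[i] @ serial)

    # Parallel style: every unit updates simultaneously from the same snapshot,
    # so all the influences (here, just five) are felt side by side.
    parallel = np.tanh(weights @ state)

    print(serial)
    print(parallel)   # the two orderings generally produce different outcomes

The point is only the ordering: the serial machine’s later steps depend on its earlier ones, while the parallel machine lets every influence act at once.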

The genome that characterizes a species emerges via a long, slow Darwinian process of natural selection. The menomes that characterize individuals also originate via a Darwinian process, but the selection is among neural circuits and occurs much more rapidly than the natural selection that drives speciation. That the brain can be understood as a self-configuring Darwinian machine, albeit one that generates outcomes in fractions of a second instead of centuries, was first appreciated in the 1950s by Peter Putnam. Though the time constants differ by orders of magnitude, Putnam’s functional model of the nervous system recognized that the essential Darwinian functions of random variation and natural selection are mirrored in the brain in processes that he called random search and relative dominance.

In 1949, Donald O. Hebb enunciated what is now known as the “Hebb Postulate,” which states that “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” Peter Putnam’s “Neural Conditioned Reflex Principle” is an alternative statement of Hebb’s postulate that expands it to include the establishment and strengthening of inhibitory or negative facilitations, as well as the excitatory or positive correlations encompassed in the Hebb Postulate. The Hebb-Putnam postulate can be summed up as “Neurons that fire together wire together.”
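
In code, the postulate is often illustrated with a simple weight-update rule. The Python sketch below is a minimal illustration of that general idea, not Hebb’s or Putnam’s actual formalism: the strength of a connection grows in proportion to coincident activity in the two cells.

    def hebbian_update(w, pre, post, eta=0.05):
        """Strengthen the connection when the pre- and post-synaptic cells are active together."""
        return w + eta * pre * post

    # Toy run: cell A repeatedly takes part in firing cell B, so the synapse strengthens.
    w = 0.1
    for _ in range(50):
        a_activity, b_activity = 1.0, 1.0
        w = hebbian_update(w, a_activity, b_activity)

    print(round(w, 2))   # the weight has grown: neurons that fire together wire together

An inhibitory facilitation, in Putnam’s sense, would simply carry a negative sign, weakening rather than strengthening the downstream cell’s response.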

The reason replicating, or even simulating, brain function sounds like science fiction is that we’re used to relatively simple machines—clocks, cars, washing machines, and serial computers. But, just as certain complex, extended molecules exhibit properties that we call life, so sufficient complexity and plasticity are likely to endow neural networks with properties essentially indistinguishable from the consciousness, thought, and volition that we regard as integral to selfhood.

We shouldn’t sell machines short just because the only ones we’ve been able to build to date are “simple-minded.” When machines are as complex as our brains, and work according to the same principles, they’re very likely to be as awe-inspiring as we are, notwithstanding the fact that it will be we who’ve built them.

Who isn’t awed by the Hubble telescope or the Large Hadron Collider at CERN? These, too, are “just” machines, and they’re not even machines who think. (Here I revert to who-language. The point is that who- and what-language work equally well. What is uncalled for is reserving who-language for humans and casting aspersions on other animals and machines as mere “whats.” With each passing decade, that distinction will fade.)

The answer to “Who am I?” at the dawn of the age of smart machines is that, for the time being, we ourselves are the best model-building machines extant. The counter-intuitive realization that the difference between us and the machines we build is a bridgeable one has been long in coming, and we owe it to the clear-sighted tough love of many pioneers, including La Mettrie, David Hume, Mark Twain, John von Neumann, Donald Hebb, Peter Putnam, Douglas Hofstadter, Pierre Baldi, Susan Blackmore, David Eagleman, and a growing corps of neuroscientists.

Yes, it’s not yet possible to build a machine that exhibits what we loosely refer to as “consciousness,” but, prior to the discovery of the genetic code, no one could imagine cellular protein factories assembling every species on the tree of life, including one species—Homo sapiens—that would explain the tree itself.

The Self Is Dead. Long Live the Superself.

The generalization of the self-concept to the superself is unlikely to receive a reception much different from that accorded Twain’s What Is Man?

The co-creation characteristic of the superself will be scorned as collectivism, if not socialism. Reciprocal dignity will be ridiculed as utopian. Asking “What am I?” instead of “Who am I?” will be dismissed as reductive, mechanistic, and heartless.

Although the superself incorporates the witness, and so has a religious provenance, it’s fair to ask if it will ever speak to the heart as traditional religious models have done. It’s not easy coming to terms with life as a property of inanimate matter, arranged just so, and it will likely be even more difficult to accept ourselves as extended, self-conscious, willful machines.

Many will feel that this outlook is arid and bleak, and want to know: Where’s the mystery? How about love? Doesn’t this mean that free will is an illusion? Awe and wonder and the occasional “Eureka!” may be enough for science, but religious models have offered fellowship, absolution, forgiveness, salvation, and enlightenment. People of faith will want to know what’s holy in this brave new world.

The perspectives of religion and science on selfhood, though different, are not incompatible. Without oversimplifying or mystifying either, it’s possible to identify common ground, and, going forward, a role for both traditions. I propose such a collaboration in Religion and Science: A Beautiful Friendship?

My guess is that once we’re in the presence of machines that can do what we do, the model of selfhood we’ll settle on will be even more fecund than the traditional one. The replacement of individual volition by co-agency will not undermine a sense of purpose, though it will require a redefinition of personal responsibility. There’s no reason to think that machines that are sophisticated enough to outperform us will evoke less wonder and reverence than organisms that have arisen via natural selection. Mystery does not attach itself exclusively to human beings. Rather, it inheres in the non-human as well as the human, in the inanimate as well as the animate. As Rabbi Abraham Heschel notes, “Awe is an intuition of the dignity of all things, a realization that things not only are what they are but also stand, however remotely, for something supreme.”

Contrary to our fears, the capacity of superselves for love, fellowship, and agency will be enlarged, not diminished. As the concept of superself displaces that of individual selfhood, the brotherhood of man and its operating principle—equal dignity for all—become self-evident and self-enforcing. Nothing in this perspective bars belief in a Deity for those so inclined. Having said that, it’s implicit in this way of beholding selfhood that if there were a God, He’d want us to behave as if there weren’t. Like any good parent, He’d want to see us wean ourselves and grow up.

The superself, with its inherent co-creation and co-agency, not only transforms our relationships with each other, it also provides a new perspective on death. As mentioned, it’s arguable whether selves survive the death of the bodies in which they’re encoded. But survivability is much less problematic for superselves. Why? Because they are dispersed and so, like the Internet that was designed to survive nuclear war, provide a more redundant and robust defense against extinction. As William Blake noted two centuries ago:

The generations of men run on in the tide of Time,
But leave their destin’d lineaments permanent for ever and ever.

In the same sense that the soul is deemed to survive the death of the individual, the wenome survives the disintegration of the body and the mind. The absence of a particular individual, as defined by a unique genome and menome, puts hardly a dent in the wenome. The building blocks of superselfhood can be thought of as genes, memes, and wemes. All three encodings are subject to evolutionary pressure.

Although some may feel this reformulation of selfhood asks them to give up the store, it will gradually become apparent that it’s only the storefront that requires a do-over. To give up standalone selfhood in exchange for an open-ended leadership role in cosmic evolution is a trade-off that many will find attractive.

As Norbert Wiener, the Father of Cybernetics, wrote in 1949:

We can be humble and live a good life with the aid of machines, or we can be arrogant and die.

Robert W. Fuller is an author and independent scholar from Berkeley, CA. His most recent book is The Rowan Tree: A Novel.

What Is Man?

[This is the fifth post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.]

“What Is Man?” is the title of a little book by Mark Twain. He held it back for twenty years because he knew the public would hate it. The “what” in the title foreshadows its discomfiting message.

Twain broke with the tradition of asking “Who Am I?” and its species-wide variant “Who Is Man?” on the grounds that a “who-question” is a leading question. It predisposes us to expect the answer to be a sentient being, not unlike ourselves, “whom” we’re trying to identify.

Twain’s answer was that Man is a machine, and he was right about the public reception accorded his thesis: the twentieth century was no more ready for Mark Twain’s mechanistic perspective than the eighteenth had been for Julien Offray de La Mettrie’s metaphor of “Machine Man.”

The rejection accorded the works of La Mettrie and Twain is not surprising, because it’s implicit in our idea of a machine that at least experts understand how it works. Only in the twentieth century did science gain an understanding of the body, and we’re just beginning to understand the workings of the mind. Twain’s trepidation in anticipation of public scorn is reminiscent of Darwin’s procrastination in publishing his theory of evolution with its shocking implication that we were descended from apes.

At the dawn of the twenty-first century, Twain’s answer is no more popular than it was with his contemporaries. But recent research has produced a growing awareness that Mark Twain, while he may have been a killjoy, was, as usual, ahead of his time.

Twentieth-century science has shown that humans, like other animals, function according to the same principles as the cosmos and everything in it. The Hindu seers who proclaimed, “I Am That” were onto something. Man does not stand apart from the rest of the cosmos. He is made of the same stuff and governed by the same laws as everything else. The gap between “I” and “That” does indeed seem to be narrowing.

As curmudgeons like Twain have delighted in pointing out, Man is in fact quite unexceptional. We do not live at the center of the universe: Copernicus and Galileo pointed out that it does not revolve around us. Humans are just one of many animals: Darwin, Wallace, and others placed us, kicking and screaming, in the company of apes. But, having eaten several servings of humble pie, surely we can be forgiven if we allow ourselves one small brag.

Although not exceptional in ways we once believed, we are exceptionally good at building tools and machines. And that includes machines that do what we do. Machines that dig, sow, and reap. Machines that kill and machines that save lives. Machines that calculate, and, projecting forward, machines who think. Our brains will soon be viewed as improvable, constrained as they are by the stringent conditions of self-emergence via natural selection, gestation in a uterus, and birth through a baby-sized aperture in the pelvis.

No higher intelligence seems required to create life, including human life. What we revere as life is “just” a property of a handful of chemicals, RNA and DNA holding pride of place among them. But, that’s not a bad thing, because if we’ve come this far without intelligent design, the sky’s the limit once we lend our own inventiveness to the evolutionary process.

This has long been foreseen, but never accepted. Once we get used to it, this perspective will enable us to reduce suffering on a scale only dreamt of. Why? Because the lion’s share of human suffering can be traced to false self-conceptions. The indignities that foul human relationships, at every level, from interpersonal to international, stem from a model of autonomous selfhood in which self is pitted against self.

Rather than masking the indissoluble interconnectedness of selves—as the notion of individual selfhood does—superselfhood embraces it. It’s not just that we can’t do anything without help; we can’t even exist apart from continual imitation. Entropic forces disintegrate any identity that is not shored up through a mimetic process of mutual recognition. Since mimesis is distorted and undermined by indignity, reciprocal dignity gradually, but ineluctably, displaces opportunistic predation as a strategy for optimizing group efficiency and productivity. As a source of inefficiency, malrecognition—with all its attendant dysfunctionality—will be rooted out much as we made it our business to combat malnutrition once we understood its toll.

Martin Luther King, Jr. gave expression to this emergent morality when he wrote: “The arc of the moral universe is long, but it bends toward justice.”

The Superself: Genome, Menome, and Wenome

[This is the fourth post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.]

Why are you unhappy?
Because 99.9 percent
Of everything you think,
And of everything you do,
Is for yourself—
And there isn’t one.
— Wei Wu Wei


The ‘illusion of individuality’ operates at two levels: First, the universal level of the ego, the constructed “I” which is a developmental imperative in the early years, yet the object of unlearning in later years in many traditions (particularly Buddhist); and the other, socio-economic, level of western individualism. Most of us, most of the time, cannot comprehend the implications of communal culture to human wellbeing.
— David Adair

To recap, the genome is the blueprint for our physical body. The menome is the connectome of the nervous system. By analogy, the wenome is the connectome of everything else, most importantly the cultural web of personal and social relationships in which we’re immersed and entangled. The wenome comprises the rules, customs, rituals, manners, images, tunes, songs, languages, laws, constitutions, and institutions that define the culture by which our genome and menome are conditioned.

In this view, our selves are far more extensive than we’ve been led to believe. They extend beyond our own bodies to include what we think of as other selves and the world. We live in the minds of others, and they in ours.

The situation is analogous to memory. We think of our memories as located in our heads and bodies, but when we drive to town, it’s the road that holds the memory of the route, reminding us at every turn how to proceed.

So, too, is selfhood dispersed. It resides not only in the genome and the menome, but in the wenome. Much of the information we require in order to function is stored outside our bodies and brains—in other brains, books, maps, machines, objects, databases, the Internet, and the cloud. We’re dependent on these inputs to muster enough excitation to reach the threshold of emission of specific behaviors. Our genome and menome cannot form in the absence of other genomes and menomes. The self does not stand alone, but rather is widely dispersed, encompassing, most immediately, our social milieu, and ultimately the entire cosmos.

As the illusory nature of autonomous selfhood becomes evident, and the full extent of the interdependence of selves becomes undeniable, our sense of selfhood will shift outward, from the limited identifications of the past to an amalgamation of these traditional facets of selfhood—the superself.

Recognition and Malrecognition

As mentioned, an inability to recruit recognition from others cripples an identity. That’s why solitary confinement is torture. Recognition is to the formation of identity as nutrition is to the building of the body. Put the other way round, malrecognition, like malnutrition, is injurious, and can be fatal. Think of the juvenile murderer sentenced to life in prison, or orphans whose development is stunted by lack of an adult model. On the plus side, there are the benefits to children who grow up in the company of curious, creative adults.

In acknowledgement of the analogy between programming a computer and raising a child, both processes are described as culminating in a launch. In the world of computers, “failure to launch” betrays the existence of a bug in the software that crashes the computer. In raising children, failure to launch reveals that an embryonic identity has not found a niche in which it can garner enough recognition to develop. As nutritional deficiencies limit physical development, recognition deficiencies cripple identity formation. We became aware of the terrible costs of malnutrition in the twentieth century. The twenty-first will witness an analogous awakening to the crippling effects of malrecognition.

To address the epidemic of malrecognition that now afflicts humankind, it helps to shift our vantage point from within to without, from subjective to objective, from introspection to inspection. If we interpret the menome as software that is continually being modified, then we can debug, patch, and rewrite it until the “program” no longer crashes the “computer.”

If this seems reductive and mechanistic, recall that before we understood that the heart was a pump made of muscle, it was regarded as the seat of the soul. It’s hard to imagine surgery to the soul, but the muscle that pumps our blood is now routinely repaired. In that spirit, the mind can be viewed as a kind of computer (albeit one we are just now beginning to understand).

We balked at the seeming loss of the exceptional status implicit in Darwin’s theory of evolution, but eventually made peace with the incontrovertible fact of our simian ancestry. We shall follow the same arc as we come to see our selves as holders of an historic role in the lineage of ever-smarter machines, to wit, the role of building machines that are smarter than we are! This could be the final step in achieving a humility consonant with our actual place in the cosmos. There’s no better preparation for facing such an apparent comedown than to revisit a question posed by Mark Twain—What Is Man?—and we’ll do that in the next and penultimate post in this series.

“Self” Is a Misnomer

[This is the third post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.]

As suggested in the two preceding posts in this series, selfhood was on the ropes even before postmodernism delivered the knockout blow.

Postmodernism’s Coup de Grace to the Self

Humpty Dumpty sat on a wall,
Humpty Dumpty had a great fall.
All the king’s horses and all the king’s men
Couldn’t put Humpty together again.

In recent decades, deconstructing selfhood has become a cottage industry (with headquarters in Paris). The “fall” that postmodernism has inflicted on our commonsense notion of selfhood is as irreversible as Humpty Dumpty’s. Three examples follow:

While acknowledging that the philosopher David Hume scooped him by centuries, the novelist John Barth points out that the person who did things under his name decades ago seems like a Martian to him now:

How glibly I deploy even such a fishy fiction as the pronoun I, as if–although more than half of the cells of my physical body replace themselves in the time it takes me to write one book, and I’ve forgotten much more than I remember about my childhood, and the fellow who did things under my name forty years ago seems as alien to me now in many ways as an extraterrestrial–as if despite those considerations there really is an apprehensible antecedent to the first person singular. It is a far-fetched fiction indeed, as David Hume pointed out 250 years ago.
–John Barth

The novelist Milan Kundera exposes the common fallacy that the self can be detached from its unique history. Read Kundera’s comment and you’ll never again hear yourself saying, “If I were you…” without realizing that the premise can never be met, so the only proper recipient of your advice is yourself.

Who has not sometimes wondered: suppose I had been born somewhere else, in another country, in another time, what would my life have been? The question contains within it one of mankind’s most widespread illusions, the illusion that brings us to consider our life situation a mere stage set, a contingent, interchangeable circumstance through which moves our autonomous, continuing “self.” Ah, how fine it is to imagine our other lives, a dozen possible other lives! But enough daydreaming! We are all hopelessly riveted to the date and place of our birth. Our “self” is inconceivable outside the particular, unique situation of our life; it is only comprehensible in and through that situation.
–Milan Kundera

Theater critic John Lahr observes that selfhood is a confabulation dependent on the agreement of others.

The ‘I’ that we confidently broadcast to the world is a fiction–a jerry-built container for the volatile unconscious elements that divide and confound us. In this sense, personal history and public history share the same dynamic principle: both are fables agreed upon.
–John Lahr

The glue that holds the “jerry-built” identity together is recognition; the cement that fortifies it against disintegration is agreement. I’ll return shortly to the indispensable part played by other selves in the creation and maintenance of our own.

“Self” Is a Misnomer

The very name–self–is a misnomer, and it’s a whopper. How so?

At the beginning of the twentieth century, Charles Cooley observed that “We live in the minds of others without knowing it.” If we live in others’ minds, surely others live in ours.

The word “self” carries strong connotations of autonomy, individuality, and self-sufficiency. It’s as if it were chosen to mask our interdependence. It’s hardly an exaggeration to say that in buying into this notion of selfhood, humankind got off on the wrong foot.

The self does not stand alone; it is not a thing, let alone a thing in itself. Rather, we experience selfhood as a renewable capacity to construct and field identities. Like evanescent particles in a cloud chamber, the existence of the self is inferred from its byproducts.

The “self” may appear to act alone but it depends on input from other selves to manifest agency. There’s more to selfhood than our genome and our menome. We’ve overlooked a crucial element of selfhood–inputs from other selves–without which the menome, starved for recognition, is stillborn.

As our genome needs nutrition to build our body, so our menome depends on recognition from others to create and husband a viable identity. The autonomous self and individual agency are both illusory. Contrary to the name we call it by, the self is anything but self-sufficient.

The Co-Creation of Identity

To exist is to coexist.
–Gabriel Marcel

As Cooley and others have pointed out, we may first recognize our own nascent identity as what someone else–a parent, teacher, or friend–sees taking shape within us. One of the primary responsibilities of parents is the incubation of identity in the next generation. No wonder we love our parents and teachers: it is they who have coaxed our starter self onto the world stage and indicated a niche where it might thrive.

As collaborators in the formation of others’ identities, we repay the debt we owe those who, by reflecting an incipient identity back to us, served as midwife to our own.

Perhaps because they sense the creeping disintegration of their story, the elderly often feel the need to rehearse it. Listening to them recount their anecdotes is an act of compassion. Those who lend us their ears are involved not only in the creation of the identity that serves as our face to the world, but also in its maintenance. Personas, like magnetic poles, are not created, nor do they endure, in isolation.

The discovery of the profound interdependence of selves obviously has a bearing on our relationships. In the following posts, I’ll explore the implications of the co-creation of each other’s selves.

Am I a Home for Identities?

[This is the second post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.]

In the first post in this series, we disentangled the notion of selfhood from the body, the mind, and the witness. Another common mistake is to identify a current identity as our “real” self. With age, most people realize that they are not the face they present to the world, not even the superposition of the various identities they’ve assumed over the course of their lifetime.

By my late thirties, I had accumulated enough personal history to see that I had presented several quite different Bobs to the world. Principal among my serial identities were student, teacher, and educator. Alongside these occupational personas were the familial ones of son, husband, and father. As Shakespeare famously noted:

All the world’s a stage,
And all the men and women merely players:
They have their exits and their entrances;
And one man in his time plays many parts …

Like many an Eastern sage, Shakespeare saw that we assume a series of parts while at the same time watching over ourselves as if we were members of the audience. That is, we both live our lives and, at the same time, witness our selves doing so. We don’t stop there: we even witness ourselves witnessing.

We know that our current persona will eventually give way to another. In contrast, the self ages little, perhaps because it partakes of the detached agelessness of the witness.

Distinct identities are strung together on the thread of memory, all of them provisional and perishable. No less fascinating than the birth, life, and death of our bodies are the births, lives, and deaths of these makeshift, transient identities. Reincarnation of the body is arguable; metamorphosis of identity is not.

The witness’s detachment facilitates the letting go of elements of identity in response to changing circumstances. As we age, the feeling that life is a battle is gradually replaced with the sense that it’s a game played with a shifting set of allies and opponents who, upon closer examination, are unmasked as collaborators. Without opposition, we might never notice the partiality and blind spots inherent in our unique vantage point.

The more flexible, forgiving attitude that results when we see our self as a home for transient identities turns out to be the perspective we need to maintain our dignity in adversity and accord it to others in theirs. Former antagonists—which may include colleagues, spouses, and parents—come to be seen as essential participants in our development, and we in theirs.

To keep an identity in working order, we continually emend and burnish it, principally by telling and retelling our story to ourselves and anyone who’ll listen. Occasionally, our narrative is revised in a top-to-bottom reformulation that in science would be called a paradigm shift. Though most incremental changes are too small and gradual to be noticed over months or even years, they add up, and suddenly, often in conjunction with a change in job, health, or relationship, we may come to see ourselves quite differently, revise our grand narrative, and present a new face to the world. Whole professions—therapy, coaching, counseling—have grown up to help people weather such identity crises.

It is tempting to think of the self as simply a home for the identities we adopt over our lifetime, but on reflection, this, too, falls short. Our self is also the source of the identities that sally forth as our proxies. That is, we experience the self as more than a retirement home for former identities; it’s also the laboratory in which they’re minted and tested, and from which they step onto the stage. One can think of the self as a crucible for identity formation.

Before examining this process, we consider two more candidates for the mantle of selfhood: the soul and pure consciousness.

Am I My Soul?

If selfhood, as currently understood, has a shortcoming, it’s its mortality. We grudgingly accept physical aging, but who has not balked at the idea of the apparent extinction of his or her self upon physical death? Alas, our precious but nebulous self—whatever it may be—appears to expire with the demise of our body.

To mitigate this bleak prospect, many religions postulate the existence of an immortal soul, and go on to identify self with soul. After we’ve clarified the concept of selfhood, we’ll discover that, even without hypothesizing an immortal soul, death loses some of its finality and its sting.

Am I Consciousness?

A last redoubt for the self as we’ve known it is to identify it as pure, empty consciousness. But what exactly is consciousness? Arguments run on about whether animals have it, and if so how much, without ever clarifying what consciousness is. Moreover, identifying one’s self as pure consciousness is just another identification, namely that of systematically dis-identifying with everything else.

Even if you don’t find pure, empty consciousness a bit spare or monotonous, there’s another problem with equating it with selfhood. Whatever it may be, stripped-down consciousness is deficient in agency, and agency—that is, not just being, but doing—is inextricably connected to selfhood because mentation does not occur apart from its potential to actualize behavior. To think is to rehearse action without triggering it. Thought involves the excitation of motor neurons, but below the threshold at which the actions those neurons innervate would be emitted. In computer parlance, thought is virtual behavior.

In the next post, I’ll bring in the postmodern perspective, which will complete the deconstruction of naïve selfhood, and set the stage for a self that’s congruent with the findings of both traditional introspection and contemporary neuroscience.

Who Am I?

[This is the first post in the series Why Everything You Know about Your “Self” Is Wrong. The series explores how our understanding of selfhood affects our sense of individuality, our interpersonal relationships, and our politics.]

Confusion about fundamental notions such as selfhood, identity, and consciousness distorts personal relationships, underlies ideological deadlock, aggravates partisan politics, and causes unnecessary human suffering.

A better understanding of selfhood holds the promise of resolving perennial quarrels and putting us all on the same side as we face the challenges of a global future, not least of which will be coming to terms with machines who rival or surpass human intelligence.

While we all casually refer to our self, no one knows quite what that self is. Nothing is so close at hand, yet so hard to grasp, as selfhood. To get started, think of your self as who or what you’re referring to when you use the pronouns “me,” “myself,” or “I.”

Am I My Body?

As infants, we’re taught that we are our bodies. Later, we learn that every human being has a unique genomic blueprint that governs the construction, in molecular nano-factories, of our physical bodies. But we do not derive our identity from our genome or from the body built according to that blueprint. By the time of adolescence, most of us, though still concerned about physical appearance, and in particular sexual attractiveness, have begun to shift our primary identity from our body to the thoughts and feelings that we associate with our minds.

Am I My Mind?

The mind is embodied in the connectivity of the central and autonomic nervous systems that determine our behavior, verbal and otherwise. By analogy with the genome, the map of neural connections is sometimes referred to as the connectome. The connectome for an individual can be called the menome (rhymes with genome).

Like our genome, our menome has Homo sapiens written all over it. And, like the genome, every menome is unique. Unlike the relatively stable genome, the menome is always changing.

As we’ll see, the menome isn’t the whole of selfhood any more than the genome is. Before going beyond the menome, however, let’s take a look at one of the mind’s most noteworthy features: its ability to witness itself. Could the witness be what we mean when we refer to our self?

Am I My Witness?

I am an other.
– Arthur Rimbaud

The witness is a neutral, observational function of mind. It should not be thought of as a little observer in our heads, but rather as a cognitive function of the nervous system, namely that of monitoring the body and the mind. By childhood’s end, no one lacks this faculty, though in some it seems more active than in others.

The elderly will tell you that although their bodies and minds have aged, their witness has not. Even in old age, it remains a youthful, detached, outspoken observer. Whether ignored or embraced, the witness continues to whisper the truth to us as long as we live.

For example, it’s the witnessing faculty that notices that we’re ashamed or prideful, or, possibly, losing our hair or our memories. Without judging us, it registers outcomes and thereby provides the evidence we need in order to manage.

The witness stands apart from the rush of worldly life, overhearing our thoughts and observing our actions. Although it has no rooting interest, it records the successes and failures, and the comings and goings, of the personal identities that we field in the game of life.

When the spectacle of life becomes intense, the witness often recedes into the background, but continues observing through thick and thin. So long as we remember that the witness is not an ethereal being in our heads—the ghostly “captain of our soul”—but a function, or an application, of the nervous system, it does no harm to personify it as a detached reporter of the spectacle that is our life.

The critical inner voice we sometimes hear scolding us is not that of the witness, which is indifferent to our ups and downs. Self-accusation is rather the result of internalizing others’ judgments. In contrast, the witness neither blames nor praises no matter what we do or what others think of us. While not given to displays of emotion, the witness is our closest ally. It may whisper rather than yell, but it speaks truth to power.

Some people identify the self as the witness, that is, they see themselves as that part of the mind that watches over the rest and reports its findings. While self-surveillance is essential to maturation, the witness is but one mental function among many. We sell ourselves short if we equate self with witness. The witness is no more the whole self than an app is the whole smartphone.

The signature application of mind is to fashion serviceable identities. That is, to put together a persona that, by virtue of its contribution to others, gets us into the game and, once we’re on the field, garners enough recognition to secure a position. I’ll develop this idea in a series of posts that follow.

A word about the umbrella title: Why Everything You Know about Your “Self” Is Wrong. While everything you know about yourself is certainly not wrong (in fact, it’s probably right), that’s not what the title says and not what it means. Rather, this series of posts focuses on common misconceptions regarding selfhood. The focus is not ourselves—our personal histories—but rather our selves—that is, what we mean by “me,” “myself,” or “I.”

The Evolution of Moral Models

[This is the 13th in the series Religion and Science: A Beautiful Friendship.]

When religion has committed itself to a particular science model, it has often been left behind as the public embraced a new model. That’s the position in which the Catholic Church found itself in defending Ptolemy’s geocentric model of the solar system against the simpler heliocentric model of Copernicus. It’s the situation in which supporters of “creationism”—and its offspring, “intelligent design”—find themselves today.

Many contemporary religious leaders do not make this mistake, although those who do get a disproportionate amount of attention. Religious leaders who cheerfully cede the business of modeling nature to science are no longer rare. Neither they nor the scientists who study these matters, many of whom are themselves people of faith, see any contradiction between the perennial wisdom embodied in the world’s religions and, say, Darwin’s theory of evolution, the geological theory of plate tectonics, or the Big Bang theory of the cosmos.

It may surprise some that the father of modern cosmology, Georges Lemaître, was a priest. When asked how he reconciled his faith and his science, he wrote:

The writers of the Bible were…as wise or as ignorant as their generation. Hence it is utterly unimportant that errors of historic or scientific fact should be found in the Bible….

Father Lemaître showed that Einstein’s general relativity predicted an expanding universe. Einstein, convinced that the universe was static, modified his theory to avoid this implication. Later, when the universe was found to be expanding as Lemaître had predicted, Einstein withdrew the modification, declaring it the biggest blunder of his life.

Tenzin Gyatso, the Dalai Lama, put it unequivocally in an op-ed in The New York Times, “If science proves some belief of Buddhism wrong, then Buddhism will have to change.”

That any of the currently accepted scientific theories could, in principle, be incorrect or incomplete is taken for granted by the scientific world. To insist, for example, that the theory of evolution is “just a theory” is only to state what every scientist knows and accepts. Of course, it’s a theory. What else could it be? But it’s an extremely well-tested theory and it makes sense to use it unless and until we have something manifestly superior. A society that rejects the theory of natural selection, Newton’s laws, or the standard model of elementary particle physics because they make no claim to being absolute truths shoots itself in the foot.

Just as religion finds itself challenging contemporary science when it identifies with discarded nature models, so it must expect to compete for hearts and minds with evolving social and political models when it clings to antiquated moral codes. Here the case is not as clear-cut as with most nature models because it is typically much harder to demonstrate the superiority of a new social, political, or moral model than it is of a new nature model. The evidence is often ambiguous, even contradictory, partly because shifting personal preferences play a much larger, often hidden, role. As everyone who has argued politics is aware, the “facts” cited by partisans in support of their policy choices are often as debatable as the policies themselves.

Like nature models, political, social, and moral models originate in human experience, and, as experience accumulates, they evolve. Typically, the models we’ve inherited from the past were formulated over centuries, if not millennia. One reason that religious models generally lag behind the emerging social consensus is that the morals espoused by religion have usually proven useful over long periods of time and have become deeply entrenched. Hence, the first impulse is a conservative one, and often takes the form of shaming or coercing non-conformists into toeing the line.

The predilections of rebellious youth notwithstanding, tradition is not always wrong. What are now seen as traditional values earned their stripes in competition with alternative precepts that lost out. But, in basing morality on scripture instead of evidence, people of faith betray a lack of faith in the findings of their own sages and prophets. Instead, why not see these prophets as futurists and judge their prophecies against the evidence? The question then becomes: Are their predictions confirmed or contradicted by experience? The answer may not be immediately apparent, but looking for an answer in a context that respects evidence is a lot more productive than invoking ambiguous scripture on one side or the other.

In this view, the term “moral” does not gain its legitimacy by virtue of its status as “received wisdom,” engraved in holy writ. Rather, the body of moral law is a prescriptive model of morality based on close observation, intuition, and extrapolation. Prophets like Moses, Buddha, Lao Tzu, Mo Tzu, Jesus, Mohammed, Sankara, and others are seen as perceptive moral philosophers with an uncanny knack for the long view.

As in science, virtually simultaneous, independent discovery of the same moral truths is not uncommon. Then and now, moral precepts can be understood as intuitive extrapolations based on empirical observations of cause and effect.

Take, for example, the commandment, “Thou shalt not kill.” It’s not hard to imagine that witnesses to tit-for-tat cycles of revenge killings concluded that “not killing” was the way to avoid deadly multi-generational feuds, and that someone—tradition credits Moses—packaged this discovery (along with other similar moral precepts) for his contemporaries and, unwittingly, for posterity.

From a modeling perspective, it’s plausible that all ten commandments were assembled from the combined wisdom of people who, drawing on the oral and written history of past and current generations, and bearing close witness to their own psychological and emotional dynamics, realized that certain individual behaviors ran counter to personal stability and undermined group solidarity, thereby making the community vulnerable to exploitation and domination by more cohesive groups. They labeled these practices “immoral,” anticipating that over time economic, psychological, social, and political forces would bring about either the elimination or relative decline of groups that countenanced them.

The Ten Commandments and other moral precepts are recorded in the world’s holy books. Distilled and refined through the ages, they constitute the moral foundation of human societies. If somehow they were to disappear from consciousness and we had to start over (think of William Golding’s novel Lord of the Flies), we would, by trial and error and with much bloodshed, gradually rediscover some of them from scratch and discard those that, in the meantime, circumstances had rendered obsolete.

Although some attribute moral principles to divine revelation, that’s just one explanation and it’s unverifiable. We may instead think of them as having been discovered in the same way that we discover everything else—through careful observation and verification. Having demonstrated their value in reducing suffering and/or in maintaining social stability, they were then elevated to special status, not unlike the process that results in the formulation and promulgation of successful science models, theories, rules, and laws.

A given rule of thumb can stand as shorthand for the whole body of observations and reasoning that undergirds it, in the same way that Newton’s laws encapsulate classical dynamics. The moral principles of religion represent an accumulation of proverbial injunctions that function as reminders and ethical guides.

As with all models, so with models of morality: close follow-up scrutiny may bring exceptions to light. Exceptions have long been sanctioned to the commandment “Thou shalt not kill”—to wit, capital punishment and warfare. But Moses may yet have the last word. As we move into the twenty-first century, the global trend to abolish capital punishment is unmistakable. Likewise, the inefficacy of war as an instrument of foreign policy is becoming clearer, and, as it does, the frequency of wars is diminishing (as documented by Steven Pinker in The Better Angels of Our Nature: Why Violence Has Declined).

In the next post, I’ll explain why I think ending the stand-off between science and religion is worthwhile, and suggest some of the elements of a deal that would enable them to cooperate going forward.

Religion and Science

[All twenty posts in this series have now been collected into a free eBook which can be downloaded at Religion and Science: A Beautiful Friendship? Thank you for your interest in this series.]

Why One God Is Better Than Ten

[This is the 6th in the series Religion and Science: A Beautiful Friendship.]

The most incomprehensible thing about the universe is that it is comprehensible.
– Albert Einstein

With the idea of god, early humans were imagining someone or something who knows, who understands, who can explain things well enough to build them. Now then, if God knows, then maybe, just maybe, we can learn to do what He does. That is, we too can build models of how things work and use them for our purposes.

The idea of modeling emerges naturally from the idea of god because with the positing of god we’ve made understanding itself something we can plausibly aspire to. There has probably never been an idea so consequential as that of the world’s comprehensibility. Even today’s scientists marvel at the fact that, if we try hard enough, the universe seems intelligible. Not a few scientists share Nobel laureate E. P. Wigner’s perplexity regarding “the unreasonable effectiveness of mathematics in the natural sciences.”

Comprehensibility does not necessarily mean that things accord with common sense. Quantum theory famously defies common sense, even to its creators. Richard Feynman is often quoted as saying, “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” But a theory doesn’t need to jibe with common sense to be useful. It suffices that it account for what we observe.

Our faith in the comprehensibility of the world around us mirrors our ancestors’ faith in godlike beings to whom things were intelligible. Yes, it was perhaps a bit presumptuous of us to imagine ourselves stealing our gods’ thunder, but Homo sapiens has never lacked for hubris.

Genesis says that after creating the universe, God created Man in his own image. The proverb “Like father, like son” then accounts for our emulating our creator, and growing up to be model builders like our father figure.

In contrast to polytheism, where a plethora of gods may be at odds, monotheism carries with it the expectation that a single god, endowed with omniscience and omnipotence, is of one mind. To this day, even non-believers, confounded by tough scientific problems, are apt to echo the proverbial “God works in mysterious ways.” But, miracle of miracles, not so mysterious as to prevent us from understanding the workings of the cosmos, or, as Stephen Hawking famously put it, to “know the mind of God.”

Monotheism is the theological counterpart of the scientist’s belief in the ultimate reconcilability of apparently contradictory observations into one consistent framework. We cannot expect to know God’s mind until, at the very least, we have eliminated inconsistencies in our observations and contradictions in our partial visions.

This means that the imprimatur of authority (e.g., the King or the Church or any number of pedigreed experts) is not enough to make a proposition true. Authorities who make pronouncements that overlook or suppress inconsistencies in the evidence do not, for long, retain their authority.

Monotheism is therefore not only a powerful constraint on the models we build; it is also a first step toward opening the quest for truth to outsiders and amateurs, who may see things differently than the establishment does. Buried within the model of monotheism lies the democratic ideal of no favored status.

To the contemporary scientist this means that models must be free of both internal and external contradictions, and they must not depend on the vantage point of the observer. These are stringent conditions. Meeting them guides physicists as they seek to unify less comprehensive theories in a grand “theory of everything,” or TOE. (A TOE is an especially powerful kind of model; I’ll say more about such theories later.)

There’s another implication of monotheism that has often been overlooked in battles between religion and science. An omniscient, unique god, worthy of the name, would insist that the truth is singular, and that it’s His truth. In consequence, there cannot be two distinct, true, but contradictory bodies of knowledge. So, the idea of monotheism should stand as a refutation of claims that religious truths need not be consistent with the truths of science. Of course, some of our beliefs—be they from science or religion—will later be revealed as false. But that doesn’t weaken monotheism’s demand for consistency; it just prolongs the search for a model until we find one that meets the stringent condition of taking into account all the evidence.

It’s said that it takes ten years to get good at anything. Well, it’s taken humans more like ten thousand years to get good at building models. For most of human history, our models lacked explanatory power. Models of that kind are often dismissed as myths. It’s more fruitful to think of myths as stepping stones to better models. We now understand some things far better than our ancestors, and other things not much better at all. But the overall trend is that we keep coming up with better explanations and, as more and more of us turn our attention to model building, our models are improving faster and our ability to usurp Nature’s power is growing. To what purpose?

Religion offers a variety of answers to this question and we’ll examine some of them in subsequent posts. Religion has also famously warned us to separate the wheat from the chaff, and we must not fail to apply this proverb to beliefs of every kind, including those of religion itself.

Religion and Science

[All twenty posts in this series have now been collected into a free eBook which can be downloaded at Religion and Science: A Beautiful Friendship? Thank you for your interest in this series.]

How to Keep Your Balance When There’s No Place to Stand and Nothing to Hold On To

[This is the 5th in the series Religion and Science: A Beautiful Friendship]

Know you what it is to be a child? … it is to believe in belief….
– Francis Thompson, 19th c. British poet

We don’t forget our first aha experience any more than we forget our first kiss. The difference is that we have some idea of what to expect from a kiss, but we don’t know what to make of an enlightening incident. The experience lingers in memory as something special, but since we can’t account for it, we’re apt to keep it to ourselves.

Only in my thirties did I realize that an experience I’d had in my teens was the analogue of that first kiss. About six years after discovering that our third-grade science book contained mistakes, it struck me that anything could be wrong. There were no infallible truths, no ultimate explanations.

In high school we were learning that scientific theories and models were not to be regarded as absolute truths, but rather taken to be useful descriptions that might someday be replaced with better ones. I accepted this way of holding scientific truth—it didn’t seem to undercut its usefulness. But I still wanted to believe there were absolute moral truths: not mere assumptions, but unimpeachable, eternal verities. My mother certainly acted as if there were.

But one day, alone in my bedroom, I had the premonition that what was true of science applied to beliefs of every sort. I realized that, as in science, political, moral, or personal convictions could be questioned and might need amending or qualifying in certain circumstances. The feeling reminded me of consulting a dictionary and realizing that there are no final definitions, only cross references. I remember exactly where I was standing, and how it felt, when I discovered there was no place to stand, nothing to hold on to. I felt sobered, yet at the same time, strangely liberated. After all, if there were no absolutes, then there might be an escape from what often seemed to me to be a confining social conformity.

With this revelation, my hopes for definitive, immutable solutions to life’s problems dimmed. I shared my experience of unbelief with no one at the time, knowing that I couldn’t explain myself and fearing others’ mockery. I decided that to function in society I would have to pretend to go along with the prevailing consensus—at least until I could come up with something better. For decades afterwards, without understanding why, I was drawn to people and ideas that expanded my premonition of a worldview grounded not on immutable beliefs, but rather on a process of continually improving our best working assumptions.

Science Models Evolve

It’s the essence of models that they’re works in progress. While nothing could be more obvious—after all, models are all just figments of our fallible imaginations—the idea that models can change, and should be expected to yield their place of privilege to better ones, has been surprisingly hard to impart.

Until relatively recently we seem to have preferred to stick to what we know—or think we know—no matter the consequences. Rather than judge for ourselves, we’ve been ready to defer to existing authority and subscribe to received “wisdom.” Perhaps this is because of a premium put on not “upsetting the apple cart” during a period in human history when an upright apple cart was of more importance to group cohesiveness and survival than the fact that the cart was full of rotten apples.

Ironically, our principal heroes, saints and geniuses alike, have typically spilled a lot of apples. Very often they are people who have championed a truth that contradicts the official line.

A turning point in the history of human understanding came in the seventeenth century when one such figure, the English physician William Harvey, discovered that the blood circulates through the body. His plea—“I appeal to your own eyes as my witness and judge”—was revolutionary at a time when physicians, rather than trusting their own experience, accepted on faith the Greek view that blood was made in the liver and consumed as fuel by the body. The idea that dogma be subordinated to the actual experience of the individual seemed audacious at the time.

Another milestone was the shift from the geocentric (or Ptolemaic) model, named after the second-century Alexandrian astronomer Ptolemy, to the heliocentric (or Copernican) model, named after the sixteenth-century Polish astronomer Copernicus, whom many regard as the father of modern science.

Until five centuries ago, it was an article of faith that the sun, the stars, and the planets revolved around the earth, which lay motionless at the center of the universe. When the Italian scientist Galileo embraced the Copernican model, which held that the earth and other planets revolve around the sun, he was contradicting the teaching of the Church. This was considered sacrilegious and, under threat of torture, he was forced to recant. He spent the rest of his life under house arrest, making further astronomical discoveries and writing books for posterity. In 1992, Pope John Paul II acknowledged that the Roman Catholic Church had erred in condemning Galileo for asserting that the earth revolves around the sun.

The Galileo affair was really an argument about whether models should be allowed to change without the Church’s consent. Those in positions of authority often deem acceptance of their beliefs, and with that the acceptance of their role as arbiters of beliefs, to be more important than the potential benefits of moving on to a better model. For example, the discovery of seashells on mountaintops and fossil evidence of extinct species undermined the theological doctrine that the world and all living things were a mere six thousand years old. Such discoveries posed a serious challenge to the Church’s monopoly on truth.

Typically, new models do not render old ones useless; they simply circumscribe their domains of validity, unveiling and accounting for altogether new phenomena that lie beyond the scope of the old models. Thus, relativity and quantum theory do not render Newton’s laws of motion obsolete. NASA has no need for the refinements of quantum or relativistic mechanics in calculating the flight paths of space vehicles. The accuracy afforded by Newton’s laws suffices for its purposes.
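To get a rough sense of just how small the relativistic correction is at spacecraft speeds, here is a back-of-the-envelope estimate (my own illustration, not a NASA calculation, assuming a speed of about 11 km/s, roughly Earth’s escape velocity):

\[
\gamma \;=\; \frac{1}{\sqrt{1 - v^2/c^2}} \;\approx\; 1 + \frac{v^2}{2c^2}
\;\approx\; 1 + \frac{(1.1\times 10^{4}\ \mathrm{m/s})^2}{2\,(3.0\times 10^{8}\ \mathrm{m/s})^2}
\;\approx\; 1 + 7\times 10^{-10}.
\]

A correction of less than one part in a billion is swamped by the other uncertainties in a trajectory calculation, which is why Newtonian mechanics serves.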

Some think that a truth that isn’t absolute and immutable doesn’t deserve to be called a truth at all. But just because models change doesn’t mean that anything goes. At any given time, what “goes” is precisely the most accurate model we’ve got. One simply has to be alert to the fact that our current model may be superseded by an even better one tomorrow. It’s this built-in skepticism that gives science its power.

Most scientists are excited when they find a persistent discrepancy between their latest model and empirical data. They know that such deviations signal the existence of hitherto unknown realms in which new phenomena may be discovered. The presumption that models of nature are infallible has given way to the humbling expectation that their common destiny is to be superseded by more comprehensive and accurate ones.

Toward the end of the nineteenth century, many physicists believed they’d learned all there was to know about the workings of the universe. The consensus was that between Newton’s dynamics and Maxwell’s electromagnetism we had everything covered. Prominent scientists solemnly announced the end of physics.

There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.
– attributed to Lord Kelvin (1900)

Then a few tiny discrepancies between theory and experiment were noted, and as scientists explored them, they came upon the previously hidden realm of atomic and relativistic physics, and with it the technologies that have put their stamp on the twentieth century.

Albert Einstein believed that the final resting place of every theory is as a special case of a broader one. Indeed, he spent the last decades of his life searching for a unified theory that would have transcended the discoveries he made as a young man. The quest for such a grand unifying theory goes on.

In the next post in the series, I’ll consider some distinguished models of religious provenance, and explain why I think they needn’t duck evidentiary tests any more than science models do.

Religion and Science

[All twenty posts in this series have now been collected into a free eBook which can be downloaded at Religion and Science: A Beautiful Friendship? Thank you for your interest in this series.]