
How We Fool Ourselves

Knowing ourselves must surely include knowing not only that but how much our very sense of ourselves is a “mythic reality” by which we convince others of things about ourselves by first convincing ourselves. In the dance of deception, the evolutionary arms race means that we all get pretty good at figuring out when someone is being a “fake,” a “poser,” or any of a number of ways to be less than honest and genuine. This is also why trust and reputation are so important, as being taken as good as our word matters not only to being taken seriously, but to being paid attention to at all. These are also important for the confidence game we all play to make ourselves better than we were, by first acting like we already are. Unfortunately, the best way to do this is by fooling ourselves first. Even if this may be, in some sense, epistemically pretty dangerous, it also means we can more readily be taken to be honest and forthright. Having fooled ourselves first, we actually believe it. It is dangerous mainly because we render ourselves ignorant or deceived, so it may turn out to be a major barrier to self-knowledge, especially if we do it very often (and how would we know?). It is also ethically problematic. One doesn’t have to lie to fool someone, and we are often taken to be, and feel ourselves to be, perfectly honest and forthright, when we are actually just as much a dupe as those we have (now unknowingly) duped.

For Erik Erikson (Young Man Luther), becoming an adult means reconstructing the past in a way that leads to the present, accounting for one’s behavior as if it were intentional. This is how it is made intelligible, rather than the childish, or perhaps more commonly adolescent, account where, when asked why you did something, the only response is “I dunno.” This doesn’t mean falsifying the past, but we must use fictional and imaginative power to “make sense” of the facts as we remember them. Without this, there can be no sense of a unitary self behind one’s behavior. External, objective events don’t need to occur in the form of a story. But this means, from the empirical point of view, that narratives are always selections, fabrications, and constructions, and to that extent are always fictional. In Opa Nobody, Sonya Huber tries to make sense of her own compulsive social activism by researching the life of her grandfather, who was an anti-Nazi activist in the 1920s and ’30s. She is very careful to document as much as she can with interviews and historical documents, but finds that she cannot make sense of it all without adding the elements she needs to shape events into a story. What is troublesome about her book is that the very virtue of her clarity about what is documented and what is not regularly interferes with the narrative, making it far more difficult to follow, to make sense out of it all. Moreover, the very motivation for writing it is not so much to understand the causal effects of the events surrounding her grandfather’s life on her own, but to understand the meaning of her own life.

Stories can always be told in different ways by different people, and sometimes by the same person. I often entertained students by telling them a story of my arrival at our college as a descent into the hell of a third-rate college in a cultural desert, blindsided by a tenure denial, to vouchsafe the birth of my daughter, and then retelling the same story as my triumphant arrival at a post that would connect me to a generation of eager young minds, and enable me to pursue the interdisciplinary teaching and scholarship that would garner professional accolades and speaking engagements all over the world. Both stories are true. We learn how to tell our stories by first being told stories about ourselves by others, like the story my mother so often told about the early appearance of my first full sentence at age two on a street in Minneapolis: “Big car make Johnny go boom.” Stories may include actual events, or fail to do so, and there is a facticity that constrains falsehood-telling in stories, including the ones told by historians, without which history makes little sense. But narrative truth is not always historical truth. Narrative truth is not about providing external descriptions of the world to be judged by their veridicality; it is judged by its verisimilitude. Robert Coles, in his work on the moral imagination (The Call of Stories), highlights the integrative functions of stories in healing what is sick or broken, bringing together what is shattered, helping us cope with stress, and propelling movement toward fulfillment and maturity. Even the Freudian concept of unconscious intention can be understood not as the causal power which explains an event, but as the retelling of an otherwise incomprehensible action as if it were intentional. As an interpretive principle, it may simply be useful to act as if there are no accidents. Why do you think Freud won the Goethe Prize for literature, rather than a Nobel for scientific discovery? No one may ever really know what Opa Nobody’s intentions were, but we need to impute what makes sense of his actions.

There is a problem, of course, in distinguishing between the narrative latitude that produces greater verisimilitude and the violation of what historians (and by extension, those trying to hold to the historical truth) call “facticity.” There is an honesty claimed in calling autobiography and memoir non-fiction. There is an implied contract with one’s readers which is violated by knowingly inventing things, just as there is an expectation in normal conversation that using the personal pronoun “I” means you aren’t just making it up (though one can write first-person fiction). But such understandings are rendered problematic by the likelihood that we may simply have deceived ourselves first. This is a problem especially, though not only, when we are confident in our beliefs, part of the confidence game in which truth is not vouchsafed by confidence and may, indeed, be betrayed by it. There are a number of neurological deficits that result in confabulation, in which a patient may invent answers to questions, or accounts of events, fully believing themselves to be telling the truth, like a stroke victim detailing his attendance at a recent conference when he has not left the hospital. Rather than admitting ignorance, even normal people can make up an answer to a question or an account of events that is not true, and express it with conviction. In a detailed study of such cases, William Hirstein, in Brain Fiction, argues that there is a separation, even at the level of brain function, between the capacity to invent a plausible-sounding story and a normal checking process that allows us to recognize that the story is a fantasy. Of course, what sounds plausible under a veil of ignorance may not be so plausible to one better informed. Nevertheless, before we start ranting about adolescents or “emerging adults” talking “out their asses” about things they don’t really understand, we’d best be aware of the possibility that we all do it, more often than we think. As we saw in our discussion of introspective failure, normal people regularly confabulate accounts of mental processes of which they are unaware. A confabulation is merely an ill-grounded epistemic claim that its author does not know is ill-grounded. We all can and do vary the strength of our epistemic criteria, say between friendly talk and testifying in a courtroom.

There really is no final “fact checking” that makes some memories more reliable than others. Indeed, the evidence suggests that for memories that are important to us, and therefore more frequently recalled, our greater confidence may actually betray greater unreliability, a rather troubling paradox. Every time a memory is brought back to consciousness, it gets reconsolidated in the context of recall. Memories that are repeatedly reconstructed over time always run the risk of being increasingly altered. Consider the research on so-called “flashbulb memories” of where we were and what we were doing during major public events, like the airplanes crashing into the World Trade Center towers on September 11, 2001. It turns out we get all sorts of things wrong in these memories, even getting the emotional experience wrong 58% of the time only a year later. The more often we recall and retell an experience, the less accurate it may be, so perhaps “flashbulb” memories are held with more confidence only because of how often they have been retold, like the “fishing stories” that get better every time. We all have our repertoire of favorite stories. On some level we know we are making them better stories each time we tell them; unfortunately, the stories for which this may be most true are the ones most central or important to our beliefs about ourselves, and hence also remembered with the greatest confidence, though their reliability may be all the more regularly eroded. Memory is all about reconstruction. It is not a record.

Eyewitness testimony, which was once the gold standard in court, is troublesome. Your memory is adaptive. Information learned after the original event gets incorporated into your story of the event. Elizabeth Loftus (Eyewitness Testimony) has documented much of this experimentally: a witness asked “how fast were the cars going when they came into contact?” rather than “...when they smashed into each other?” is not only likely to report slower speeds, but is less likely to remember the screech of metal and broken glass at the scene a week later, even if neither witness saw these things initially. Do we retain a core of “the facts” and reconstruct the rest? Or is there a darker possibility that even “the facts” can get altered? Loftus herself discovered one of her own childhood memories to have been false. She once convinced Alan Alda that he disliked hard-boiled eggs because he had gotten sick on them as a child, a memory that had been implanted with tactical suggestions. Interviewed ten years later, students who had reported an “identity crisis” in young adulthood will often say things like, “What crisis? This is the kind of person I am, and I was always headed this way.” What makes something a story is that there is some kind of conflict, followed by a resolution of the climactic tension. Was the original “crisis” what happened, or is it about how we tell a story? Their current “story” likely still includes some dramatic tension, but it is no longer a young adult crisis. Indeed, in the current story, there was no crisis. Ask any developing individual to read from a journal or from emails of a few years back about something that seemed a major crisis at the time, and they will often respond with something like “oh my god, I thought that was a crisis? It is nothing compared to what I am going through now.” So what will they think in five years about their current difficulties? Even what seemed the “gist” of an event might look different in retrospect. Our current story is always a rewriting of our past, often to justify our present life, just as each generation rewrites its history. So I think that even what seem to be our most reliable memories may be more unmoored than we think. Even if multiple retellings of a story make us increasingly confident, they are likely to result in a story that has been rewritten multiple times. Our very confidence may be a sign of the unreliability of the memory, however important or central to our identities we take it to be. Bodily connections might provide better moorings of our memories in our pasts, as Odysseus is identified by his bodily scars. So might what other people remember, when it can be verified by concrete artifacts. Others might thus be involved in the “checking” of even our autobiographical memories, as Doonesbury’s Zonker is corrected that his memory of Woodstock came from seeing the movie, as he was not actually there.

We are generating accounts of our behavior almost constantly with the left-hemisphere language-generation system Michael Gazzaniga (The Mind's Past) calls the “interpreter.” Along with Roger Sperry, Gazzaniga was one of the original researchers on “split-brain” patients, who have had the communication between the two hemispheres severed. One of these patients, whose linguistic left hemisphere has been shown a picture of a chicken claw, selects a picture of a chicken with his right hand (controlled by the left hemisphere); simultaneously shown a snow scene to the right hemisphere, he selects a picture of a shovel with his left hand (controlled by the non-linguistic right hemisphere). Asked why he pointed to the shovel, the “interpreter” generates a consistent account: “That’s because you need a shovel to clean out the chicken shed.”

Gazzaniga connects the left-hemisphere “interpreter” to lying and self-deception. He asks about the usefulness of a left-hemisphere “spin doctor” when we are such lousy liars: anxious, guilt-ridden, and sweaty. The autonomic signs of lying are connected to the very orbitofrontal processes that normally provide the “checks” on the explanations produced by the “interpreter,” raising doubts that, in normal circumstances, prevent those explanations from becoming solidified as beliefs. If the “interpreter” is what normally keeps our personal story together, it may only do so by virtue of our learning to lie to ourselves. A central function of the orbitofrontal cortex is the application of standards -- social, ethical, and religious -- leading us to feel revulsion at actions or emotions that do not rise to those standards. Our normal state, in which we might say we do not know or do not remember something, appears to be due not to a lack of potentially false accounts being generated, but to their active suppression by checks upon potential errors. Knowing when we do not know something may only come after generating and rejecting a series of potential answers. With damage to or inhibition of some of these frontal regions, we do not produce emotions strong enough to rein in inappropriate impulses, the incarnation of our commonsense notion of conscience. Frontal patients often lose an emotional component to thought, showing skin-conductance responses lower than those needed for the negative “somatic markers” (Damasio, Descartes' Error) which might otherwise inhibit the formation of an intent or action. While my professional successes might suggest an intact left orbitofrontal cortex, my relative comfort with social behavior that others might consider outrageous or offensive would be consistent with my being hit over my right brow with a baseball bat at age 14. With damage to my right orbitofrontal cortex, I might well exhibit the same symptoms, belied by the capacity for social outrage that I made into the lifestyle of a flamboyant professor who regularly pushed students beyond their comfort zones. Nevertheless, as William Hirstein (2005) suggested, even normal intelligence and foresight might rely on confabulation and self-deception to keep at bay the painful truths of mortality, our insignificance in an immense universe, and the potential for tragedy. As Ernest Becker said over 50 years ago, The Denial of Death is a central fact of human psychology.

Confabulators set their thresholds for belief too low, or cannot do the appropriate checks because of brain damage. But the demands of truth and of the usefulness of a belief may conflict even in the normal, undamaged brain. Even self-boundaries can vary. Some classic research by Ruben Gur and Harold Sackeim showed conscious recognition of one’s voice contracting after failure (where we can fail to identify a recorded voice as our own) and expanding after success (where we may falsely identify other voices as our own), despite unconscious recognition (as shown by galvanic skin response). Clearly, an important part of what scientific training does is to discipline and raise the doubt level, to voluntarily raise one’s thresholds, at least in appropriate contexts. The caution reserved for professional conferences and published work is not only likely to be high, but substantially aided by a community of people double-checking. Scientists are regularly frustrated when laypersons or journalists ignore their qualified answers and oversimplify, a problem found in secondary sources, and even textbooks, where less advanced levels of understanding may require simplification. Such considerations may be crucial for political and policy discussions, particularly where time-limited crises may necessitate acting on the basis of less-than-certain knowledge. In such cases full scientific caution may well constitute “pathological doubt,” and action may need to be guided by non-empirical considerations which can still be subject to rational evaluation.

Still, it is deceiving others that has the clearest value in a socially interdependent world, so much so that authors like Nicholas Humphrey have argued that the arms race of deception and its detection may have provided powerful selective pressure for the evolution of intelligence. Hence, we are likely to be pretty good at it. Why self-deception? Because, as those “practiced at the art of deception” well know, it is far easier to deceive others if you deceive yourself first. Despite the epistemic costs of all the ways we are rendered unknowing (or wrong) about ourselves and others, there are both the obvious biological advantages to survival and reproduction, and the psychological benefit of feeling better and being happier. This all makes “knowing ourselves” nightmarishly complex, in particular because of all the ways our self-deceptions may systematically reduce the extent to which “knowing ourselves” is even to our advantage. While acting on the basis of what is untrue can lead to unpleasant consequences, as long as the cost-benefit balance weighs in our favor, the strategy is a winner. We give off fewer cues of intentional deception, reduce the cognitive load by being unaware of part of the truth, and have an easy defense against detection in the denial of intent. The evolutionary logic of self-deception is detailed extensively by Robert Trivers in The Folly of Fools, in which he categorizes its varieties, including self-inflation, derogating others, moral superiority, illusions of control, and false internal narratives. Part of us may still register accurate assessments of self and other, so there may often be a rather more complex self, separated into public and private aspects that may interact.

Self-deception is not a contradiction in terms because our “full actual selves,” to use Owen Flanagan’s (Consciousness Reconsidered) pregnant concept, are composed of many parts. We are all well aware, for example, that our self-represented self probably cannot be fully held in consciousness all at one time, which is why we may take a few days to make an important decision. We can also become aware of ways in which our behavior is not consistent with our self-representation, hence the commonplace experience of not feeling or behaving “like myself today.” We all know that we can sometimes actively decide to “not think about something” (suppression) while we focus on another task, as I did when I returned and finished leading a research seminar just after learning that my father had suddenly died. Self-deception may include a variety of ways in which we preferentially exclude true information about ourselves or about reality more generally, in different degrees of consciousness, for varying lengths of time, from momentarily having something “slip your mind” to forgetting it entirely. Not having attended to or encoded something in the first place is likely, of course, to be far more efficient, and we may get as good at it as we do at pitching unopened junk mail into the trash. Of course, as we also know, even when deception cannot be detected against background behavior, there is a range of behavioral cues that may be generated by the degree to which our bodies betray consciousness of our deception. Nervousness, exertions of control belied by overacting, overcontrol, rehearsed responses, and displacement are among the ways we may judge others or ourselves as being less than genuine. The critical cues are the ones due to cognitive overload, which even Freud pointed out was one of the costs of our defense mechanisms, reducing creativity and spontaneity.

As Eduardo Giannetti points out in Lies We Live By, while self-deception may often be a curse, it is also a source of the commitments we make to futures we cannot know. In this may reside some of the greatest accomplishments of our species, as well as “the savage, inexplicable hope which feeds us and sustains our lives” (Giannetti). It may also be important to the personal commitments we make to each other, enabling us to obtain greater goods than could be attained without it, and intimacies that may be the best source of genuine self-knowledge, and of the possibilities of embracing the better angels of our nature. While kin selection and reciprocity may provide some understanding of moral relationships, even non-zero-sum reciprocity (Wright, Non-Zero) cannot account for the kind of good provided by deep friendships or life partnerships where help is given when there is nothing to be gained. This is a commitment strategy involving a kind of “futures trading” that includes commitments to future actions which would not then be rationally self-interested. Under such circumstances we might realistically hope to get help when we need it most, when we are sick, alone, or poor, rather than only when we are able to reciprocate. This is the obverse of the logic of Mutual Assured Destruction that might well have kept the world from annihilation during the Cold War generation. Call it Mutual Assured Benefit.

Why would we believe that our partners would not, when push comes to shove, do what is rationally most self-interested and simply cut us loose? We believe that they will act in ways beyond self-interest only if the initial signals of commitment are accompanied by the irrational displays of behavior and emotion that help build our assurance that they would actually follow through. Of course, it might also be in their interest to deceive us about such commitments by themselves being self-deceived, something which our own strategies might well take into account, and they ours. Given that such commitments can provide goods not otherwise obtainable, there may be selective advantages for those able to give and receive them, which would provide an evolutionary shaping of the capacity for passionate, emotional commitment. It also provides the complicated dance of deceptive versions of such expression, of the detection of such deception, and of self-deception, which makes our relational lives so poignantly baffling. Nevertheless, our beliefs about the possibility of such commitments are what make them possible; without the ability to give this kind of deep trust, one cannot get it. Fortunately, most of us have proof of such commitments in our parents, if not in the love they share with each other, then in what they give us with no expectation of return. No wonder our early attachments are so predictive of early adult intimacies. People not socialized with experiences of the trustworthy, or who have repeatedly had their trust betrayed, may not be capable of such commitments. There is a hell for children. It doesn't go away in adults.
