UNDERSTANDING THE HUMAN MIND
Drawing on current research in anthropology, cognitive psychology, neuroscience, and the humanities, Understanding the Human Mind explores how and why we, as humans, find it so easy to believe we are right—even when we are outright wrong. Humans live out their own lives effectively trapped in their own mind and, despite being exceptional survivors and a highly social species, our inner mental world is often misaligned with reality. In order to understand why, John Edward Terrell and Gabriel Stowe Terrell suggest current dual-process models of the mind overlook our mind’s most decisive and unpredictable mode: creativity. Using a three-dimensional model of the mind, the authors examine the human struggle to stay in touch with reality—how we succeed, how we fail, and how winning this struggle is key to our survival in an age of mounting social problems of our own making. Using news stories of logic-defying behavior, analogies to famous fictitious characters, and analysis of evolutionary and cognitive psychology theory, this fascinating account of how the mind works is a must-read for all interested in anthropology and cognitive psychology.

John Edward Terrell is internationally known for his pioneering research and publications on human biological and cultural diversity, social network analysis, human biogeography, and the peopling and prehistory of the Pacific Islands.

Gabriel Stowe Terrell is studying industrial relations with an emphasis on conflict resolution techniques, organizational behavior, and labor history.
“Why are humans so good at self-deception? What does that remarkable ability mean for our future on this planet? Terrell and Terrell provide a brilliantly provocative and honest assessment of the human condition and mind. Weaving insights from various scientific disciplines, from anthropology to neuroscience, they compellingly argue that evolution has bestowed humans with a handful of advantages, advantages that imperil humanity as a whole. This book is a remarkable achievement given both the breadth and complexity of ideas contained within, and their fidelity to those ideas in making them digestible and resonant with non-experts. It is a must read.” — Lane Beckes, Ph.D., Associate Professor of Neuroscience and Psychology, Bradley University

“Explorers Terrell and Terrell take us on a guided tour of our own minds and the marvelous advantages and hidden uncertainties of our human commitment to social life. Informed by contemporary psychology and neuroscience, the Terrells’ collaboration offers original insights and perspectives on human nature and the future of our species illustrated using familiar, everyday experiences and stories. As a practicing psychologist, I think readers will benefit personally from the wellspring of meaning that flows from “knowing thyself” in this illuminating way. The compelling conclusion asks us to consider “Do I need to do something? Should I look again?”—to which I would add “Should I read this book?” Yes, yes, and again yes.” — Thomas L. Clark, Ph.D., psychologist in private practice, Tallahassee FL

“Terrell and Terrell draw upon deep time, trans-oceanic cultural research, and inter-generational cooperation in this bold, vivid work on self-persuasion and delusion. Brain function, social process, and ideology come together here in evolutionary perspective as the same topic in fresh, smooth prose, recruiting familiar characters from fiction. This engaging but humbling study contends that human creative thinking—including selective perception, logical reasoning, and dreaming—is also dangerous thinking. It urges us to recheck our own accruing presumptions, showing why it’s vital we do.” — Parker Shipton, Ph.D., Professor, Department of Anthropology, Research Fellow, African Studies Center, Boston University
UNDERSTANDING THE HUMAN MIND
Why You Shouldn’t Trust What Your Brain is Telling You
John Edward Terrell and Gabriel Stowe Terrell
First published 2020 by Routledge, 52 Vanderbilt Avenue, New York, NY 10017 and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2020 Taylor & Francis

The right of John Edward Terrell and Gabriel Stowe Terrell to be identified as authors of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
A catalog record for this title has been requested

ISBN: 978-0-367-85580-2 (hbk)
ISBN: 978-0-367-85578-9 (pbk)
ISBN: 978-1-003-01376-1 (ebk)

Typeset in Bembo by codeMantra
QUESTION
Why are all of us so good at believing what we want to believe (even when we don’t know we want to believe it)?
CONTENTS
List of Figures
Acknowledgements: The Powers of More than Two
3 Human Failings in Reasoning: Why Do You Trust Yourself?
Index
FIGURES
ACKNOWLEDGEMENTS
The Powers of More than Two
We all know that two heads can be better than one. We ourselves are also quite sure that two heads are often not enough to get anything worth doing done well. We thank the staff and cuisine at Willalby’s Cafe, 1351 Williamson St, Madison, WI for facilitating our weekly writers’ breakfast on Saturday mornings while the two of us were crafting this book. Others have given us sustenance of another sort, namely their time, attention, and helpfully critical thoughts. We thank in particular Lane Beckes, Emma Cieslik, Tom Clark, Jane Connolly, Helen Dawson, Mark Golitko, Jim Koeppl, Scott Lidgard, Jonathan Maynord, Francie Muraski-Stotz, Neal Matherne, Esther Schechter, Pat Shipman, Parker Shipton, Uliana Solovieva, and John Stowe. It’s true what they say. It takes a community to raise a child. It also takes friends to write a book.
GETTING STARTED
The authors of this book are father and son. We suspect such a writing partnership is unusual enough to call for explanation. Let us begin, therefore, at the beginning.

John and Jane, his twin, were born five minutes apart and a month premature in Washington, D.C., in 1942. John nearly died shortly after birth, but he managed to pull through more or less unscathed. He has never let Jane forget she is older than he is. Gabriel (Gabe) was born to a single mother in Lambaré, Paraguay, in 1990. The following year, John went down to Asunción, the capital city, to adopt Gabriel when he was 13 months old. That is a story in itself, but not one for here and now.

John is seen by more than a few nowadays as a feisty senior citizen. Gabe qualifies as a millennial, but contrary to the stereotype of his generation as lazy, self-absorbed, and without aspirations, he is definitely not afraid to challenge conventional wisdom. Regardless of how others see us, we both like to see ourselves as two thoughtful but kindly guys. Now it happens that beyond being father and son, we actually like talking to one another.

By profession, John is an anthropologist who has been working in the South Pacific ever since he went out to New Zealand as a Fulbright Fellow in 1965. He has been trying to figure out what it means to be human ever since he took his first college anthropology course four years earlier. As a South American adoptee growing up in Wisconsin, where the old dictum blood is thicker than water is still believed by many to make sense, Gabe has long known that for those of us living north of the Rio Grande, adoption is often seen as dangerous and a second-rate way of becoming a family (Terrell and Modell 1994).

Given these biographical facts, is it any wonder that the two of us might want to write a book together? But why this book? Chiefly for two reasons.
First, Gabe’s mother in Paraguay tried to raise him for the first five and a half months of his infancy. Thereafter, however, she placed him in foster care in Asunción. He spent the next seven and a half months living mostly in a crib in a room filled with other babies in neighboring cribs. We have no firm idea what life was like for him all those months in a crib beyond feeding and diaper changes. We do know that he was evidently out of his crib so rarely that when John and he were introduced to one another for the first time in a hotel in Asunción, his feet were puffy and showed scant evidence of his ever trying to walk. We do know he only learned that vital set of motor skills after reaching his new home in Wisconsin on November 1st.

Now without trying to go into clinical detail, the first year of life for any infant is a critically formative time. This is when babies begin to master the emotional and cognitive fundamentals of human social life. To be confined to a crib in a room filled with cribs may not be intentionally cruel, but social isolation of this sort can put any infant at risk. Hence one of the reasons we decided to write this book as a partnership is that by doing so we hoped we could find out more about what anthropology, psychology, and the cognitive sciences have discovered about why someone like Gabe had to struggle at times—as a child in grade school, and later, too, as a teenager—to know how much to trust the words, deeds, and reactions of other people. One reason for our writing partnership, therefore, is that we were able to take a journey together of self-discovery about what it means to be human.

Our second reason is less personal. In his provocative book Powers of Two: Finding the Essence of Innovation in Creative Pairs (2014), Joshua Wolf Shenk has sought to document how, contrary perhaps to popular wisdom, the intellectual chemistry of two minds working together can spark a level of creativity and innovation far above and beyond the ordinary. He may well be right about the remarkable creativity of such dynamic pairs as John Lennon and Paul McCartney, or Marie and Pierre Curie, but as we explore in the chapters to follow, how Shenk accounts for the apparently elevated chemistry between such famous pairs of high achievers strikes us as simplistic. After all, Shenk himself acknowledges that 2 is not a magic number (2014, xxii–xxiii).

Therefore, here is a hint about what follows in this book. All of us who are human struggle to get our thoughts, feelings, and hopes outside the confines of our own bony skulls. Elaborate explanations aside, it is a no-brainer to think that having at least one other person to talk to who is actually ready, willing, and able to listen to us and respond both positively and productively is a human connection that is literally priceless. What this ancient truth meant for the two of us is easily said. Although John took on the job
of writing down most of what there is to read in this book, almost all of the thinking behind these words was done as a shared enterprise between the two of us, father and son.
Works Cited
Shenk, Joshua Wolf (2014). Powers of Two: Finding the Essence of Innovation in Creative Pairs. New York: Houghton Mifflin Harcourt.
Terrell, John Edward and Judith Modell (1994). Anthropology and adoption. American Anthropologist 96: 155–161. doi:10.1525/aa.1994.96.1.02a00130
1 HOW YOUR MIND WORKS
Travels in Wonderland
As we have often done, the two of us were having breakfast on Saturday at our favorite morning place on Williamson (“Willy”) Street in Madison, Wisconsin. It was the middle of March several years ago. Our conversation went something like this, although we won’t swear we can remember it now, word for word.

“But, Pop, do you think I should trust him?”

“How should I know, Gabe? Isn’t this the question we all struggle with every time we have to work with anybody to get something done?”

“What do you mean?”

“Well, let’s face it: we may be the smartest animals in the world, but evolution hasn’t made us mind-readers. Like every other creature on Earth, however intelligent or not, all that any of us can do when we want to know for sure if others are telling us the truth is watch and listen to them. We can try to make an educated guess about what they are really thinking and intending to do. But that’s as far as it goes.”

“Yeah, I agree. But I think there’s more to it than that.”

“I think you just lost me, Gabe.”

“Don’t you recall that last week, you were telling me how hard it is, even for someone like you, who trained as an anthropologist, to feel confident that they have learned enough about the ways of people born and raised in another society to start seeing the world the way they do?”

“I stand by that.”

“You quoted what one of your professors back in graduate school said about what anthropologists do to decide if they should trust their insights into how other people see the world. They ask themselves two questions. Can you finally explain things and events the way they do? Can you act in ways those people see as right and proper?”
“Yes, that was Ward Goodenough at the University of Pennsylvania, and yes, that’s basically what he said” (Goodenough 1967; Terrell 2000).

“O.K., then here’s my point. If trusting other people is not about the truth of what they are thinking and saying, but instead only about whether we believe what they are saying and doing, what about ourselves? Should we even trust what our own brains are telling us?” (Carruthers 2017).

“Well, we know Freud didn’t seem to think so. Yet maybe the better question is can we trust ourselves? Is there anything about human nature and the brain that we should be worried about?”

That was back then. This is now. Both of us have since learned that the right answer to this last question is decidedly “yes.” We have written this book because the more we explored this timeless question, the more we saw that there are indeed good reasons not to trust even what your own brain is telling you about the world around you and about your travels through it from birth to death. One reason is obvious once you hear it.
We All Live in a Yellow Submarine

A scull is a type of oar used to propel a boat through the water. The same word can also mean the boat itself, either a rowing boat designed for one person or a light racing boat, or shell, with one, two, or four rowers, each with two oars, one in each hand. Alternatively, of course, a skull is the bony casing surrounding the brain that guides its owner through life. Although these two words—scull and skull—sound the same while obviously meaning two quite different things, we find it hard not to think of them both when we are thinking about what it means to be human.

We have come to see the human skull, the cranium, as like a classic Old Town 3-person touring canoe, or perhaps the gondola of a hot air balloon floating through the air a thousand feet off the ground. Or more realistically, not as a touring canoe or a balloon’s basket but rather as a small yellow submarine with a crew of three inside named Sherlock, George, and Alice (we will have more to say about these three before we end this chapter).

Why a submarine? Because this select crew of three is hidden away and largely cut off from the world. Locked inside the confines of the human skull, they can only learn about what is happening outside their submarine from what is being piped in through their body’s senses, serving, figuratively speaking, as their periscope, sonar equipment, and radio transceiver.

Why are we telling you this? Because we are willing to bet you have never thought about your brain this way. We want to change that. Why? Because this is a book about human nature and the brain. History shows that what humans are like—or should be like—as a species has always been controversial. At times, disagreements about human nature have developed into deadly conflicts,
lynchings, and tragic episodes of mass genocide. We do not think we need to argue that the human brain has more than a little to do with what makes us not only one of the Earth’s most successful species but perhaps also one of its most deadly. But what exactly is it about that juicy mass of brain tissue up there on top of our shoulders that lets us behave in such conflicting ways? And more to the point, why are the two of us so convinced that you should not trust even what your own brain is telling you?
Know Thyself

There is nothing new about asking what it means to be human. Consider the Ancient Greek aphorism γνῶθι σεαυτόν, “Know thyself.” This sounds like wise advice, but what, pray tell, is a self? How do you get to know it? Similarly, in 1637, the great French philosopher René Descartes famously observed je pense, donc je suis, “I think, therefore I am” (Birhane 2017). This is one of the foundational ideas of modern Western philosophy, although it is usually rendered not in French but rather in Latin as cogito, ergo sum. It is comforting to be told in any language that we actually exist, and we are not merely a figment of someone else’s frenzied imagination. Yet what does this famous Frenchman’s observation tell us? Aren’t we left wondering what it means to think?

Scientists in recent decades have constructed laboratory machines that are nearly magical in how well they help us see past all the flesh and bone tissues surrounding the living human brain to discover what is happening physiologically therein when someone is asleep, awake, or daydreaming. These machines are a lot better than prying open a skull to take a look inside (Godwin et al. 2017), but it is not enough to see what is going on up there, however you decide to do it. The challenge lies in making sense of what you are seeing.1

Questions and concerns such as these are what got both of us engaged in writing this book. We are convinced that nobody can understand how and why we behave the way we do as individuals and societies without knowing how the human brain works. Furthermore, nobody can understand how their brain works without also understanding how humans evolved biologically to become a strikingly social species (most species on earth are not social at all beyond getting together at least long enough to reproduce and, if need be, raise their young). Finally, and most important, nobody can understand why we humans are so often our own worst enemies without also understanding how evolution has inadvertently set out a dangerous trap that all of us can easily fall into.

What trap is this? Evolution has given us the brain power to see in our mind’s eye what may be “out there in the real world” at any given moment of the day or night that we need to be aware of. We are also smart enough to imagine what we could create out there if we put our talents and social skills as human beings together to reshape the world to suit our individual and collective needs
and whims. This rosy picture, however, hides a trap: a substantial Catch-22 to all this evolved human cleverness.

What’s the catch? Endowing us with the talents—and hence, the freedom—to remake the world around us in the way we want it to be has also made it possible for each of us to overreach, stumble, ignore the warning signs of pending troubles, and ultimately fail. Furthermore, our biologically evolved ability to imagine in our mind’s eye what is not yet out there in the world but could be there if we made it happen means that evolution has also given us the ability to argue and even fight with one another, not only about how things could be but also about how they should be. Human history is full of arguments of this destructive kind, some of which have been deadly indeed. Arguments not about what is but about what ought to be.
Three Propositions

There is a saying in Latin, dating back at least to the mid-seventeenth century, that by now may sound clichéd but is true, nonetheless. Scientia potentia est, “knowledge is power.” The chances are good that nobody in their right mind easily accepts the possibly disturbing fact that their brain is so well sealed within a hard bony vault called the skull that they have no direct knowledge of the outside world. Yet, however hard this is to believe, what each of us knows about the world around us gets into our brain solely and simply in the form of electrical impulses filtering in through our bodily senses, those neural pathways that are popularly, although somewhat inaccurately, said to be five in number: sight, hearing, taste, smell, and touch (Reber et al. 2017). Or, said more graphically perhaps, like that crew of three traveling within our imaginary yellow submarine, the human brain learns about the outside world through impulses that have to be converted into useful information before they can be of any true value.

How this happens—how the brain transforms impulses into meaningful information—is still mostly a mystery even nowadays, despite the substantial advances that have been made in recent years in our scientific understanding of how the brain works as a biologically constructed thinking machine (Lupyan and Clark 2015). This is not, however, a book about the intricacies of modern neuroscience and its technological wizardry. Instead, we want to put on the table three basic propositions about what it means to be human. We will be using these propositions in the chapters ahead to explore both the strengths and the dangers of how the human brain engages with the world around us and with others of our kind.

First proposition. The most fundamental of our three propositions follows directly from our saying that the human skull is like a yellow submarine (or the small gondola of a hot air balloon). This first proposition may come across to you as little more than good old-fashioned common sense (Kandel 2019; Simon 1986).

We live in two different worlds at the same time.
One of these is the world outside of us that is tangible, risky, and demanding. The other world exists only in the brain’s private kingdom between our ears. This is a place where we spend much of our time, awake or asleep. Here, we form our opinions, however wise or silly. Reach conclusions sound or foolish. Imagine even impossible things and events. Decide on what, if any, actions to take that may knowingly or not impinge upon our safety, sanity, and survival.

Second proposition. The late Nobel Laureate Herbert A. Simon made major contributions to an astonishing range of academic fields, including economics, management theory, artificial intelligence (AI), cognitive science, and philosophy. Half a century ago, he wrote insightfully about how the world we humans live in “is much more a man-made, or artificial, world than it is a natural world” (Simon 1969, 3). To use our own words rather than his, what all of us see around us may not be a figment of the human imagination, but it is largely a product of our own fertile imaginations, nonetheless. An obvious example would be what the island of Manhattan at the heart of New York City is like today and what it was like back in 1626, when, according to legend and maybe history, too, the Dutchman Peter Minuit purchased this now famous piece of real estate for 60 guilders’ worth of trade goods.

The outer world we live in is mostly one of our own making.

Gifted by evolution as we have been, therefore, what this second proposition suggests is that we have succeeded to a high degree as a species at reshaping and dumbing down the world around us to meet our needs, our goals, our desires, and our whims. By dumbing down, we mean that humans are remarkably skilled at making the world we inhabit far less risky and uncertain—and more predictable—than it otherwise would be. As we will explore in later chapters, sometimes this is very, very good. Sometimes it is nothing short of dangerous, perhaps deadly (Junger 2016).

Without getting too far ahead of what we will be talking about down the road, if the world we humans have created is mostly man-made—mostly artificial, to use the word Simon favored—then for humans to behave rationally need not necessarily mean we are also being sensible about the costs and benefits of doing what we have decided to do. Briefly put, evolution has not made us good at tracking the consequences of what we are doing to make the world our own. Said another way, being rational and being sensible are not necessarily the same thing, mentally speaking.

Third proposition. Alice’s Adventures in Wonderland was first published in 1865. This is undoubtedly Lewis Carroll’s most beloved book, thanks in part to Walt Disney Studios and its 1951 cartoon version that beautifully captured the logical nonsense of Carroll’s rich fantasy world of talking rabbits, smiling cats, and unlikely occurrences. We call the last of these linked propositions the
Wonderland hypothesis, after this famous children’s story. Here it is in its most elementary form:

We live in our own minds all of the time and must struggle to be in touch with what is happening in the outside world.

This third proposition is central to what we will be talking about in the chapters ahead. Here it is again but this time in full:

We believe we live in the real world and only escape occasionally into our fantasies. In truth, however, we live in our own minds all of the time and must struggle to be in touch with what is actually happening out there that we can see filtering in only through the looking-glass of our minds.

Why are we saying here that the human mind is like a looking-glass, a mirror? In his equally splendid sequel Through the Looking-Glass and What Alice Found There (1871), Carroll tells us that Alice was seven and a half exactly when she stepped through the looking-glass over the fireplace in her family’s drawing room. In this second story, she finds herself in the topsy-turvy world of Looking-Glass House—the mirror image of the place she has just left behind. She soon learns not only that things over on the far side of the mirror are reversed (as they are when you see them in a mirror) but also that things wonderfully strange and contrary to the laws of nature can happen over there too. In this unpredictable world, flowers talk, and knitting needles can instantaneously become rowing oars. And, yes, eggs can be argumentative and disagreeable.

By saying that the human mind is like a mirror, therefore, what we mean is that even when we think we are seeing the world for what it is really like, we are actually all pretty much just stuck behind our eyes in the freewheeling land of our very own Looking-Glass House. It may be a common human conceit to believe what we are thinking about in the inner splendor of our own minds is always true and correct, fair and square, wise and clever, insightful and right on target. Truth be told, however, what we are seeing “out there in the real world” is mostly a reflection of what we have learned to expect we will be seeing there (Chapter 5). And, yes, maybe also what we desperately want to find there.
Your Naughty Brain

Despite all that we have said so far about how disconnected the human brain is from the real world, please do not think we are suggesting that your brain is not part of the world of tangible things. Estimates vary, but something on the order of 86 billion neurons and 85 billion nonneuronal cells are packed into its convoluted mass of living tissue, weighing roughly the same as a small pot roast suitable for a family of four or five (Herculano-Houzel 2012).
Furthermore, physically speaking, there is obviously a lot more to you than just what is lodged up there inside your skull. This is true, even though biologists today are saying that who we are as individuals, biologically speaking, is far more doubtful than any of us once thought even remotely likely. For instance, it is now known that more than half the cells in the human body are bacterial. Said another way, organically speaking, much of you isn’t really you at all. Who knew? “Moreover, bacterial products comprise over 30% of our blood metabolites, and they are necessary for our normal physiological maintenance” (Gilbert 2017, 298).

Scientific facts like these can be fascinating and are certainly worth keeping in mind. Yet the brain is not just a sizable piece of human meat. It is also an intricate biologically assembled appliance. And sadly, it is one that does not come with a reliable user’s manual. More to the point, and in keeping with our three working propositions, it is a critically important contraption that even the best informed scientists are still struggling to wrap their heads around. Or, phrased more mechanically, one which they are still trying to reverse engineer.

One reason experts and others, too, are still working more in the dark than they may sometimes realize when it comes to unraveling the mysteries of the human brain is one that frankly surprised us both back when we were doing research for this book. We had thought that psychology and neuroscience were much further along in understanding what it means to be human than we found to be the case. Currently, even the most credible scientific models of the mind are basically just two-dimensional.2 As set forth, for instance, in the Nobel Laureate Daniel Kahneman’s highly successful book Thinking, Fast and Slow (2011), it is commonly still assumed that the human brain has only two major ways, or modes, of thinking about things and events—ways that Kahneman and others identify as “System 1” (fast, instinctive, and emotional), and “System 2” (slower, more deliberative, and more logical).

The two of us are not alone, however, in seeing two-dimensional models of the mind as fundamentally flawed (Chapter 2) and fundamentally misleading (Melnikoff and Bargh 2018; Pennycook et al. 2018). Perhaps such an approach to how the human brain works is an understandable reaction to the more unruly speculations of twentieth-century Freudian psychology (Kandel 1999). Whatever the reason, it seems indisputable that currently popular models of human thinking and cognition often underestimate the role of what we suspect most of us would see as richly characteristic of human thought: namely, its creativity (Sowden et al. 2015). As the anthropologist Agustín Fuentes has said, “our talent for creativity is what makes humans exceptional. (We are neither the nicest nor the nastiest species, but we are the most creative.)” (Fuentes 2017).

As we will be exploring in the chapters ahead, psychology and neuroscience today are beginning to successfully document how the human brain engages in this critically important third mode of thought. Building on what is now
known, we will offer you our own three-dimensional model of the mind that incorporates creativity as perhaps its most decisive—and unfortunately, its most unpredictable—mode.

What sort of a three-dimensional model are we going to propose? One we hope you will find useful. Whenever we think about how mysterious the workings of the brain are, we find ourselves also thinking about three famous fictional characters: Sherlock Holmes, Professor George Challenger, and a young girl named Alice who went off on her own to explore a remarkable place called Wonderland. In this book, we will be asking these three storybook characters to help us talk about the mysteries of human nature. And instead of only examining what we humans do that makes us come across to ourselves as worldly, wise, and wonderful, we will also be proposing what it is about that soft, jelly-like mass hidden inside our skulls that lets us be not only remarkably clever and creative creatures but also at times astonishingly naïve, opinionated, and even dangerously destructive.
Notes
1 As Richard Passingham has explained in Cognitive Neuroscience: A Very Short Introduction (Oxford, 2016): “Now that we know so much about where there is activity in the brain while people perform cognitive tasks, the next stage is to find out how that activity makes cognition happen. In other words we need to understand the mechanisms” (106–107). See also: Satel and Lilienfeld (2013).
2 Folk models of the mind are not always two-dimensional. Although what philosophers call “mind-body dualism” (Chambliss 2018) has long been popular, wisdom has also long maintained that the brain has three different ways of dealing with the world conventionally labeled as “thoughts,” “habits,” and “emotions.”
Works Cited
Birhane, Abeba (2017). Descartes was wrong: “A person is a person through other persons.” AEON (7 April 2017). Retrieved from: https://aeon.co/ideas/descartes-was-wrong-a-person-is-a-person-through-other-persons
Carruthers, Peter (2017). The illusion of conscious thought. Journal of Consciousness Studies 24: 228–252.
Chambliss, Bryan (2018). The mind–body problem. Wiley Interdisciplinary Reviews: Cognitive Science 9, no. 4: e1463. doi:10.1002/wcs.1463
Fuentes, Agustín (2017). Creative collaboration is what humans do best. New York, March 22, 2017. Retrieved from: www.thecut.com/2017/03/how-imagination-makes-us-human.html
Gilbert, Scott F. (2017). Biological individuality: A relational reading. In Biological Individuality: Integrating Scientific, Philosophical, and Historical Perspectives, Scott Lidgard and Lynn K. Nyhart (eds.), pp. 297–317. Chicago, IL: University of Chicago Press.
Godwin, Christine A., Michael A. Hunter, Matthew A. Bezdek, Gregory Lieberman, Seth Elkin-Frankston, Victoria L. Romero, Katie Witkiewitz, Vincent P. Clark, and Eric H. Schumacher (2017). Functional connectivity within and between intrinsic brain networks correlates with trait mind wandering. Neuropsychologia 103: 140–153. doi:10.1016/j.neuropsychologia.2017.07.006
Goodenough, Ward H. (1967). Componential analysis. Science 156: 1203–1209. doi:10.1126/science.156.3779.1203
Herculano-Houzel, Suzana (2012). The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. Proceedings of the National Academy of Sciences 109, Suppl. 1: 10661–10668. doi:10.1073/pnas.1201895109
Junger, Sebastian (2016). Tribe: On Homecoming and Belonging. London: 4th Estate.
Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kandel, Eric R. (1999). Biology and the future of psychoanalysis: A new intellectual framework for psychiatry revisited. American Journal of Psychiatry 156: 505–524. doi:10.1176/ajp.156.4.505
Kandel, Eric R. (2019). The Disordered Mind: What Unusual Brains Tell Us about Ourselves. New York: Farrar, Straus and Giroux.
Lupyan, Gary and Andy Clark (2015). Words and the world: Predictive coding and the language-perception-cognition interface. Current Directions in Psychological Science 24: 279–284. doi:10.1177/0963721415570732
Melnikoff, David E. and John A. Bargh (2018). The mythical number two. Trends in Cognitive Sciences 22: 280–293, 668–669. doi:10.1016/j.tics.2018.02.001
Pennycook, Gordon, Wim De Neys, Jonathan St. B.T. Evans, Keith E. Stanovich, and Valerie A. Thompson (2018). The mythical dual-process typology. Trends in Cognitive Sciences 22: 667–668. doi:10.1016/j.tics.2018.04.008
Reber, Thomas P., Jennifer Faber, Johannes Niediek, Jan Boström, Christian E. Elger, and Florian Mormann (2017). Single-neuron correlates of conscious perception in the human medial temporal lobe. Current Biology 27: 2991–2998. doi:10.1016/j.cub.2017.08.025
Satel, Sally and Scott O. Lilienfeld (2013). Brainwashed: The Seductive Appeal of Mindless Neuroscience. New York: Basic Books.
Simon, Herbert A. (1969). The Sciences of the Artificial. Cambridge, MA: MIT Press.
Simon, Herbert A. (1986). Rationality in psychology and economics. Journal of Business 59: S209–S224. Retrieved from: www.jstor.org/stable/2352757
Sowden, Paul T., Andrew Pringle, and Liane Gabora (2015). The shifting sands of creative thinking: Connections to dual-process theory. Thinking & Reasoning 21: 40–60. doi:10.1080/13546783.2014.885464
Terrell, John Edward (2000). Anthropological knowledge and scientific fact. American Anthropologist 102: 808–817. doi:10.1525/aa.2000.102.4.808
2 MODELS OF THE HUMAN MIND
How Do We Think About Thinking?
L. Frank Baum’s novel The Wonderful Wizard of Oz made its first public appearance at a book fair in Chicago at the Palmer House hotel in 1900. This classic children’s book was later made into that enduring 1939 movie “The Wizard of Oz” starring Judy Garland. In this well-known tale, a smart and feisty orphan named Dorothy Gale makes the best of her sudden relocation to a far-off land by marching down a yellow brick road to the Emerald City. She wants to petition the renowned Wizard in residence there for help in getting back home to Kansas. Dorothy is joined along the way by three now equally memorable companions: Scarecrow, who wants to ask that same locally esteemed wizard for a brain; Tin Woodman, who wants to acquire a heart to fill the empty cavity in his chest; and Cowardly Lion, who wants to become less fearful and more courageous.

Stories as inventive and quirky as The Wonderful Wizard of Oz are fair game for psychological scrutiny and allegorical interpretation proffered by those with a literary or academic turn of mind (Littlefield 1964; Parker 1994; Rockoff 1990). Economists, for example, evidently like to use this story in their classes to help students understand the arcane issues of monetary policy. As one commentator has remarked, this book is presented as an allegory—a revealing story—about demands for monetary expansion in the late nineteenth century. “The allegory provides an efficient means of introducing students to debates about monetary issues because the elements of the story are so familiar. Students may also be intrigued by the unfamiliar interpretation” (Hansen 2002, 254).

As a teaching tool, this seems fine, but apparently some economists have taken this notion too far. Claims have been published and taken seriously that the tale is actually a veiled monetary allegory. However, judging by his own writings as well as his personal life history, Baum did not intend for this particular book to be taken as more than a delightful children’s story. “On closer
examination, the evidence in favor of the allegorical interpretation melts away like the Wicked Witch of the West” (Hansen 2002, 256).

A different conclusion seems more likely: the true lesson of The Wonderful Wizard of Oz may be that economists have been too willing to accept as a truth an elegant story with little empirical support, much the way the characters in Oz accepted the Wizard’s impressive tricks as real magic. (2002, 255)

We see here an important lesson to be learned. We must be careful how we write about the human brain lest we be misunderstood. And the tale we want to tell is not simply a delightful children’s story. Far from it, as you will see if you travel down the road with the two of us all the way to the end of Chapter 12.
How Should We Write About the Human Brain?

As we noted briefly in Chapter 1, many today favor thinking about the human brain as if it works chiefly in two differing ways—in two alternative modes—although some authorities are open to making their models of the mind more complicated (Eagleman 2011, 110).

The first of these two ways is now often labeled as System 1 or Type 1. Most of us who aren’t experts, however, would probably call this first mode simply habit or possibly intuition. This way of thinking about things is said to be accomplished more or less subconsciously, and can be done without requiring a lot of metabolic effort. In other words, System 1 thinking is seen by those writing about it as relatively cheap and easy for the brain to use. The second way of thinking, labeled System 2 or Type 2—but more conventionally referred to as conscious awareness or reasoning—is described by those adopting this perspective as slower, more deliberative, harder to do (more “effortful”), costing more metabolic energy to engage in, and usually something that we are more or less consciously aware of doing (Kahneman 2011, 43–44).

As we mentioned briefly in Chapter 1, Daniel Kahneman helped popularize this two-dimensional model of the mind—in psychological circles, such models are commonly spoken of formally as “dual process theory”—in his 2011 best-selling book about human decision-making Thinking, Fast and Slow. Kahneman himself acknowledges in this book, however, that opting to use the word “system” can be misleading. Why? Because these two purported systems aren’t intended to be seen as anything real.

System 1 and System 2 are so central to the story I tell in this book that I must make it absolutely clear that they are fictitious characters.
Systems 1 and 2 are not systems in the standard sense of entities with interacting aspects or parts. And there is no one part of the brain that either of the systems would call home. (Kahneman 2011, 29)

Then why employ a word like “system” (or “type”) at all? Kahneman’s explanation is fairly elaborate and possibly not as convincing as he would have us believe. In any case, he says he does not want System 1 and System 2 to be taken too seriously. “You have been invited to think of the two systems as agents within the mind, with their individual personalities, abilities, and limitations.” However, please don’t misunderstand. “You should treat ‘System 1’ and ‘System 2’ as nicknames, like Bob and Joe” (2011, 28–30).

Using words like these in professional writing, he tells us, may appall his academic colleagues on the grounds that these terms might make it appear that he is suggesting that what someone thinks and does can somehow be attributed to the thoughts and actions of “little people inside the person’s head.” (In this instance, presumably tiny twins whose names are System 1 and System 2.) But not to worry, he reassures us. Using expressions like these in short active sentences attributing thoughts and actions to System 2, say, is just a matter of convenience, and “is intended as a description, not an explanation.”

We see this last remark, however cryptic it may appear, as an important concession. Perhaps we are misreading his written words, but it sounds like Kahneman would agree with us that descriptions are just convenient verbal pictures (so to speak) of things and events, and are not meant to be actually taken as explanations for anything at all. Perhaps he and we are of one mind on this matter, but there are those today who nonetheless vigorously defend two-dimensional models of the mind as more or less accurate portrayals of how the human brain literally deals with the world in which it finds itself (De Neys and Pennycook 2019; Mekern et al. 2019).

Maybe yes, maybe no. In any case, debating how useful such models may or may not be strikes us as avoiding the big issue here. As the late theoretical biologist Richard Levins famously declared in the 1960s, the world is a vastly complex place. Therefore, it is always necessary to approach any real concern or scientific problem worth pursuing using a number of alternative models—a number of differing ways of thinking about the issue—each with different simplifications and working assumptions. Then, if these models, despite their different assumptions, lead to similar results we have what we can call a robust theorem which is relatively free of the details of [any one of the models used]. Hence our truth is the intersection of independent lies. (Levins 1966, 423)
On the face of it, this might not sound like such a good thing, but as Levins went on to remind us, the richness and diversity of the living world are far greater than the human mind can fully comprehend. Hence a “multiplicity of models,” to use his apt phrasing, is inevitable.

Therefore, the alternative approaches even of contending schools are part of a larger mixed strategy. But the conflict is about method, not nature, for the individual models, while they are essential for understanding reality, should not be confused with that reality itself. (1966, 431)
Multiple Personalities

In this positive spirit, therefore, and inspired by the willingness of Scarecrow, Tin Woodman, and Cowardly Lion to team up with young Dorothy to help her get back to Kansas, we want to offer you a different way of thinking about thinking, a different kind of model. Instead of featuring just two systems, or two types, we want to suggest that the human brain is a biologically evolved apparatus with several personalities, so to speak, all at one and the same time. Specifically, three of them.

Whereas Kahneman felt he needed to warn readers against thinking his two systems are real, our three characters are unmistakably fictitious. Ours are also far from anything that could possibly be discovered lurking within the cramped confines of the human skull. You already know their names: Sherlock, George, and Alice. Please note, though, that however fanciful and fictitious they may be, we are absolutely going to try to convince you that these three can be useful tools of thought, helpful ways of thinking about and dealing with the demands of being human (Waddington 1977).
Sherlock Holmes

Famed for the brilliance of his analytical mind, Holmes made his first public appearance in print in 1887. In some ways, he could almost serve as a poster child for what is nowadays called “mindfulness,” the mental state achieved by focusing one’s awareness on the present moment (Wittmann 2017, 42–44). However, rather than using mindfulness meditation as a way to calm the brain and pacify the human psyche, Holmes tells us in The Sign of the Four that his mind rebels at stagnation: “Give me problems, give me work, give me the most abstruse cryptogram or the most intricate analysis, and I am in my own proper atmosphere.” Moreover, as he explains to Watson in “A Case of Identity”: “Never trust to general impressions, my boy, but concentrate yourself upon details.”
However, there is a decided downside to his close attentiveness to detail. Far from treating such mindfulness as a way of chilling out, he so abhors the dull routine of existence that he is easily bored. When he is, he is willing to turn to artificial means such as alcohol and a 7% solution of cocaine. Why? Because, as Sir Arthur Conan Doyle has Sherlock lament to Dr. John Watson at the end of the first chapter of The Sign of the Four (1890):

“I cannot live without brain-work. What else is there to live for? Stand at the window here. Was ever such a dreary, dismal, unprofitable world? See how the yellow fog swirls down the street and drifts across the dun-colored houses. What could be more hopelessly prosaic and material? What is the use of having powers, doctor, when one has no field upon which to exert them? Crime is commonplace, existence is commonplace, and no qualities save those which are commonplace have any function upon earth.”

Unlike most of us who are human, therefore, Holmes was drawn by Sir Arthur as someone obsessively aware of the world around him. While he sees his uncommon sensitivity to the finest-grained details of life as one of his “peculiar powers,” most of the time he experiences the world and what’s happening in it as dull and boring. Luckily for those around him who are in need, nonetheless, when enticed by a tantalizing case, Holmes derives a great deal of genuine pleasure from using his extraordinary powers of observation and deduction to discover the logic and explanation hidden behind seemingly disparate and unrelated bits-and-pieces of real-world evidence—facts and figures that those of us less blessed (or cursed) than he would probably fail to notice, and even if we did, would probably not be able to interpret correctly (Figure 2.1). Or so Doyle tells us (Thomson 2012).
FIGURE 2.1 Mr Sherlock Holmes (Credit: Strand Magazine, 1892) (https://commons.wikimedia.org/wiki/File:Strand_paget.jpg).
Sherlock Holmes, therefore, comes to mind when we think about the human brain and how it works because he so dramatically typifies not only how important it can be for all of us to be critically observant about what is happening around us, but also how hard it can be for all of us, not just criminological geniuses, to stay engaged with what is happening in the real world when what’s going on there comes across to him or to us as simply routine and drearily predictable. This is an observation we will be returning to frequently in the chapters to follow. For now, however, we note only that it is one of the reasons mindfulness meditation can be a lot more difficult to do than you might think such thoughtfulness ought to be... and also why all of us—even the great Sherlock Holmes—are entirely capable of doing dimwitted things now and then (Van Dam et al. 2018).
George Challenger

Holmes is not the only remarkable character created by Arthur Conan Doyle. Another unforgettable product of his fertile imagination is the cantankerous Professor George Edward Challenger. He made his first appearance in 1912 on the pages of Strand Magazine in a series of stories about prehistoric ape-men and dinosaurs later published in novel form as The Lost World. While today Challenger as a fictional character is perhaps not as well-remembered as Doyle’s famous detective, he, too, has been featured in many films and in numerous television, radio, and stage adaptations since his first arrival on the English literary scene over a century ago. And where would movie lovers today be if the late novelist Michael Crichton hadn’t been inspired both by Challenger and by the dinosaurs featured in Doyle’s The Lost World when he sat down to write Jurassic Park (published in 1990), and later his sequel to this runaway bestseller titled (surprise!) The Lost World (1995)?
FIGURE 2.2 George Edward Challenger (Credit: Chronicle/Alamy Stock Photo).
As Michael Crichton once remarked, comparing Holmes and Challenger is nearly irresistible because the two of them are such different creations in every respect, so much so that Crichton styled Challenger as a kind of anti-Holmes.

Where Holmes is tall and lean, Challenger is squat and pugnacious. Holmes shuns publicity; Challenger craves the limelight. Holmes charms, Challenger insults; Holmes is subtle, Challenger crude; Holmes is diffident, Challenger aggressive. Indeed, the only trait they share is prodigious physical strength. (Crichton 2003)

There is, however, another character trait Crichton fails to mention here that Holmes and Challenger share in common, although for strikingly different reasons. They are both remarkably confident about their own abilities. Holmes has a high opinion of himself for good reason. As we have already noted, he is acutely mindful of his surroundings. He knows from experience—as he tells his colleague in “A Case of Identity” while softly clapping his hands together and chuckling:

“’Pon my word, Watson, you are coming along wonderfully. You have really done very well indeed. It is true that you have missed everything of importance, but you have hit upon the method, and you have a quick eye for colour. Never trust to general impressions, my boy, but concentrate yourself upon details.”

In contrast, although Challenger is someone who is more than willing to contest popular wisdom and scientific conventions however long accepted, he lacks Sherlock’s patience, and frankly he is fully convinced he knows pretty much all there is that’s worth knowing. Furthermore, Doyle has Challenger himself telling a would-be visitor who is seeking an audience with him early on in The Lost World that he most decidedly is not someone in the habit of modifying his opinions. Why, even Challenger’s own wife calls him a roaring, raging bully! To which he responds: “There are plenty of better men, my dear, but only one G. E. C. So make the best of him.”

In view of how strikingly different these two gentlemen are—one is brilliant, methodical, and patient, while the other is brilliant, crude, and generally obnoxious—we hope you will play along with us and allow Doyle’s Sherlock Holmes to be our role model for the human brain’s contrary skillfulness at both (1) paying close attention to the world around it and (2) being prone to getting bored with what it finds there. Furthermore, you will permit Professor Challenger to serve as our avatar (Figure 2.2) of the brain’s similarly contrary willingness (3) to use its own experiences as its
fundamental guide to life—commonly known as doing things “by habit”—while sometimes (4) forgetting perhaps too readily the wisdom behind Sherlock’s observation in “The Boscombe Valley Mystery” that “there is nothing more deceptive than an obvious fact.”

It is important to add, we think, that while the word rational might be the right word to use to describe Sherlock, please do not assume that we are saying George Challenger is basically irrational. Behavior that can be described as habitual is not inherently irrational. Far from it. We just mean doing again something you have done before without first checking carefully to see if what you are engaged in doing is still the most rational thing to do.

But enough for now. We will have more to say later on. What about the third character in our fanciful trio? How does Alice—the little girl who loves to go off on unexpected adventures without telling anyone, evidently because she is remarkably self-assured for her age and has an overdeveloped sense of curiosity—fit into this picture of the brain and how it works that we are offering you in this book?
Alice

As we noted in Chapter 1, Lewis Carroll tells us that Alice was seven and a half years old exactly when she stepped through the mirror over the mantelpiece in the drawing room where she had been playing sleepily with her cat and kittens. While this might seem rather too young for her to be such a bold adventurer, the cognitive sciences today are confirming the cleverness of what Carroll imagined Alice might be able to find over on the other side of the looking-glass.

The hidden world of the mind located between our ears may often seem much like the flesh and blood world we navigate in our daily lives, but in our minds, as in Wonderland and Looking-Glass House, the normal rules and regulations of time and space need not apply. Instead, this is the realm of dreams, wild ideas, fantasies, and nightmares. In this cerebral realm, nothing needs to be normal; cause and effect can be thoroughly decoupled; our minds are free to create worlds where gravity may be ignored and we alone can set the rules and dictate what happens and when (Figure 2.3).

Unfortunately, there is a catch to all the wild and wonderful liberties of being over on the far side of the mirror. While wandering through Wonderland or peering behind doors in Looking-Glass House, it is all too easy to forget that the real world with all its dangers and demands is just over there on the other side of the mirror. And when we forget which world we are in, we may find we have to pay the piper. However dull and boring the real world may seem at times, not only to someone as talented as Sherlock Holmes but also to each and every one of us, the challenges that must be faced every day of our lives are rarely solely imaginary.
FIGURE 2.3 Alice and her cat Dinah (Credit: John Tenniel, 1871) (https://commons.wikimedia.org/wiki/File:Alice_and_kitten.jpg).
Three Ways of Staying Alive

The popular 1977 disco song “Stayin’ Alive” featured in the highly successful movie “Saturday Night Fever” was written and performed by three British brothers named Gibb (Barry, Robin, and Maurice). Once heard, this song is hard to erase from your mind. In many ways it became the signature song around the world of the disco era in popular culture during the late 1970s and early 1980s. These three brothers were justly famous, by the way, for their three-part harmonies.

Each of the three storybook characters we have just summoned forth for you has played a major role in helping the two of us tackle the mysteries of human nature and the human brain. As described by the famous writers who created them, these three characters are not only recognizably human in their individual ways but also strikingly unlike one another in their personalities. We find them all hard to ignore when we are thinking about what it means to be human because, as a trio of colorful personalities, they remind us so dramatically of what we see as the three major and divergent ways that evolution has equipped our species to deal with the complicated and demanding job of staying alive.

At least in our eyes, as we have now suggested, Sherlock Holmes brilliantly displays what philosophers, pundits, and psychology professors alike would say are many of the human brain’s serious and seemingly rational ways of dealing with the world and its demands. In graphic contrast, as Michael Crichton wrote, the pigheaded, bombastic, and self-centered Professor George Challenger quite colorfully personifies what many would agree are the brain’s self-serving habitual ways of dealing easily and more or less impulsively with what life throws at
22 Models of the Human Mind
us. Last but not least, Alice always comes to mind whenever we are trying to make sense of how the human brain is also perfectly and sometimes devilishly able to be not only playful, cunning, and even capricious, but also—and most importantly—astonishingly creative.1 But wait. Before saying more, which we will be doing in the next chapter, we need to end this one by repeating an important disclaimer about this rational, habitual, and sometimes playful trio of famous characters. Please remember as you read the pages ahead that much like the Gibb brothers who could harmonize together beautifully and with remarkable skill, so too, these three ways in which your brain works on your behalf to get you from the cradle to the grave perform for the most part in harmony with one another. We do not want you to think that your brain is somehow subdivided into distinctly three different parts, or functioning players. True, once upon a time it was believed that the brain has a number of different and functionally distinct “areas,” “regions,” or “modules” (Buller 2005; Pinker 1997). This “lots of parts” view of the brain, however, is now well past its prime, and such thinking is definitely on the way out (Bressler and Menon 2010; Hackel et al. 2016; Raichle 2015). Similarly, once again please remember that Holmes, Challenger, and Alice are fictional, not real. We find it useful to talk about human nature and the brain in this fashion as an allegory, a tale about three colorful characters and how they work together to get you through life more or less safe, sound, and contented. But as the ancient Romans would caution, please read our tale about the doings of these three cum grano salis—with a grain of salt. We are most definitely not proposing that there really are three different characters roaming loose inside your skull.
Note
1 A less charitable view of the trio of characters in our three-dimensional portrayal of the ways in which the human brain deals with the world: Sherlock wants to learn the truth about what is outside their yellow submarine; George's orientation is toward personal convenience; and Alice, as young and charming as she undoubtedly is, aims at controlling the outer world in ways that are not only creative but also, as we shall be arguing more fully later, make it more predictable. It is not Sherlock or George who is the chief architect of our species' ability to dumb down the world to fit our needs, wants, and desires. It is Alice.
Works Cited
Bressler, Steven L. and Vinod Menon (2010). Large-scale brain networks in cognition: Emerging methods and principles. Trends in Cognitive Sciences 14: 277–290. doi:10.1016/j.tics.2010.04.004
Buller, David J. (2005). Evolutionary psychology: The emperor's new paradigm. Trends in Cognitive Sciences 9: 277–283. doi:10.1207/s15327965pli0601_1
Crichton, Michael (2003). Introduction to The Lost World by Arthur Conan Doyle. Modern Library Classics edition. Random House. Retrieved from: http://www.michaelcrichton.com/introduction-arthur-conan-doyle-s-lost-world/
De Neys, Wim, and Gordon Pennycook (2019). Logic, fast and slow: Advances in dual-process theorizing. Current Directions in Psychological Science 28: 503–509. doi:10.1177/0963721419855658
Eagleman, David (2011). Incognito: The Secret Lives of the Brain. New York: Vintage Books.
Hackel, Leor M., Grace M. Larson, Jeffrey D. Bowen, Gaven A. Ehrlich, Thomas C. Mann, Brianna Middlewood, Ian D. Roberts, Julie Eyink, Janell C. Fetterolf, Fausto Gonzalez, Carlos O. Garrido, Jinhyung Kim, Thomas C. O'Brien, Ellen E. O'Malley, Batja Mesquita, and Lisa Feldman Barrett (2016). On the neural implausibility of the modular mind: Evidence for distributed construction dissolves boundaries between perception, cognition, and emotion. Behavioral and Brain Sciences 39, E246. doi:10.1017/S0140525X15002770
Hansen, Bradley A. (2002). The fable of the allegory: The Wizard of Oz in economics. Journal of Economic Education 33: 254–264. doi:10.1080/00220480209595190
Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Levins, Richard (1966). The strategy of model building in population biology. American Scientist 54: 421–431. Retrieved from: www.jstor.org/stable/27836590
Littlefield, Henry M. (1964). The Wizard of Oz: Parable on populism. American Quarterly 16: 47–58. Retrieved from: www.jstor.org/stable/2710826
Mekern, Vera, Bernhard Hommel, and Zsuzsika Sjoerds (2019). Computational models of creativity: A review of single-process and multi-process recent approaches to demystify creative cognition. Current Opinion in Behavioral Sciences 27: 47–54. doi:10.1016/j.cobeha.2018.09.008
Parker, David B. (1994). The rise and fall of The Wonderful Wizard of Oz as a "parable on populism." Journal of the Georgia Association of Historians 15: 49–63.
Pinker, Steven (1997). How the Mind Works. New York: W. W. Norton.
Raichle, Marcus E. (2015). The brain's default mode network. Annual Review of Neuroscience 38: 433–447. doi:10.1146/annurev-neuro-071013-014030
Rockoff, Hugh (1990). The "Wizard of Oz" as a monetary allegory. Journal of Political Economy 98: 739–760. Retrieved from: www.jstor.org/stable/2937766
Thomson, June (2012). Holmes and Watson. London: Allison & Busby.
Van Dam, Nicholas T., Marieke K. van Vugt, David R. Vago, Laura Schmalzl, Clifford D. Saron, Andrew Olendzki, Ted Meissner, Sara W. Lazar, Catherine E. Kerr, Jolie Gorchov, Kieran C. R. Fox, Brent A. Field, Willoughby B. Britton, Julie A. Brefczynski-Lewis, and David E. Meyer (2018). Mind the hype: A critical evaluation and prescriptive agenda for research on mindfulness and meditation. Perspectives on Psychological Science 13: 36–61. doi:10.1177/1745691617709589
Waddington, Conrad H. (1977). Tools for Thought. St. Albans: Paladin.
Wittmann, Marc (2017). Felt Time: The Science of How We Experience Time. Cambridge, MA: MIT Press.
3 HUMAN FAILINGS IN REASONING
Why Do You Trust Yourself?
Admit it. You know there are times when you should not trust what your brain is telling you. Sometimes this isn't a big deal. Say when you think you have sighted an old friend you have not seen for years walking along on the other side of the street, and you sing out a cheery hello, only to realize a split second later that she isn't who you thought she was, and whoever it is gives you a drop-dead look that makes you wish you were invisible. Sometimes the result can be painful, even deadly. For example, when you think it is safe to cross the street, but you suddenly discover you have badly misjudged how rapidly that delivery truck is careening down the road toward you.
Here then is the worry. If you should not always trust what your brain is telling you, how do you know when you should? Or do you think you are the great exception proving the rule? If so, what is the rule? That most human beings are flawed and make mistakes, but you are not and never do? Or maybe you are happy to admit that you are far from perfect, but what's the big deal? Maybe the two of us are just making a mountain out of a molehill? After all, are not all of us who are human fundamentally rational creatures? Isn't being rational our special human gift, our great defining species characteristic? And anyway, can't most of us take care of ourselves well enough, even if each of us sometimes makes mistakes, sometimes stumbles, sometimes acts like a damn fool? Said another way, why write a book like this one about why you should not trust what your brain is telling you if nobody needs to read it?
We are now going to offer you an assortment of recent news reports from around the globe about people behaving in less than wonderful ways. You may disagree, but we think these stories may change your mind if you believe there is nothing we humans need to worry about when it comes to taking care of ourselves and taking care of the planet we call home. If nothing else, we think these accounts document in ways that are sometimes humorous but sometimes truly sad that we humans are not simply or even basically rational creatures. We are also beings that excel at rationalizing what we want, what we do, and what we are willing to say regardless of the possible consequences.
Before doing so, however, we want to ask an elementary question that may strike you as odd. Why do we think we are rational beings? After all, isn't it by now well known that we are all the direct descendants of ancient apes? Although experts may debate why it all happened, isn't it true that we are just the lucky ones whose primate ancestors evolved intellectually beyond those of our living cousins whom we have elected to call gorillas, chimpanzees, bonobos, or orangutans? But does all this necessarily mean we are by nature rational? Are you sure?
Human Conceit
Long, long ago we humans proved at least to our own satisfaction that we are supremely clever and manipulative animals. The pyramids of Egypt may not be as complicated as a modern skyscraper in their architectural design and engineering, but they are similarly outstanding feats of human ingenuity and labor. True, other species on Earth are also in the running when it comes to their prowess at design and engineering. A termite mound on the savannahs of Africa, for instance, may be more like an urban skyscraper in the complexity of its internal workings as a building of sorts than an ancient Egyptian pyramid. Yet there is no way that a tower of termite-crafted mud, even one 25–30 ft tall, can hold a candle to the Great Pyramid of Khufu (Cheops), which measures over 480 ft from bottom to top and was built of stone in the middle of the 3rd millennium BC.
Long ago we humans also convinced ourselves that we are astonishingly gifted beings. It even says so in the Book of Genesis (1:26, King James Version):
    And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.
During the European Enlightenment—that remarkable epoch of the seventeenth and eighteenth centuries when accomplishments made in science and mathematics during the sixteenth and seventeenth centuries helped undermine the old medieval worldview—we took this biblical hint as to our God-like excellence as a species to its logical extreme. This was the time in European history, also called the Age of Reason, when many educated souls took it upon themselves to write about the reformation of human society in accord with carefully reasoned principles. And while "savages" living outside Europe might be driven by raw emotions, who could possibly doubt that the inhabitants of Great Britain and at least northern Europe were God's gift to the world?
But that was then, and this is now the twenty-first century. Given what is happening around the world nowadays, you would have to be truly delusional to think we are still living in the Age of Reason. And not just because debates about such pressing issues as climate change, gender equality, globalism, and civil rights have become so obviously rancorous and divisive.
Homo Sapiens?
Scientists have long favored assigning a two-part name conforming to the niceties of ancient Latin to every species officially recognized by experts as sitting somewhere on evolution's all-encompassing Family Tree of Life. Would they today still use the two words the Swedish biologist Carl Linnaeus picked in the eighteenth century when he gave our species its current scientific name? Or on reflection would experts now opt for other words instead?
The two Latin words Linnaeus chose to label us are homo ("man") and sapiens ("wise"). That these two would be his choice is not surprising. Linnaeus was one of the giants of the Enlightenment, the Age of Reason.
However, if we truly want to understand what it means to be human—rather than, say, a chimpanzee or a gorilla—we cannot just look on the bright side of things. Let's face it. There is a lot of evidence that we are not as wonderful as we might wish to believe.
All of the following accounts date to 2015 except for one or two from early in 2016. Given how controversial the later election season was during 2016 in the United States, we have avoided the many examples of apparent human foolishness of more recent vintage that we could offer you. We felt that using occurrences since, say, the middle of 2016 to illustrate the tenuous hold that reason and logic may have on human behavior might mislead you into thinking we are critiquing recent American politics and cultural life. We aren't, although feel free to add examples of your own taken from the latest headlines. But please remember that the propositions in this book are about being human. They are not just about what it means to be an American following the election of Donald Trump in November 2016. After all, elections in places such as the Philippines even earlier in 2016 and in Brazil in 2018 similarly led to astonishing outcomes that we could certainly use as further proof of humankind's evidently strange ways.
The first two news stories included here may come across to you as easy to explain since alcohol was evidently involved in the events reported. If so, then please think again. There is more to these accounts than simply that having too much beer or too many cocktails can be unwise.
No Swimming—Alligators
Shortly before 2:30 in the morning on July 3, 2015, Tommie Woodward (aged 28) decided to leap into a bayou in Texas near the Louisiana border. He ignored shouted pleas to cease and desist from those standing nearby at Burkhart's Marina, a family-friendly place where beer and hamburgers have been staples on the menu for half a century. According to local news reports, he also ignored a prominent sign proclaiming "No Swimming – alligators." After taking off his shirt and removing his wallet, he shouted an obscenity directed at the rumored alligators and jumped in. He was immediately attacked.
Several days later, a marina customer who identified himself at first only as "Bear" killed the 11-ft beast. Game wardens later found remains identified as Woodward's inside the alligator's throat. He had been drinking at a bar before showing up at the marina with a young woman for a late-night swim. Evidently the last time someone in Texas had been killed in this unfortunate manner had been back in 1836 (Libardi 2015; "Man killed," 2015; Nelson 2015; Preuss 2015).
Fireworks on the 4th of July
If you are not from Texas, you may not have found the previous news story of evident human absurdity all that surprising. Texas, after all, is a state in the Union with a well-cultivated international reputation for harboring colorful personalities. Here then is another news story about someone choosing poorly and acting accordingly, again from 2015 but this time from a state about as far away and to the north of Texas as any state in the Union can be except Alaska.
On the 4th of July, Devon Staples (22) died in a freak fireworks accident about 10:00 PM during a backyard celebration in Calais, Maine. "He was clowning around with a mortar tube on his head and a lighter in his hand, and he accidentally ignited the firework," said his brother Cody Staples (25), who saw what happened from just five feet away. As Cody further explained the next day while he fought back tears:
    There was no rushing him to the hospital. There was no Devon left when I got there. It was a freak accident. . . . But Devon was not the kind of person who would do something stupid. He was the kind of person who would pretend to do something stupid to make people laugh.
Staples's fiancée Kara Hawley later reported that they had been drinking and that Devon was probably "buzzed" on alcohol. Even so, she said, she believed he had accidentally set off the firework with the cigarette lighter while dancing with the mortar on his head (Cleary 2015; Greenberg 2015; Landau 2015a, 2015b).
Darwin Awards
Every year for several decades now the world has been treated to an annual awe-inspiring tribute to the vulnerabilities of being human called the Darwin Awards ("Darwin Awards," n.d.). Decidedly tongue-in-cheek, the website for these honors explains that candidates for this dubious public recognition must display an astonishing lapse of judgment, since it takes a phenomenal failure of what some would call just good common sense to earn this posthumous award. Hence such everyday idiocies as playing Russian roulette, not wearing a lifejacket, not fastening your seatbelt, sleeping with a smoldering cigarette, and the like are simply not outstanding enough to qualify someone for such public fame. On the other hand, the following are excellent examples of strong contenders for this award: (1) juggling with hand grenades; (2) smoking in an oxygen tent; (3) strolling down a railroad track wearing headphones with the music turned up loud; (4) taking a selfie next to a wild elephant; and (5) angrily ramming the doors of an elevator with your motorized wheelchair (because the damn thing had the gall to go down without you) until you force the doors open (thereby permitting you to fall down the shaft to your death).
Knowing how Woodward and Staples each died, therefore, should we see them both as choice candidates for the Darwin Award? The official website for this honor informs us that all of the tragic but true instances of humankind's weaker moments memorialized there have been singled out to celebrate those who have protected the human gene pool "by eliminating themselves in an extraordinarily idiotic manner, thereby improving our species' chance of long-term survival." Perhaps, but surely this explanation is more a humorous thought than a biologically sensible one?
With regard to Woodward and Staples, it might reasonably be said instead that each had chosen, perhaps under the influence of alcohol, to throw caution to the wind and do something quite foolish. Yet even so, what kind of an explanation would alcohol be? Shouldn't the real question be what alcohol does to the human brain that so negatively impacts our human sensibilities, granting that the commonsense wisdom is true that alcohol can be blamed for lapses in good judgment, and that all of us have reasonable sensibilities most of the time when not "under the influence"?
Defending Humanity
While saying so may get both of us labeled as killjoys, we feel called upon to defend the basic humanity of the lost souls who have been honored at the Darwin Awards website. True, many of the stories inscribed there may be seen as "cautionary tales about people who kill themselves in astonishing ways, and in doing so, significantly improve the gene pool by eliminating themselves from the human race." Underlying this rationalization for handing out these tributes, however, is the premise that simple common sense ought to tell even the most idiotic among us that some things are better left undone. Yet, as we will be exploring in depth in later chapters, this commonplace thought does not hold up well when challenged by what neuroscientists and modern cognitive psychologists are learning about how the human brain works on a daily basis.
For now, however, let us just look at the facts reported about the deaths of Woodward and Staples. Both were male. Is this relevant to why they did what they foolishly did? They were both in their 20s. Also significant? Evidently, too, both had been drinking alcohol. Would that alone account for their dulled sensibilities?
One thing does seem certain. What they both did might be called stupid, but would the reason for such behavior cited by the Darwin Awards website be even halfway on the mark? Did they do what they did simply because they were dimwitted? And was removing them from the human race, therefore, a step in the right direction, helping to make our species a smarter one? Surely not. Yes, calling both of them "just plain stupid" would be an easy answer. Would it be the right one?
Despite conventional wisdom, blaming poor choices, missteps, and human disasters on something lurking inside at least some of us that can be labeled as "stupidity" does not explain anything. Such an explanation is equivalent to suggesting that we sleep because we are "sleepy," we talk because we are "talkative," or we do naughty things because we are inherently sinful. Such explanations basically just repackage and try to sell reasonable questions as empty answers.
You See, But You Do Not Observe
At the beginning of his short story "A Scandal in Bohemia," Sir Arthur Conan Doyle has Sherlock Holmes explain to Dr. Watson how he arrives at clever deductions and is thereby able to solve crimes and resolve mysteries. Dazzled as usual by Holmes's brilliance, Watson tells us:
    I could not help laughing at the ease with which he explained his process of deduction. "When I hear you give your reasons," I remarked, "the thing always appears to me to be so ridiculously simple that I could easily do it myself, though at each successive instance of your reasoning I am baffled until you explain your process. And yet I believe that my eyes are as good as yours."
    "Quite so," he answered, lighting a cigarette, and throwing himself down into an armchair. "You see, but you do not observe. . . ."
There is no need to reject entirely the commonsense convictions that alcohol and maybe even something that might legitimately be labeled as stupidity can sometimes cloud the minds and pervert the actions of each and every one of us. Yet Sherlock's astute remark suggests another possible common thread running through news reports about human folly and sometimes death. If Sherlock himself had been around to help us write this book, he might have insisted that we phrase this possible explanation as "you see, but you do not doubt."
Possibly the clearest—and most appalling—evidence of how overconfidence in our own opinions and beliefs can lead to unfortunate, even tragic, outcomes falls under the general heading of hate crimes.
Why Are You Here?
Just before 7:00 AM on the day after Christmas 2015, a 68-year-old bearded man dressed warmly and with a blue turban on his head was attacked in Fresno, California, while waiting for a ride to work. According to the police report, two white males in their early 20s pulled up and began cursing at Amrik Singh Bal. Fearing for his safety, he tried to cross the street to get away from them, but "the subjects in the vehicle backed up and struck the victim with their rear bumper." The car stopped, and the two men "got out and assaulted the victim, striking him in the face and upper body." During the attack, police said, one of the suspects yelled "Why are you here?"
The Fresno Police Department classified the assault as a hate crime and asked the public for help in identifying the perpetrators. Anyone with information regarding this incident was encouraged to contact the Fresno Police Department (case #15-90185) ("Fresno," 2015; Holley 2015).
This was not the first hate crime that December directed at American Sikhs in California. A Sikh house of worship in Orange County had been vandalized earlier in the month with expletive-laced graffiti about Islam and the Islamic State in Iraq and Syria (Stack 2015). According to Iqbal S. Grewal, a member of the Sikh Council of Central California, Sikhs have been mistaken for terrorists and radicals ever since the events of September 11, 2001. "This is the latest episode of what Sikhs have been enduring when they are very peace-loving and hard-working citizens of this great country and not members of al-Qaida or ISIS or any other radical group" (Holley 2016).
Mistaken Identities
Perhaps one of the most extreme recent hate crimes against American Sikhs took place on August 5, 2012, when a white supremacist named Wade Michael Page, an Army veteran and rock singer specializing in crooning lyrics of hate, walked into a Sikh temple in Oak Creek, Wisconsin, and opened fire with a 9-mm semiautomatic handgun on a crowd of worshipers—killing six and wounding three before taking his own life.
Although Page had ties to well-known white supremacist organizations, his reason for this horrific act remains unknown, for he did not leave a note behind, digital or otherwise. Months later, many in the Sikh community were disappointed with how little information the Federal Bureau of Investigation had been able to uncover about why he had done what he did. The suspicion that Page had ignorantly assumed Sikhs are Muslim was widely shared around the world. According to a 2013 report published by the Sikh American Legal Defense and Education Fund and Stanford University, nearly half (48%) of the American public associates turbans with Islam and believes that Sikhism is an Islamic sect ("Marks Two Years," 2014; Goode and Kovaleski 2012; Yaccino et al. 2012).
Even so, as one commentator in the New York Times observed a few days after this attack, the mistaken-identity narrative
    carries with it an unspoken, even unexamined premise. It implies that somehow the public would have—even should have—reacted differently had Mr. Page turned his gun on Muslims attending a mosque. It suggests that such a crime would be more explicable, more easily rationalized, less worthy of moral outrage. (Freedman 2012)
How then are we to explain these attacks on American Sikhs? It has not been reported that alcohol had anything to do with what happened. Suggesting instead that the perpetrators were simply being stupid basically just repackages this question as its own answer. Similarly, ignorance about what wearing a turban means also explains nothing.
So what then is the answer? We suspect you would agree with us that before taking out his gun and shooting people, it would have been wise for Wade Michael Page to look and listen first to what people inside that Sikh temple were doing and saying. Furthermore, if he had Internet access at home, work, or at a coffee shop, wouldn't it also have been wise if he had done a Google search on the word "turban"? He certainly would have found a lot there to read, look at, and ponder. Doesn't it seem likely instead, therefore, that the explanation for his behavior isn't that he was intoxicated, inept, or simply ignorant, but rather that he genuinely believed that what he was about to do to others was OK, reasonable, and entirely justified? (Fiske and Rai 2015).
Similarly, maybe alcohol did contribute to the untimely deaths of Tommie Woodward and Devon Staples, but isn't there also the suspicion even in their unfortunate cases that the human failing revealed by their actions isn't simply that liking alcohol too much can be dangerous, but instead that there may be a general human weakness for thinking that what we believe must be so is right and reasonable?
Suspending Belief
Early in the nineteenth century, the poet Samuel Taylor Coleridge argued that the willing suspension of disbelief is central to the human enjoyment of poetry, theater, and other forms of entertainment. Perhaps, but it can be argued that this now famous remark takes it for granted that people are by nature rational thinkers whose first instinct is to doubt anything that doesn't come across as credible. Modern research in the social, psychological, and cognitive sciences, however, is showing instead that we humans actually find it easy to believe almost anything we set our hearts on believing. Even more alarming, we are also inclined to accept some things as true that we are not even aware of wanting to believe (Thaler 2015; Thaler and Sunstein 2011).
There is ample evidence showing how challenging it can be for human beings to doubt what they believe is true. Consider the following seemingly unbelievable recent news stories about assuming too quickly that all is well despite what one would think are the obvious risks involved.
I Just Can't Comprehend It
Albuquerque, N.M. (February 1, 2015)—A 3-year-old boy found a handgun in his mother's purse and fired just one shot that wounded both his parents at an Albuquerque motel on Saturday, police said. According to investigators, the toddler apparently reached for an iPod but found the loaded weapon. Police believe the shooting to be accidental ("3-year old," 2015).
Hoover, Alabama (August 18, 2015)—A man shot to death, likely in a tragic accident at the hands of his toddler son, loved his family and died doing one of the things he treasured most—spending time with his kids, family members said. The parents of 31-year-old Divine Vaniah Chambliss believe he was taking a nap with his two-year-old son when the boy found the semi-automatic handgun and pulled the trigger. Chambliss was found dead in the bed from a single gunshot wound to the head just after 3:00 PM Tuesday in a Hoover apartment where he had been watching his son while the boy's mother was at work. "We've talked with the detective, we've talked with the coroner and everything is pointing to the baby," said the victim's father, Larry Chambliss. "I just can't comprehend it." (Robinson 2015a, 2015b)
Similar incidents continue to be reported in the press and through social media almost daily. On April 20, 2016, for instance, a two-year-old Indiana boy fatally shot himself in the evening after discovering a gun in his mother's purse when his mother "momentarily stepped away" and left her purse on the kitchen counter. The Indianapolis Metropolitan Police Department said in a press release that the grieving mother had been questioned by detectives and then released (Larimer 2016). Department Chief Troy Riggs was quoted on April 21st as saying that it's hard to believe something so tragic could happen to such a young person. "I didn't realize that this could occur until I had a gun safety class years ago, and they showed demonstrations of what young children can do with weapons. It's remarkably scary. I'll just be quite frank." His general advice: "never underestimate the ability of a child to fire a weapon. No matter how young they are" (Adams and Ryckaert 2016).
The Painful Reality of Unexpected Consequences
Sadly, evolution hasn't been able to make us excel at looking far enough down the road to see what might be the less than wonderful consequences of what we elect to do with the best will in the world in the here-and-now. In a benchmark 1936 paper published in the first volume of the American Sociological Review, Robert K. Merton at Harvard University popularized the now familiar label "unintended consequences" for such shortsightedness (Merton 1936).
Here is one of our own favorite examples. Sometime in the 1830s or 1840s, rabbits were deliberately brought to New Zealand to be game for sportsmen to hunt, and also as cute little reminders for the British settlers of these islands in the South Pacific of what life had been like back home for them in Merrie Olde England. Unfortunately, as the saying goes, rabbits breed like rabbits. By the 1870s, they had become a dangerous ecological threat reaching plague proportions in some parts of these islands. The impact of vast numbers of rabbits on the landscape was disastrous.
One New Zealand website recently described the problem thus created for this island nation. "Rabbits grazed the hillsides, stripping them of their covering and leaving no vegetation to hold the soil when there were heavy rains in winter, and causing extensive erosion of the land." They competed with sheep for the remaining pasture (12 rabbits eat as much as a single sheep). Farmers who had overstocked their land lost many of their sheep to starvation. "Some high country stations became uneconomic and farmers walked off the land, leaving them to be taken over by the government" ("Rabbits," n.d.).
As another website further observed in 2017:
    Every year on Easter in the Otago region of New Zealand, people gather to shoot thousands of bunnies. A police officer is on hand, not to arrest the hunters but rather to spur them on. This is the annual Easter bunny hunt, and by the time the day is done there will be literal piles of rabbit carcasses. (Chodosh 2017)
Sometimes you can clearly have too much of a good thing, however fuzzy and cute.
Biting the Hand That Feeds You
At this point in our survey of recent evidence bearing on human folly, we want to emphasize, lest we be misunderstood, that what people believe to be so can get in the way of sweet reason in profound ways that are not just momentary lapses due to alcohol, anger, or lack of foresight.
It is well known, for instance, that many political and social conservatives in the United States are staunch opponents of welfare handouts from the federal government and vote against such congressional largess whenever they are given the chance to do so (Newport 2015). Apparently, however, many of those who think and act this way have trouble seeing where the next meal may be coming from. Numerous studies have shown that states where the electorate consistently votes for conservative political candidates, and where people object the loudest to welfare programs, are ironically also the same states receiving the most from the federal coffers.
For instance, in 2014 it was reported that Mississippi gets back about $3 in federal spending for every dollar people there sent to the federal treasury in taxes. Alabama and Louisiana are close behind. South Carolina receives $7.87 back from Washington for every $1 its citizens pay in federal tax. Yet these are some of the most politically conservative states in the Union. In contrast, 14 states including Delaware, Minnesota, Illinois, Nebraska, and Ohio get back less than $1 for each $1 they pay in taxes. As one essayist has observed: "it is red states that are overwhelmingly the Welfare Queen States." The so-called conservative Red States governed by folks who are likely to say government is too big and spending needs to be cut "are a net drain on the economy, taking in more federal spending than they pay out in federal taxes. They talk a good game, but stick Blue States with the bill" (Tierney 2014).
It Really Does Taste Better
Our evident willingness as humans to see and believe what we think to be so rather than what is right there in front of us is not limited to politics or practical concerns. Consider how apparently easy it is for us all to accept the words, claims, and deeds of others at face value when we shouldn't.
Near the end of 2015, for instance, a news story began to circulate on social media about a high-priced chocolate bar made and sold by two handsome bearded brothers in Brooklyn, New York, who had sales outlets also in London and Los Angeles (making these sweets more than simply bicoastal). The store displays for these bars pictured at the Mast Brothers website suggest that the interior designer of their showrooms borrowed much from the layout and appearance of an Apple Computer Store ("Mast," n.d.). It would not be too farfetched to see similarities, in fact, between neatly packaged chocolate bars and customer displays of the latest version of an Apple iPhone—in their size and shape, both products have much in common.
On December 20th, a story in the New York Times summed up this chocolate hullabaloo this way:
    The integrity of a cherished Brooklyn-based brand of craft chocolate bar has been called into question after a food blogger published a four-part series of posts this month that accused the two brothers who founded it of faking how they learned to grind their cacao beans, the ingredients in their candy and even their beards. (Nir 2015)
Whether the allegations put on the table about Rick and Michael Mast and their candy are true or false is of little relevance here. Scott Craig, their major adversary in this bittersweet story, has indeed apparently scored a few points (Craig, n.d.). Yet what most intrigues us about this tempest in a chocolate shop is how successful the Mast brothers have been at convincing customers shopping online in 2019 to shell out $6 or so for a 2.5 oz (70 g) bar of chocolate ($2.40 per ounce) enclosed in a beautifully designed wrapper hinting that the contents therein must be head and shoulders above the widely available commercial bars (currently selling at retail stores in packages of six 1.45 oz bars for around $4.90, or 56¢ per ounce) associated with Hershey, Pennsylvania, 165 miles west of Brooklyn, for over a century (Shanker 2015).
Importantly, as another journalist reporting on this story has remarked:
    If you have $10 to spend on chocolate, you're likely going to feel good about that decision. The price sends a signal to the brain that it's top-quality chocolate, so you may experience a high level of pleasure when you eat it. (Pashman 2015)
In other words, as this same reporter goes on to say, "if you think something tastes better, it really does taste better to you, even if it technically isn't any better."
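For readers who want to check the per-ounce arithmetic behind that comparison, here is a minimal sketch in Python. It uses only the approximate prices quoted above from the reporting cited in this section; the figures, and the resulting roughly fourfold price gap, are illustrative rather than current retail data.
    # Unit-price comparison using the approximate figures quoted above.
    # Prices are the journalists' rounded numbers, not current retail data.

    def price_per_ounce(total_price_usd, ounces):
        """Return the cost per ounce for a given total price and weight."""
        return total_price_usd / ounces

    craft = price_per_ounce(6.00, 2.5)            # one 2.5 oz craft bar at about $6
    commercial = price_per_ounce(4.90, 6 * 1.45)  # six 1.45 oz commercial bars at about $4.90 total

    print(f"Craft bar:      ${craft:.2f} per ounce")        # about $2.40
    print(f"Commercial bar: ${commercial:.2f} per ounce")   # about $0.56
    print(f"Price ratio:    {craft / commercial:.1f}x")     # about 4.3x
Nothing in that ratio, of course, says anything about how either bar actually tastes, which is precisely the point of the reporting quoted above.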
Beyond Belief?
While arguing about the credibility of a bar of chocolate is largely a trivial pursuit, here is another example of people accepting what other people tell them and then acting in ways most would see as contrary to their own interests, one that is almost beyond belief. Independent confirmation of the truth of this event has not been possible because what allegedly happened took place in the city of Raqqa in northern Syria, and the reports available come from admittedly biased sources.
Accepting this information at face value, it seems that a young man named Ali Saqr (said to be 20 or 21) publicly executed his own mother, Lena al-Qasem (45), sometime during the first week of January 2016 before a large crowd of people outside the local post office where she had been employed (Barnard 2016; "Islamic State," 2016). There is some disagreement about the justification given for her death, but it seems the Islamic State had declared his mother guilty of apostasy and had ordered her death. According to one report, the son had reported his mother to his ISIS superiors after quarreling with her. She was then sentenced to die, and he was ordered to be the one to kill her (Hall 2016; Naylor and Moyer 2016).
Our Own Worst Enemy
In light of a news report as disturbing as this last one, what do you think now of Linnaeus's decision that we all should be labeled as Homo sapiens? Does labeling us not just "human" but also "wise" still fit what we are like as an earthly life form after all that has happened in history since Linnaeus died in 1778, including generations of brutally exploitative European colonialism, two devastating world wars, the atomic bombs dropped on Hiroshima and Nagasaki in August 1945, and countless atrocities here and there around the globe ever since then? What is so wise and wonderful about this lengthy story of pain, misery, and wanton death and destruction?
Unquestionably some of the news stories we have just retold resonate with some of the popular explanations for why people sometimes behave in ways that frankly seem pretty foolish when seen through the eyes of others. Certainly, there is no need to reject entirely the commonsense convictions that alcohol, ignorance, and maybe even something that can legitimately be labeled as stupidity can cloud the minds and pervert the actions of each and every one of us. Yet as we said before, easy explanations such as these run the risk of just repackaging reasonable questions as empty answers.
In 1970 the humorist Walt Kelly famously observed: "We have met the enemy and he is us." Although limited in number and therefore not an adequate scientific sampling of human foolishness (or worse), the diverse news accounts just reviewed suggest Kelly was not just being humorous when he offered us this observation. It seems clear that we humans are not just capable of being less than rational creatures. We are also skilled at fooling ourselves when we are doing so.
If this is true, then we are confident Sherlock Holmes himself would agree with us that here is a real clue to solving the mystery of human nature and the brain. The guilty party may well be the brain itself. Before exploring this suspicion more closely, however, we first need to establish more clearly what it means to be human. It is now time to move on and talk about what we call The Great Human High-Five Advantage.
Works Cited
3-year-old accidentally shoots both parents at Albuquerque motel. CBS News, February 1, 2015. Retrieved from: www.cbsnews.com/news/3-year-oldaccidentally-shoots-both-parents-at-albuquerque-motel/
Adams, Michael Anthony and Vic Ryckaert (2016). Child, 2, kills self with gun from mom's purse. IndyStar, April 20, 2016. Retrieved from: www.indystar.com/story/news/crime/2016/04/20/2-year-old-dies-self-inflicted-gunshot-wouldnorthwest-side-police-confirm/83317528/
Barnard, Ann (2016). ISIS militant kills mother on group's orders, activists say. New York Times, January 8, 2016. Retrieved from: www.nytimes.com/2016/01/09/world/middleeast/isis-militant-mother-raqqa.html
Chodosh, Sarah (2017). Sorry, but New Zealand really needs to kill these adorable rabbits. Popular Science, July 28, 2017. Retrieved from: www.popsci.com/new-zealand-needs-to-kill-those-adorable-rabbits-on-its-sheep/
Cleary, Tom (2015). Devon Staples: 5 fast facts you need to know. Heavy, July 8, 2015. Retrieved from: https://heavy.com/news/2015/07/devon-staples-calais-maine-fireworks-accident-man-put-shoot-fireworks-off-head-killed-dead-photosfacebook-4th-of-july-fourth/
Craig, Scott (n.d.). Mast Brothers: What lies behind the beards (Part 1, Taste/Texture). DallasFood. Retrieved from: http://dallasfood.org/2015/12/mast-brotherswhat-lies-behind-the-beards-part-1-tastetexture/
Darwin awards (n.d.). Retrieved from: https://darwinawards.com/
Fiske, Alan Page and Tage Shakti Rai (2015). Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships. Cambridge: Cambridge University Press.
Freedman, Samuel G. (2012). If the Sikh Temple had been a mosque. New York Times, August 10, 2012. Retrieved from: www.nytimes.com/2012/08/11/us/if-the-sikhtemple-had-been-a-muslim-mosque-on-religion.html?ref=topics
Fresno (2015). On-going hate crime investigation. Fresno Police Department, December 27, 2015. Retrieved from: www.facebook.com/FresnoPoliceDepartment/posts/10156341080330006
Goode, Erica and Serge F. Kovaleski (2012). Wisconsin killer fed and was fueled by hate-driven music. New York Times, August 6, 2012. Retrieved from: https://www.nytimes.com/2012/08/07/us/army-veteran-identified-as-suspect-in-wisconsin-shooting.html?_r=1&pagewanted=all
Greenberg, Will (2015). Maine man didn't mean to launch fireworks from his head that killed him, say friends. Washington Post, July 7, 2015. Retrieved from: https://www.washingtonpost.com/news/morning-mix/wp/2015/07/07/friends-familydefend-maine-man-who-killed-himself-with-fireworks/
Hall, John (2016). Isis militant Ali Saqr al-Qasem publicly executes own mother in Raqqa after accusing her of 'apostasy'. Independent, January 8, 2016. Retrieved from: www.independent.co.uk/news/world/middle-east/isis-militant-ali-saqr-al-qasempublicly-executes-his-own-mother-in-raqqa-after-accusing-her-of-a6801811.html
Holley, Peter (2015). Americans attack Sikhs because they think they're Muslims. The Sydney Morning Herald, December 30, 2015. Retrieved from: https://www.smh.com.au/world/americans-attack-sikhs-because-they-think-theyre-muslims-20151228-glw1oz.html
Holley, Peter (2016). Obama White House sends highest-ever official to Sikh house of worship following attacks. Washington Post, January 11, 2016. Retrieved from: www.washingtonpost.com/news/acts-of-faith/wp/2015/12/28/americans-arestill-attacking-sikhs-because-they-think-theyre-muslims/?hpid=hp_no-name_hpin-the-news%3Apage%2Fin-the-news
Islamic State militant "executes own mother" in Raqqa. BBC News, January 8, 2016. Retrieved from: http://www.bbc.com/news/world-middle-east-35260475
Landau, Joel (2015a). Maine man dies after launching firework off his head. New York Daily News, July 6, 2015. Retrieved from: https://www.nydailynews.com/news/crime/maine-man-dies-launching-firework-top-head-article-1.2282157
Landau, Joel (2015b). Maine man who died after putting firework on his head thought it was a dud, almost eloped days before tragic accident. New York Daily News, July 7, 2015. Retrieved from: www.nydailynews.com/news/national/maine-man-died-firework-thought-dud-article-1.2283915
Larimer, Sarah (2016). Two-year-old fatally shoots himself with gun found in mother's purse, police say. Washington Post, April 21, 2016. Retrieved from: https://www.washingtonpost.com/news/post-nation/wp/2016/04/21/two-year-old-fatallyshoots-himself-with-gun-found-in-mothers-purse-police-say/
Libardi, Manuella (2015). Alligator in deadly attack shot and killed. Chron, July 7, 2015. Retrieved from: https://www.chron.com/news/article/Alligator-suspectedof-deadly-attack-shot-and-6369345.php
Man killed by giant 11 ft alligator after ignoring warning signs and jumping into Texas bayou taunted the beast before leaping into the water. Daily Mail, July 5, 2015. Retrieved from: https://www.dailymail.co.uk/news/article-3149839/Man-killedalligator-Texas-mocked-beast-jumped-marina.html
Marks two years since Sikh Temple shooting. NBC News, August 5, 2014. Retrieved from: www.nbcnews.com/news/asian-america/oak-creek-communitymarks-two-years-sikh-temple-shooting-n171981
Mast. Retrieved from: mastbrothers.com/
Merton, Robert K. (1936). The unanticipated consequences of purposive social action. American Sociological Review 1: 894–904. Retrieved from: www.jstor.org/stable/2084615
Naylor, Hugh and Justin Wm. Moyer (2016). Islamic State fighter publicly executes own mother, Syrian activists say. Washington Post, January 8, 2016. Retrieved from: https://www.washingtonpost.com/news/morning-mix/wp/2016/01/08/islamicstate-fighter-publicly-executes-his-mother-report-says/
Nelson, Sara C. (2015). Tommie Woodward killed by alligator seconds after leaping into water screaming "f**k that alligator." Huffington Post, July 7, 2015. Retrieved from: https://www.huffingtonpost.co.uk/2015/07/06/tommie-woodward-killedalligator-leaping-marina-screaming-f-k-alligator_n_7733946.html
Newport, Frank (2015). Mississippi, Alabama and Louisiana most conservative states. Gallup, February 6, 2015. Retrieved from: www.gallup.com/poll/181505/mississippi-alabama-louisiana-conservative-states.aspx
Nir, Sarah Maslin (2015). Unwrapping the mythos of Mast Brothers chocolate in Brooklyn. New York Times, December 20, 2015. Retrieved from: www.nytimes.com/2015/12/21/nyregion/unwrapping-mast-brothers-chocolatier-mythos.html
Pashman, Dan (2015). Are you a sucker if you like Mast Brothers chocolate? The Salt, December 23, 2015. Retrieved from: www.npr.org/sections/thesalt/2015/12/23/460819387/are-you-a-sucker-if-you-like-mast-brothers-chocolate
Preuss, Andreas (2015). Man mocks alligators, jumps in water and is killed in Texas. CNN, July 4, 2015. Retrieved from: https://www.cnn.com/2015/07/04/us/texas-alligator-attack/index.html
Rabbits – Rāpeti. Introduction into New Zealand. Christchurch City Libraries. Retrieved from: https://my.christchurchcitylibraries.com/rabbits/
Robinson, Carol (2015a). "I hurt my Dad": Family distraught after 2-year-old boy accidentally shoots, kills father. AL.com Birmingham Real-Times News, August 20, 2015 (revised January 13, 2019). Retrieved from: https://www.al.com/news/birmingham/index.ssf/2015/08/i_hurt_my_dad_family_distraugh.html
Robinson, Carol (2015b). Man fatally shot at Hoover apartment; child possibly pulled trigger. AL.com Birmingham Real-Times News, August 18, 2015 (revised January 13, 2019). Retrieved from: https://www.al.com/news/birmingham/index.ssf/2015/08/man_fatally_shot_at_hoover_apa.html
Shanker, Deena (2015). How the Mast Brothers fooled the world into paying $10 a bar for crappy hipster chocolate. Quartz, December 17, 2015. Retrieved from: https://qz.com/571151/the-mast-brothers-fooled-the-world-into-buying-crappy-hipsterchocolate-for-10-a-bar/
Stack, Liam (2015). Hate crime inquiry opened into vandalism of Sikh Temple in California. New York Times, December 9, 2015. Retrieved from: http://www.nytimes.com/2015/12/10/us/hate-crime-inquiry-opened-into-vandalism-of-sikh-templein-california.html
Thaler, Richard H. (2015). Misbehaving: The Making of Behavioral Economics. New York: Norton.
Thaler, Richard H. and Cass R. Sunstein (2011). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Tierney, John (2014). Which states are givers and which are takers? The Atlantic, May 5, 2014 (revised March 8, 2017). Retrieved from: https://www.theatlantic.com/business/archive/2014/05/which-states-are-givers-and-which-are-takers/361668/
Yaccino, Steven, Michael Schwirtz, and Marc Santora (2012). Gunman kills 6 at a Sikh Temple near Milwaukee. New York Times, August 5, 2012. Retrieved from: www.nytimes.com/2012/08/06/us/shooting-reported-at-temple-in-wisconsin.html
4 THE GREAT HUMAN HIGH-FIVE ADVANTAGE
What Makes Us Human?
What exactly is it that makes us human rather than, say, just another somewhat sizeable ape generally bigger than a chimpanzee but smaller than a gorilla? Do you know? Does anyone know?
Anthropologists pride themselves on being the world's leading experts when it comes to knowing what it means to be human in all our many and astonishingly diverse ways. Take language, for instance. Estimates vary, but it is said that something like 6,500 different languages are still being spoken around the globe despite the dominance in global social media and news journalism of only a handful such as English, Spanish, French, and Mandarin Chinese. Similarly, historians are given to patting themselves on the back for being able to show you in great detail that despite popular claims to the contrary, human history, too, is diverse and never actually repeats itself.
But isn't all this obvious? Like the proverbial snowflake, aren't people and events always somehow unique unto themselves and perhaps even special? Or do you disagree?
In any case, if you have ever been sitting in an introductory anthropology or history class at a college or university, you have undoubtedly heard the news from the professor at the front of the lecture hall. Like the popular expression about infinite regress that asserts "it's turtles all the way down," so, too, when it comes to being human, it is diversity all around and down through history. Consequently, there is no such thing as human nature, if by "nature" one means what everyone has in common with everybody else on Earth (Ingold 2006). Similarly, history may possibly have trends, but no real patterns. End of story. You may now close your class notebook, or turn off your laptop computer.
Or maybe you shouldn't. Such dismissive claims are wrongheaded, or at any rate, overstatements. Variety may be a lot more than just the spice of life, but there definitely are natural limits to variation and diversity. There are also patterns to life and change over time. Why? Because the world is not a wholly random and unpredictable place. Things and people certainly can differ widely but not beyond real limits, not beyond natural constraints. Hence, as the late anthropologist Robert Sussman at Washington University in St. Louis once wrote: "Is there something we can call human nature? Of course there is. Humans generally behave more like each other than they do like chimpanzees or gorillas" (Sussman 2010, 514).
Key here are the words "humans generally behave." The phrase "human nature" does not have to mean what all people everywhere on earth must do to deserve being labeled as human beings. Instead what is involved is trying to understand how evolution has made it possible for us to do some things well, some things not so well, and other things not at all. What then are the constraints and opportunities that evolution has given us? Where does the human brain fit into the story of us as a species? Is the brain perhaps more of a handicap than most of us realize?
Brain Basics
As big as it is, the human brain is by no means the largest one on earth. An elephant's brain is two to three times larger than ours; the brains of some species of whales are four to six times larger than ours. Therefore, as the Brazilian neuroscience researcher Suzana Herculano-Houzel has asked, if we are the most intelligent of animals, how come we are not the animal with the biggest brain? (Herculano-Houzel 2016).
Her answer to this question is reassuring. When it comes to neurons rather than size and weight, we are at the top of the chart. Furthermore, she says we got up there because our evolutionary ancestors discovered how to cook, making it possible to feed the astonishing 85–95 billion nerve cells in our brains efficiently and well.
    The evolution of the human brain, with its high metabolic cost imposed by its large number of neurons, may thus only have been possible because of the use of fire to cook foods, enabling individuals to ingest in very little time the entire caloric requirement for the day, and thereby freeing time to use the added neurons to their competitive advantage. (Herculano-Houzel 2012, 10667; also Wrangham 2009)
The idea that what happens in the kitchen makes the human brain biologically possible is appealing. Famous chefs like Julia Child, James Beard, and the unknown inventor of the Krispy Kreme hamburger (also known as the Luther Vandross burger) would undoubtedly be happy to endorse this hypothesis. However, while cooking may be vitally necessary to feed our neuron-rich brains efficiently and well, being able to eat food made more digestible and easier to chew by cooking is at best only part of the reason each of us walks around with a head on our shoulders we can be justly proud of (Pontzer et al. 2016).
John Terrell's mother liked to remind him when he was a child many years ago that "just because you can doesn't mean you should." The reverse would be a better way of describing how evolution works. Why? Because evolution is not insightful. Getting to the point in human evolution where cooking food became something our prehistoric ancestors were able to do had nothing to do with the fact that cooking makes eating a more wholesome, nutritious, and expedient way of staying alive and reproducing our kind. Furthermore, evolution is amoral. There are no "shoulds" in evolution.
Consequently, getting down to the historical nitty-gritty, how our prehistoric ancestors got to the point where they could cook calls for an explanation entirely apart from whether cooking made it possible for them to evolve finer brains than their more ape-like evolutionary predecessors. They did not develop their talents in the kitchen because cooking made the evolution of a neurologically bigger brain possible. Instead, knowing how to fire up the barbeque or bring water to a boil was an unanticipated side benefit of doing something else that was helping them survive and reproduce their kind. Hence the real question here is not "do you think this has enough salt?" Rather, the issue is how our ancestors got to that point in the past where sautéing before chewing became an achievable human fireside undertaking.
On the Fingers of One Hand The American philosopher and psychologist William James remarked in 1890 that it has long been conventional wisdom to attribute the many skills of the human mind to something called the soul ( James 1890). A less standard way, however, of resolving the riddle of becoming human is to look for what are the common characteristics we all have as human beings. Human beings cannot fly without using technological enhancements to their natural abilities such as hang gliders and airplanes. On the other hand, we can climb trees, although certainly not as well as the common tree squirrel or a colobus monkey. What we are great at, however, can be spectacular. Such as parlor tricks, word games, and building railroad bridges. And like it or not, we have evolution to thank for making these human achievements possible. Seen from this perspective, at least five things mark us off as a species. Importantly, it is the combination of these characteristics, not any one of them alone, that sets us apart and has enabled us to evolve to be the creatures we are today. This is why we like to call them The Great Human High-Five Advantage. What are these five characteristics? To tell you about them, we need you to lend us a hand. To see firsthand how Darwinian evolution has both benefited and cursed us, please hold up your right hand about 2 ft away from your nose, palm facing you as you admire the fine architectural arrangement of your five fingers (yes, we are going to assume you have the usual five, and please accept our apologies if
FIGURE 4.1 The high-five advantage.
for some reason, you do not). We use these fingers on our own hands to help us remember what we see as the main characteristics of human nature. Listing them all on one hand also reminds us that while none of these ingredients may be truly unique to human beings, how well they work together in our case may, in fact, be quite special and even unique to our species (Figure 4.1). By the way, please don’t feel like you have to keep your hand up there in front of your face any longer if you don’t want to. We asked you to do so just to put you in the mood. Whatever you decide to do, let’s start with your little finger. We use the pinky to symbolize the first of the five key characteristics we want to talk about: the helplessness and vulnerability of newborn human beings.
Social Nurturance The social psychologist Matthew D. Lieberman at the University of California, Los Angeles, has written passionately about the human brain and how we have been wired by evolution to be social creatures (Lieberman 2013; also Junger 2016). He is right, and the reasons why we are so social lie deep within our evolutionary past. As he has observed, some parts of the human social mind “can be traced back to the earliest mammals hundreds of millions of years ago. Other parts of the social mind evolved very recently and may be unique to humans.”
There are many payoffs to being social animals, as challenging as such a lifestyle can often be. For starters, as Lieberman has remarked, we may think we are built to maximize our own pleasure and minimize our own pain. Not really. Instead we are built to overcome our own pleasure and maybe even our own pain. Why so? One reason stands out from the others. If we weren’t prepared to attend to others, our offspring would not make it past the first day of their lives. According to evolutionary biologists, humans belong in the zoological grouping of mammals called primates, a category that includes both living and fossil lemurs, lorises, tarsiers, monkeys, and apes, as well as ourselves. As a primate, however, we are uniquely different from all the others, living and dead, in a telling way. When you are born, there is nothing strikingly odd about the size of your brain relative to your overall body size. What happens during your first year of life is another story. During those 12 months, the human brain grows at a rate setting you apart from every other primate, indeed from every other animal on earth. Instead of just doubling in size, as is true for other primates, the human brain becomes nearly four times bigger. The slowing down of brain growth starting around the time of birth for non-human primates does not happen for our kind of primate until around the end of a baby’s first year. Here is another way to describe this biological difference between ourselves and other primates. A human pregnancy is usually said to take about nine months. Looked at from the brain’s point of view, however, a human pregnancy might be reckoned instead as taking 21 months: 9 months in the womb, and another 12 months outside it. Why take note of this? If a human baby waited for 21 months to be born, she or he would never make it out of the womb alive. Nor would any mother survive the ordeal (Martin 2013). Given this information, it is not surprising that human newborns are as helpless as they are when they start demanding our attention. Perhaps the evolutionary implication of this reality, however, may be less obvious (Trevathan and Rosenberg 2016). During our evolution as a primate species, natural selection took our ancestors down a treacherous path. As our forerunners became increasingly committed to having big brains with lots of neurons in them, the survival of their offspring also became increasingly risky. If they hadn’t been willing to be good parents, their young would have perished. Had that happened, we would now not be here on earth to write about them or wonder if they knew any lullabies or nursery rhymes. In the jargon of evolutionary biology today, humans are obligate social animals. Therefore, one of the defining characteristics of what it means to be human is a deeply felt emotional willingness to nurture our young (Eagleman 2015). Yes, not all of us feel as strongly motivated to do so as others. Some of us are willing to be downright mean to anyone smaller than they are (Watts 2017). But thankfully most of us are willing to be a parent. Thankfully, too, many of us are really good at the job.
Social Learning Leaving the pinky behind, we move on to the next finger in line, the one many people like to put a ring around. For us, this finger stands for how utterly dependent we all are on what we learn about life from others around us, and from the generations of human beings who lived before us. One of the oldest debates in philosophy, psychology, human ethics, and jurisprudence is one about nature versus nurture. Nature: Do we do what we do as human beings because we have been biologically programmed by evolution and the laws of genetics to do so? Nurture: Or because we humans have a choice in the matter, and what we choose to do depends a lot on what we have been taught how to do by our own experiences and by others? On hearing that evolution took our ancestors down a treacherous path when they started producing offspring with increasingly more neuron-rich brains, a reasonable question would be why do something so unbelievably risky? What in the world could have been the payoff from a biological survival and reproduction point of view? Answer: It must have become a really good thing to have more and more neurons. But why? Scientists, philosophers, and others have long pondered this basic question. The answers suggested are many and varied. Some experts note that primates in general have distinctly large brains relative to their body mass even by mammalian standards. Primates are also often highly social creatures. Therefore, perhaps primates need brains larger than most mammals so that they can compete with others of their own kind in the life-long evolutionary struggle for food and favorable reproductive encounters. Or from this same competitive perspective but on a more positive note, perhaps having a really big brain may have made it easier for our prehistoric primate forerunners to develop strategic alliances with others around them in their social sphere. And easier, as well, to track those who were being self-serving freeloaders living off the tolerance and goodwill of others in the group (Barrett and Henzi 2005). These several variant ways of accounting for why primates in general have larger brains than other kinds of creatures have collectively been called the “social brain hypothesis” (Adolphs 2009; Dunbar 1998; Dunbar and Shultz 2007; Tomasello and Gonzalez-Cabrera 2017). As the anthropologist Robin Dunbar has remarked, however, there are also other likely payoffs beyond these possibly explaining why humans have large brains that are able to do so much more than what could be biologically pre-installed in the human cranium in just the nine months or so before birth. In fact, although the idea that we have a big brain because we are such socially crafty creatures may be pleasing to our human sense of self, it is likely instead that primates in general, not just humans in particular, have large brains also for dietary and other ecological reasons rather than just social ones (DeCasien et al. 2017). Whatever the explanation for why we have such impressive brains, the catch here is this one. Humans are not born already knowing all they need to know
to survive, prosper, and reproduce their kind, and they cannot somehow magically mind-to-mind absorb from others via mental telepathy what they need to know about how to get along in life. Consequently, they must at least be born with, or rapidly acquire somehow, the willingness and intellectual capacity to learn from others who have already mastered what has to be done to make it successfully through life (Boyd et al. 2011). Therefore, it is not at all surprising that evolution has promoted a biologically inheritable human openness, or capacity, to teach and to learn from others (Kinreich et al. 2017; Sigman 2017, 228–236; Simon 1996, 44–45). To use again the jargon of evolutionary biology, we are not just obligate social animals, we are also obligate social learners (Costandi 2016; Stagoll 2017). Why do we emphasize so strongly the role of social learning in the Great Human High-Five Advantage? Because it is obvious to anyone with their eyes wide open that people around the world have come up with different—sometimes almost unbelievably different—yet similarly effective ways of achieving the same or certainly very similar goals and desires. This fact of human life is perhaps most apparent in the astonishing diversity of languages, customs, and even desires around the world. In short, much of what is artificial about the world we humans have created for ourselves is not just artificial, but also arbitrary and diverse (Lende and Downey 2012). As you read further in this book, please do keep in mind these three words: artificial, arbitrary, and diverse. They are key to a lot of what makes us human.
Social Networks With your pinky and ring finger now accounted for, what does your middle finger represent in our handy way of thinking about what makes us human— and why as a species we have been blessed with the unusually large brain we have been gifted with by evolution? Somewhat ironically perhaps considering how this finger is sometimes used to signal precisely the opposite, the middle finger stands for our social ties with others of our kind (Cacioppo et al. 2010). At birth and for many years thereafter, too, every human being needs a dependable network of social support and nurturance to get through life intact and relatively happy. Even diehard hermits get their start in life in this highly social way however totally they may later try to turn their back on society. True, humans may not be unique in the animal world for their dependence on both social support and social learning for their survival, well-being, and happiness. There is no doubt, however, that we are a strikingly social species not only for support, nurturance, and education, but in another way, too. We survive and flourish best as individuals when we are linked far and wide with others of our kind in productive and enduring social networks. Scientists and philosophers have generally found it hard to explain why humans are obligate social creatures. In part because Charles Darwin’s theory of evolution by means of natural selection views all life on earth as caught up in a
constant and competitive struggle for existence. So it seems difficult to understand why sharing and cooperation in social networks would ever win out over individualistic selfishness and self-serving behavior if it is true that competition is the big driving force behind the evolution of all life on earth. This is perhaps why evolutionary biologists in particular often seem disparaging about what it means to be human. For instance, the renowned evolutionist Edward O. Wilson at Harvard University has written that when asked if people are innately aggressive, he says: “This is a favorite question of college seminars and cocktail party conversations, and one that raises emotion in political ideologues of all stripes. The answer to it is yes” (Wilson 1978, 99). What Wilson is evidently accepting too easily is the old folk idea that down deep inside Homo sapiens is a dangerously “tribal” species (Terrell 2015). As he himself has phrased this conventional wisdom: Our bloody nature, it can now be argued in the context of modern biology, is ingrained because group-versus-group was a principal driving force that made us what we are. . . . Each tribe knew with justification that if it was not armed and ready, its very existence was imperiled. (Wilson 2012, 62) Anthropologists, however, know from their own firsthand experiences living in exotic places here, there, and elsewhere around the globe that this unpleasantly negative view of human social life is not only naïve but also a fine example of how even scientists are sometimes willing to accept folk beliefs as genuine statements of fact. Why humans are not a tribal species—whatever this is taken to mean—will be spelled out later when we ask what can be done to save our species from extinction. In a nutshell, however, the problem we all must confront isn’t that we are born with a “tribal app,” so to speak, pre-installed (answer: we aren’t). Nor do we all inevitably become tribal afterwards. Instead, it is just too damn easy for each of us to become “provincial” (Parkinson et al. 2018). If this sounds mysterious, stay tuned for what we say in the final chapters of this book. The issue of tribalism is not a trivial matter. Take, for example, the contentious social and political issue of race. From a folk perspective, it seems obvious enough that different sorts of people live in different parts of the world. Who could possibly mistake an African for an Asian or an Irishman? From a social networks perspective, however, it is a no-brainer to see that everybody on earth is networked with everyone else by “six degrees of separation.” Hence seeing separate races here, there, and everywhere is like mistaking red herrings for things that are real (Marks 2017). In fact and not just in fantasy, the many kinds of social ties with others we have as human beings connect us with others of our kind even farther and wider away than most of us are aware of even nowadays when the Internet, WiFi, and mobile phones are on hand to help many of us form new social connections and keep our old ones alive and actively engaged.
Therefore, yet another characteristic of being human along with social support and social learning is social networking. And please do not be misled by the current popularity of Twitter or Facebook. Word of mouth before the Internet was just as effective even if such an old-fashioned way of networking may have taken longer to get the word around. Moreover, writing something called letters worked, too, long before the digital age (Christakis 2019).
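The “six degrees of separation” remark above can be given a concrete feel with a toy network model. The sketch below is our own illustration (not anything from the works cited here) and uses the networkx library’s Watts–Strogatz “small-world” graph: everyone is tied mostly to near neighbors, yet even a small fraction of long-range ties collapses the average number of steps between any two people.

```python
# Toy illustration of why a few far-flung ties make a large world feel small.
# Requires the networkx package (pip install networkx).
import networkx as nx

N_PEOPLE = 2000   # size of the toy population (an assumption, kept small for speed)
NEIGHBORS = 10    # each person starts out tied to 10 near neighbors (an assumption)

for p_long_range in (0.0, 0.01, 0.1):
    # p_long_range is the chance that a local tie is rewired to someone far away
    g = nx.connected_watts_strogatz_graph(N_PEOPLE, NEIGHBORS, p_long_range, seed=42)
    avg_steps = nx.average_shortest_path_length(g)
    print(f"long-range tie probability {p_long_range:.2f}: "
          f"average separation ~{avg_steps:.1f} steps")
```

With no long-range ties at all, people in this toy world sit about a hundred steps apart on average; rewire even a small fraction of ties and the figure collapses toward the famous handful of steps, which is the intuition behind the six-degrees claim.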
Fantasy and Imagination If your pinky represents social nurturance (our evolved and largely ingrained willingness to care for others and be part of their lives), your ring finger signifies social learning (which makes the pool of human wisdom we can draw upon as individuals vastly larger than the size or lifetime of a single human brain), and—politely, please—your middle finger stands for our species’ highly effective social networking skills (which can vastly extend the range and richness of our human, emotional, and practical resources, but which can also hem us into spending most of our time in our own narrow-minded cliques), then what does your index finger represent in our five-finger deployment of human nature? Somewhat appropriately, we think, this finger represents our species’ highly evolved powers of fantasy and imagination. Much has been written about how the human brain comes up with new ways to think about things, make discoveries, create fantasies, cook up excuses, and sometimes end up totally deluding itself into believing things that simply are not so (Sowden et al. 2015). In The Act of Creation, for instance, the novelist Arthur Koestler was bold enough back in 1964 to offer the world a seemingly complete guide to the conscious and unconscious processes underlying scientific discovery, artistic originality, and comic inspiration. In Koestler’s account, creative activities ranging all the way from the fertilization of an egg to the fertile brain of the creative individual display a common pattern. Brilliant ideas or great jokes, for example, come into focus when thoughts derived from different ways of thinking about things are brought together—a particular instance of what he saw as a universal creative process he called bisociation (Koestler 1964). In a similarly comprehensive study, the cognitive psychologist Margaret Boden in 1990 agreed with Koestler that being creative can mean discovering connections between thoughts and things that others have not noticed before, but she underscored that being able to do so “requires the skilled, and typically unconscious, deployment of a large number of everyday psychological abilities, such as noticing, remembering, and recognizing. Each of these abilities involves subtle interpretative processes and complex mental structures” (Boden 2004, 22; Malmberg et al. 2019). We agree with Boden. Koestler’s suggestion that creativity can be reduced to a single word may be appealingly simple, but as the philosopher Alfred North Whitehead once cautioned, seek simplicity and distrust it. But then
what is creativity if whatever it is adds up to more than just something rather exotically called bisociation? We will have more to say about the brain’s powers of fantasy and imagination later on. Right now we are only focusing on what it is that makes us human. In this regard, it has long been popular to say that while both fantasy and imagination are unquestionably characteristics of our species, both are mainly the intellectual gifts of the unusually talented few among us. While again the explanation needs to wait until later, this is a misreading of how the human mind works. Fantasy and imagination are part and parcel of being human for all of us because our brains have to work creatively on our behalf every day of our lives to keep us safe and sound. It may well be, as Sherlock Holmes might observe, that as a species we excel at dumbing down the world around us to make the challenges we must deal with as human beings as humdrum and predictable as possible (Chapter 1). Yet as skillful as we are at simplifying and dumbing down the world, all of us need to be creatively ingenious at least now and then in how we handle what arrives on our doorstep unexpected, uninvited, and sometimes downright dangerous. While it may perhaps be true, therefore, that only some of us are as clever as Alice’s creator Lewis Carroll at expressing our inner thoughts in ways that can entertain and inspire countless other human beings, let’s not lose sight of an essential truth about ourselves as a life form. We are all capable of imagining sights, sensations, and unfolding stories that come from somewhere within the brain, not from sources in the outer world. More to the point, our inner fantasy lives may be what is most distinctive about being human. Dogs and cats, too, appear to be capable of having dreams. But do they have wild and zany insights? Brilliant ideas? Delusions of grandeur? Bizarre sexual fantasies?
Social Collaboration We are now at last at finger no. 5, the big one, the thumb. What characteristic of human nature does this finger, also known as the opposable digit, represent? The anthropologist Agustín Fuentes has written often about the critical role that human imagination has played in nurturing and advancing the evolution of our species. From a commonsense point of view, fantasy and imagination are the two sides of a coin called creativity. However commonsensical, saying this is too simple. As Fuentes has emphasized, “the initial condition of any creative act is collaboration” (Fuentes 2017b, 2). It is nearer the truth, in other words, to say that they are two of the three parts of an elementary formula that can be written out as imagination + collaboration = creativity. This simple equation may not always hold true, of course. Indeed, you can almost hear generations of talented and not so talented artists of all sorts loudly crying—This is nonsense! I am an artist! I don’t need to cooperate with anyone else. I just have to let my creative juices flow! And besides, I don’t give a damn whether other people “get” what I’m creating. I do my art for me, not for others.
Beyond saying the two of us find this decidedly elitist view of human creativity personally hard to swallow, when it comes to human evolution, human nature, and the survival of our human species, Fuentes is right. The sort of creativity that does make a difference in the real world of give-and-take and the struggle for existence is dependent on our evolved capacity as human beings to work more or less well with others to get things done (Shenk 2014). Therefore, however elementary the equation imagination + collaboration = creativity may seem to you, this formula captures not only what distinguishes us in this way as a species, but also what has enabled us to get to where we are today as one of the earth’s most dominant organisms. Genuinely collaborative human creativity, however, is not easy to achieve. Sadly perhaps, evolution has most decidedly not made it possible for us to read what others are thinking. Hence, as we will be discussing later at some length (Chapters 9 and 10), working well together can be astonishingly difficult to pull off not just because all of us can be moody, cantankerous, spiteful, selfish, and the like now and then, but also because—to use an old-fashioned word— fantasy and imagination are “properties” of individual minds, not collective social phenomena. Or said another way, fantasy and imagination reside in the brain’s hidden world of thoughts, dreams, and delusions—in the topsy-turvy realm of Alice’s Wonderland and Looking-Glass House—not in the outer world of politics, social problems, climate change, and murderous intrigue.
The Great Human High-Five Advantage The brilliant nineteenth-century Polish composer and piano virtuoso Frédéric Chopin wrote beautiful études—musical compositions—cleverly designed to exercise all five fingers of a pianist’s hand. Agustín Fuentes has described our human talent at moving back and forth between the realms of “what is” and “what could be” as the defining characteristic of our kind of animal, one that has enabled us to move beyond just being a successful species to become a truly exceptional one. As we will be exploring in the chapters to come, we agree with him that humans have incredible creative potential. Our knack for creating megacities, double-decker airplanes, cures for hundreds of diseases, symphonies, and virtual reality games, among other remarkable inventions, attests to our capacity to imagine possibilities and make them real. We identified this human potential long ago, when we named our own species “sapiens”, which means “wise”. (Fuentes 2017a) Where we would perhaps part company with him is when he adds that by labeling our species Homo sapiens, Carl Linnaeus in the eighteenth century was
implying that our human potential for being creative is what makes us sapiens, that is, wise. Linnaeus originally classified our species in his masterful Systema Naturae, the first edition of which was published in 1735, simply as Homo without any second term pinning us down more specifically. He only added the word sapiens in 1758 in the 10th edition of this greatly influential classification of life on earth. Before then he had just coupled the label Homo with the somewhat cryptic side remark nosce te ipsum, words in Latin meaning “know thyself.” Even after adding the now familiar qualifier sapiens, Linnaeus evidently was still just implying that the only truly unique characteristic distinguishing us from other species of apes included in his classification is our human ability to recognize ourselves as human (Agamben 2001, 23–27)! Whether Linnaeus would be willing to agree with us that what marks humans off from other species isn’t that we are rational but rather playful and ingenious thinkers may be something that historians will never be able to pin down for sure. In any case, the time has now come to move on from asking what are the five main characteristics of being human to ask two similarly fundamental questions about our species. First, how does your brain know about things beyond the narrow confines of your bony skull? Second, how does your brain go about making sense of what your senses are telling you about what is evidently going on “out there” in the world? Now here is a worry we have. At this very moment you may be saying to yourself that both of these elementary questions sound boringly academic. If this is your reaction to our asking them, please do not turn your back on us, put this book down, and walk away. Give us the benefit of the doubt. We are going to do our best in the chapters that follow to convince you that having good answers to both of these questions is key to exploring what is happening up there at the upper end of your spine in a place that Dr. John H. Watson has told us Sherlock Holmes calls his “little brain attic.”1
Note 1 Sherlock Holmes to John H. Watson in A Study in Scarlet by A. Conan Doyle, 1887, Chapter 2, “The science of deduction”: “You see,” he explained, “I consider that a man’s brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things so that he has a difficulty in laying his hands upon it. Now the skillful workman is very careful indeed as to what he takes into his brain-attic. He will have nothing but the tools which may help him in doing his work, but of these he has a large assortment, and all in the most perfect order. It is a mistake to think that that little room has elastic walls and can distend to any extent. Depend upon it there comes a time when for every addition of knowledge you forget something that you knew before. It is of the highest importance, therefore, not to have useless facts elbowing out the useful ones.”
Works Cited Adolphs, Ralph (2009). The social brain: Neural basis of social knowledge. Annual Review of Psychology 60: 693–716. doi:10.1146/annurev.psych.60.110707.163514 Agamben, Giorgio (2001). The Open: Man and Animal. Stanford, CA: Stanford University Press. Barrett, Louise and Peter Henzi (2005). The social nature of primate cognition. Proceedings of the Royal Society B, 272: 1865–1875. doi:10.1098/rspb.2005.3200 Boden, Margaret (2004). The Creative Mind: Myths and Mechanisms, 2nd ed. London: Routledge. Boyd, Robert, Peter J. Richerson, and Joseph Henrich (2011). The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences 108, Suppl. 2: 10918–10925. doi:10.1073/pnas.1100290108 Cacioppo, John T., Gary G. Berntson, and Jean Decety (2010). Social neuroscience and its relationship to social psychology. Social Cognition 28: 675–685. doi:10.1521/soco.2010.28.6.675 Christakis, Nicholas A. (2019). Blueprint: The Evolutionary Origins of a Good Society. New York: Little, Brown Spark. Costandi, Moheb (2016). Neuroplasticity. Cambridge, MA: MIT Press. DeCasien, Alex R., Scott A. Williams, and James P. Higham (2017). Primate brain size is predicted by diet but not sociality. Nature Ecology and Evolution 1: article no. 0112. doi:10.1038/s41559-017-0112 Dunbar, Robin I. M. (1998). The social brain hypothesis. Evolutionary Anthropology 6: 178–190. doi:10.1002/(SICI)1520-6505(1998) Dunbar, Robin I. M. and Susanne Shultz (2007). Evolution in the social brain. Science 317: 1344–1347. doi:10.1126/science.1145463 Eagleman, David (2015). The Brain. The Story of You. New York: Pantheon Books. Fuentes, Agustín (2017a). Creative collaboration is what humans do best. New York, March 22, 2017. Retrieved from: https://www.thecut.com/2017/03/how-imaginationmakes-us-human.html Fuentes, Agustín (2017b). The Creative Spark: How Imagination Made Humans Exceptional. New York: Penguin. Herculano-Houzel, Suzana (2012). The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. Proceedings of the National Academy of Sciences 109, Supplement 1: 10661–10668. doi:10.1073/pnas.1201895109 Herculano-Houzel, Suzana (2016). The Human Advantage. A New Understanding of How Our Brain Became Remarkable. Cambridge, MA: MIT Press. Ingold, Tim (2006). Against human nature. In Evolutionary Epistemology, Language and Culture: A Non-Adaptationist, Systems Theoretical Approach, edited by Nathalie Gontier, Jean Paul Van Bendegem, and Diederik Aerts, vol. 39, pp. 259–281. Dordrecht: Springer Science & Business Media. James, William (1890). The Principles of Psychology. New York: Henry Holt & Co. Junger, Sebastian (2016). Tribe. On Homecoming and Belonging. London: 4th Estate. Kinreich, Sivan, Amir Djalovski, Lior Kraus, Yoram Louzoun, and Ruth Feldman (2017). Brain-to-brain synchrony during naturalistic social interactions. Scientific Reports 7: 17060. doi:10.1038/s41598-017-17339-5 Koestler, Arthur (1964). The Act of Creation. London: Hutchinson & Co. Lende, Daniel H. and Greg Downey (2012). The Encultured Brain: An Introduction to Neuroanthropology. Cambridge, MA: MIT Press.
Lieberman, Matthew D. (2013). Social. Why Our Brains Are Wired to Connect. New York: Crown Publishers. Malmberg, Kenneth J., Jeroen G. W. Raaijmakers, and Richard M. Shiffrin (2019). 50 years of research sparked by Atkinson and Shiffrin (1968). Memory & Cognition 47: 561–574. doi:10.3758/s13421-019-00896-7 Marks, Jonathan (2017). Is Science Racist? Malden, MA: Polity Press. Martin, Robert (2013). How We Do It: The Evolution and Future of Human Reproduction. New York: Basic Books. Parkinson, Carolyn, Adam M. Kleinbaum, and Thalia Wheatley (2018). Similar neural responses predict friendship. Nature Communications 9: 332–345. doi:10.1038/ s41467-017-02722-7 Pontzer, Herman, Mary H. Brown, David A. Raichlen, Holly Dunsworth, Brian Hare, Kara Walker, Amy Luke, Lara R. Dugas, Ramon Durazo-Arvizu, Dale Schoeller, Jacob Plange-Rhule, Pascal Bovet, Terrence E. Forrester, Estelle V. Lambert, Melissa Emery Thompson, Robert W. Shumaker, and Stephen R. Ross (2016). Metabolic acceleration and the evolution of human brain size and life history. Nature 533: 390–392. doi:10.1038/nature17654 Shenk, Joshua Wolf (2014). Powers of Two: Finding the Essence of Innovation in Creative Pairs. Boston, MA: Houghton Mifflin Harcourt. Sigman, Mariano (2017). The Secret Life of the Mind: How Your Brain Thinks, Feels, and Decides. New York: Little, Brown and Company. Simon, Herbert A. (1996). The Sciences of the Artificial. Cambridge, MA: MIT Press. Sowden, Paul T., Andrew Pringle, and Liane Gabora (2015). The shifting sands of creative thinking: Connections to dual-process theory. Thinking & Reasoning 21: 40–60. doi:10.1080/13546783.2014.885464 Stagoll, Brian (2017). Systemic therapy and the unbearable lightness of psychiatry. Australian and New Zealand Journal of Family Therapy 38: 357–375. doi:10.1002/anzf.1234 Sussman, Robert (2010). Human nature and human culture. In: Agustín Fuentes, Jonathan Marks, Tim Ingold, Robert Sussman, Patrick V. Kirch, Elizabeth M. Brumfiel, Rayna Rapp, Faye Ginsburg, Laura Nader, and Conrad P. Kottak, On nature and the human. American Anthropologist 112: 512–521. doi:10.1111/j.1548-1433.2010.01271.x Terrell, John Edward (2015). A Talent for Friendship. New York: Oxford University Press. Tomasello, Michael and Ivan Gonzalez-Cabrera (2017). The role of ontogeny in the evolution of human cooperation. Human Nature 1–15. doi:10.1007/s12110-017-9291-1 Trevathan, Wenda R. and Karen R. Rosenberg, eds. (2016). Costly and Cute: Helpless Infants and Human Evolution. Albuquerque: University of New Mexico Press. Watts, Amanda (2017). An 8-month-old baby is recovering after she was stuffed in a plastic bag for three days. CNN, August 11, 2017. Retrieved from: https://www. cnn.com/2017/08/10/us/baby-plastic-bag---trnd/index.html Wilson, Edward O. (1978). On Human Nature. Cambridge, MA: Harvard University Press. Wilson, Edward O. (2012). The Social Conquest of the Earth. New York: Liveright (a division of W. W. Norton). Wrangham, Richard (2009). Catching Fire: How Cooking Made Us Human. New York: Basic Books.
5 THE BRAIN AS A PATTERN RECOGNITION DEVICE How Do You Know That?
Sir Arthur Conan Doyle wanted us to see Sherlock Holmes as one of the smartest people in England a century and more ago due to his singular powers of observation and deduction. As he tells us (or rather as Holmes tells Watson in “The Adventure of the Greek Interpreter”), Sherlock is surpassed only by his brother Mycroft. When Watson hints in reply that modesty is moving Sherlock to acknowledge his brother as his superior, Holmes laughs at such an idea: “My dear Watson,” said he, “I cannot agree with those who rank modesty among the virtues. To the logician all things should be seen exactly as they are, and to underestimate one’s self is as much a departure from truth as to exaggerate one’s own powers. When I say, therefore, that Mycroft has better powers of observation than I, you may take it that I am speaking the exact and literal truth.” Perhaps some logicians may honestly believe, like Sherlock Holmes, that they can see things “exactly as they are.” Perhaps such a claim made sense a century or so ago. Nowadays, however, enough is known about how the brain works that it is no longer obvious that anyone however talented should ever claim what they “see” must be accepted as “the exact and literal truth.”
The Art of Seeing, Feeling, and Dealing With the World We live in a world of richly complex sounds, smells, tastes, textures, and colors. Traditionally we see our major senses as somehow separate if not always equal,
but this is mainly so we can talk about them (Graziano 2019, 11–12, 18–21). As any good cook knows, it is the combination—the patterning—of our sensual experiences that sets a gourmet meal, for instance, apart from the merely humdrum hamburger. So too, a flaming Christmas pudding soaked in rum or a serving of Greek cheese bread cooked in a saganaki pan and set aflame at your table (as they used to do at the Parthenon Restaurant in Chicago before it closed its doors in 2016) are proof that our brains love it when the sensory details confronting us excite more than just one or two of our senses. This is hardly surprising. We experience the world through all of them. This is how we know we are alive. This is how we stand our guard, and how we deal directly with what life hands us during our hopefully long journey from birth to death (Esenkaya and Proulx 2016). It is not surprising, therefore, that keeping in touch with what is happening outside that bony canister on top of your spinal column where the human brain is safely tucked away is no easy task. Nor is it surprising that all of us do not sense or make sense of things in the world in exactly the same ways. This is why, for instance, we are willing to believe Sir Arthur Conan Doyle when he has Holmes tell Watson (see Chapter 2): “I cannot live without brain-work. What else is there to live for?” Furthermore, it isn’t hard to understand—if not entirely sympathize with—Sherlock when he finds what is happening around him so dull and boring that he must resort to cocaine or morphine to help him cope with the evident drabness of his own everyday existence. When it comes to how each of us experiences life, there is no doubt about it. It is different strokes for different folks. However, regardless of how each of us personally experiences the world, the human brain is not just a disinterested bystander in our lifelong endeavors to stay alive and prosper. The two of us are not going out on a limb if we insist that nobody in their right mind could honestly claim they can get through life scot-free without doing some serious life-preserving brainwork.
Life’s Balancing Act As Doyle portrayed him, Holmes undoubtedly comes across as an extreme case, an outlier as a human being. After all and frankly speaking, from a positive take on life, being dull and boring is not necessarily bad. Indeed, one thing that can be said about the sensory world of sounds, smells, tastes, textures, and colors provides an important clue to how the human brain confronts the realities of life. However crazy and confusing the world may sometimes seem, this is a fairly predictable place to live and let live. Night follows day. Spring follows winter. This year’s tulips look a lot like last year’s display in the garden over there outside the window. Although your loved ones may sometimes forget, the chances are also good they may even remember your birthday. And sadly, yes, we all die in the end.
In sum, the world may sometimes seem a dull and unexciting place, but generally speaking, it is also not a wildly unpredictable one—unless you are perhaps one of those unfortunate souls with a persecutory delusional disorder who are convinced there is a monster under your bed or over there in the closet, or that aliens from another planet are spying on you and may be intent on abducting you for some nefarious purpose. Therefore, from your brain’s point of view, getting safely through each day and night is a balancing act. On one hand, it has to avoid being overwhelmed by all the sounds, smells, tastes, textures, and colors that your senses are picking up as clues about the world around you. On the other hand, the brain cannot be too careless about making sense of what your senses are telling you— inattention that can sometimes lead to embarrassment, accidents, or worse . . . possibly death (Eagleman 2015; Hoffman 2016). From evolution’s down-to-earth point of view, therefore, our bodily senses make a lot of practical sense. With them in action and on our side, we can deal relatively quickly and at times even creatively with what the world throws at us. Without them, we are literally lost. Said another way, based on what we are learning about things happening outside our heads, we make decisions, wise or foolish, about whether we need to do anything to make things better, make things right. In a nutshell, therefore, our senses give us the power of choice. Not a bad benefit to have when dealing with the world and its wily ways.
Sensational News We suspect that you have already picked up on this, but so far we have been ignoring an important issue. What does the human brain do with all the countless sensations that it is constantly being bombarded with every second of every minute of every hour of every day of every year? Gary Lupyan at the University of Wisconsin, Madison, has underscored an important but often overlooked truth about our senses. It is tempting to think that they are just passively capturing information on the fly about the outside world like sails capturing the wind. Wrong. Our brains are not passive players. Conventional wisdom may have it that like a ship’s sail, “our eyes capture photons, our ears vibrations, our skin mechanical and thermal forces, and after transducing them into neural signals and what many textbooks refer to as ‘further processing’ out comes a sensation of sight, sound, hotness, etc.” (Lupyan 2015, 548; also de Lange et al. 2018; Firestone and Scholl 2016; von Helmholtz 1924). Such common sense misses the point entirely. In Lupyan’s words: “perception is more accurately viewed as a constructive process of turning various forms of energy (mechanical, chemical, and electromagnetic) into information useful for guiding behavior.” In short, energy received by our senses is not information at all until we make it so (Garfield 2016).
It would be an understatement to say that there remains a lot of uncertainty in psychology and neuroscience about exactly how converting sensual energy into useful information happens within the brain (Samaha et al. 2018; Wilson 2002). Whatever the neural mechanisms involved, however, most researchers nowadays agree that the brain isn’t trying to recreate or reproduce inside the confines of its skull a “true” mental copy or representation of what is out there in the real world. Said simply, brains are not cameras (Felin et al. 2017). Instead, all the brain needs to do its job at least passably well is sufficient information about the outside world to decide whether (1) nothing currently needs to be done; (2) something may need to be done, and you had better decide what to do; or (3) you need to come up with more information to work with before doing anything. Gary Lupyan and other researchers would add to this simple story that the brain not only tries to decide whether it has enough information to work with, but also whether it has the right information it needs to reach a decision to act or just hang loose (Hoffman 2019; Lupyan and Clark 2015).
Proper Recognition Perhaps the easiest way to think about how our senses work on our behalf, therefore, is to say that the brain either (1) recognizes what your senses are picking up (e.g., “I’ve seen this before, and I know what to expect next!”) or (2) fails to do so (“Ouch, I haven’t seen this before, and haven’t a clue what it is or what to do.”). When recognition is successful, the brain must then determine (3) whether to react in some way (“The ball is coming my way, and I damn well better catch it.”) or ignore what is being sensed (“The ball isn’t coming my way, and so I can just stand here and wait for the next one to come along.”). When the latter is the case, the brain may still have to decide whether it needs to know more, or can safely assume at least for the moment that nothing more needs to be known or done (Dehaene 2014, 59–64). If all this comes across as confusing, here is a three-word mantra for what we are saying:
SENSE – RECOGNIZE – REACT
Instead of using the word RECOGNIZE in this mantra, you could instead use RECALL or REMEMBER.1 Whatever word or words you favor, this simple mantra may seem trivial, but it captures nonetheless much of what has been debated in psychology, philosophy, and the sciences generally, too, for many centuries, even millennia (Holland 2008). The major sticking point has not been what is labeled as sense or react, but rather as recognize (or recall, remember, etc.).2 This word in this mantra is not particularly controversial in its own right. What is debatable is what this term stands for, namely, the idea that the brain plays a key role not only in converting the signals that the body’s senses are
picking up into useful information, but also in deciding what, if anything, should be done with that assembled information once it is created inside the cranial vault.3
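For readers who find it easier to see an idea once it is written out as a procedure, here is a toy rendering of the SENSE – RECOGNIZE – REACT mantra in Python. It is only our own cartoon (the pattern list, function names, and rules are invented for illustration, not a model anyone has proposed), but it makes the three-way choice described above explicit: act, leave things alone, or go looking for more information.

```python
# A cartoon of the SENSE - RECOGNIZE - REACT mantra described in the text.
# The patterns, names, and rules below are invented purely for illustration.

KNOWN_PATTERNS = {
    "ball coming my way": "catch it",
    "ball going elsewhere": None,           # recognized, but nothing needs doing
    "smell of food cooking": "head for the kitchen",
}

def sense(raw_signal: str) -> str:
    """SENSE: whatever energy reaches the senses (here, just a string)."""
    return raw_signal.strip().lower()

def recognize(signal: str) -> bool:
    """RECOGNIZE: does the signal match anything remembered?"""
    return signal in KNOWN_PATTERNS

def react(signal: str) -> str:
    """REACT: act, do nothing, or decide that more information is needed."""
    if not recognize(signal):
        return f"{signal!r}: unfamiliar -> look again, gather more information"
    action = KNOWN_PATTERNS[signal]
    if action is None:
        return f"{signal!r}: recognized -> nothing currently needs to be done"
    return f"{signal!r}: recognized -> {action}"

for event in ("Ball coming my way", "ball going elsewhere", "strange rustling noise"):
    print(react(sense(event)))
```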
Who Cares What You Think? So far we have been taking it for granted that the human brain is capable of being like Sherlock Holmes when it deals with the world around it. Said crudely perhaps, we have been assuming that the brain can observe, not just react. Even more to the point and using computer jargon, the brain does not process information, it creates the information it needs out of the almost nothingness of the impulses it receives via its bodily nervous system (Buonomano 2017, 215–216). This might come across to you as a no-brainer of an assumption, but strange as it may seem, scientists and philosophers have been debating seemingly forever about how much the brain actually gets involved in determining the body’s many daily decisions both large and small (Raichle 2010). On one side of this endless debate are those who say that most of what we do in life does not actually require much in the way of brain work at all. Why not? Because much of what we have to do from the time we wake up in the morning until we go back to bed again at night is often pretty routine. Actually having to make heady decisions about what to do might just slow things down and get in the way of getting stuff done—for example, breathing, walking, and even doing something as seemingly complex as driving the same way day after day from home to work and back again. On the other side of the argument, however, are those who are convinced that we would not now be the highly evolved rational creatures that we (supposedly) are today if our brains were not always trying—consciously or unconsciously—to get us the biggest payoff possible from everything we do by rationally balancing the costs and benefits of doing something with an eye to maximizing gains and minimizing losses in the great evolutionary struggle for existence. Or at any rate, in the struggle to be on top when it comes to fame, fortune, and favorable reproductive encounters. So who’s right, who’s wrong in this debate? Or is the real truth a lot more complicated?
Battle of the Giants During the twentieth century, two of the world’s leading psychologists, John B. Watson (1878–1958) and B. F. Skinner (1904–1990), were on one side of this ageless debate in science and philosophy about how much the brain has an active role to play in determining what we think, say, and do. They insisted, often in rather extreme ways, that credible insights into human psychology have
to be based on careful observations of actual behavior rather than on fanciful speculations, however seemingly insightful, about what may be happening mentally inside the human skull. Only in this rigorous way, they argued, can psychology be a true science rather than just a sad miscellaneous hodgepodge of ill-founded ideas and unverifiable claims. Their side of the debate about how the brain plays a role in our lives has come to be called “behaviorism” in recognition of the ruling premise that what and how we do what we do—our observable behavior—is controlled (or shaped somewhat like putty) not by anything the brain itself does in any major way, but instead by external stimuli and our more or less knee-jerk responses to external sensations as determined—Skinner favored the word “conditioned”— by whether we experience those stimuli as positive or negative. The ruling premise of behaviorism, which is not an unreasonable one, is that our responses to positive stimuli (e.g., the delicious smell of food cooking) are likely to be repeated (“strengthened”), while those to negative stimuli (say, rotting flesh) are more likely to be avoided in the future (Skinner 1953). On the other side of the scientific and philosophical debate during the last century about the role of the brain in human affairs were those defending the traditional Enlightenment idea that the brain not only actively makes choices about what to do (or not do) that can determine what we think, say, and do, but is also always seeking to make the right choices, the best choices, the most optimal or “maximal” choices. Why? Because this is how any rational being with a brain as big as ours ought to behave. Seen from this self-satisfied perspective, therefore, far from being just putty in the hands of the physical world around us, we humans would be letting down our birthright as clever beings if we did not strive to do things rationally—except perhaps when under the influence, say, of alcohol, drugs, or extreme emotional distress. We are convinced that both sides in this debate have been misguided, which is a nice way of saying that these alternative ways of thinking about the brain and its role in our lives are both too extreme, too black or white. To give you a clearer sense of what has gone wrong on each side, we want to take a closer look at how two of the leading figures in this fray during the last century wrote about the brain and its evolution: B. F. Skinner, and the award-winning economist Herbert A. Simon (1916–2001) whose ideas we first mentioned back in Chapter 1.
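Before turning to the two of them, the “ruling premise” of behaviorism just sketched, namely that responses followed by positive stimuli are strengthened while those followed by negative stimuli are weakened, is simple enough to write out as a toy procedure. The sketch below is our own caricature for illustration only; the update rule and all the numbers are invented assumptions, not Skinner’s own formalism.

```python
import random

# A toy caricature of behaviorist "conditioning": a response followed by a
# positive stimulus is strengthened (more likely to be repeated), one followed
# by a negative stimulus is weakened. All numbers here are invented assumptions.

random.seed(0)
strength = 0.5          # current probability that the response is emitted
LEARNING_RATE = 0.1

def reinforce(current: float, outcome_positive: bool) -> float:
    """Nudge response strength toward 1 after reward, toward 0 after punishment."""
    target = 1.0 if outcome_positive else 0.0
    return current + LEARNING_RATE * (target - current)

for trial in range(1, 41):
    if random.random() < strength:           # the response is emitted this trial
        rewarded = random.random() < 0.8      # the environment rewards it 80% of the time
        strength = reinforce(strength, rewarded)
    if trial % 10 == 0:
        print(f"after {trial:2d} trials: response strength = {strength:.2f}")
```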
B. F. Skinner Burrhus Frederic (“B. F.”) Skinner was famously opposed to what he called “mentalistic explanations” for human behavior. By this he meant attributing to the mind an active role in determining how we behave. In his eyes, trying to explain what we do by appealing to inner states of mind, feelings, and other
elements of an “autonomous man” inside our skulls was unscientific and a waste of time (Bjork 1993). In his own words: “The ease with which mentalistic explanations can be invented on the spot is perhaps the best gauge of how little attention we should pay to them” (Skinner 1971, 160). Skinner had no patience in particular for anyone claiming that we do what we do because we have mental intentions, purposes in mind, or something intangible called free will (Skinner 1977, 1985). Instead, according to Skinner, the “task of a scientific analysis is to explain how the behavior of a person as a physical system is related to the conditions under which the human species evolved and the conditions under which that individual lives” (Skinner 1971, 14). As distasteful as some might find such a realization, he went on to say, “the fact remains that it is the environment which acts upon the perceiving person, not the perceiving person who acts upon the environment” (1971, 188). Many have disagreed with Skinner from the get-go about this claim, and even Skinner was willing to concede the “indisputable fact of privacy.” We take this qualification to mean that he knew he could not get away with saying people do not have their own opinions and inner thoughts. Nonetheless, he stuck to his staunch environmentalism. “It is always the environment which builds the behavior with which problems are solved, even when the problems are to be found in the private world inside the skin” (1971, 195). In a review of Skinner’s 1971 book Beyond Freedom and Dignity, the linguist Noam Chomsky scathingly rejected Skinner’s claims. “His speculations are devoid of scientific content and do not even hint at general outlines of a possible science of human behavior. Furthermore, Skinner imposes certain arbitrary limitations on scientific research which virtually guarantee continued failure” (Chomsky 1971). Other reviews of this book were just as critical and condescending (Bjork 1993, 202–206). Perhaps the most telling objection to Skinner’s insistence that we are basically robots controlled by the world around us is an evolutionary one. The human brain is an incredibly costly luxury for us to have if all it does for us is rubberstamp the dictates and commands it receives from outside sources. In humans, the brain accounts for only about 2% of body weight, but it demands about 25% of the oxygen consumed by your body at rest (Harris et al. 2012; Herculano-Houzel 2016; Mergenthaler et al. 2013). Most of this energy is used to power the brain’s neurons, so based on this evidence alone, the brain must be doing something pretty important to merit such energetic support. Over the course of his career, Skinner often invoked evolution and Darwinian natural selection when arguing in favor of behaviorism as the proper way to study why we do what we do as human beings (Skinner 1981, 1990). The issue of brain size does not seem to have been one that was on his radar. It should have been. What is the benefit of having an out-sized brain if it is so costly to maintain? This is not a question that should go unanswered.
Herbert Simon During his brilliant career, Herbert Simon had much to say about what he saw as the brain’s decision-making talents and the critical role of rationality in the conduct of human affairs. As he observed in The American Economic Review in 1978: “almost all human behavior has a large rational component” (Simon 1978, 2). However, as he eloquently explored in his many books and writings, the words “rational” and “rationality” are tricky ones. Economics, for example, has long had a romantic, even heroic, picture of the human mind (Thaler 2015). In Simon’s words: Classical economics depicts humankind, individually and collectively, as solving immensely complex problems of optimizing the allocation of resources. The artfulness of the economic actors enables them to make the very best adaptations in their environments to their wants and needs. (Simon 1996, 49) This claim is about as far removed from Skinner’s austere thoughtless behaviorism as you can go. Simon himself, however, rejected this traditional understanding of what it means to be rational as demanding far too much of the human brain. Given all that we would have to know to be able to make the very best choices—the most “rational” ones—the best we can do even with our highly capable brains is look for the most satisfactory solutions—the ones that are “good enough”—when it comes to deciding what to do in life (Gigerenzer 2016). Unlike Skinner and other behaviorists, therefore, Simon saw the outer world, the environment, as setting the conditions for success, but not determining our goals and decisions in life. And being intelligent means behaving rationally, and behaving rationally means adapting our actions to fit our circumstances so that we can achieve our goals (Simon 1978).
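Simon’s “good enough” idea, usually called satisficing, is easy to see as a procedure. The sketch below is our own illustration (the apartment-hunting framing, the quality scores, and the aspiration level are all invented): rather than scoring every option to find the very best, the searcher stops at the first option that clears an aspiration level.

```python
import random

# A toy sketch of Simon-style "satisficing": take the first option that is good
# enough, instead of examining every option in search of the very best one.
# The framing and all numbers are invented for illustration.

random.seed(7)
apartments = [random.uniform(0, 10) for _ in range(100)]   # quality scores, 0-10
ASPIRATION = 7.0                                           # the "good enough" bar

def satisfice(options, aspiration):
    """Return (choice, number of options examined) for a satisficing searcher."""
    for examined, quality in enumerate(options, start=1):
        if quality >= aspiration:
            return quality, examined
    return max(options), len(options)   # nothing cleared the bar: settle for the best seen

best = max(apartments)                  # what an exhaustive "optimizer" would find
choice, examined = satisfice(apartments, ASPIRATION)

print(f"optimizer:  quality {best:.1f} after examining all {len(apartments)} options")
print(f"satisficer: quality {choice:.1f} after examining only {examined} options")
```

The satisficer typically gives up a sliver of quality and saves most of the search, which is the trade-off Simon argued real brains, with their limited time and knowledge, cannot avoid making.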
Who’s Right, Who’s Wrong? Given such different takes on human nature and the brain, who should we see as right, who as wrong? Is it B. F. Skinner or Herbert Simon? Needless to say, we think the answer is both of them were right, both were wrong. As is so often the case in science as well as in life, their models of the mind are OK as far as they go, but the problem is they do not go far enough. So instead of being the grand answers that each of these scholars may have intended them to be, what they have given us to think about is sometimes useful, but not always good enough. Consider again the mantra we offered you earlier in this chapter: SENSE – RECOGNIZE – REACT
Using this mantra as our guide, isn’t it obvious that sometimes how someone REACTS may be as simple and straightforward as Skinner insisted? For instance, can you deny that sometimes your mouth begins to water more or less uncontrollably when you smell your favorite food cooking in the kitchen? (Say, roast lamb or spaghetti sauce?) Or that when you hear a sudden scream, you may snap to attention and look over fearfully in the direction the scream seems to be coming from without having to think things through?4 On the other hand, who can deny that sometimes all of us actually do try to calculate our odds of success or failure in fairly logical and fairly rational ways, just as Simon says we do—even if he is also right that our minds can never truly know enough about the odds of things and events in our complex world to be able to “maximize the odds”? But do we really have to argue with you that much, maybe even most, of the time what any of us are thinking and doing falls somewhere between these two extremes of being, on one hand, utterly knee-jerk and mindless in what we say, do, and think, and on the other, wise and wonderful in word and deed? Now even if you agree with us that how all of us get through life for the most part falls somewhere between these two extremes, saying this does not explain why this is likely to be so. It is time, therefore, to ask George and Alice to join the conversation and tell us what they do for us that evidently Sherlock alone cannot or does not do well all on his own. We asked these two to flip a coin. George won the toss. In the next chapter, therefore, we ask Professor Challenger to fess up and tell us how he REACTS to life’s demands and challenges.
Notes
1 For an alternative way of saying this simple mantra, see: Gross (2014, fig. 1.2). We prefer our way of saying it because Gross's "modal model of emotion" assumes that the brain is constantly assessing the situations it is dealing with in light of "relevant goals," a core assumption that we feel begs the issue of how rational habitual behavior actually is. It would not be an exaggeration to say much of psychology today still accepts the Enlightenment fallacy that we are goal-oriented, rational animals.
2 Current thinking in neuroscience and the cognitive sciences evidently favors using the word prediction rather than recognition (e.g., Hohwy 2018). Our own understanding of how the brain interacts with the world and responds is closer to that of Anthony Chemero (2003), Chemero and Silberstein (2008), Lane Beckes and others (2015), Gantman and Van Bavel (2016), Hawkins et al. (2017), and Henderson and Hayes (2017).
3 For an example of what we mean here by "recognition," see Barrett (2017, 25–26, 308). Barrett herself favors the idea that the brain is constantly and rapidly making predictions (59–60). For one possible model of how working memory plays a key role in perception, see Ding et al. (2017).
4 We think it is important to add here that how someone reacts to what their senses are telling them need not be obvious to anyone watching them. For example, a recent study of how your eyes recognize what they are seeing supports the proposition that visualization is basically a neural reaction to what your eyes are showing you. Thus our guiding mantra could be restated as SENSE — RECOGNIZE — VISUALIZE. In short, what we "see" is something the brain reassembles for us out of our memories of prior experiences (perhaps from only a few milliseconds ago) as a response to a surprisingly small number of current visual cues. For further discussion, see Ponce et al. (2019); also Frankland et al. (2019).
Works Cited
Barrett, Lisa Feldman (2017). How Emotions Are Made: The Secret Life of the Brain. Boston, MA: Houghton Mifflin Harcourt.
Beckes, Lane, Hans IJzerman, and Mattie Tops (2015). Toward a radically embodied neuroscience of attachment and relationships. Frontiers in Human Neuroscience 9: 266. doi:10.3389/fnhum.2015.00266
Bjork, Daniel W. (1993). B. F. Skinner: A Life. New York: Basic Books.
Buonomano, Dean (2017). Your Brain is a Time Machine: The Neuroscience and Physics of Time. New York: W. W. Norton & Company.
Chemero, Anthony (2003). An outline of a theory of affordances. Ecological Psychology 15: 181–195. doi:10.1207/S15326969ECO1502_5
Chemero, Anthony and Michael J. Silberstein (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science 75: 1–27. doi:10.1086/587820
Chomsky, Noam (1971). The case against B. F. Skinner. The New York Review of Books 17(11): 18–24.
de Lange, Floris P., Micha Heilbron, and Peter Kok (2018). How do expectations shape perception? Trends in Cognitive Sciences 22: 764–779. doi:10.1016/j.tics.2018.06.002
Dehaene, Stanislas (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Penguin Books.
Ding, Stephanie, Christopher J. Cueva, Misha Tsodyks, and Ning Qian (2017). Visual perception as retrospective Bayesian decoding from high- to low-level features. Proceedings of the National Academy of Sciences 114: E9115–E9124. doi:10.1073/pnas.1706906114
Eagleman, David (2015). The Brain. The Story of You. New York: Pantheon Books.
Esenkaya, Tayfun and Michael J. Proulx (2016). Crossmodal processing and sensory substitution: Is "seeing" with sound and touch a form of perception or cognition? Behavioral and Brain Sciences 39: e241. doi:10.1017/S0140525X1500268X
Felin, Teppo, Jan Koenderink, and Joachim I. Krueger (2017). Rationality, perception, and the all-seeing eye. Psychonomic Bulletin & Review 24: 1040–1059. doi:10.3758/s13423-016-1198-z
Firestone, Chaz and Brian J. Scholl (2016). Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and Brain Sciences 39: e229. doi:10.1017/S0140525X15000965
Frankland, Paul W., Sheena A. Josselyn, and Stefan Köhler (2019). The neurobiological foundation of memory retrieval. Nature Neuroscience 22: 1576–1585. doi:10.1038/s41593-019-0493-1
Gantman, Ana P. and Jay J. Van Bavel (2016). Behavior is multiply determined, and perception has multiple components: The case of moral perception. Behavioral and Brain Sciences 39: e242. doi:10.1017/S0140525X15002800
Garfield, Jay L. (2016). Illusionism and givenness. Journal of Consciousness Studies 23, no. 11–12: 73–82.
Gigerenzer, Gerd (2016). Towards a rational theory of heuristics. In Minds, Model, and Milieu. Commemorating the Centenary of Herbert Simon's Birth, Roger Frantz and Leslie Marsh (eds.), pp. 34–59. London: Palgrave Macmillan.
Graziano, Michael S. A. (2019). Rethinking Consciousness: A Scientific Theory of Subjective Experience. New York: W. W. Norton.
Gross, James J. (2014). Emotion regulation: Conceptual and empirical foundations. In Handbook of Emotion Regulation, 2nd ed., James J. Gross (ed.), pp. 3–20. New York: Guilford Press.
Harris, Julia J., Renaud Jolivet, and David Attwell (2012). Synaptic energy use and supply. Neuron 75: 762–777. doi:10.1016/j.neuron.2012.08.019
Hawkins, Jeff, Subutai Ahmad, and Yuwei Cui (2017). A theory of how columns in the neocortex enable learning the structure of the world. Frontiers in Neural Circuits 11: 81. doi:10.3389/fncir.2017.00081
Henderson, John M. and Taylor R. Hayes (2017). Meaning-based guidance of attention in scenes as revealed by meaning maps. Nature Human Behaviour 1, no. 10: 743–747. doi:10.1038/s41562
Herculano-Houzel, Suzana (2016). The Human Advantage: A New Understanding of How Our Brain Became Remarkable. Cambridge, MA: MIT Press.
Hoffman, Donald D. (2016). The interface theory of perception. Current Directions in Psychological Science 25: 157–161. doi:10.3758/s1342
Hoffman, Donald D. (2019). The Case Against Reality. Why Evolution Hid the Truth from Our Eyes. New York: W. W. Norton.
Hohwy, Jakob (2018). The predictive processing hypothesis. In The Oxford Handbook of 4E Cognition, Albert Newen, Leon De Bruin, and Shaun Gallagher (eds.), pp. 129–146. Oxford: Oxford University Press.
Holland, Peter C. (2008). Cognitive versus stimulus-response theories of learning. Learning & Behavior 36: 227–241. doi:10.3758/lb.36.3.227
Lupyan, Gary (2015). Cognitive penetrability of perception in the age of prediction: Predictive systems are penetrable systems. Review of Philosophy and Psychology 6: 547–569. doi:10.1007/s1316
Lupyan, Gary and Andy Clark (2015). Words and the world: Predictive coding and the language-perception-cognition interface. Current Directions in Psychological Science 24: 279–284. doi:10.1177/0963721415570732
Mergenthaler, Philipp, Ute Lindauer, Gerald A. Dienel, and Andreas Meisel (2013). Sugar for the brain: The role of glucose in physiological and pathological brain function. Trends in Neurosciences 36: 587–597. doi:10.1016/j.tins.2013.07.001
Ponce, Carlos R., Will Xiao, Peter F. Schade, Till S. Hartmann, Gabriel Kreiman, and Margaret S. Livingstone (2019). Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. Cell 177: 999–1009. doi:10.1016/j.cell.2019.04.005
Raichle, Marcus E. (2010). Two views of brain function. Trends in Cognitive Sciences 14: 180–190. doi:10.1016/j.tics.2010.01.008
Samaha, Jason, Bastien Boutonnet, Bradley R. Postle, and Gary Lupyan (2018). Effects of meaningfulness on perception: Alpha-band oscillations carry perceptual expectations and influence early visual responses. Scientific Reports 8, no. 1: 6606. doi:10.1038/s41598-018-25093
Simon, Herbert (1978). Rationality as process and as product of thought. The American Economic Review 68, no. 2: 1–16. Retrieved from: www.jstor.org/stable/1816653
Simon, Herbert A. (1996). Sciences of the Artificial, 3rd ed. Cambridge, MA: MIT Press.
Skinner, Burrhus F. (1953). Science and Human Behavior. New York: Simon and Schuster.
Skinner, Burrhus F. (1971). Beyond Freedom and Dignity. New York: Knopf.
Skinner, Burrhus F. (1977). Why I am not a cognitive psychologist. Behaviorism 5: 1–10.
Skinner, Burrhus F. (1981). Selection by consequences. Science 213: 501–504. doi:10.1126/science.7244649
Skinner, Burrhus F. (1985). Cognitive science and behaviourism. British Journal of Psychology 76: 291–301. doi:10.1111/j.2044-8295.1985.tb01953.x
Skinner, Burrhus F. (1990). Can psychology be a science of mind? American Psychologist 45: 1206–1210. doi:10.1037/0003-066X.45.11.1206
Thaler, Richard H. (2015). Misbehaving: The Making of Behavioral Economics. New York: W. W. Norton.
von Helmholtz, Hermann (1924). Treatise on Physiological Optics, vol. 3 (English translation of Handbuch der physiologischen Optik originally published by the Optical Society of America in 1924 and edited by James Powell Cocke Southall), Dover edition, 1962; reprinted 2005.
Wilson, Margaret (2002). Six views of embodied cognition. Psychonomic Bulletin & Review 9: 625–636. doi:10.3758/BF03196322
6 THE BRAIN AS A PATTERN LEARNING DEVICE
Why Do We Have Habits?
We began Chapter 5 by noting that Sir Arthur Conan Doyle wanted us to accept that what makes Sherlock Holmes one of the smartest people in England is his remarkable powers of observation and deduction. Furthermore, Sherlock not only agrees with this explanation, but also claims he is surpassed in this regard solely by his older brother Mycroft. What we failed to mention then was why Holmes felt called upon in "The Adventure of the Greek Interpreter" to talk about his brother and himself. In this story, first published in 1893, Watson has just offered him the following observation: "from all that you have told me, it seems obvious that your faculty of observation and your peculiar facility for deduction are due to your own systematic training." This comment by Watson sets the stage for Sir Arthur to lecture us about nature versus nurture—the enduring debate in science, philosophy, and jurisprudence about heredity versus learning that we discussed briefly back in Chapter 4.
Doyle has Holmes initially concede to Watson that systematic training could have something to do with why he and his brother are so extraordinarily gifted considering how unexceptional were those in their family line before them:
"To some extent," he answered, thoughtfully. "My ancestors were country squires, who appear to have led much the same life as is natural to their class. But, none the less, my turn that way is in my veins, and may have come with my grandmother, who was the sister of Vernet, the French artist. Art in the blood is liable to take the strangest forms."
We suspect you may agree with us that suggesting "art in the blood" acquired from a singular grandmother somehow wins out over systematic training seems
a strikingly lame reason for favoring what Doyle labels in this story as “hereditary aptitudes.” However, when it comes to debating nature versus nurture, we do not think it is Sherlock and Mycroft who should take center stage. We think a far better character to use as a debating foil for teasing apart the reasoning involved is Doyle’s other famous literary creation, Professor George Edward Challenger. Furthermore, a much better way to discuss something as grandiose as nature versus nurture is to talk instead about something more down-to-earth: instincts versus habits.
Instincts Versus Habits
We find it difficult to think about irascible Professor Challenger without also thinking about biology and evolution. Doyle himself linked both together when he introduced Challenger to the world in 1912 in The Lost World, published in eight installments in The Strand Magazine. This is the same monthly publication, by the way, that also gave us Doyle's Sherlock Holmes stories. As we noted in Chapter 2, this is the tale that inspired Michael Crichton to write his thriller Jurassic Park. If you have read neither Doyle's original story nor Crichton's bestselling novel, perhaps you have seen one or more of the films in the continuing series of blockbuster Hollywood movies based on Crichton's (and Doyle's) fantasies about living dinosaurs, human greed, and harrowing near-death encounters. No matter if you have not. What we want to say now remains the same.
As we described him in Chapter 2, Challenger is our fictional characterization of your brain's highly developed ability to use its own past experiences as its basic wayfinding guide for getting from birth to death more or less successfully in that seemingly dull and boring way popularly known as habitual. We are not, of course, the only species on Earth that biological evolution has gifted with the ability to profit from their own personal past experiences, although it is probably true that we may be uncommonly talented at getting the most out of the habits we form as we go about our daily lives—including but by no means limited to noteworthy skills like being able to walk in shoes with stiletto heels, ride a bicycle without falling over, drive a car from home to work and back again as if on autopilot, and other similar (seemingly) mindless tasks and accomplishments.
Such acquired habits are nothing to sneer at. Just imagine what life would be like if you had to think carefully through every step, every gesture, every move needed to do more or less the very same thing day in and day out, year after year.
Why are we bringing this up? Because the sorts of things we learn to do, think, or say through the give-and-take of our own experiences can sometimes seem so automatic, so easily done, so predictable that they may come across to
us as instinctive rather than habitual talents—that is, as kneejerk responses to life’s challenges and demands that were somehow written by the Creator or by that seemingly mysterious something called Darwinian evolution into the genes we have inherited from our parents. There are many conventional ways of labeling supposedly natural human tendencies broadly called instincts: inclinations, talents, faculties, gifts, urges, impulses, drives, compulsions, predispositions, proclivities, aptitudes, and the list goes on. In fact, there are so many words to choose from that this fact alone hints at how willing we all may be to believe that much of what we do as individuals isn’t what we have to learn to do, and most certainly isn’t anything we can be held responsible for. Why not? Because, don’t you see, we cannot help it. Doing stuff like that is just in our genes, our ancestry, our biological inheritance as human beings. Nowadays the advertising slogan “There’s an app for that,” first used by the computer giant Apple Inc. in a television commercial for its iPhone 3G in 2009, has become so familiar to so many that it has become the tag line for countless jokes and parodies. The notion that somehow, like an Apple iPhone, we humans are controlled by little internal genetically inherited apps called instincts and the like seems entirely plausible, although the favored word in cognitive psychology may still be the older computer term “module” rather than the newfangled word app (Colombo 2013; Fodor 1983; Palecek 2017). Sometimes, such beliefs and claims are of little consequence beyond perhaps misleading us into thinking we cannot be held accountable for our actions. Yet sometimes, too, the “there’s an app for that” view of the mind is a lot more consequential, maybe even a lot more dangerously so. In Chapter 4, we noted that the famous Harvard zoologist Edward O. Wilson has written repeatedly about how our species is an instinctively tribal one, and a decidedly nasty one, at that (Thorpe 2003; Wilson 1978, 2012). Wilson’s Harvard colleague Steven Pinker seems to agree. According to Pinker, “familiar categories of behavior—marriage customs, food taboos, folk superstitions, and so on—certainly do vary across cultures and have to be learned, but the deeper mechanisms of mental computation that generate them may be universal and innate” (Pinker 2002, 39). One such deeper mechanism, Pinker has written, is that people everywhere “divide the world into an in-group and an out-group” (39). One consequence of such purportedly innate behavior is that human history supposedly shows that all of us are always ready, willing, and able to kill others of our kind. Moreover, we are inherently brilliant in figuring out clever ways to do so (315–317).
What Did Darwin Say?
Charles Darwin, the great nineteenth century naturalist, strongly influenced how many scientists and others today think about the biological history of life
on Earth. His book On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life, first published in 1859, remains his most persuasive treatise on evolution’s elusive, yet remarkable, ability to shape and then reshape anew the world’s biosphere, the sum of all living things on our planet. Despite this book’s tremendous impact on how we have come to understand both ourselves and the world around us, Darwin’s explanation for why there are millions and millions of sometimes astonishingly diverse species living here, there, and elsewhere on Earth turns on two quite elementary biological facts. First, if you do not reproduce, you cannot pass on whatever it is you can do, biologically speaking, to the next generation. Second, if what you can do also makes it more likely that you will live to reproduce your kind, then whatever it is you are able to do, and how well you do it, may have direct impact on what the next generation of creatures on Earth are like down the road of time, biologically speaking. In a nutshell, this is Darwin’s supposedly heretical idea. The chief implication of what he called the evolution of species by natural selection is that given enough time and the right circumstances, the gradual accumulation of even seemingly minor differences in how well organisms deal with the opportunities and challenges that come their way generation after generation can be enough to transform what was once upon a time just a single type, kind, or species, of plant, animal, fungus, or bacterium into many differing, although historically related, species. Substitute here the word “evolve” for the word “transform,” and you have the essence of what biological evolution is all about—namely, what Charles Darwin called “descent with modification.” Ever since the publication of On the Origin of Species, Darwin’s ideas about evolution have been dismissed by some as just a theory. Such skepticism may have been reasonable back in Darwin’s day given the state of knowledge then about the Earth and its history, but such is most definitely no longer true. Biological, paleontological, and geological evidence confirming the fact of biological evolution is now abundant and decisive (Ayala 2008; Sapolsky 2017). What is less certain is what to make of Darwin’s ideas about evolution when it comes to acting like a human being. Few scientists today seriously doubt that we need to credit evolution rather than some Higher Power to account for ourselves as intelligent and reasonably versatile creatures. But how much of what we are able to do is instinctive rather than something we have learned how to do since (and probably even before) we came out of our mother’s womb (Thomason 2018)? In other words, even if you do not want to put them all under the heading “George Edward Challenger,” are our learned habits major or minor players in the art of being us?
Let George Do It
For a decade or so after World War II, the saying "let George do it" was a popular way of begging off responsibility for outcomes and actions, at least in part because this was the title of a popular American radio drama on the air between 1946 and 1954 starring Bob Bailey as intrepid detective-for-hire George Valentine. Even if our inner Sherlock Holmes would never himself dream of using such a clichéd American expression, there is no doubt that having at our fingertips quick and easy ways—however instinctive or habitual—to get things done is obviously beneficial in dealing with what Darwin called "the struggle for life."
Recall from what we have discussed previously (Chapter 5) that the human brain is an astonishingly costly device, metabolically speaking, to use. Weighing in at only about 2% of body mass, it consumes roughly 25% of the resources needed to run a human being while alive (Herculano-Houzel 2016, 17). Although psychologists today remain divided on exactly how our brain works, there seems to be some consensus that what is up there on your shoulders has been fashioned by Darwinian evolution to use as little energy and spend as little time dealing with the world and its challenges as possible. After all, why not take the low road? If the evolutionary goal is staying alive and reproducing your kind, what would be the point of spending too much time and metabolic effort contemplating, say, the glories of a sunrise by climbing to the top of a mountain? Just get the job done, and then move on to the next task.
How then does the human brain accomplish critically important work? Evidently it starts off dealing with what is happening around it in ways that are quick, easy, and according to some perhaps metabolically cheap. But if easy strategies do not get the job done, whatever that may be, then the brain begins to invest more time and maybe energy (Pepperell 2018, 3) in the task until—at a level of cognitive decision-making much more demanding than at the start—it (and, therefore, we) may finally become consciously aware of what is happening and what it is trying to do (Dehaene 2014; Lupyan 2015). Some knowledgeable authorities today call this laborious brainwork "hierarchical" inference (Friston 2008). Moreover, the claim is heard in some scientific circles that rather than simply trying to figure out what it must do at any given moment, the human brain is a biologically constructed computer that is constantly trying to make solid predictions about what will happen next—and, therefore, is also constantly attempting to update its running predictions in light of what is happening around it (Clark 2013; Graziano 2019; Wei and Stocker 2015).
Although this model of how a human brain goes about its work may prove down the road to be correct, continuously making predictions sounds to both of us like asking for an awful lot of brainwork that must also be very costly, metabolically speaking. As we first discussed in Chapter 5, we are inclined instead
to see the human brain much more simply as an organ of the human body that tries to recognize—rather than predict—what is evidently going on around it based on its own previous experiences with seemingly similar past events.1 Therefore, instead of seeing the brain as a kind of highly sophisticated biologically built computer, we like to think of the brain as a fairly reliable pattern recognition device.2 And lest we be misunderstood, the patterns we have in mind are not only those that are visual, such as road signs, religious icons, or the faces of family, friends, and yes, enemies, too. We also mean patterns in time, such as catchy tunes, Beethoven's Fifth Symphony, and the all-clear signal after a tornado alert.
But wait, even if the human brain isn't a highly sophisticated computer, but instead is merely a metabolically less costly pattern recognition device of some sort, so what? Why bother to ask George to do anything at all? Or Sherlock, for that matter. Why not rely as often as humanly possible instead just on biologically inherited instincts that don't have to be learned before they can be useful to us?
Is There an App for That?
The name Herbert Spencer (1820–1903) is rarely mentioned by anyone today. Back in Darwin's century, however, he was one of the most influential thinkers of the Victorian era—a man of towering intellect (Francis 2007; Kumar 1997). Moreover, it was apparently Herbert, and not Charles, who coined the still famous phrase "survival of the fittest" that supposedly sums up what Darwin meant by evolution "by means of natural selection."
If this is, in fact, what evolution is all about—if life on Earth is truly a constant struggle for existence where only the fit survive—then why would any organism, much less anyone in our own species, waste any time and effort at all in acquiring habits and learning new things instead of relying on prefabricated instincts inherited biologically—that is, genetically—from its ancestral kin? After all, wouldn't not having to learn what to do and how to do something be vastly more efficient, and metabolically cheaper, too, than having to take the time and effort to be schooled by personal experience in the essential arts and crafts of staying alive?
Back in the early twentieth century, the answer to this question seemed obvious. It was generally taken for granted then that much of our behavior as human beings is obviously instinctive rather than learned. As one commentator wrote in 1921:
Not only are instincts no longer looked upon with suspicion, but they are regarded as the mainspring of human behavior. Instinct has become a current fad in psychology. Behavior of man, origin of social institutions,
religious motives, and the like—all these different human activities are to be explained in terms of instinct. Recent social unrest and the labor movement are again attributed to the failure on the part of society to satisfy the instinctive impulses. Writers on the psychology of war almost identify the war motive with the herd instinct, the instinct of pugnacity, and other allied instincts (Kuo 1921, 645).
It has been a century since these words were published. Today there are still those—Edward Wilson and Steven Pinker at Harvard, for instance—who voice similar opinions. Moreover, there is currently little doubt that relying whenever possible on biologically inherited instincts would be an efficient and cheap way to survive and reproduce one's kind for the future good of our very own species. Hence it is hardly surprising that there is at least some convincing evidence supporting the notion that some of the things we do as human beings may be mostly instinctive (Panksepp and Solms 2012).
For instance, as we first noted in Chapter 4, there would seem to be something instinctive about the willingness of many new parents to drop most things and do all that they can to make sure their newborns survive (and stop crying). Furthermore, it is generally acknowledged, as well, that infants and young babies are at least somewhat programmed biologically by evolution to do certain things in rudimentary ways without having to learn individually how to do them—although nobody is sure (despite claims to the contrary) how many biologically pre-installed behavioral apps there may be lodged up there inside an infant's skull at the time of birth. Yet even if it is likely, for example, that newborns arrive in this world predisposed to suckle at their mother's breast, it is nonetheless true that communicating with an infant using facial expressions, as a counterexample, only begins to flower around three to four months of age (Beebe et al. 2016; Bornstein 2014; "Developmental milestones," n.d.). In brief, rather than being pre-installed apps, most of even a baby's ways of dealing with what goes on around her or him are acquired as a normal part of the give-and-take of growing up, listening to others, and learning firsthand how to cope.
If you are skeptical about what we just said, here is another example favoring the critical importance of individual learning over genetically inherited instincts even in the lives of human babies. Watching them during the early months of their first year on earth soon makes it obvious that healthy babies are born detectives, little Sherlocks, so to speak. Just look at how they are given to staring intently at you and the world around them when they are not asleep or at their mother's breast. They take the world in without a blink, and also without a clue that it is generally considered rude and even threatening to stare too long or too hard at others. They may not yet know what they are looking at or for, but they sure do have behaving like Mr. Sherlock Holmes down pat.
Therefore, to make a long story short, although perhaps 100 years ago crediting instincts for what makes us human seemed wise, nowadays it is generally
more conventional to say simply that all of us—not just babies—probably do a few things by instinct, a lot of things George’s way by sheer habit, and at least some things by the crafty way that Sherlock Holmes does them—by paying close attention to facts and figures, and then reasoning things out objectively, logically, and very, very carefully.
Habits are Habit Forming
Disagree with us if you must, but we are convinced that at least when it comes to being human, learning—however metabolically costly and time consuming—wins out over the ease and convenience of relying instead on genetically inherited (instinctive) ways of responding to life's challenges and demands. Concluding this, however, does not explain why this bias in favor of learning over instinct should exist, if indeed we are right that it does.
According to the psychologists Wendy Wood and Dennis Rünger, when our actions and memories become closely linked through repetition—that is, when they become more or less habitually patterned—we basically do not have to think about what we are doing because responding "has been outsourced onto the context cues contiguous with past performance" (Wood and Rünger 2016, 307). That is, how a person reacts habitually begins to look a lot like what might be easily misidentified as acting as if "by instinct." Such repetitive behavior also looks a lot like the kind of behavior that Skinner and other behaviorists in the twentieth century insisted was more or less the sum total of all that the human brain is capable of doing (as discussed in Chapter 5). While we think they should not have put all their money, so to speak, on George Edward Challenger's habitual ways of handling the world, by the same token, it would be foolish to think that our genes, rather than the environment we interact with, solely shape who we are and what we can do.
Yet here we must be careful not to beg an important issue. Why do we develop habits in the first place? Mostly, of course, the reason seems obvious. Recall the mantra SENSE—RECOGNIZE—REACT (Chapter 5). When the links between all three become routine enough through repetition, the brain does not have to spend a lot of time and effort making sense out of (or predicting, if this is what the brain does) what it is faced with. Moreover, due to the repetitiveness of many of our daily experiences, it may actually take little in the way of incoming sensory cues, or clues, about what is happening outside its bony shell for the human brain to recognize what is probably happening before it needs to RESPOND accordingly.
A classic example of this last observation would be what you experience when you hear just a few notes of a favorite song and you begin to feel how you have always felt when you hear that particular much-loved or much-disliked tune. Say, the Beatles' "Lucy in the Sky with Diamonds," Queen's "We Will Rock You," or "The Star-Spangled Banner." Hear just a few familiar words, and your juices begin to flow, so to speak. You may even start to whistle while you work.
Habits, in other words, are simply RESPONSES that come quickly and easily, which is to say, without much thought. Habits might even be called metabolically fairly cheap learned instincts. But then again, maybe they shouldn’t be. Labeling them this way as pseudo-instincts would probably just add to the confusion. Please don’t get us wrong. Acquiring habits isn’t always a piece of cake. Despite rumors to the contrary, it often takes a lot of hard work and deliberate concentration to acquire new habits however good or bad. Learning to play the piano really well, for example, or learning how to ride a bicycle. At first, almost everybody is fairly inept at doing such things. With practice, however, what begins as both mentally and physically demanding work gradually fades into the background. Eventually, playing a piano or getting from A to B on a bicycle becomes something you can achieve without much conscious supervision at all by your self-aware inner self. That is, by your inner Sherlock. In sum, and in any case, surely the key behavioral skills required to play a piano or ride a bicycle cannot be instinctive from the start (Wenger et al. 2017). After all, there is nothing basic and elemental about the habits called performing brilliantly at Carnegie Hall, or winning the Tour de France.
Who Can We Blame?
When something awful happens because your inner Sherlock hasn't been attending closely enough to the world around you, it may make perfect sense to blame your brain for what has gone wrong. But who is to blame when you err because you have simply reacted far too quickly to what is evidently going on outside the confines of your skull? Should you blame your inner George—that is, your own personally acquired habits? Or would it be permissible to blame instead particular biological traits you have inherited genetically from your parents? Or perhaps better still (assuming you want to avoid taking all responsibility for what has gone wrong), when is it reasonable to blame not just your own personal genes, but all those defining human nature itself?
The only way we know to handle such a heady issue is to tackle the circumstances of your own involvement in what has gone wrong on a case-by-case basis. Given what we have been saying in the previous chapters, you know that we think there is something that can legitimately be called human nature (Chapter 4). However, we also think that most of what any of us are capable of doing must be acquired through personal experience and social learning, and is not inherited genetically from those who have brought you into this world.
A century ago, as we noted earlier in this chapter, believing that instincts are the mainspring of human behavior may have made good sense. But that was then, and this is now. So why does it still seem at least sometimes wise to claim that we humans are ruled somehow by instincts? At least in part because doing something habitually
can be so easily and so thoughtlessly done, it is not surprising that our own particular habits can be readily mistaken as genetically inherited human instincts, that is, as thoughts, actions, and beliefs that are universal—as things everybody on Earth must do because they seem so utterly natural to us. Such an assumption about the universality of what are actually just our own personally acquired habits drives any self-respecting social scientist to distraction. Furthermore, just because a lot of people we know may have the same habits, good or bad (going to church on Sundays, say, or smoking cigarettes) does not mean that they are universal, inevitable, or in any other way “natural” to our species.
Notes
1 Currently, scientists are using computers to simulate how the brain accomplishes such work (Ananthaswamy 2017; Sanborn and Chater 2016).
2 While we will not explore the likelihood here, instead of thinking the human brain does elaborate computer-like data analysis and probability calculation, a better model for what the brain is actually doing might be information science (Fleissner and Hofkirchner 1996; Frankland et al. 2019).
Works Cited
Ananthaswamy, Anil (2017). The brain's 7D sandcastles could be the key to consciousness. New Scientist 3145: 28–32. Retrieved from: https://www.newscientist.com/article/mg23531450-200-the-brains-7d-sandcastles-could-be-the-key-to-consciousness/
Ayala, Francisco J. (2008). Science, evolution, and creationism. Proceedings of the National Academy of Sciences 105: 3–4. doi:10.1073/pnas.0711608105
Beebe, Beatrice, Daniel Messinger, Lorraine E. Bahrick, Amy Margolis, Karen A. Buck, and Henian Chen (2016). A systems view of mother–infant face-to-face communication. Developmental Psychology 52: 556–571. doi:10.1037/a0040085
Bornstein, Marc H. (2014). Human infancy . . . and the rest of the lifespan. Annual Review of Psychology 65: 121–158. doi:10.1146/annurev-psych-120710-100359
Clark, Andy (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36: 181–253. doi:10.1017/S0140525X12000477
Colombo, Matteo (2013). Moving forward (and beyond) the modularity debate: A network perspective. Philosophy of Science 80: 356–377. doi:10.1086/670331
Dehaene, Stanislas (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Penguin Books.
Developmental milestones (n.d.). Birth to 12 months. Office of Child Development, University of Pittsburgh. Retrieved from: www.ocd.pitt.edu/Files/PDF/Foster/27758_ocd_DM_b-12.pdf
Fleissner, Peter and Wolfgang Hofkirchner (1996). Emergent information. Towards a unified information theory. BioSystems 38: 243–248. doi:10.1016/0303-2647(95)01597-3
Fodor, Jerry A. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
Francis, Mark (2007). Herbert Spencer and the Invention of Modern Life. Ithaca, NY: Cornell University Press.
Frankland, Paul W., Sheena A. Josselyn, and Stefan Köhler (2019). The neurobiological foundation of memory retrieval. Nature Neuroscience 22: 1576–1585. doi:10.1038/s41593-019-0493-1
Friston, Karl (2008). Hierarchical models in the brain. PLoS Computational Biology 4(11): e1000211. doi:10.1371/journal.pcbi.1000211
Graziano, Michael S. A. (2019). Rethinking Consciousness: A Scientific Theory of Subjective Experience. New York: W. W. Norton.
Herculano-Houzel, Suzana (2016). The Human Advantage: A New Understanding of How Our Brain Became Remarkable. Cambridge, MA: MIT Press.
Kumar, Krishan (1997). Spencer, Herbert (1820–1903). In The Dictionary of Anthropology, Thomas Barfield (ed.), pp. 443–444. Oxford: Blackwell.
Kuo, Zing-Yang (1921). Giving up instincts in psychology. The Journal of Philosophy 18: 645–664. Retrieved from: www.jstor.org/stable/2939656
Lupyan, Gary (2015). Cognitive penetrability of perception in the age of prediction: Predictive systems are penetrable systems. Review of Philosophy and Psychology 6: 547–569. doi:10.1007/s1316
Palecek, Martin (2017). Modularity of mind: Is it time to abandon this ship? Philosophy of the Social Sciences 47: 132–144. doi:10.1177/0048393116672833
Panksepp, Jaak and Mark Solms (2012). What is neuropsychoanalysis? Clinically relevant studies of the minded brain. Trends in Cognitive Sciences 16: 6–8. doi:10.1016/j.tics.2011.11.005
Pepperell, Robert (2018). Consciousness as a physical process caused by the organization of energy in the brain. Frontiers in Psychology 9: 2091. doi:10.3389/fpsyg.2018.02091
Pinker, Steven (2002). The Blank Slate: The Modern Denial of Human Nature. New York: Penguin Books.
Sanborn, Adam N. and Nick Chater (2016). Bayesian brains without probabilities. Trends in Cognitive Sciences 20: 883–893. doi:10.1016/j.tics.2016.10.003
Sapolsky, Robert M. (2017). Behave. The Biology of Humans at Our Best and Worst. New York: Penguin Press.
Thomason, Moriah E. (2018). Structured spontaneity: Building circuits in the human prenatal brain. Trends in Neurosciences 41: 1–3. doi:10.1016/j.tins.2017.11.004
Thorpe, Ian J. N. (2003). Anthropology, archaeology, and the origin of warfare. World Archaeology 35: 145–165. doi:10.1080/0043824032000079198
Wei, Xue-Xin and Alan A. Stocker (2015). A Bayesian observer model constrained by efficient coding can explain "anti-Bayesian" percepts. Nature Neuroscience 18: 1509–1517. doi:10.1038/nn.4105
Wenger, Elisabeth, Claudio Brozzoli, Ulman Lindenberger, and Martin Lövdén (2017). Expansion and renormalization of human brain structure during skill acquisition. Trends in Cognitive Sciences 21: 930–939. doi:10.1016/j.tics.2017.09.008
Wilson, Edward O. (1978). On Human Nature. Cambridge, MA: Harvard University Press.
Wilson, Edward O. (2012). The Social Conquest of the Earth. New York: Liveright (a division of W. W. Norton).
Wood, Wendy and Dennis Rünger (2016). Psychology of habit. Annual Review of Psychology 67: 289–314. doi:10.1146/annurev-psych-122414-033417
7 THE BRAIN AS A PATTERN MAKING DEVICE
What Makes Us Creative?
In Chapter 5 we argued that the human brain has a lot more work to do than B. F. Skinner and other behaviorists back in the twentieth century thought necessary. On the other hand, as we insisted in Chapter 6, it is unlikely that the brain is a remarkably sophisticated computer—despite the fact that almost everybody writing about how the brain works currently seems to favor using the jargon and logic of modern computer science. For example, the well-known author and cognitive scientist Stanislas Dehaene has written that below the level of human consciousness:
myriad unconscious processors, operating in parallel, constantly strive to extract the most detailed and complete interpretation of our environment. They operate as nearly optimal statisticians who exploit every slightest perceptual hint—a faint movement, a shadow, a splotch of light—to calculate the probability that a given property holds true in the outside world. (Dehaene 2014, 92)1
Not only do we see this way of imagining how your brain works as suspect, we also think it may be more appropriate to describe the brain simply as a handy pattern recognition device useful for tracking whether things are, or are not, like what they were like the last time you encountered them in the world beyond the bony confines of your skull (Chapter 6). In the same spirit of not giving the human brain more credit than it deserves as a biologically assembled survival appliance, when we are talking about habitual old George, we favor seeing the brain as just a fairly down-to-earth pattern learning device. Lastly, when we have Alice in our sights, we often think of her as our very own pattern making device.
Up until now, however, we have said little about how Alice helps us reshape, change, and possibly even improve on the patterns each of us encounters in the real world outside our own private submarine. The time has come to do so.
Make of It What You Will
In previous chapters, we sought to underscore the fact that evolution has been generous in endowing our species with the capacity to do some things exceptionally well, other things less successfully, and some things not at all. The examples we offered were obvious ones. We are able to build magnificent skyscrapers, we are not especially skilled at climbing trees (although our distant biological ancestors may have been a lot better at doing so), and we cannot fly unaided. The general lesson we have drawn is that evolution has given our species uncommon flexibility in deciding what to do to get through life more or less intact and reasonably successful. In short, evolution has given us the capacity to do many things, but contrary to what some university scholars and street prophets alike may profess, evolution has not told us much about what to do with the biological gifts we have received. Instead, as the saying goes, we can "make of it what you will." As a consequence, like it or not, what happens to us, as individuals and as a species, is largely in our own hands, largely our responsibility, not evolution's problem because, after all, as we have said before, evolution has no "shoulds," only "coulds."
When it comes to "coulds," however, nothing we have said so far in this book about Sherlock Holmes or George Challenger has pinned down the human brain's obvious capacity to be inventive, creative, and sometimes downright silly. So yes, it is time to ask Alice to join the conversation. Before this happens, however, we first need to say a few things about Sigmund Freud, and about how even if many of his ideas no longer sound as convincing as they once did decades ago, nonetheless there is definitely a place for someone like Alice up there on our shoulders.
Freud is Dead
The first thing we need to say is that Freud is dead. No, we do not mean the famous twentieth century psychologist who died in 1939 at the beginning of World War II after struggling for years with cancer. (Freud did not listen to his doctors, and he really, really liked to smoke cigars.) We mean Freud's way of thinking about how the brain works with the world, popularly called Freudian psychoanalysis.
Not every psychologist practicing today would agree with us that Freudian thinking is dead and buried. We are not, however, the only ones to think so.
The neuroscientist and Nobel Laureate Eric Kandel observed in an insightful overview in 1999 that this remarkable man revolutionized our understanding of the human mind during the first half of the twentieth century. Unfortunately, he went on to say, during the second half of the last century, Freudian psychoanalysis did not evolve scientifically. It did not develop objective methods for testing Freud's excitingly original ideas. Consequently, Kandel gloomily concluded in his benchmark essay, psychoanalysis entered the twenty-first century with its influence in decline. With the passing of psychoanalysis as a valued way of thinking about how your brain works, nothing comparable in its scope and helpfulness has taken its place, leaving most of us today without a workable framework for understanding ourselves and why we do what we do. As Kandel concluded in 1999: "This decline is regrettable, since psychoanalysis still represents the most coherent and intellectually satisfying view of the mind" (Kandel 1999, 505).
Much of the strength of Freud's way of thinking about how the brain handles the world was unquestionably the richness of his portrayal of the inner human psyche. Like a gourmet meal billed as complete from soup to nuts (and after-dinner coffee), his famous trio of players on the hidden stage located between our ears—the id, the ego, and the superego—along with the roles he had them play in the human drama in shaping our actions, reactions, and psychological maladies all had a completeness, a totality, about them that, as Kandel noted, no modern alternative school of psychology has been able to match.
We may be too hopeful, but we think Sherlock, George, and Alice may yet be able to give Freud's famous trio a run for their money. But please do not make the mistake of thinking these three are the kissing cousins of Freud's miraculous combo. Alice, for example, is no superego. Not by a long shot. Nor is she a superhero. Yet we do think she can be truly heroic.
Stop Calling Us Lazy
Recall from previous chapters that Daniel Kahneman labels the kind of thinking he sees as hard and slow "System 2." He also says this sort of thinking is not only demanding, but also something most of us try to avoid. "The evidence is persuasive," he claims, "activities that impose high demands on System 2 require self-control, and the exertion of self-control is depleting and unpleasant" (Kahneman 2011, 42; also Kahneman 2003). No wonder he tells us that when given half a chance, the brain's System 2 would rather turn itself off than get down to business.
Given such a negative take on this way of handling the brain's relationship with the world, is it any wonder that Kahneman uses the word "lazy" to describe System 2's behavior so often in his 2011 book Thinking, Fast and Slow (by our count, at least 15 times) that it is easy to conclude he truly wants us to take this dismissive claim literally? Indeed, while acknowledging that this is a harsh
judgment, he says such an assessment is not unfair. After all, those “who avoid the sin of intellectual sloth could be called ‘engaged.’ They are more alert, more intellectually active, less willing to be satisfied with superficially attractive answers, more skeptical about their intuitions.” In a word, they are more rational (2011, 46). Perhaps Kahneman is right, but we wonder. We are inclined to think he is not only being overly judgmental, but may also be missing the point. Recall again, for instance, what Holmes tells Watson at the beginning of Doyle’s tale “A Scandal in Bohemia”: Watson only sees, he does not observe. If true, then shame on Watson. But is Sherlock being fair? What is it that Watson is too lazy to see? In this story, Sherlock accuses him of not picking up on the fact (although his eyes are just as good as Sherlock’s!) that there are 17 steps (go ahead, count them) leading up to the room where Holmes and he are ensconced when we first meet these two together in this tale, the first of Doyle’s many short stories about this famous duo. On a scale from 1 to 10, however, how would you rate such mental laziness? Look at it this way. If it is true that habits are more cost effective than conscious, deliberate, and intentional undertakings, then why on earth would Watson or anyone else, other than perhaps the obsessive Mr. Holmes, wish to take the time and effort needed to count how many steps there are from the hall up to Holmes’s private sanctuary? Wouldn’t you agree that Watson, in fact, isn’t being lazy at all? Instead, couldn’t it be argued that he is merely being wise? At least with regard to his own personal time and energy budget? After all, if a brain’s resources can only be stretched in so many different directions at one and the same time, why waste precious time and energy doing something like counting stairs when taking those stairs habitually, or at any rate without much conscious effort, would be more than just good enough? Importantly, wouldn’t being economical in this fashion free up some of the brain’s metabolic budget for other uses, other thoughts, other pursuits? Here then is the real issue! Unless we have misunderstood Kahneman, he sees the brain as being lazy when it does not invest a lot of time and energy doing something prosaic that it perceives to be just the same old thing it has encountered before. But is this a sign of laziness? Or could it be the human brain simply has better things to do?
Who's Driving the Car?
As far as we can tell, many have accepted Kahneman's claim that System 2 is lazy. The renowned physicist and mathematician Freeman Dyson has actually written that
so long as we are engaged in the routine skills of calculating and talking and writing, we are not thinking, and System One is in charge. We only make the mental effort to activate System Two after we have exhausted the possible alternatives. (Dyson 2011, 43)
These words suggest that the only character up there in the human brain capable of doing any serious thinking must be Holmes. When he is not actively in pursuit of some criminal or evildoer (or counting stairs), then surely more often than not it must be Challenger who is the only one awake at the wheel.
Let us grant for the moment that Daniel Kahneman and Freeman Dyson are right. Yes, thinking is hard work. Does it follow that orchestrating our lives as much as humanly possible to make George (System 1) carry most of the burden in the struggle for existence is being lazy? If true, how on earth do the seemingly frivolous kinds of thinking popularly called daydreaming, self-reflection, fantasizing, and the like fit into this humorless picture of human nature and the brain? If laziness is such a ruling human passion, should we consider adding a sixth finger to our Great Human High-Five Advantage (Chapter 4) to incorporate our evident species commitment to intellectual sloth?
More to the point, does turning the energy consumed by Sherlock (System 2) routinely down low whenever possible actually happen inside the human cranium? Can we confidently rule out the possibility that rather than saving metabolic energy for a rainy day, say, whenever possible, what is not used by either Sherlock or George gets used instead in other ways by the human brain? Say, by someone named Alice?
Alice Arrives Unexpectedly
It would be hard to ignore that fantasy, imagination, and playfulness are characteristic of human thought. It is not surprising, therefore, that such dimensions of the human experience have not been entirely neglected in recent decades as research topics in psychology and modern neuroscience (Abraham 2016). Yet the main focus of scientific psychological research since the heyday of Freudianism has often been on pinning down and putting under a microscope the logic (and illogic) of human rationality and decision-making. In recent decades, there have been several Nobel Prizes awarded for such sober research work.2
You can imagine, therefore, what a shock investigators got not long ago when they started looking inside the living human skull using modern brain-scanning technologies such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). To their amazement, what they saw there was not that the brain turns the power down low as often as possible to save energy. Far from it, frequently the brain could be seen turning inward instead apparently to engage in its own private thoughts, dreams, and yearnings. In short, what has been detected using modern laboratory tools is that when someone is left undisturbed with time on her or his hands, their brain does not
go dark. On the contrary, brainwork gets focused instead in particular brain regions now referred to collectively in the cognitive sciences as the brain's "default network" (Buckner 2012; Buckner and DiNicola 2019; Buckner et al. 2008; Shine and Breakspear 2018). Some have labeled this continuing mental activity seemingly disconnected from external events as "mind wandering" (Corballis 2016; Smallwood and Schooler 2006, 2015). However, it is now accepted that such labeling is too narrow-minded, even misleading. As Jessica Andrews-Hanna at the University of Arizona and her colleagues have elaborated:
Tasks that activate the [default] network often require participants to retrieve episodic, autobiographical, or semantic information, think about or plan aspects of their personal future, imagine novel scenes, infer the mental states of other people, reason about moral dilemmas or other scenarios, comprehend narratives, self-reflect, reference information to one's self, appraise or reappraise emotional information, and so on. (Andrews-Hanna et al. 2014, 32)
Although much remains to be learned about the costs and benefits of using the brain in this way—which has now also been called stimulus-independent thinking, spontaneous thinking, task-unrelated thinking, daydreaming, and zoning-out (but of course!) as well as mind wandering—it has become increasingly apparent in recent years that Alice's kind of brainwork and the sorts of deliberative thinking associated with System 2 (Sherlock) are quite capable of working together with one another in various ways, just as System 1 (George) and System 2 are able to do (Christoff et al. 2009, 2016; Poerio et al. 2007; Vatansever et al. 2015, 2017).3
One Must Seek the Truth Within
In many ways it seems surprising that dual-process models of the mind are still as popular in psychology as they evidently are. After all, Sherlock Holmes is a widely known fictional detective renowned not only for what Watson refers to as his "faculty of observation," but also for his "facility for deduction" (as noted in Chapter 6). Similarly, in Agatha Christie's widely read crime novels and short stories, the Belgian detective Hercule Poirot often alludes to his "little grey cells" as his own special weapon against murder and wrongdoing. When he is challenged in her story "The Disappearance of Mr. Davenheim" (1924) by his old friend Inspector Japp of Scotland Yard for running down the value of details as clues, Christie has Poirot respond:
"By no means. These things are all good in their way. The danger is they may assume undue importance. Most details are insignificant; one or two are vital. It is the brain, the little grey cells"—he tapped his forehead—"on which one must rely. The senses mislead. One must seek the truth within — not without."
"You don't mean to say, Monsieur Poirot, that you would undertake to solve a case without moving from your chair, do you?"
"That is exactly what I do mean — granted the facts were placed before me. I regard myself as a consulting specialist."
Certainly, Poirot's declaration here that truth must be found within the folds of the human brain rather than outside its protective shell is debatable. Even so, it surely goes without saying that neither Sherlock nor Hercule would work as believable characters in anyone's detective story if it was not common knowledge that it takes more than just a keen eye and good work habits to make such fictional detectives come across to readers as brilliantly insightful.
But wait a moment. There is really no need to rely on well-known detective stories to challenge the credibility of dual-process models of the mind. As Shakespeare in The Tempest has the sorcerer Prospero, the wrongly dispossessed Duke of Milan, declaim: "We are such stuff as dreams are made on, and our little life is rounded with a sleep." Said in our own words, some of the best evidence for why models of the human mind need a character like our beloved Alice as well as Sherlock and George can be found in your very own dreams and daydreams (Bulkeley 2018; Carruthers 2017).
The Stuff That Dreams Are Made On

Common sense has long maintained that sleep is a different kettle of fish entirely from what most of us would be willing to call being (1) rational and (2) deliberative. Conventional wisdom has long had it that sleep is when the brain powers down and largely turns off—not entirely, of course, since it has also long been conventional to say that sleep is when we dream. According to the ancient Roman poet Ovid, for instance, Morpheus (the god of dreams) is the son of Hypnos (the god of sleep). So it won’t do to try to claim that when we are asleep, we are totally dead to the world.
Furthermore, the mysteries of what sleep is and why we dream are compounded when evidence concerning both is put on the table for serious discussion. In 1869, as a case in point, the Russian scientist Dmitri Mendeleyev reported waking up from a snooze during which he had dreamed about a table and what was on it. When he awoke, he realized that while asleep, he had finally been able to figure out how he could complete an arrangement of the chemical elements in a logical way—a scheme now known as the Periodic Table of the Elements. Similarly, in February 1858, Alfred Russel Wallace independently stumbled on Darwin’s theory of evolution by means of natural selection during a shaking fit brought on by malaria.
Then again, August Kekulé reported in 1890 that sometime in the winter of 1861–1862 he had a daydream while struggling to figure out how the atoms in benzene fit together. In his reverie, a snake took hold of its tail in its mouth. When he awoke, Kekulé realized how benzene molecules are arranged in rings of carbon atoms. Skeptics have suggested Kekulé may have dreamed up this purported daydream. Others are more willing to see some truth to it all (Strunz 1993).
But let’s not lose sight here of the point we are trying to make with these several well-known historical examples. Although conventional wisdom sees “being asleep” as the flip side of “being awake,” it cannot be denied that even when asleep, the human brain is able to do work that is unexpectedly skillful, insightful, and yes, rational.
Unexpected Insights

These examples of people making important intellectual discoveries while more or less asleep are not as rare as you might think. According to Deirdre Barrett—a psychologist whose special area of expertise includes dreams and dreaming—such problem-solving fantasies generally arise in the human mind when people find themselves stuck at some point along the way toward solving a problem they are intent on mastering. Unexpectedly but fortunately, a reverie may suddenly resolve their mental impasse (Barrett 2001, 2017).
Although we know of no solid evidence that might support such a likelihood, we wonder whether Agatha Christie may have modeled her hero Hercule Poirot on the historically real French mathematician, scientist, and philosopher Henri Poincaré (1854–1912), whose reputation as a famous intellectual must surely have been known to her. Such speculation aside, we would love to be able to have a good meal and an extended conversation with this illustrious gentleman, who wrote so insightfully about how his own inner self at unexpected moments had given him solutions to difficult problems in mathematics that he had been consciously struggling with, to little avail.
Here is what Poincaré reported in 1910 in the philosophical quarterly The Monist about such sudden unexpected insights. Although the opening paragraph of this exceptional historical document suggests that he is only writing about mathematical creativity, it soon becomes clear from what he relates that the scope of his recollections is far broader than just that:

The genesis of mathematical creation is a problem which should intensely interest the psychologist. It is the activity in which the human mind seems to take least from the outside world, in which it acts or seems to act only of itself and on itself, so that in studying the procedure of geometric thought we may hope to reach what is most essential in man’s mind. (Poincaré 1910, 321)
Although we might not favor the words “most essential” to describe the kind of creative thought he is alluding to, focusing as he does on mathematical insight makes sense. Unlike other creative activities, mathematics has a formal precision about it that makes such brainwork an ideal window on what is behind human creativity. Starting with the following incident, Poincaré carefully lays out for his readers the basic lessons he believes we can draw from several inspirational moments he has had while struggling to develop certain key ideas central to his own mathematical accomplishments.

For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my work table, stayed an hour or two, tried a great number of combinations and reached no result. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours. (1910, 326)

Poincaré goes on to tell us about other occasions, too, when solutions to mathematical questions suddenly entered his consciousness after he had been struggling with them without success and evidently without suspecting they were connected with what he had been working on previously. On one such occasion, for example:

Disgusted with my failure, I went to spend a few days at the seaside, and thought of something else. One morning, walking on the bluff, the idea came to me, with just the same characteristics of brevity, suddenness, and immediate certainty, that the arithmetic transformations of indeterminate ternary quadratic forms were identical with those of non-Euclidean geometry. (1910, 327)

Is there a pattern we can deduce from these reportedly sudden and unexpected mental insights? He says there is. His revelations in each instance had been preceded by long, unconscious prior work.

The rôle of this unconscious work in mathematical invention appears to me incontestable, and traces of it would be found in other cases where it is less evident. Often when one works at a hard question, nothing good is
accomplished at the first attack. Then one takes a rest, longer or shorter, and sits down anew to the work. During the first half-hour, as before, nothing is found, and then all of a sudden the decisive idea presents itself to the mind. It might be said that the conscious work has been more fruitful because it has been interrupted and the rest has given back to the mind its force and freshness. But it is more probable that this rest has been filled out with unconscious work and that the result of this work has afterward revealed itself to the geometer just as in the cases I have cited; only the revelation instead of coming during a walk or a journey, has happened during a period of conscious work, but independently of this work which plays at most a rôle of excitant, as if it were the goad stimulating the results already reached during rest, but remaining unconscious, to assume the conscious form. (1910, 328–329)
Rethinking Thinking

While Poincaré has more than this to tell us about how bright ideas in mathematics and in life more generally may suddenly enter our conscious awareness, these few passages from this classic personal account seem sufficient for us to suggest two characteristics of inner brainwork worth keeping in mind.
First, it may not be necessary to be asleep to be suddenly blessed with seemingly miraculous insights. Apparently just taking some time off to think about something else can be sufficient (Baird et al. 2012; Gilhooly 2016).
Second, the pairing of what Poincaré calls “unconscious work” (by which he evidently means what we call Alice’s more secretive contributions to brain functioning) with “conscious work” (presumably what we would label as Sherlock’s involvement) both before and after experiencing such insightful moments is evidently also key. Another word that might be used for the work done beforehand would be what many psychologists call priming, although when psychologists use this term, they are usually referring to what they see as something that can unconsciously bias the mind beforehand in one way or another (Dehaene 2014, 56–59; Doyen et al. 2015). We would not limit the idea of priming solely to things or events that unconsciously influence what the brain sets its mind on doing soon thereafter.
The famous chemist Louis Pasteur said something during a speech he gave in 1854 that has since become perhaps his most widely quoted remark: “dans les champs de l’observation, le hasard ne favorise que les esprits préparés” (in the fields of observation, chance favors only the prepared mind). Like many other famous quotations, the meaning of this remark is somewhat unclear when taken as it often is out of its original context (Pearce 1912). We think, however, a less literal translation than the one just given may work better: “if you don’t know what you are looking for, then the chances are good you won’t find it.”
We favor what Pasteur suggested—translated as we just did—because one of the statements commonly made about our species’ creativity is that great insights and discoveries arrive in the minds of human beings as sudden flashes of inspiration almost as if achieved by magic. Not only do we think that Pasteur was right to say that the significance of things and events can easily be missed if we aren’t prepared to notice them, but we also want to say that it makes little evolutionary sense to believe that only a small number of uncommonly gifted people are capable of making new discoveries and arriving at new insights. What is important is not whether someone is oddly gifted, but whether she or he is primed—ready, willing, and able—to pay attention when bolts of inspirational lightning, so to speak, strike. There is also an assumption that has often been taken for granted down through the ages. Despite the popularity of conventional expressions such as daydreaming, reverie, Tagträumen, drömmar, and fantasticheria, it seems widely believed that dreaming is something that happens after we fall asleep, and that dreaming is different from being awake and conscious. But is this true? Taking the hint from what Henri Poincaré and others have reported about their inspirational moments in science suggests a different likelihood. In a real and not just a silly sense, our brains may be dreaming all the time (Crittenden et al. 2015; Globus 2019; Raichle 2015a, 2015b). Or as dream researcher Deirdre Barrett has written, dreaming may be essentially the same thing as thinking, but just in a different neurophysiologic state (Barrett 2017). By this she apparently means that dreams may seem strange or nonsensical because the chemistry and functioning of the sleeping brain affect how we handle our thoughts, but we can still focus in them on the same sorts of issues that concern—and may worry—us while we are awake. She adds that coming at problems while asleep can also be beneficial. “This unusual state of consciousness is often a blessing for problem solving—it helps us find solutions outside our normal patterns of thought” (Barrett 2011, 2015; also Bulkeley 2019).
At the Door to Wonderland

In Alice’s Adventures in Wonderland, Lewis Carroll tells us that Alice at first is more than a little traumatized by her unexpected arrival at the small door leading into Wonderland. Nonetheless, she takes a sip of a strange concoction in a little bottle on a nearby table and undergoes a transformation.

“What a curious feeling!” said Alice; “I must be shutting up like a telescope.” And so it was indeed: she was now only ten inches high, and her face brightened up at the thought that she was now the right size for going through the little door into that lovely garden. First, however, she waited for a few minutes to see if she was going to shrink any further: she felt a
little nervous about this; “for it might end, you know,” said Alice to herself, “in my going out altogether, like a candle. I wonder what I should be like then?” And she tried to fancy what the flame of a candle is like after the candle is blown out, for she could not remember ever having seen such a thing. After a while, finding that nothing more happened, she decided on going into the garden at once; but, alas for poor Alice! when she got to the door, she found she had forgotten the little golden key, and when she went back to the table for it, she found she could not possibly reach it: she could see it quite plainly through the glass, and she tried her best to climb up one of the legs of the table, but it was too slippery; and when she had tired herself out with trying, the poor little thing sat down and cried. “Come, there’s no use in crying like that!” said Alice to herself, rather sharply; “I advise you to leave off this minute!” She generally gave herself very good advice, (though she very seldom followed it), and sometimes she scolded herself so severely as to bring tears into her eyes . . .

As both of us like to say, we think Alice is awesome. For this, we both are grateful to evolution and The Great Human High-Five Advantage. As a corollary, we also suggest that the old Latin saying in nocte consilium, “the night is counsel,” which can be more freely translated as “sleep on it before you decide,” is good advice.
At this point in our narrative, however, we have only just arrived at the door of Alice’s Wonderland. In the following chapter, we will go through this fanciful portal to explore what lies over there on the far side.
Notes

1 It is hard not to conclude, somewhat tongue-in-cheek, that Dehaene would have us replace what has long been called the “homunculus fallacy” (Adolphs and Anderson 2018, 11–13)—the notion that each of us has a small “mini-me” in residence inside our skull—with a veritable corporate think tank of specialized computer programmers.
2 Notably, Herbert Simon, awarded in 1978; Daniel Kahneman, in 2002; and Richard Thaler, in 2017.
3 It is also evident that the habitual learned responses associated with George (System 1) can be prompted not only by input from Sherlock (System 2), but also from Alice without any obvious external provocation (Buckner and DiNicola 2019).
Works Cited

Abraham, Anna (2016). The imaginative mind. Human Brain Mapping 37: 4197–4211. doi:10.1002/hbm.23300
Adolphs, Ralph and David J. Anderson (2018). The Neuroscience of Emotion: A New Synthesis. Princeton, NJ: Princeton University Press.
Andrews-Hanna, Jessica R., Jonathan Smallwood, and R. Nathan Spreng (2014). The default network and self-generated thought: Component processes, dynamic control, and clinical relevance. Annals of the New York Academy of Sciences 1316: 29–52. doi:10.1111/nyas.12360
Baird, Benjamin, Jonathan Smallwood, Michael D. Mrazek, Julia W. Y. Kam, Michael S. Franklin, and Jonathan W. Schooler (2012). Inspired by distraction: Mind wandering facilitates creative incubation. Psychological Science 23: 1117–1122. doi:10.1177/0956797612446024
Barrett, Deirdre (2001). Comment on Baylor: A note about dreams of scientific problem solving. Dreaming 11: 93–95. doi:10.1023/A:1009436621758
Barrett, Deirdre (2011). Answers in your dreams. Scientific American Mind 22, no. 5: 27–33. doi:10.1038/scientificamericanmind1111-26
Barrett, Deirdre (2015). Dreams: Thinking in a different biochemical state. In Dream Research, Milton Kramer and Myron Glucksman (eds.), pp. 94–108. New York and Hove, East Sussex: Routledge.
Barrett, Deirdre (2017). Dreams and creative problem-solving. Annals of the New York Academy of Sciences 1406: 64–67. doi:10.1111/nyas.13412
Buckner, Randy L. (2012). The serendipitous discovery of the brain’s default network. Neuroimage 62: 1137–1145. doi:10.1016/j.neuroimage.2011.10.035
Buckner, Randy L., Jessica R. Andrews-Hanna, and Daniel L. Schacter (2008). The brain’s default network: Anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences 1124: 1–38. doi:10.1196/annals.1440.011
Buckner, Randy L. and Lauren M. DiNicola (2019). The brain’s default network: Updated anatomy, physiology and evolving insights. Nature Reviews Neuroscience 20: 593–608. doi:10.1038/s41583-019-0212-7
Bulkeley, Kelly (2018). The meaningful continuities between dreaming and waking: Results of a blind analysis of a woman’s 30-year dream journal. Dreaming 28: 337–350. doi:10.1037/drm0000083
Bulkeley, Kelly (2019). Dreaming is imaginative play in sleep: A theory of the function of dreams. Dreaming 29: 1–21. doi:10.1037/drm0000099
Carruthers, Peter (2017). The illusion of conscious thought. Journal of Consciousness Studies 24: 228–252. doi:10.1080/13869795.2013.723035
Christoff, Kalina, Alan M. Gordon, Jonathan Smallwood, Rachelle Smith, and Jonathan W. Schooler (2009). Experience sampling during fMRI reveals default network and executive system contributions to mind wandering. Proceedings of the National Academy of Sciences U.S.A. 106: 8719–8724. doi:10.1073/pnas.0900234106
Christoff, Kalina, Zachary C. Irving, Kieran C. R. Fox, R. Nathan Spreng, and Jessica R. Andrews-Hanna (2016). Mind-wandering as spontaneous thought: A dynamic framework. Nature Reviews Neuroscience 17: 718–731. doi:10.1038/nrn.2016.113
Corballis, Michael C. (2016). The Wandering Mind: What the Brain Does When You’re Not Looking. Chicago, IL: University of Chicago Press.
Crittenden, Ben M., Daniel J. Mitchell, and John Duncan (2015). Recruitment of the default mode network during a demanding act of executive control. Elife 4: e06481. doi:10.7554/eLife.06481
Dehaene, Stanislas (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York: Penguin Books.
Doyen, Stéphane, Olivier Klein, Daniel J. Simons, and Axel Cleeremans (2015). The other side of the mirror: Priming in cognitive and social psychology. Social Cognition 32, special issue: 12–32. doi:10.1521/soco.2014.32.supp.12
Dyson, Freeman (2011). How to dispel your illusions. Review of Thinking, Fast and Slow by Daniel Kahneman. The New York Review of Books 58 (December 22): 40–43. Retrieved from: www.nybooks.com/articles/2011/12/22/how-dispel-your-illusions/
Gilhooly, Ken (2016). Incubation in creative thinking. In Cognitive Unconscious and Human Rationality, Laura Macchi, Maria Bagassi, and Riccardo Viale (eds.), pp. 301–312. Cambridge, MA: MIT Press.
Globus, Gordon (2019). Lucid Existenz during dreaming. International Journal of Dream Research 12: 70–74. doi:10.11588/ijodr.2019.1.52858
Kahneman, Daniel (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist 58: 697–720. doi:10.1037/0003-066X.58.9.697
Kahneman, Daniel (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kandel, Eric R. (1999). Biology and the future of psychoanalysis: A new intellectual framework for psychiatry revisited. American Journal of Psychiatry 156: 505–524. doi:10.1176/ajp.156.4.505
Pearce, Richard M. (1912). Chance and the prepared mind. Science 35: 941–956. Retrieved from: www.jstor.org/stable/1638153
Poerio, Giulia L., Mladen Sormaz, Hao-Ting Wang, Daniel Margulies, Elizabeth Jefferies, and Jonathan Smallwood (2017). The role of the default mode network in component processes underlying the wandering mind. Social Cognitive and Affective Neuroscience 12: 1047–1062. doi:10.1093/scan/nsx041
Poincaré, Henri (1910). Mathematical creation. The Monist 20: 321–335. doi:10.1093/monist/20.3.321
Raichle, Marcus E. (2015a). The brain’s default mode network. Annual Review of Neuroscience 38: 433–447. doi:10.1146/annurev-neuro-071013-014030
Raichle, Marcus E. (2015b). The restless brain: how intrinsic activity organizes brain function. Philosophical Transactions of the Royal Society B 370: 20140172. doi:10.1098/rstb.2014.0172
Shine, James M. and Michael Breakspear (2018). Understanding the brain, by default. Trends in Neurosciences 41: 244–247. doi:10.1016/j.tins.2018.03.004
Smallwood, Jonathan and Jonathan W. Schooler (2006). The restless mind. Psychological Bulletin 132: 946–958. doi:10.1037/0033-2909.132.6.946
Smallwood, Jonathan and Jonathan W. Schooler (2015). The science of mind wandering: Empirically navigating the stream of consciousness. Annual Review of Psychology 66: 487–518. doi:10.1146/annurev-psych-010814-015331
Strunz, Franz (1993). Preconscious mental activity and scientific problem-solving: A critique of the Kekulé dream controversy. Dreaming 3: 281–294. doi:10.1037/h0094386
Vatansever, Deniz, David K. Menon, Anne E. Manktelow, Barbara J. Sahakian, and Emmanuel A. Stamatakis (2015). Default mode network connectivity during task execution. Neuroimage 122: 96–104. doi:10.1016/j.neuroimage.2015.07.053
Vatansever, Deniz, David K. Menon, and Emmanuel A. Stamatakis (2017). Default mode contributions to automated information processing. Proceedings of the National Academy of Sciences U.S.A. 114: 12821–12826. doi:10.1073/pnas.1710521114
8 THE IMPACT OF CREATIVITY How Did You Learn That?
Neither of us has any difficulty believing there is something out there called reality even if we are not sure what this is. We are confident it is not just turtles all the way down, but proposing, as some do nowadays, that reality should be called quantum reality instead does little to help us pin down what’s what (Ananthaswamy 2018; Cameron 2008; Gleick 2018). Nor do dictionaries shed much light on the matter. “Quantum—a discrete quantity of energy proportional in magnitude to the frequency of the radiation it represents” (“Quantum,” n.d.). This is good to know, of course, and what a terrific name for something, whatever it is. But we still feel like we are trying to find our way through an unfamiliar room in the dark.
Hopefully in a less elusive fashion, the three propositions we introduced in Chapter 1 came across to you as more down-to-earth. Here again is the first one.

We live in two different worlds at the same time.

One is the world outside ourselves that is tangible, risky, and demanding. The other exists only in our brain’s private realm between our ears, a place where we form our opinions, reach conclusions, imagine even impossible worlds and events, and decide what actions, if any, to take in the outside world that may knowingly or not impinge upon our safety, sanity, and survival.
The second proposition introduced in Chapter 1 explains why it can be foolish, even dangerous, to find comfort in the fact that the world inside our heads is seemingly less tangible than the world outside our skulls.

The outer world we live in is mostly a world of our own making.

The Victorian poet William Ernest Henley (1849–1903) famously celebrated human ascendancy over the world, over “this place of wrath and tears,” in his
poem “Invictus” written in 1875. The fourth and last stanza asserts his fierce sense of liberation from the dictates of Heaven and Earth:

It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate:
I am the captain of my soul.

Courageous words. Yet in a world of our own making, we also have to accept responsibility (and blame) when things go wrong. As Shakespeare has Hamlet say in his famous tragedy about the Prince of Denmark, “Ay, there’s the rub.” The reason we are less confident about human mastery than William Ernest Henley evidently was is captured in the third proposition we presented in Chapter 1.

We live in our own minds all of the time, and must struggle to be in touch with what is happening in the outside world.

Being so well-endowed by evolution with the talents—and the freedom—to remake the world the way we want it to be has made it possible for us as individuals (and perhaps also as a species) to overreach, stumble, get into trouble, and even die through our own foolishness and by our own hands. Furthermore, the irony at the heart of our three propositions is that mastering the art of imagining the world the way we want it to be rather than the way it happens to be is an achievement that playfully begins with the innocence and delight of childhood.
Playing in Two Worlds at the Same Time

Back when he was in grade school in rural Wisconsin, Gabriel created his own fanciful world both in his mind and on his parents’ bed. His recollections today make it clear that back then he saw no need to draw a line between what is real and what is fanciful except when the school bus was threatening to arrive to take him off to the classroom and the lessons he needed to learn there.
Gabriel went through a phase during his early grade school years when he would try to set aside time in the morning, before the school bus came, to play with his growing collection of toy cars and trucks. At the very least, he might put a few of them, like spectators at a show, on the windowsill in the bathroom next to our toothbrushes. Or beside his plate at the breakfast table. He found getting ready for school was a much brighter prospect when they were not far from his grasp. Since often there was not a lot of time before the bus arrived at the foot of our farmland driveway, just moving one or two of them along the sill while brushing his teeth gave him a sense that he was already in their world, not gearing up for another day at school. And rolling a car or truck along the sill
gave the driver inside a chance to exchange greetings with the other vehicles placed there. Gabriel liked to overhear their conversations being shouted back and forth.
Only on weekends did he have enough free time to watch what these toys and their drivers were doing to his heart’s content. He would gather up his favorites in his room and then slip across to his parents’ bed where the top blanket or quilt became an inviting landscape for him to work with. The size and weight of each car or truck suited his child’s fine motor skills. He would sit cross-legged in the middle of the bed with a handful of vehicles in his lap. Then he’d select one and place it on the soft bed cover stretched out like an artist’s canvas around him. As he moved the toy along on the bed, it would carve out a line of travel. Gradually the route being taken would become more apparent. Just as most children try not to color outside the lines in a coloring book, Gabriel would move his cars with great care to avoid disrupting the roads and major highways coming to life on the world he was sitting on.
In Gabriel’s mind, different roadways opening up on the bed also began to have their own stories to tell. He tried to remember the sights and scenes encountered along each route. He liked to listen to what drivers were saying to one another as they passed by. He also wanted to see what they were seeing. He watched to confirm that they knew where they had to turn at landmarks encountered on their journey, and whether they knew what directions to take to get to where they were headed. As does happen in a child’s life while on the road, a car or truck moving from here to there on the bed might lose its way. Gabriel could sense the anxiety building up in the driver’s seat. He would try to help out by spotting in his mind’s eye something familiar out the window from the backseat where he was sitting for safety’s sake.
What was really fun was figuring out where a particular car might be headed. Was the driver searching for a place to park at a busy supermarket? Or looking for a makeshift parking spot on a front lawn beside the road during a summer family reunion? At times a street forming on the bed would begin to look like the country road outside our house, or perhaps like one of the roads his bus took to and from school. It was easy to imagine how a family farm on this or that road was a hive of farming activity all day, whereas people at a neighboring house might come home for dinner, only to leave soon thereafter to arrive on time for sports practice at our local high school.
Gabriel did not have to be on the bed upstairs to see what was happening to the cars and trucks left there when he was called away and had to go downstairs for lunch. He might be eating a bagel with soup and a glass of milk, yet in his mind he could picture what they were doing in his absence, who the drivers were talking to or honking at, and where each vehicle was headed. After lunch, he would hurry back upstairs to his place on the bed covers to discover whether he had been right in his shrewd surmises while he had been absent from the scene.
Constructing Fantasies

It has been known for years that humans are not the only creatures that can change the world they live in to better suit their needs and wants—clever behavior technically known in the evolutionary sciences as niche construction (Odling-Smee et al. 2003). Many species in their own differing ways—biochemical, physical, or behavioral—do not wait around simply hoping for good things to happen. Many instead modify where they live to make their surroundings more suitable, more accommodating. Classic examples are beavers building dams to create cozy ponds; termite colonies making mounds in Africa, South America, and Australia; birds building nests; and perhaps more surprising, earthworms improving the quality of the soil they move through by eating it and passing it through their bodies, over and over again, generation after generation.
Moreover, like humans, there are other species, too, that can use and sometimes even make tools to help them do so. Chimpanzees can. Also certain species of birds, notably New Caledonian crows (McGrew 2013). While humans are not unique in using things to help them do things, tool use is nonetheless extremely rare in nature, presumably in part because it takes a certain amount of thoughtful insight to see the advantages of such assisted living. Nor is it clear that for species other than our own, using tools gives individuals a significant advantage over nontool users in the struggle for existence, although it could be argued as a case in point that a beaver uses wood to make a pond, and that having a handy pond around in which to build a cozy lodge is an improvement over simply living unprotected in the wild.
For humans, on the other hand, there is no doubt that tool use can make a substantial difference in the quality of life. As the authors of a recent assessment have written: “The advent of stone-tool use was undoubtedly a key event in our own lineage’s evolution, eventually leading to the establishment of humans as the most successful tool users on the planet” (Biro et al. 2013).
In this regard, note that Gabe was not just playing with his toy cars and trucks in a way comparable, say, to a dog catching a ball or a cat batting a mouse. He was using them as tools—not to remake the world into a better place, but to help him envision how the adult world of real cars and trucks works, and how to make that world work for him. When used as an aid to remembering facts, a tool of this variety is commonly called a mnemonic device (Luria 1987; Putnam 2015). We do not know if there is an equally suitable word to use to describe tools (or toys) used to help people act out fantasies, stories, and events happening only in Alice’s hidden world in the human brain. Somehow the term “prop,” which is short for “property,” doesn’t seem to be quite the right word to use. Yet regardless of whether this is the best word, Gabe’s boyhood recollections about toy cars and trucks, pretend roadways, and imaginary tiny human drivers show us how humans, presumably unlike other species, are able to use things in the real world as tools of the mind (Bardone and Magnani 2007).
It would be unwise to assume, however, that using things in the real world to act out fantasies and the like in the mind is just a characteristic of being a child that we abandon as we mature as human beings (Lakens 2014).
The Placebo Effect

People of all ages not only use things as tools, but also as material ways to give substance to thoughts, dreams, and yearnings otherwise solely lodged in the space between their ears. A few classic examples of the sorts of things that humans use as tools of this more expressive and imaginative kind would be religious icons and sacred reliquaries, large fancy motor cars, and framed medical school diplomas prominently displayed in a doctor’s office.
A less obvious example has been described by the American writer Robert Anthony Siegel. He has written that he takes a pill for writer’s block—and the panic attacks and insomnia he experiences along with this disability. The catch is that he knows the pill he takes is only a placebo, a capsule prescribed specifically for him by a friend named John Kelley who is a psychology professor and the deputy director of PiPS, the Program in Placebo Studies and Therapeutic Encounter at Beth Israel Deaconess Medical Center and Harvard University (Siegel 2017; also Koban et al. 2017).
As Siegel explains, the term placebo refers to a dummy pill passed off as a real one, and more broadly speaking, any sham treatment presented as if it were real. “By definition a placebo is a deception, a lie. But doctors have been handing out placebos for centuries, and patients have been taking them and getting better, through the power of belief or suggestion—no one’s exactly sure.”
The color of his own special pill is yellow, apparently the closest an American pharmacist can get to gold, which is the color Siegel associates with writing well. It is a capsule rather than a solid pill because, Kelley observed, capsules somehow look more scientific and therefore more potent. Kelley also prescribed them as being only short-acting on the logic that having a two-hour time limit would cut down on Siegel’s tendency to procrastinate. Together Siegel and Kelley crafted suitably proper-sounding medical instructions for swallowing these wonder pills covering not only how to take them, but also what effect they would have. Although they contain nothing other than cellulose, Siegel adds that they cost him a hefty $405, and being only placebos, they aren’t covered by health insurance. Kelley was confident that making them so expensive would increase the sense of their value, and thereby their potency as a wonder drug.
After taking two capsules, the prescribed dose, for the first time, Siegel says he closed his eyes and tried to explain to the pills what they were to do for him. “I became worried that I wouldn’t be able to suspend disbelief long enough to let the pills feel real to me. My anxieties about their not working might prevent them from working.”
Indeed at first they did not work. If anything, his anxiety and sleeplessness increased. But then they did. According to his report on all this, they evidently still do.
Siegel and others say there is a growing sense nowadays that placebos can be effective at least for some people. And not just for what they are in their own right, but also due to the symbols and rituals of health care associated with them. If so, then perhaps the word tool rather than placebo should be used to label such things. However, they are not tools to change the world but rather the mind and how it deals with the world, just as Gabe’s toy cars and trucks had no lasting impact on his parents’ bed, but did help him fantasize about the world of real cars, trucks, and highways. Importantly, in both cases, the effectiveness of these tools was enhanced by being careful to use them in properly convincing, even doctor-prescribed, ways.
The Essence of Fantasy

We see being imaginative as part and parcel of our collective species heritage rather than as just an uncommon talent possessed by only a select few of us. Why? Because the brain works creatively on behalf of itself every day of our lives to keep us safe and sound. True, as a species, we have been collectively excelling at dumbing down the world around us to make the challenges we all face on a daily basis as predictable and humdrum as humanly possible. Yet as skilled as we are as a species at niche construction, and as primed as we may all be to respond more or less habitually—thank you, George Edward Challenger—to whatever Lady Luck throws our way, all of us need to be creative at least now and then in how we handle what arrives on our doorstep unexpected, uninvited, and sometimes downright dangerous.
A trivial example would be suddenly finding out that your daughter has invited three of her middle school friends over for dinner at your house without asking you first. How on earth are you going to make that roast chicken go far enough to feed eight people, five of them hungry teenagers? A far more serious example would be figuring out where to hide from an active school shooter who has already killed several of your friends in the hallway just outside your classroom.
As we suggested in Chapter 4, creativity is what can result when all five of the fingers in The Great Human High-Five Advantage are working well together—particularly the thumb (social collaboration) and the index finger (fantasy and imagination) as in the formula imagination + collaboration = creativity. Gabe’s story about cars, trucks, and things that go—as well as Siegel’s potent little pill—would all qualify as fantasies of differing sorts. Yet what exactly is the essence of a fantasy? Why do we spend time having such thoughts, both consciously and subconsciously? Are they just an accident of evolution? Or are they one of its most potent creations?
Perhaps the best clues to the answers to such questions are those hidden within our dreams, worries, and delusions.
The Stuff That Dreams Are Made On

For years, dreaming as a brain activity was usually associated with rapid eye-movement (REM) sleep, but modern brain imaging studies have shown that during sleep, your brain is working on a variety of different tasks, only some of which would traditionally be labeled as “dreaming” (Bulkeley 2017; Domhoff and Fox 2015; Windt et al. 2016). Moreover, not all dreams are the same. Just as our waking thoughts can go off in many directions, so, too, can our dreams. They may be solely sensory experiences, but they can also be thoughtful reflections. They may be simple visual images, but they can also be unfolding and strikingly movie-like narratives. Moreover, as anyone who has had a nightmare knows all too well, dreams can also be bizarre, frightening, and out-of-this-world.
Importantly, the fact that the brain can still be actively thinking about things even when, by all outward appearances, a dreamer is immobile, unresponsive, and mostly out of touch with the world helps confirm that thinking does not depend on conscious awareness (Siclari et al. 2017). Another clue that such awareness is not needed would be what in everyday language is sometimes called “the worries.”
Having the Worries

It is probably safe to say worrying about things is a universal human characteristic. If the human hand had six fingers rather than just five, we might even offer worrying as one of the defining traits of human nature. Technically speaking, worrying may be described as “experiencing uncontrollable, apprehensive, and intrusive negative thoughts about the future” (Sari et al. 2016). However defined, we bring up this human characteristic not because of the emotional impact worrying can have on human health and happiness, but rather because of how invasive worries can be when humans are trying to think about other seemingly more immediate concerns. We will have more to say about worrying as an emotional state in the next chapter. Here we simply want to point out that worrying is additional evidence of the ease with which the human brain can think about different things at one and the same time—that is, our brains are good at multitasking.
Sometimes the results of such multitasking can be beneficial, say when someone has been so focused on one task that they would not otherwise have been consciously aware of some impending disaster. When worrying “behind the scenes,” so to speak, becomes too invasive, however, the consequences can be disabling (Hoshino and Tanno 2017). When someone is worrying too much, they may not be able to pay enough attention to other things they ought to be doing instead, or at least as well, such as dealing with that impending disaster.
Lastly, while we do not want to go too far down this road here, we should also mention that delusions, too, can be taken as supporting evidence that the brain is capable of multitasking, although again, as with worrying, the impact that our delusions can have on how well we act and what we are able to do can sometimes be disabling (Freeman 2016). Furthermore, research today is making it clearer that if you are plagued by persecutory delusions—the conviction that others are out to get you—worrying about such delusions can make them worse by bringing them forcefully to your conscious attention (Startup et al. 2016).
Asking Your Dreams for Help

We do not want to end this chapter on a sour note about delusions and paranoia. The brain’s ability to fantasize and multitask is not only a valuable talent for our species to have, but also provides many opportunities that many of us may not be taking full advantage of. Since Gabe had a chance earlier in this chapter to tell you about his childhood love of toy cars and trucks, John now wants a moment to relate a story of his own.
In his 1954 novel Sweet Thursday, the American writer John Steinbeck relates that one of his leading characters, a woman affectionately known as Fauna (her name is actually Flora), experiences what Steinbeck tells us is a common human occurrence: “a problem difficult at night is resolved in the morning after the committee of sleep has worked on it.”
John wants to second Steinbeck’s observation. A number of years ago when he was writing a book about evolution and human friendship (Terrell 2015), John discovered that he needed to resist jumping immediately out of bed when the wake-up alarm went off in the morning. He found that if he avoided becoming too suddenly aware of the world around him and only slowly let himself come to full consciousness, his brain—or rather his inner Alice—often had something to tell him before he got out of bed. She had continued to mull over some problem he had been struggling with before closing his eyes the night before. (Yes, you are right if you have immediately just thought of what we reported about Mendeleyev, Wallace, Kekulé, Poincaré, and Pasteur in Chapter 7.) Although perhaps his Alice had not fully resolved the problem he had in mind just before falling asleep, surprisingly often she had come up with a useful thought or two while he was asleep, something that moved forward the ideas he was trying to put into words in his manuscript (Paulson et al. 2017).
Deirdre Barrett (Chapter 7) has written often about this mental phenomenon. Instead of relying on old accounts of great discoveries in science and the arts, she has interviewed scientists and artists to see if she could find out more about how such dreamtime thinking works. What she reports is fascinating.
Architects Lucy Davis and museum designer Solange Fabio both described dreams of walking through finished buildings, noting design features—then waking up, sketching these, and eventually building their dreamed structures. Harvard astronomer Paul Horowitz told me that every time he was stuck on some detail of designing telescope controls, he would have a dream in which he watched someone doing the task—arranging a group of lenses or constructing a computer chip—while a voice narrated exactly what to do. Composer Shirish Korde described how music often arrived for him in dreams—sometimes hearing it, but also seeing it as birds at variable heights denoting positions on a music staff or as synesthetic colors denoting tones. (Barrett 2015, 95)

What is perhaps most intriguing and suggestive here is that evidently successful problem-solving by the sleeping brain works best for confronting two sorts of problem situations: problems calling for creative thinking, and those having solutions that can be visualized. In particular, she says, dreams seem to be a good way to find solutions that are “outside the box.”
The Backseat Driver

One thought that comes to the foreground repeatedly when we both are thinking about how the brain engages in imaginative multitasking is one we have discussed at length in previous chapters. Daniel Kahneman is surely off base when he describes System 2—what we call Sherlock’s way of thinking—as lazy. As we have said before, Holmes is not lazy. It is just that he is not always needed. He gets bored easily, as well.
Yet despite the evident mislabeling, Kahneman has put his finger on something. It is chiefly because we humans have often done such a good job of dumbing down the world we live in that Sherlock’s services are rarely needed from moment to moment. Instead, the chances are good that our inner George Challengers can successfully see us through much of the time, day or night. Therefore, Holmes is often little more than a backseat driver on the road of life. Only sometimes does he truly need to take the wheel (Sternberg 2015, 38–47; Vatansever et al. 2017).
However, as we have also observed previously, this does not mean that most of the brain just powers down and turns off. Instead, as we have been saying, when Sherlock is in the backseat, Alice often picks up the conversation and where she takes it may then be pretty much up for grabs. Please note, therefore, that we are clearly not saying that your inner Alice only comes around to talk to you consciously or subconsciously when you are sleeping. She is always at least in the passenger seat—ready, willing, and able to engage in a little lite daydreaming.
Moreover, to repeat the good news, if before falling asleep you have primed your mind with something challenging or worrying that you are actively trying to resolve, you may be able to get your Alice to sit still long enough to deal with whatever is on your mind before you go ahead and wake up.
Works Cited

Ananthaswamy, Anil (2018). What does quantum theory actually tell us about reality? Scientific American, September 3, 2018. Retrieved from: https://blogs.scientificamerican.com/observations/what-does-quantum-theory-actually-tell-us-about-reality/
Bardone, Emanuele and Lorenzo Magnani (2007). Sharing representations through cognitive niche construction. Data Science Journal 6, Suppl.: S87–S91. doi:10.2481/dsj.6.S87
Barrett, Deirdre (2015). Dreams: Thinking in a different biochemical state. In Dream Research: Contributions to Clinical Practice, Milton Kramer and Myron Glucksman (eds.), pp. 94–108. New York and Hove, East Sussex: Routledge.
Biro, Dora, Michael Haslam, and Christian Rutz (2013). Tool use as adaptation. Philosophical Transactions of the Royal Society B 368: 20120408. doi:10.1098/rstb.2012.0408
Bulkeley, Kelly (2017). The future of dream science. Annals of the New York Academy of Sciences 1406: 68–70. doi:10.1111/nyas.13415
Cameron, Ross P. (2008). Turtles all the way down: Regress, priority and fundamentality. The Philosophical Quarterly 58: 1–14. doi:10.1111/j.1467-9213.2007.509.x
Domhoff, G. William and Kieran C. R. Fox (2015). Dreaming and the default network: A review, synthesis, and counterintuitive research proposal. Consciousness and Cognition 33: 342–353. doi:10.1016/j.concog.2015.01.019
Freeman, Daniel (2016). Persecutory delusions: A cognitive perspective on understanding and treatment. Lancet Psychiatry 3: 685–692. doi:10.1016/S2215-0366(16)00066-3
Gleick, James (2018). What does quantum physics actually tell us about the world? The New York Times, May 8, 2018. Retrieved from: https://www.nytimes.com/2018/05/08/books/review/adam-becker-what-is-real.html
Hoshino, Takatoshi and Yoshihiko Tanno (2017). Trait anxiety and impaired control of reflective attention in working memory. Cognition and Emotion 30: 369–377. doi:10.1080/02699931.2014.993597
Koban, Leonie, Ethan Kross, Choong-Wan Woo, Luka Ruzic, and Tor D. Wager (2017). Frontal-brainstem pathways mediating placebo effects on social rejection. Journal of Neuroscience 37: 3621–3631. doi:10.1523/JNEUROSCI.2658-16.2017
Lakens, Daniel (2014). Grounding social embodiment. Social Cognition 32, suppl.: 168–183. doi:10.1521/soco.2014.32.supp.168
Luria, A. R. (1987). The Mind of a Mnemonist: A Little Book about a Vast Memory. Cambridge, MA: Harvard University Press.
McGrew, W. C. (2013). Is primate tool use special? Chimpanzee and New Caledonian crow compared. Philosophical Transactions of the Royal Society B 368: 20120422. doi:10.1098/rstb.2012.0422
Odling-Smee, John F., Kevin N. Laland, and Marcus W. Feldman (2003). Niche Construction: The Neglected Process in Evolution. Princeton, NJ: Princeton University Press.
Paulson, Steve, Deirdre Barrett, Kelly Bulkeley, and Rubin Naiman (2017). Dreaming: a gateway to the unconscious? Annals of the New York Academy of Sciences 1406: 28–45. doi:10.1111/nyas.13389
Putnam, Adam L. (2015). Mnemonics in education: Current research and applications. Translational Issues in Psychological Science 1: 130–139. doi:10.1037/tps0000023
Quantum. (n.d.). Retrieved from: en.oxforddictionaries.com/definition/quantum
Sari, Berna A., Ernst H. W. Koster, and Nazanin Derakshan (2016). The effects of active worrying on working memory capacity. Cognition and Emotion 31: 995–1003. doi:10.1080/02699931.2016.1170668
Siclari, Francesca, Benjamin Baird, Lampros Perogamvros, Giulio Bernardi, Joshua J. LaRocque, Brady Riedner, Melanie Boly, Bradley R. Postle, and Giulio Tononi (2017). The neural correlates of dreaming. Nature Neuroscience 20: 872–878. doi:10.1038/nn.4545
Siegel, Robert Anthony (2017). Why I take fake pills. Smithsonian Magazine, May 2017. Retrieved from: www.smithsonianmag.com/science-nature/why-i-take-fake-pills-180962765/
Startup, Helen, Katherine Pugh, Graham Dunn, Jacinta Cordwell, Helen Mander, Emma Cernis, Gail Wingham, Katherine Shirvell, David Kingdon, and Daniel Freeman (2016). Worry processes in patients with persecutory delusions. British Journal of Clinical Psychology 55: 387–400. doi:10.1111/bjc.12109
Sternberg, Eliezer J. (2015). NeuroLogic: The Brain’s Hidden Rationale behind Our Irrational Behavior. New York: Vintage Books.
Terrell, John Edward (2015). A Talent for Friendship. New York: Oxford University Press.
Vatansever, Deniz, David K. Menon, and Emmanuel A. Stamatakis (2017). Default mode contributions to automated information processing. Proceedings of the National Academy of Sciences U.S.A. 114: 12821–12826. doi:10.1073/pnas.1710521114
Windt, Jennifer M., Tore Nielsen, and Evan Thompson (2016). Does consciousness disappear in dreamless sleep? Trends in Cognitive Sciences 20: 871–882. doi:10.1016/j.tics.2016.09.006
9 LIES, DECEIT, AND SELF-DECEPTION How Gullible Are You?
After stomping off in great disgust from an outdoor tea-party where she had met the March Hare, the Mad Hatter, and a very sleepy Dormouse (“It’s the stupidest tea-party I ever was at in all my life!”), the young protagonist in Lewis Carroll’s Alice’s Adventures in Wonderland soon finds herself in the beautiful garden she had previously seen only from afar. However, not all is well there. “A large rose-tree stood near the entrance of the garden: the roses growing on it were white, but there were three gardeners at it, busily painting them red.” Carroll tells us their names are Five, Seven, and Two (they are, you see, playing cards).

“Would you tell me,” said Alice, a little timidly, “why you are painting those roses?”
Five and Seven said nothing, but looked at Two. Two began in a low voice, “Why the fact is, you see, Miss, this here ought to have been a red rose-tree, and we put a white one in by mistake; and if the Queen was to find it out, we should all have our heads cut off, you know. So you see, Miss, we’re doing our best, afore she comes, to—”
At this moment Five, who had been anxiously looking across the garden, called out “The Queen! The Queen!” and the three gardeners instantly threw themselves flat upon their faces. There was a sound of many footsteps, and Alice looked round, eager to see the Queen.

Alice is soon sparring verbally with the Queen of Hearts, but Her Majesty suddenly shifts her attention to Five, Seven, and Two.

“Get up!” said the Queen, in a shrill, loud voice, and the three gardeners instantly jumped up, and began bowing to the King, the Queen, the royal children, and everybody else.
“Leave off that!” screamed the Queen. “You make me giddy.” And then, turning to the rose-tree, she went on, “What have you been doing here?”
“May it please your Majesty,” said Two, in a very humble tone, going down on one knee as he spoke, “we were trying—”
“I see!” said the Queen, who had meanwhile been examining the roses. “Off with their heads!” and the procession moved on, three of the soldiers remaining behind to execute the unfortunate gardeners, who ran to Alice for protection.

The American writer and poet Gertrude Stein is famous for the words “a rose is a rose is a rose” written in 1913 and first published as part of the poem “Sacred Emily” in 1922 (Fleissner 1977). While this claim may literally be true, long before Stein, Lewis Carroll had already established that some roses can be counterfeit, or at least may be traveling, so to speak, under false colors.
In any case, we have taken the liberty of quoting here directly from Alice’s Adventures for a reason. All on its own, a lie is a falsehood, untruth, fabrication, fiction, misrepresentation, fib, whopper, or any of the countless words all meaning in one way or another that they are something which is the opposite of “that which is true or in accordance with fact or reality” (“Truth,” n.d.). A lie all on its own, however, is of little interest or importance. It is what we humans do with well-crafted fantasies of this sort that can make them so devious, dangerous, sometimes deadly, and all too often not in the best interests of those on the receiving end of such intentional deceit.
So to put the matter bluntly, why does the brain’s inner Alice (not just rogue playing cards) sometimes make up fibs and downright lies? Why do we sometimes fall for such wicked fabrications? Why may it be hard to see through them if they are skillfully concocted? How come we humans can be victimized in this disreputable way?
Bridge for Sale

New York City’s Brooklyn Bridge spans the East River and connects Manhattan with the borough of Brooklyn at the western end of Long Island (aptly named since it is over 100 miles long measured west to east). Originally called the New York and Brooklyn Bridge, it was dubbed the “Eighth Wonder of the World” when it was officially opened on May 24, 1883. It remains one of New York State’s most famous landmarks and tourist attractions (“The Brooklyn Bridge,” n.d.).
As the New York author Gabriel Cohen has remarked, the idea of illegally selling this bridge to a sucker willing to pay hard cash for it has become the ultimate example of the power of insidious human persuasion. Its history as an enticing fool’s folly more than adequately supports such a dubious reputation.
In Cohen’s words: “A good salesman could sell it, a great swindler would sell it, and the perfect sucker would fall for the scam” (Cohen 2005).
Consider the case of George C. Parker. Backed up by impressively forged documents purporting to prove that he owned it, this turn-of-the-twentieth-century confidence man was able to sell the bridge more than once—he was also good at selling the Metropolitan Museum of Art, the Statue of Liberty, and Grant’s Tomb—to buyers whom he convinced that they would make a fortune for themselves off it in tolls. History records that there were times when the police actually had to step in and stop his victims from erecting toll barriers on the bridge so that they could reap the rewards he had assured them would be theirs for the taking.
Parker was not the only con artist to work this particular swindle, thereby confirming that when it comes to human greed and gullibility, history can indeed repeat itself. Cohen reports that other practitioners of this same scam were often equally good at making it work for them. They timed the path of beat cops working near the bridge, and when they knew the officers would be out of sight, they propped up signs reading ‘Bridge for Sale,’ showed the edifice to their targets, and separated them from their money as quickly as possible. On one remarkable occasion, when the mark did not have enough cash to cover the asking price, the perpetrator of the scam was generously willing to sell off only half the bridge for the amount of money in hand. Truly a fine fellow and a thoughtful one, too.
Dirty Dealing and Self-Deception Why are at least some of us so gullible? The psychologist James J. Gross at Stanford University has concluded that emotions not only make us feel, they also incline us to act (Gross 2013, 4). At times neither wisely, nor well. For this reason, Gabriel Cohen’s suggested explanation for why many have been seduced, swindled, and bamboozled by phony sellers of the Brooklyn Bridge seems plausible, although perhaps not the only explanation possible. “Vendors of the bridge not only counted on the gullibility or greed of their targets; they also appealed to their vanity. Buyers could believe . . . they had become real men of substance, great capitalists.” Gullibility, greed, and vanity are certainly all within the realm of what are conventionally called feelings, but can they also be what someone like the psychologist Gross would call an emotion? If not, why not? More to the point, how do emotions incline us to act? Said more broadly, what do emotions do for us? And not just when they are helping somebody fool us into buying a bridge (Adolphs 2017).
Our Socially Motivated Brains

The psychologist James Coan at the University of Virginia and his colleague Lane Beckes at Bradley University in Illinois are studying how human beings respond emotionally to stress under differing social conditions. In one of Coan’s now classic experiments done in collaboration with Hillary S. Schaefer and Richard J. Davidson, they administered nonlethal but real electric shocks to individuals whose brains were being monitored using functional magnetic resonance imaging (fMRI) under three differing conditions: (1) holding their husband’s hand, (2) holding the hand of an anonymous male experimenter, or (3) holding nobody’s hand. The results were not only fascinating but supported their hypothesis that social bonding and soothing behaviors lessen the psychological and physiological impact of negative events and promote enhanced health and well-being. Here in brief is what they found.

As hypothesized, both spouse and stranger hand-holding attenuated neural response to threat to some degree, but spousal hand-holding was particularly powerful. Moreover, even within this sample of highly satisfied married couples, the benefits of spousal hand-holding under threat were maximized in those couples with relationships of the very highest quality. (Coan et al. 2006, 1037)

By scripting what happens within the brains of their study participants in several elementary ways, therefore, Coan and his colleagues have been able to use their hand-holding experiments to show dramatically how our feelings can be socially regulated by the presence or absence of others (Coan and Maresch 2014, 228).

In these studies a reliable network of brain regions are activated to the threat of shock. Generally, activity in this network is reduced in the partner handholding condition (sometimes the partners are friends, other times romantic partners), and the extent to which there are reductions in activity is related to the quality of the relationship. (Beckes et al. 2015, 7)

Although it could be debated whether the brain regions they use to monitor the social regulation of emotions are as definitive and reliable as they suggest they are, the common cause of the stress and anxiety induced in their participants by the anticipation of being on the receiving end of an unpleasant, although nonlethal, electric shock is an example of what we would suggest may be called an emotional “mini-script”—not the shock, mind you, but rather the mental anticipation of its unpredictable but impending occurrence. If so, then emotions could be likened to thoughts we may not even know we are thinking.
Said differently, what might be called the basic emotion—the mini-script—in these experiments isn’t the actual shock but rather what the brain is anticipating.
Our Socially Embedded Selves Coan, Beckes, and their colleagues say their research findings are confirming what many have long suspected. Having the evolved ability to live and work closely with others gives humans a sustaining social baseline of emotional and practical support together with a sustaining sense of security. So much so, they suggest, that our emotional ties with others are in effect an extension of the way the human brain interacts with the world. How so? Put simply, when we are around others we know and trust, we are able to let down our guard, relax, and let others carry some of the burden of staying alive and being happy. Hence it is not an exaggeration, they argue, to claim that the human brain has evolved “to assume that it is embedded within a relatively predictable social network characterized by familiarity, joint attention, shared goals, and interdependence” (Beckes and Coan 2011, 976–977). There is a dark side to feeling good when you are being nurtured by other people. In the hands of the wrong person or persons, our wanting to be part of a social world “characterized by familiarity, joint attention, shared goals, and interdependence” (to repeat what Beckes and Coan have written) makes us also highly vulnerable to flattery, false words, and social manipulation. As every scam artist who has ever sold the Brooklyn Bridge to an overly trusting soul can attest with a sardonic smile on his or her face.
Becoming Emotional However sincere or deceitful, it would be hard to argue against the idea that the actions of others can make us feel socially nurtured. Yet what exactly are we feeling when we are feeling emotional in this way? As surprising as it may sound given how much we humans love to talk about them, nobody nowadays is sure how many different kinds of feelings there are in the average human’s emotional repertoire (Barrett 2012; Dixon 2017; Moors 2017). A popular guess is six or seven that can be labeled as SEEKING, RAGE, FEAR, LUST, CARE, PANIC/GRIEF, and PLAY (LeDoux 2012; Panksepp 2011). Yet according to the psychologist Lisa Feldman Barrett at Northeastern University in Boston, Massachusetts, our emotions are hard to pin down because, despite longstanding claims to the contrary, none of us is actually born with the suite of emotions we end up experiencing as part of our lives, loves, and daily chores. Far from having some fixed number of basic emotional brain apps (RAGE, FEAR, LUST, and so on) genetically preinstalled in each and every one of us
before birth, Barrett and others today suggest that when it comes to our emotional selves, variation in number, kind, and disposition is the norm (Barrett 2017). Why? Because, as Herbert Simon (Chapter 1) might say it, our emotions are “artificial” rather than “natural.” That is, they are basically learned rather than genetically inherited. If so, then it should come as no surprise that when seen in global perspective, what gets labeled as an emotion is not only artificial, but also arbitrary and diverse. Different people respond emotionally to the same kinds of situations, encounters, or events in different ways, and travelers know firsthand this is also true from place to place around the globe. In some places, it is just fine and dandy—perhaps praiseworthy—to jump up and down, scream loudly, and in other ways act out when you are angry. It may even be okay to hit people, or kill someone in a fit of rage. Elsewhere, it is customary, maybe socially necessary, instead to “turn the other cheek” and just “grin and bear it.”
The Lynching of Robert Prager

Whatever they are biologically speaking, and however they work psychologically, there seems to be broad agreement today that emotions help mobilize our bodies and our brains to deal with real or imagined issues, crises, and opportunities (Pessoa 2017; Shweder 2014). In the words of Ralph Adolphs at the California Institute of Technology, “emotion states evolved in order to allow us to cope with environmental challenges in a way that is more flexible, predictive and context-sensitive than are reflexes, but that doesn’t yet require the full flexibility of volitional, planned behavior” (Adolphs 2017, 25). Accepting all this as a working hypothesis along with William Shakespeare’s famous observation that “all the world’s a stage, and all the men and women merely players,” then perhaps rather than calling them mini-scripts, as we have already done, our emotions might simply be likened to stage directions for the brain to act upon—although perhaps nothing even as specific, say, as “Enter the Ghost in his nightgown,” but rather simply “Enter Ghost and Hamlet.” Whatever your druthers, however, here is the next obvious question. Who is writing the play? Jurisprudence and conventional wisdom would both resolve this perennial issue of authorship and responsibility in much the same fashion. The normal claim is that all of us as individuals must undertake the job of being our own playwright, and then we must personally bear the brunt of later critics’ reviews—the slings and arrows of outrageous fortune of simply being human, if you will. But is this always fair and just? Given the evident ease with which some people sold the Brooklyn Bridge, or in other ways bamboozled their victims, how true is it that we are always our own playwright?
Consider as possible evidence to the contrary the shameful case of Robert Prager. It has been said that lynching is as American as apple or cherry pie. That is a heavy judgment to lay on an entire nation, but there is substance behind such a claim. Furthermore, historically speaking, many of the lynchings by mobs in the United States have been by white Americans, and the victims of such extrajudicial punishment have been African-Americans. But not always. The First World War—one of the deadliest wars in all of human history—ended on November 11, 1918, when Germany signed an armistice with the Allied Powers. On the face of it, who had killed Robert Prager seven months before that fateful day seems obvious. “They all did it,” would appear to be the right answer. But not necessarily the best answer. Why not? Because the real question is not who did it, but why. Robert Paul Prager was a German-American hanged from a tree while some 200 people watched shortly after midnight on April 5, 1918, in Collinsville, Illinois (Schwartz 2002/2003). Born in 1886, Prager had immigrated to America in 1905. He was still a bachelor when he died (although he was planning on marrying a woman he had met through a newspaper advertisement). He had no relatives in the United States. According to one commentator:

Shortly before his death he decided to become a miner, but his efforts to join the miners’ union were unsuccessful, probably because he was an active socialist (it seems likely that his German birth was not the reason for his rejection since there were many German-Americans in the Collinsville area who were union members). (Hickey 1969, 118)

Evidently the union president had accused Prager of being a spy and a liar—bear in mind the First World War was then still going strong, and the Armistice was not signed until “the eleventh hour of the eleventh day of the eleventh month” of 1918—allegations he strongly and publicly denied. Nonetheless, on Thursday, April 4th he was seized by a group of miners and forced to kiss an American flag. He luckily got away thereafter and went back to his lodgings. But his luck soon turned. As this same historian relates:

This security was short-lived, however. In the evening a group of miners gathered at a saloon on the outskirts of Collinsville. They discussed the events which had occurred earlier in the day and the accusations which had been made against Prager. Most of the miners were foreign-born and spoke little English, and it is believed that comments imputed to Prager were magnified completely out of proportion. The result was that the miners decided Prager was disloyal to the United States and must be punished. (119)
The details of what happened thereafter are sordid, and there seems to be little doubt that the overconsumption of alcohol by those who finally lynched him was part of the story. But not the entire story. When newspapers around the country got word of the appalling incident, editorials demanded that the perpetrators be punished to the full extent of the law. In the words of one such editorial, mob violence is “a species of collective criminality for which there is just one cure: punishment.” The governor of Illinois at the time was also forthright in his condemnation of mob violence: “The action of the mob at Collinsville was as much an assault on the principle of democracy as the treasonable practices with which Prager was charged, even if that charge was true.” The leaders of the mob were arrested and held without bail pending grand jury action. In all, 11 men were finally brought to trial, one that began on May 13th and lasted for three weeks. Seven hundred and fifty men had to be interviewed before a jury of 12 was finally selected.
Who Was Responsible? One of the contentious issues during the trial proved to be whether the mob had intended to lynch Prager more or less from the start. Or had originally only wanted to tar and feather him, and had opted for a rope when the materials for the lesser punishment could not be found. In any case, when those accused finally had their day in court, they all denied any responsibility. They also did what they could to convince the jury that they were 100% American patriots. After deliberating for 45 minutes, the jury returned a verdict of not guilty largely on the grounds that the lynching had been in a dark place, and hence ascertaining the identity of the actual perpetrators of the crime could not be done “beyond a reasonable doubt,” to use the usual legalese. Prominent newspapers around the country condemned this outcome as a gross miscarriage of justice.1 The historian Earl Schwartz has suggested that his fellow historians have generally seen the disturbing story of Robert Prager as possibly the most extreme example of ethnic and political polarization during World War I as well as an instance of the insensitivity then of some politicians and journalists to the cause of civil liberty. Schwartz sees more than just this in this tragic event: consideration of the deeper context of the Prager case suggests a different meaning. Prager unquestionably died at the hands of drunken, hysterical men because he was German. But when they killed Prager, they were also killing their own fears of being accused of disloyalty, fears rooted in a bitter and divisive labor struggle. (Schwartz 2002/2003, 414)
Prager’s execution at the intersection of St. Louis Road and a street called National Terrace in Collinsville three miles east of what is now Cahokia Mounds State Historic Site (the largest pre-Columbian settlement north of the Rio Grande) was witnessed by some 200 human souls that night in 1918. If Schwartz is right, were they all experiencing more or less the same fear of being seen as unpatriotic during a time of war and labor unrest? Or were they seeing the event unfolding before their eyes through their own personal concerns, fears, and hatreds? Or maybe combinations of differing motivations? This uncertainty about the motivations involved is not a trivial matter (Dezecache et al. 2015).2 Judged by their actions, the label “mob violence” seems appropriate enough as a simple description of the human behavior witnessed. But if we accept that emotions are learned rather than instinctual, and are only mini-scripts or stage directions, would it follow that everybody that fateful night was playing the same role in the very same socially scripted drama? More to the point, how much does it really matter whether people feel the same emotions when they are working in concert with one another for better or for worse?
Evolution and the Social Brain

The renowned movie director Alfred Hitchcock (1899–1980) had a knack for employing subtle emotional cues of sight and sound to transform ordinary happenings—opening a door, climbing stairs, taking a shower—into moments on the silver screen capable of terrifying and then delighting audiences safely ensconced with friends and strangers alike in closely spaced rows of adjoining seats in a darkened movie theatre. Neither Lane Beckes nor Jim Coan would claim to be Hitchcocks of the neuroscience laboratory. In comparison with this remarkable filmmaker, their use of shock electrodes strapped to the ankles of their research participants cannot match his imaginative wizardry at crowd-thrilling entertainment (Beckes et al. 2012, 2015; Coan et al. 2006). Yet however low-keyed, their simple way of eliciting anxiety-producing emotional mini-scripts or stage directions in the laboratory so they can monitor the anticipation of being shocked building up in the brains of their research participants is taking the study of what is or isn’t an emotion beyond the well-trodden playing field of philosophical and psychological debate. They are not the only neuroscience researchers proposing that the human brain normally assumes it is embedded within a social network characterized by familiarity, joint attention, shared goals, and interdependence. Joel Krueger at Exeter University and his colleague Thomas Szanto at the University of Copenhagen have also suggested that the emotions we feel may not be ours alone (Krueger and Szanto 2016). There is no reason to rule out, in other words, that people can have similar emotional reactions in similar situations.
If true, and there does seem to be good evidence supporting this claim, then why has evolution favored humans having similar emotional reactions to what is happening outside their brain’s personal yellow submarine? At least part of the answer is obvious. Working together well is difficult to pull off. Those involved not only need to understand what others are asking of them, but must also agree to go along with what is being proposed, requested, or required. In short, their hearts as well as their minds must be recruited for the job. Because emotions predispose us to act, what is the likelihood that doing a good job of selling an assignment emotionally may be sufficient to get something done without even needing to enlist conscious or subconscious intellectual agreement among all of those involved on what to do and how to do it? Or something that should not be done, as the sad case of Robert Prager illustrates for us all too well?
The Enlightenment Tradition

During the Enlightenment—the Age of Reason—in the seventeenth and eighteenth centuries, philosophers, theologians, politicians, and economists wrote eloquently about how humans can be rational when given half a chance (Kant 1784). This period in European history is generally said to have ended with the barbarism of the French Revolution at the close of the eighteenth century. Nonetheless, the legacy of this intellectual movement has lived on. As we have said previously, many still take it more or less for granted that down deep inside most humans are markedly different from all other life forms on Earth. We are rational. They are not. Enough said. In the nineteenth century, Charles Darwin added a compelling reason for believing we are indeed the world’s most thoughtful animals. Our species evolved over time to be this way—rational, logical, and moral. Why? Because being gifted mentally in this fashion has made us stand out as a species in the constant struggle for existence that all creatures must deal with all of the time, year after year (Laland 2017; Richards 2003). Nowadays many experts writing about our emotional lives appear to be taking it more or less for granted that when we are being emotional we are still basically behaving like rational beings—Freud’s influence has definitely faded, but the Enlightenment premise is still with us. Why? Because, as Lisa Barrett has forcefully stated this claim, the brain is continuously making “predictions at a microscopic scale as millions of neurons talk to one another.” Why? So it can anticipate “every fragment of sight, sound, smell, taste, and touch that you will experience, and every action you will take” (Barrett 2017, 59). You already know we do not think that even a mammalian brain as magnificent as the human one is in the business of predicting the future (Felin et al. 2017). Yes, the brain can certainly learn to be worried about the future.
However, its major job is handling what is happening around it in the here and now. We strongly doubt, therefore, that it spends much of its metabolic budget on making predictive calculations, however immediate and short-term such futuristic clairvoyance might be. Instead of being skilled at reading life’s tea leaves to learn what may possibly happen down the road, our brains are surely given mostly to making deductions—not predictions—about the world and what is apparently happening in the here and now around them, based on what they have previously seen, experienced, or imagined in similar situations (Wittmann 2017). As we will discuss again in the final chapter of this book, this kind of mental efficiency has a decidedly dark side. Yet even in this less demanding way, as the history of the Brooklyn Bridge and the unfortunate fate of Robert Prager both show us, the brain is also quite able to act on its emotional responses rather than on what thoughtful deductive reasoning might direct it to do. Perhaps more unsettling still, as we will explore in the next chapter, we humans may not even need to have reasons for getting together to do things socially—watching a baseball game, for instance, or an opera—as long as whatever we are doing together makes us feel more secure and socially connected.
Notes

1 For a broader perspective on mob violence versus the rule of law in the early years of twentieth century America, see Capozzola (2002).
2 For a more recent example of mob violence, see Swenson (2017).
Works Cited

Adolphs, Ralph (2017). How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Social Cognitive and Affective Neuroscience 12: 24–31. doi:10.1093/scan/nsw153
Barrett, Lisa Feldman (2012). Emotions are real. Emotion 12: 413–429. doi:10.1037/a0027555
Barrett, Lisa Feldman (2017). How Emotions Are Made: The Secret Life of the Brain. New York: Houghton Mifflin Harcourt.
Beckes, Lane and James A. Coan (2011). Social baseline theory: The role of social proximity in emotion and economy of action. Social and Personality Psychology Compass 5: 976–988. doi:10.1111/j.1751-9004.2011.00400.x
Beckes, Lane, James A. Coan, and Karen Hasselmo (2012). Familiarity promotes the blurring of self and other in the neural representation of threat. Social Cognitive and Affective Neuroscience 8: 670–677. doi:10.1093/scan/nss046
Beckes, Lane, Hans IJzerman, and Mattie Tops (2015). Toward a radically embodied neuroscience of attachment and relationships. Frontiers in Human Neuroscience 9: 266. doi:10.3389/fnhum.2015.00266
Capozzola, Christopher (2002). The only badge needed is your patriotic fervor: Vigilance, coercion, and the law in World War I America. Journal of American History 88: 1354–1382.
Coan, James A. and Erin L. Maresch (2014). Social baseline theory and the social regulation of emotion. In Handbook of Emotion Regulation, 2nd ed., James J. Gross (ed.), pp. 221–236: 228. New York: Guilford.
Coan, James A., Hillary S. Schaefer, and Richard J. Davidson (2006). Lending a hand: Social regulation of the neural response to threat. Psychological Science 17: 1032–1039. doi:10.1111/j.1467-9280.2006.01832.x
Cohen, Gabriel (2005). For you, half price. The New York Times, Sunday, November 27, 2005. Retrieved from: www.nytimes.com/2005/11/27/nyregion/thecity/for-youhalf-price.html
Dezecache, Guillaume, Pierre Jacob, and Julie Grèzes (2015). Emotional contagion: Its scope and limits. Trends in Cognitive Sciences 19: 297–299. doi:10.1016/j.tics.2015.03.011
Dixon, Thomas (2017). Labels, rationality, and the chemistry of the mind: Moors in historical context. Psychological Inquiry 28: 27–30. doi:10.1080/1047840X.2017.1256605
Felin, Teppo, Jan Koenderink, and Joachim I. Krueger (2017). Rationality, perception, and the all-seeing eye. Psychonomic Bulletin & Review 24: 1040–1059. doi:10.3758/s13423-016-1198-z
Fleissner, Robert F. (1977). Stein’s Four Roses. Journal of Modern Literature 6: 325–328. Retrieved from: www.jstor.org/stable/3831176
Gross, James J. (2013). Emotion regulation: Conceptual and empirical foundations. In Handbook of Emotion Regulation, 2nd ed., James J. Gross (ed.), pp. 3–20. New York: Guilford Press.
Hickey, Donald R. (1969). The Prager affair: A study in wartime hysteria. Journal of the Illinois State Historical Society 62: 117–134. Retrieved from: https://www.jstor.org/stable/40191045
Kant, Immanuel (1784). An answer to the question: What is Enlightenment? Retrieved from: archive.org/stream/AnswerTheQuestionWhatIsEnlightenment/KantEnlightmentDanielFidelFerrer2013_djvu.txt or en.wikisource.org/wiki/What_is_Enlightenment%3F
Krueger, Joel and Thomas Szanto (2016). Extended emotions. Philosophy Compass 11: 863–878. doi:10.1111/phc3.12390
Laland, Kevin N. (2017). Darwin’s Unfinished Symphony: How Culture Made the Human Mind. Princeton, NJ: Princeton University Press.
LeDoux, Joseph (2012). A neuroscientist’s perspective on debates about the nature of emotion. Emotion Review 4: 375–379. doi:10.1177/1754073912445822
Moors, Agnes (2017). Integration of two skeptical emotion theories: Dimensional appraisal theory and Russell’s psychological construction theory. Psychological Inquiry 28: 1–19. doi:10.1080/1047840X.2017.1235900
Panksepp, Jaak (2011). Cross-species affective neuroscience decoding of the primal affective experiences of humans and related animals. PLoS ONE 6, no. 9: e21236. doi:10.1371/journal.pone.0021236
Pessoa, Luiz (2017). A network model of the emotional brain. Trends in Cognitive Sciences 21: 357–371. doi:10.1016/j.tics.2017.03.002
Richards, Robert J. (2003). Darwin on mind, morals, and emotions. In The Cambridge Companion to Darwin, Jonathan Hodge and Gregory Radick (eds.), pp. 92–115. Cambridge: Cambridge University Press. doi:10.1017/CCOL0521771978.005
Schwartz, Earl Albert (2002/2003). The lynching of Robert Prager, the United Mine Workers, and the problems of patriotism in 1918. Journal of the Illinois State Historical Society 95: 414–437. Retrieved from: https://www.jstor.org/stable/40193598
Shweder, Richard A. (2014). The tower of appraisals: Trying to make sense of the one big thing. Emotion Review 6: 322–324. doi:10.1177/1754073914534500
Swenson, Kyle (2017). Stomped to death and burned, a Muslim immigrant’s fate offers tragic lesson in U.K. The Washington Post, July 7, 2017. Retrieved from: www.washingtonpost.com/news/morning-mix/wp/2017/07/07/stomped-to-death-andburned-a-muslim-immigrants-fate-offers-tragic-lesson-in-uk/
The Brooklyn Bridge. (n.d.). A world wonder. Retrieved from: http://www.brooklynbridgeaworldwonder.com/
Truth. (n.d.). Oxford Living Dictionaries. Retrieved from: en.oxforddictionaries.com/definition/truth
Wittmann, Marc (2017). Felt Time: The Science of How We Experience Time. Cambridge, MA: MIT Press.
10
HUMAN ISOLATION AND LONELINESS
Private Lives and Public Duties
It is fortunate indeed for the survival of our species that evolution has done a reasonable job of making most of us able to experience being around others of our kind as emotionally rewarding rather than alarming or simply terrifying. We are not unique in this way, but we are better than most other species at being social—except perhaps honey bees and other kinds of social insects. They, too, are remarkably skilled at sharing life’s workload with one another (Barker et al. 2017; Tomasello 2014). Nobody knows for sure whether any of the ants, bees, termites, wasps, or other insects classified by scientists as social genuinely relish being around others like themselves, but let’s not rule out this possibility altogether. However, the murder of Robert Prager in 1918 by a violent mob in the middle of the night in Collinsville, Illinois (as recounted in Chapter 9) can be seen as a cautionary tale. Being caught up in social life is not without its risks and unmistakable dangers. Sure, we have reason to be proud that as a social species we excel all others at creatively dumbing down the world we live in to make the challenges we face as humdrum and predictable as possible. Yet there are perils, too. We are not god-like in our powers. We are not all-seeing and wise. We are not even necessarily good at admitting our limitations, our possible carelessness, or the likelihood that we may be given at times to jumping to conclusions based on too little evidence and not enough caution. Moreover, nobody has to look back a century to find obvious examples of how misleading and perhaps downright dangerous it can be to be both social and human.
The Perils of Social Media

Down the road when the dust has settled on the politically turbulent second decade of the twenty-first century, we may know for sure what on earth happened then and why. Currently, for instance, so many are offering their insights on BREXIT in Great Britain and on Russian interference in the 2016 presidential election in the United States that it is anybody’s guess what should be made of both (Del Vicario et al. 2017). Yet there does seem to be a common thread running through such memorable political stories. That shared thread is called social media. A straightforward dictionary definition of these two words when paired together would run something like this: social media, websites and computer programs enabling people to communicate and share information with one another using a computer, tablet device, or mobile phone (“Social media,” n.d.). As innocent as this definition may sound, it has now become obvious that like any other human-made tool, social media can be abused and also intentionally misused. On the face of it, for example, such a communications tool would seem to be a wonderful way of connecting people together to chat, share photos, and yes, argue with one another anywhere in the world. So common sense would lead one to think such freedom of exchange should help break down barriers of geography, ethnicity, inequality, and the like. At best, this is only partly true. Surprisingly enough perhaps, the constraints of space and time are still real enough even though you would think that mobile phones and the like should be making it possible for billions of us to be in touch with others almost anywhere on the planet at any time of day or night. For instance, in a study published in 2011 based on 72.4 million phone calls and 17.1 million text messages sent and received over a one-month period in an unnamed European country, the biostatistician Jukka-Pekka Onnela at Harvard Medical School and his colleagues found that even though there is currently little difference between making a short-distance or long-distance connection (either voice or text), “the probability of communication is strongly related to the distance between the individuals, and it decreases by approximately five orders of magnitude as distance increases from 1 km to 1,000 km” (Onnela et al. 2011). Similarly and more recently, Emma Spiro at the University of Washington and others working with her have found that even in the face of major technological advances, circumstances such as geography, status, wealth, and the like continue to play defining roles in shaping online (here specifically, Facebook) social ties (Spiro et al. 2016). The take-home message here? Even our own species with all its advanced technologies cannot, or at any rate does not, shake itself free from the constraints of space, time, and social divisions.
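To make the scale of that distance effect concrete, here is a minimal back-of-the-envelope sketch in Python. It is our own illustration, not Onnela and colleagues’ actual model: it simply assumes a power-law decay and asks what exponent would produce a drop of roughly five orders of magnitude between 1 km and 1,000 km.

```python
# Illustrative arithmetic only (our assumption of a simple power law,
# not the fitted model from Onnela et al. 2011).
import math

drop_in_orders = 5            # reported drop, in powers of ten
d_near, d_far = 1.0, 1000.0   # kilometres
alpha = drop_in_orders / math.log10(d_far / d_near)
print(round(alpha, 2))        # -> 1.67

def relative_tie_probability(d_km, alpha=alpha):
    """Chance of a voice/text tie at distance d_km, relative to 1 km."""
    return d_km ** (-alpha)

for d in (1, 10, 100, 1000):
    print(f"{d:>5} km  {relative_tie_probability(d):.1e}")
```

Read this way, every doubling of the distance between two people cuts the chance that they are in touch by roughly a factor of three, a steep price for technologies often said to have abolished distance.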
Social Realities In his landmark book Sociobiology, the Harvard evolutionist Edward O. Wilson in 1975 vividly documented the numerous ways in which most species on earth are social at least in the minimal sense of being able to deal with others of their kind to assure the reproduction of their very own type of life form. Wilson underscored, however, that a society is not just a bunch of individuals who happen to be in the same place at the same time. More than this, a society is a group of individuals who can communicate with one another in ways that go beyond merely being sexual (Wilson 1975, 7). Unfortunately, Wilson let common sense get the better of him back in 1975. As the anthropologist Fredrik Barth had already noted several years before then, practically all social science reasoning has traditionally relied on the commonsense notion that we all live in more or less discrete social, political, and economic somethings called “societies” or “groups” that have well-defined boundaries—groupings of individuals and families variously called tribes, ethnic groups, populations, races, societies, or cultures (Barth 1969). Indeed, as the sociologist Rogers Brubaker has commented: “Few social science concepts would seem as basic, even indispensable, as that of group” (Brubaker 2004, 7). Therefore, no wonder perhaps that even a zoologist as astute as Wilson would accept the conventional notion that since the “bond of a society,” to use his own phrase, is “simply and solely communication,” then it must follow that “societies” have boundaries that can be drawn on a map using “the curtailment of communication” as the way to find where on the landscape to cut one society off from another. To his credit, Wilson did recognize in Sociobiology that there always has been “some ambiguity about the cut-off point or, to be more precise, the level of organization at which we cease to refer to a group as a society and start labeling it as an aggregation or nonsocial population.” Nevertheless, since then he has not abandoned the traditional claim that humans live in well-defined groups despite the fact that it is now widely recognized in the social sciences that saying there is “some ambiguity about the cut-off point” isn’t saying even the half of it (Wilson 2012).1 In truth, nowadays when an Internet giant like Facebook can boast about having 2.5 billion monthly active users as of the third quarter of 2019, it is painfully clear that saying there is “some ambiguity” about defining “groupness” is an astonishing academic understatement.
Social Networks The formal study of how individuals join with others however near or far to get things done together is called social network analysis (SNA). Today researching the truly diverse networks of engagement that people all over our planet have with one another has invaded almost all the highways and byways of human
life, as anyone who has been on the Internet knows well. For instance, one of the pioneers of SNA research, Mark Granovetter at Stanford University, is famous for using the tools and techniques of modern SNA to explore how the strength of our social ties with others of our kind can shape our mobility, the diffusion of ideas, the political organization of society, and on a more general level, the cohesion of society writ large (Granovetter 1973). Some of the details of Granovetter’s research findings are particularly instructive. He defines tie strength as the amount of time and commitment we give to our relationships with others as well as the rewards and benefits gained thereby. In keeping with what common sense also tells us, he and others have found that having strong ties with someone else often means you are also likely to have similarly strong ties with that individual’s friends, too. Furthermore, people who are thus closely connected with one another socially and emotionally are also likely to share opinions, passions, hobbies, and the like. Additionally, just as common sense would suggest, people who are strongly tied to one another often form more or less enduring social cliques marked by (let’s be polite in how we say this) a high degree of redundancy (our word choice, not Granovetter’s) in what they do, say, think, and the like—shared commonalities that can put social cliques at risk of becoming dull, boringly predictable, provincial, and closed-minded little social “bubbles.” Although not having strong ties with other people beyond your own bubble and comfort zone can lead to ignorance, disdain, and even hatred of outsiders, Granovetter and others doing social network analysis have found that most of us do not limit all our social contacts and activities solely to those in our own immediate social clique. The difference between the commonsense idea of “groups” and the SNA definition of “cliques” is that even in what might seem to be the most exclusive cliques there invariably are at least some individuals within them who have social connections—Granovetter formally calls them weak ties—with people in other cliques. And thank goodness. These weaker ties serve as “bridges” between cliques. Importantly, like bridges, weak ties can serve as avenues between cliques along which new information, innovations, material things, and even people, too, can “flow” from clique to clique (Granovetter 1983).
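Granovetter’s point about bridges can be made concrete with a few lines of Python. The sketch below is our own toy illustration, not an analysis from his papers: the names and the network are invented, and the third-party networkx library is assumed. Two tightly knit cliques are joined by a single weak tie, and that one edge both carries all of the between-clique “flow” and, if cut, leaves two isolated bubbles.

```python
# Toy illustration of weak ties as bridges between cliques (invented data).
from itertools import combinations
import networkx as nx

G = nx.Graph()
clique_a = ["Ann", "Bea", "Cal", "Dov"]      # one hypothetical friendship clique
clique_b = ["Eli", "Fay", "Gus", "Hal"]      # another
G.add_edges_from(combinations(clique_a, 2))  # strong ties: everyone knows everyone
G.add_edges_from(combinations(clique_b, 2))
G.add_edge("Dov", "Eli")                     # the lone weak tie bridging the cliques

# The bridging edge carries all shortest paths between the cliques,
# so it has the highest edge betweenness centrality in the network.
centrality = nx.edge_betweenness_centrality(G)
print(max(centrality, key=centrality.get))   # the bridging tie, e.g. ('Dov', 'Eli')

# Remove it and the network falls apart into two isolated bubbles.
G.remove_edge("Dov", "Eli")
print(nx.number_connected_components(G))     # -> 2
```

Edge betweenness centrality is one standard SNA measure of how much of a network’s shortest-path traffic an edge carries; in this toy network the lone bridge scores far higher than any tie inside either clique.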
Being of Like Mind It was Fredrik Barth (1928–2016) perhaps more than anyone else who led social scientists in the twentieth century to challenge the traditional wisdom that humans can be assigned to separate and distinct “kinds” that may be variously labeled as tribes, races, cultures, and so on. To quote perhaps his most famous statement: Practically all anthropological reasoning rests on the premise that cultural variation is discontinuous: that there are aggregates of people who essentially share a common culture, and interconnected differences that
distinguish each such discrete culture from all others. Since culture is nothing but a way to describe human behaviour, it would follow that there are discrete groups of people, i.e. ethnic units, to correspond to each culture. . . . Though the naïve assumption that each tribe and people has maintained its culture through a bellicose ignorance of its neighbours is no longer entertained, the simplistic view that geographical and social isolation have been the critical factors in sustaining cultural diversity persists. (Barth 1969, 9)

As we have previously said, not everyone writing about our species evidently knows about Barth’s insightful commentary from decades ago. Continuing to think humans naturally come in kinds, however, is not the only old commonsense idea still confounding scholarly research on what it means to be human. Another related and similarly outdated notion is the belief that human social life critically depends on everyone in the same group having something basic in common that some evolutionary biologists, for example, have labeled as “shared intentionality” (or “collective intentionality”), although probably most of us would simply call this “being of like mind.” For example, Michael Tomasello at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, sees being motivated by the same acquired goals and learned intentions as others in collaborative activities not only as a requirement for successful cooperation, but also as a mental state that is possible only for those of us who are human despite what you may think if you share your abode with a reasonably contented dog (Tomasello et al. 2005). He calls such alleged group uniformity of thought objective-reflective-normative thinking (Tomasello 2014, 4; also Lieberman 2013, 191–192, 201–202). Do people really need to think alike and share the same goals and intentions to be able to live with and work together? For that matter, and perhaps even more to the point, given that all of us are sailing in our own private yellow submarines, how likely is it that any of us really do think alike except perhaps in some decidedly vague and elusive fashion?
Social Life When you look closely at what people are actually like, what stands out is not how identical they are, but instead how different from one another they are in their looks, their ways, their wants, their ambitions, and maybe even in what rocks their boat (Raybeck 2009, 157). Yet despite such differences, as the anthropologist Anthony F. C. Wallace (1923–2015) observed years ago, most of us are able to get along with one another, nonetheless (Wallace 1970, 35). We think his explanation of this seeming absurdity is still the most reasonable one.
First, like it or not, we have to accept that everyone’s life experiences inevitably are in some ways, perhaps even in most ways, unique to her or him as an individual. Therefore, every human brain contains a private set of more or less personal memories, thoughts, meditations, fears, joys, and so on. Consequently, as Wallace observed in 1961: The scientist starts with the knowledge that everywhere men satisfy their needs in culturally organized social groups. . . . Then he may assume, rather blithely, that if on some level of abstraction the needs are the same and the culture is the same, then the motives must be the same. The enthusiastic religious leader and the fanatical political reformer think along the same lines: they take the group as given, and declaim that its continued existence requires the sharing of motives. The humanist—the poet, the novelist, the dramatist, the historian— has tended to approach the question with a sense of tragedy (or humor) at the paradox, so apparent to him, that despite the continuing existence of the culture and the group, the individual is always partly alone in his motivation, moving in a charmed circle of feelings and perceptions which he cannot completely share with any other human being. (Wallace 1961, 129–130) Wallace labeled the personally unique set of experiences, feelings, observations, and learned meanings as every individual’s own private “mazeway”—a term he defined as “the cognitive map of the individual’s private world regularly evoked by perceived or remembered stimuli” (Wallace 1961, 131; 1970, 15–16). Second, Wallace further argued that what is critical to human well-being and to social cooperation generally speaking is not the degree of sameness in the mazeways of those who are engaged in various ways with one another, but rather the recognition—as the result of learning—that the behavior of other people under various circumstances is predictable, irrespective of knowledge of their motivation, and thus is capable of being predictably related to one’s own actions. . . . without extensive motivational or cognitive sharing. (Wallace 1970, 35) In short, contrary to what Tomasello and others have written, people can live together and get along not because of the sameness of their thoughts and feelings, but rather because their actions come across to one another as acceptably trustworthy, safe, predictable, and more or less appropriate. As Wallace
summed up this observation in his book Culture and Personality, by getting to know one another, cooperation is possible not because the relationships among those involved are based on a sharing, but rather on a “complementarity of cognitions and motives” (Wallace 1970, 36). He called this sort of experienced compatibility “mazeway equivalence.” He was candid in suggesting that when the scientist claims that all men, or at least all members of the same culturally organized group, must share a common panel of interests and motives (ideal states-of-affairs to which strong affects are attached), the humanist can only raise his eyebrows and smile a wry smile at the naïvety of scientism. (Wallace 1961, 130–131) On a number of occasions, Wallace underscored that there is also a darker side to this kind of compatibility in human social life. This awareness of the limits of human communication, of the impossibility, despite all the labor of God, Freud, and the Devil, of one man fully understanding another, of the loneliness of existence, is not confined to any cult of writers; it is a pan-human theme. (Wallace 1961, 130)
Private Truths, Public Realities Is there a way to pull together these various thoughts to make a few general observations about human nature? We believe so. Many of the words that Arthur Conan Doyle put in the mouth of his famous detective are now part of the world’s fondest literary heritage. Somewhere near the top of the list of memorable Sherlock sayings are these remarks to Dr. Watson in Chapter VI (“Sherlock Holmes Gives a Demonstration”) of The Sign of the Four (1890), Doyle’s second published novel: “How came he, then?” I reiterated. “The door is locked, the window is inaccessible. Was it through the chimney?” “The grate is much too small,” he answered. “I had already considered that possibility.” “How then?” I persisted. “You will not apply my precept,” he said, shaking his head. “How often have I said to you that when you have eliminated the impossible whatever remains, however improbable, must be the truth?”
Despite all that anthropologists have been writing about human biology, genetics, and the contentious issue of race for decades (Terrell 2018), ask most people nowadays and the chances are good you will still be told with little hesitation (although perhaps in guarded words) that our species is obviously subdivided geographically into different and quite distinct groups, races, ethnicities, tribes, and the like. As also just noted, it has long been accepted, too, that people must share the same ideas, values, goals, intentions, and the like to be able to live with one another and work successfully together to get things done that need doing. In this chapter we have been arguing to the contrary—however improbable such an observation may seem—that what stands out about our species is not how much alike people all are, but instead how different we all are even in the same families or small communities in our looks, ways, wants, ambitions, and, yes, even in what rocks our boat or makes us shiver in delight. In marked contrast, go ahead and open any field guide to the animals found in any given region of the world, say North America. Perhaps individuals belonging to the species Procyon lotor (the common raccoon) may be able to tell one another apart, but for most of us who are human, doesn’t the old saying “seen one, seen them all” readily come to mind? In fact, if this ease of species identification were not the case, then field guides to animals, flowers, and trees would basically be impossible to write. So again, in marked contrast, any human being capable of responding to Sherlock’s famous remark “you see, but you do not observe” should be able to note plainly enough how easily individuals even within the same family or local community can be told apart from one another. Said simply, therefore, when it comes to being human, uniformity isn’t the word that comes immediately to mind. Instead, diversity is the hallmark of our species. Why is it important to insist on this? To reiterate what Wallace observed years ago, people are clearly able to be social not because of the sameness of their thoughts and feelings, but rather because their actions come across to one another as acceptably safe, predictable, and more or less appropriate. And this is not all. As Wallace also argued, everybody’s own experiences of life are always more or less private and unique to her or him as an individual. Consequently, each of us houses within our skull our very own “cognitive map,” or “mazeway,” of personal memories, thoughts, meditations, fears, joys, and so on. Although Wallace taught social anthropology at the University of Pennsylvania, he was well versed also in psychology as practiced in his day. This may explain why he saw so clearly the inherent isolation and loneliness of human existence. He never used our image of the human skull as being like a little yellow submarine, but he did evocatively capture through his clever choice of words just how isolated each human brain is despite how critically necessary it is for all of us to recognize and respond appropriately to what is going on out there in the so-called real world beyond the bony confines of our own skull.
Note

1 Although we will not try to explore the issue here, an analogous problem in mathematics is captured by the fact that for a number to be computable, it is first necessary to define what is to be the finite number of symbols completing the calculation (Bernhardt 2016, 141–142).
Works Cited

Barker, Jessica L., Judith L. Bronstein, Maren L. Friesen, Emily I. Jones, H. Kern Reeve, Andrew G. Zink, and Megan E. Frederickson (2017). Synthesizing perspectives on the evolution of cooperation within and between species. Evolution 71: 814–825. doi:10.1111/evo.13174
Barth, Fredrik (1969). Introduction. In Ethnic Groups and Boundaries: The Social Organization of Culture Difference, Fredrik Barth (ed.), pp. 9–38. Boston, MA: Little, Brown and Company.
Bernhardt, Chris (2016). Turing’s Vision: The Birth of Computer Science. Cambridge, MA: MIT Press.
Brubaker, Rogers (2004). Ethnicity without Groups. Cambridge, MA: Harvard University Press.
Del Vicario, Michela, Fabiana Zollo, Guido Caldarelli, Antonio Scala, and Walter Quattrociocchi (2017). Mapping social dynamics on Facebook: The Brexit debate. Social Networks 50: 6–16. doi:10.1016/j.socnet.2017.02.002
Granovetter, Mark S. (1973). The strength of weak ties. American Journal of Sociology 78: 1360–1380. Retrieved from: www.jstor.org/stable/2776392
Granovetter, Mark S. (1983). The strength of weak ties: A network theory revisited. Sociological Theory 1: 201–233. doi:10.1007/978-3-658-21742-6_55
Lieberman, Matthew D. (2013). Social: Why Our Brains Are Wired to Connect. New York: Crown.
Onnela, Jukka-Pekka, Samuel Arbesman, Marta C. González, Albert-László Barabási, and Nicholas A. Christakis (2011). Geographic constraints on social network groups. PLoS ONE 6, no. 4: e16939. doi:10.1371/journal.pone.0016939
Raybeck, Douglas (2009). Introduction: Diversity not uniformity. Ethos 37: 155–160. Retrieved from: www.jstor.org/stable/20486607
Social media. (n.d.). Cambridge English Dictionary. Retrieved from: https://dictionary.cambridge.org/us/dictionary/english/social-media
Spiro, Emma S., Zack W. Almquist, and Carter T. Butts (2016). The persistence of division: Geography, institutions, and online friendship ties. Socius 2. doi:10.1177/2378023116634340
Terrell, John Edward (2018). “Plug and Play” genetics, racial migrations and human history. Scientific American, May 29, 2018. Retrieved from: blogs.scientificamerican.com/observations/plug-and-play-genetics-racial-migrations-and-human-history/
Tomasello, Michael (2014). The ultra-social animal. European Journal of Social Psychology 44: 187–194. doi:10.1002/ejsp.2015
Tomasello, Michael, Malinda Carpenter, Josep Call, Tanya Behne, and Henrike Moll (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences 28: 675–735. doi:10.1017/S0140525X05000129
Wallace, Anthony F. C. (1961). The psychic unity of human groups. In Studying Personality Cross-Culturally, Bert Kaplan (ed.), pp. 129–164. New York: Harper & Row.
Wallace, Anthony F. C. (1970). Culture and Personality, 2nd ed. New York: Random House. Wilson, Edward O. (1975). Sociobiology: The New Synthesis. Cambridge, MA: Harvard University Press. Wilson, Edward O. (2012). The Social Conquest of the Earth. New York: Liveright (a division of W. W. Norton).
11
PROS AND CONS OF BEING HUMAN
The War of the Worlds
The English writer Herbert George Wells (1866–1946), or “H. G. Wells” as he is better known, wrote many things during his highly successful career. Today he is probably best known for his science fiction stories—notably The Time Machine (1895), The Island of Dr. Moreau (1896), The Invisible Man (1897), and The War of the Worlds (1898). These have been turned into movies, stage productions, television plays, and comic books. A clue to both the popular appeal and the weightiness of his writings is the fact that he was nominated for the Nobel Prize in Literature in 1921, 1932, 1935, and 1946 (“Nomination database,” n.d.). If you are familiar with the story, you know that The War of the Worlds is about incredibly brainy aliens from Mars that (or is it who?) have evolved into being little more than huge heads about 4 ft in diameter with 16 slender tentacles around the mouth of each of these bizarre creatures. The key element of Wells’ story is that however improbable and grotesque, these odd beings have developed technologies far superior to anything made anywhere down here on Earth. Yes, this is a science fantasy. The War of the Worlds is more than just a fascinating read. This novel is filled with memorable lines, most notably perhaps the opening paragraph now widely recognized as one of the greatest beginnings in the genre of science fiction: No one would have believed in the last years of the nineteenth century that this world was being watched keenly and closely by intelligences greater than man’s and yet as mortal as his own; that as men busied themselves about their various concerns they were scrutinised and studied, perhaps almost as narrowly as a man with a microscope might scrutinise the transient creatures that swarm and multiply in a drop of water. . . . At most terrestrial men fancied there might be other men upon Mars, perhaps inferior to themselves and ready to welcome a missionary enterprise.
Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us. And early in the twentieth century came the great disillusionment.
Who’s on Top?

However mentally superior to ourselves these grotesque and sinister Martian intelligences may be, they are ultimately defeated in Wells’s tale in their moves to take over planet Earth not by anything we humans are capable of doing to save ourselves from destruction, but rather by the lowest of the low on the planet, namely common everyday microbes: “slain by the putrefactive and disease bacteria against which their systems were unprepared; . . . slain, after all man’s devices had failed, by the humblest things that God, in his wisdom, has put upon this earth.” So much for the vaunted superiority of Homo sapiens. However fanciful this fantasy about war between life forms native to different worlds, it strikes us as revealing that Wells has described Martians as having become so highly evolved that they are little more than large heads with spindly appendages. Earlier in a short essay published in 1893 titled “The Man of the Year Million,” he had envisioned a similar evolutionary trajectory for our own species (Haight 1958). As one commentator has observed, in this essay—as in The War of the Worlds—“the culmination of higher intelligence is a globular entity, brought about by the influence of steadily advancing technology” (Eisenstein 1976, 161). The premise that evolving an ever larger brain leads a life form to become ever more intelligent and technologically sophisticated may come across to you as entirely plausible. But where did this idea come from? Evidently Wells was strongly influenced by the evolutionism of Darwin’s close friend Thomas Henry Huxley (1825–1895) (Partington 2000). Possibly Wells, therefore, merely took it for granted that bigger is better from an evolutionary point of view. However, from a modern scientific perspective—considering the high metabolic cost of running a brain like our current one—it is not at all self-evident that there is an affordable evolutionary payoff for brains to grow bigger and bigger ad nauseam, so to speak, as time goes by. Therefore, is there any credible reason to embrace Wells’s speculations about the future course of human biological evolution? The short answer is no. A far longer answer is needed to explain why this is so.
Doing a SWOT

According to the literary historian Anne Stiles, Wells was not alone a century or so ago in believing that the human brain is still evolving, and that future
technological advances will lead to our bodies shrinking while our brains grow ever larger. "As ridiculous as Wells's bodiless, large-headed 'human tadpoles' may seem, they were based on the most rigorous evolutionary science of their day." Even nowadays, she adds, "the large-headed, big-eyed aliens envisioned by Wells in the 1890s saturate modern popular media to an extent that Wells himself could scarcely have imagined" (Stiles 2009, 317, 339).

Back when Wells was writing about Martians and mad scientists like the villains in his science fiction classics The Island of Dr. Moreau and The Invisible Man, it was conventional to think not only that brain size and intelligence go hand in glove, but that genius and mental illness are suspiciously allied, as well. To quote Stiles again: "Rather than glorifying creative powers, Victorians pathologized genius and upheld the mediocre man as an evolutionary ideal" (Stiles 2009, 322). Belittling brainy people as eggheads is hardly a thing of the past.

However, if bigger isn't necessarily better, and brain size isn't a good way to gauge the likelihood that our species will survive down here on Earth at least until, say, the Year 3030, if perhaps not until the Year Million, is there some other way to try to forecast what the future may have in store for Homo sapiens? Reading tea leaves and Tarot cards are two of the ways fortune tellers claim to be able to foresee what lies ahead for their paying customers. In the business world, however, corporate analysts and forecasters often resort instead to something popularly known as doing a SWOT.
SWOT Basics

What is a SWOT? The four letters in this rather odd acronym stand for Strengths, Weaknesses, Opportunities, and Threats. Although this strategy for perceiving what can be done to achieve a worthy (and hopefully lucrative) corporate goal was developed in the modern world of business and industry, this approach to decision-making is now widely seen as a useful framework "for identifying and analyzing the internal and external factors that can have an impact on the viability of a project, product, place or person" ("SWOT analysis," n.d.).

According to one business website, the proper way to do a SWOT analysis is

first to define an objective and then use the analysis to determine what internal and external factors may support or hinder that objective. Strengths and weaknesses represent the internal factors affecting an individual or organization, while opportunities and threats constitute external, environmental factors. ("14 free SWOT," n.d.)

The same website goes on to add that a SWOT analysis can be done in combination with what is called a PEST analysis weighing broader Political, Economic,
Social and Technological factors to achieve a "macro-environmental view" of what can be done to get from the here and now of today to a more favorable here and then in the future.

The two of us are unwilling to accept the challenge of figuring out what on Earth is a "macro-environmental view." But since we are not good at reading either tea leaves or crystal balls, we think doing a SWOT using what analysts in the business world refer to as a SWOT matrix may be the way to go. Fortunately, this is just a fancy name for what is conventionally known in the sciences as a 2×2 table ("A SWOT analysis," n.d.). Therefore, "doing a SWOT" basically means fleshing out the four cells in this simple four-fold table (Figure 11.1). We will now do this exercise for Homo sapiens, except that we will not attempt to put all that we might say (and are trying to say in this book) into a single table. Nor will we offer you a companion PEST analysis, whatever this is, although talking about opportunities and threats necessarily touches on at least some relevant political, economic, social, and technological considerations.
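For readers who find it easier to grasp a framework when it is laid out explicitly, here is one way the four-fold table might be sketched in code. This is purely an illustrative sketch of our own (the labels, the sample entries, and the little helper function are placeholders we have chosen, not part of any official SWOT or business-analysis toolkit):

```python
# A minimal sketch of a SWOT matrix as a plain 2x2 data structure.
# By convention, strengths and weaknesses are internal factors,
# while opportunities and threats are external ones.
swot = {
    ("internal", "helpful"): {"label": "Strengths", "items": []},
    ("internal", "harmful"): {"label": "Weaknesses", "items": []},
    ("external", "helpful"): {"label": "Opportunities", "items": []},
    ("external", "harmful"): {"label": "Threats", "items": []},
}

# "Doing a SWOT" then amounts to fleshing out the four cells.
swot[("internal", "helpful")]["items"].append("social nurturance")
swot[("internal", "harmful")]["items"].append("social neediness")

def show(matrix):
    """Print each of the four cells of the 2x2 table on its own line."""
    for (origin, effect), cell in matrix.items():
        entries = ", ".join(cell["items"]) or "(still to be filled in)"
        print(f"{cell['label']:<13} [{origin}, {effect}]: {entries}")

show(swot)
```

Nothing about the table itself does any analysis, of course; the real work, as in the pages that follow, lies in deciding what belongs in each cell.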
Objectives of Our SWOT Analysis

As we have said previously, thanks to our species' Great Human High-Five Advantage, people excel at dumbing down the world to make the challenges they must deal with to survive and reproduce our kind as predictable and humdrum as possible. Yet what is the likelihood that the kinds of living conditions we construct for ourselves—when seen over the long span of evolutionary time as measured not in years but in generations—will prove to be little more than ambitious but ultimately shortsighted human folly, accomplishments that add up to a global human-built house of cards?

Despite our proven track record as a species at staying alive and prospering, it seems worth noting here that "end of time" doomsday tales have long been
[FIGURE 11.1 SWOT matrix (Credit: adapted from SWOT analysis diagram in English Language by Xhienne).]
part of humankind's intellectual and spiritual efforts to understand the meaning of life and the reality of death (Phillips 2017). Nowadays, however, many of us may be willing to grant that grim predictions about the end of the world as we know it are not solely fanciful ravings by demented minds or a perverse form of wishful negative thinking. The strong likelihood that even our species will one day vanish from the face of the earth is undeniable not just for religious reasons but also for real-world ones. What may only be up for grabs is precisely when and how. This is not a comforting thought.

With this grim likelihood in mind, therefore, what are the credible strengths and major weaknesses of the countless ways in which our species deals with life?
Strengths and Weaknesses

Earlier we asked you to hold up your right hand, palm facing you, to observe the five ways that humans stand out among others as a species. These five characteristics are appropriate strengths in any SWOT analysis designed to forecast the future of our kind. However, as we have been hinting here and there previously, there is also a dark side to each of these five human strengths.

Therefore, given that we have used, figuratively speaking, the fingers on your right hand extended in a high-five to count the ways in which evolution has so far made us highly successful creatures, please now assume as well that these same fingers when curled downward toward your palm one-by-one can be similarly employed to list five of the less than wonderful weaknesses of these same stalwart five human strengths (Figure 4.1). Their darker side, so to speak. Here then are both the Strengths and their complementary Weaknesses we want to include in this SWOT analysis of Homo sapiens.

Strengths of social nurturance—our deeply felt willingness and desire to care for our young.

Matthew Lieberman at the University of California, Los Angeles, has written about how literally painful it is for humans to be isolated from others of our kind. "Our brains evolved to experience threats to our social connections," he explains, in much the same way they experience physical pain. By activating the same neural circuitry that causes us to feel physical pain, our experience of social pain helps ensure the survival of our children by helping to keep them close to their parents. He then adds: "The neural link between social and physical pain also ensures that staying socially connected will be a lifelong need, like food and warmth" (Lieberman 2013, 4–5).
There is more than this that could be said about our deeply ingrained commitment as a species to living socially despite its challenges and pitfalls. Notably, for instance, such an evolved commitment to social life significantly undercuts the Enlightenment idea that each of us is by nature selfish and self-interested—a claim that is the bedrock premise of much of contemporary conservative political thinking including (but not limited to) Neoliberalism, Libertarianism, and the influential novels by Ayn Rand (Wolfe 2015). The fact, therefore, that Enlightenment scholars—and Ayn Rand—got human nature so wrong is a big deal (Forgas et al. 2016).

By the same token, it is important not to lose sight of the reality that we are social to reap the benefits of living socially, not just because it may feel good to be around others. As we have repeatedly been saying, by living and working together to get things done, The Great Human High-Five Advantage has enabled us to dumb down the world around us. In the simplified and fairly predictable world we have created for ourselves, someone like Sherlock Holmes may grow restless and observably bored, but for most of us not having to deal with uncertainty, doubt, and danger is also a big deal when it comes to being human.

Weaknesses of social nurturance—social neediness, our evident willingness to go along with the crowd when we really should not do so.

The strength of our evolved human commitment to social living also helps explain why human beings can collectively become absolutely convinced they are right and others are wrong—or worse, that others should be jailed, mutilated, or killed. Examples that come to mind here are the lynching of Robert Prager (Chapter 8), the Salem witch trials in 1692–1693, and the character and actions of some politicians and political gatherings nowadays in the United States and elsewhere in the world. It may be true that familiarity breeds contempt, but going along with the crowd can also create a sense of comfort and security. Hence the dark side of being a species so socially motivated is that we can forget to think for ourselves just when we most need to.

Strengths of social learning—our willingness to teach and learn from others of our kind.

Lieberman has inferred that the human brain has a specialized avenue of connections ensuring that each of us will come to hold the beliefs and values of those around us (Lieberman 2013, 191–192, 196–198, 201–202). There is evidently brain-scan imagery suggesting that such a network within the brain may exist. As we have said previously, however, we think it is doubtful that a dedicated neural network is needed to account for why people often have ideas and beliefs at least somewhat similar to those being held by others around them.
After all, none of us is swimming in a global sea of thoughts teeming with all sorts of random beliefs and values. Far from it. Because of the realities—the constraints—of time and space, we are mostly exposed during our lifetime to only a tiny fraction of humankind's many diverse ways, beliefs, and values, past and present. Why so? Because most of what we know is learned socially from those we are in touch with. Hence it is more or less inevitable that our ways, ideas, and beliefs will be a lot like those of the people we are living and working with. But let's also not forget, as Anthony F. C. Wallace argued (Chapter 10), that we do not really have to think like our neighbors think to be able to work and live with them.

Weaknesses of social learning—our evident willingness to accept as true what others tell us.

It is important to remember that our willingness to teach and learn from others isn't a strength that is isolated from the other four talents in The Great Human High-Five Advantage. Regardless of how the brain actually accomplishes the job, there is no doubt that being ready, willing, and able to teach and to learn from others also means, in part, being susceptible to social persuasion and collective misbehavior because, as Lieberman has said, it genuinely hurts in more than just spiritual ways not only when we feel cut off from others but also when we feel we are being socially rejected by them (Thaler and Sunstein 2008).

Strengths of our social networks—our networking skills extend the richness and diversity of our social, emotional, and practical resources.

As discussed in Chapter 10, the sociologist Mark Granovetter and others have documented time and time again how most of us do not limit our social lives and activities solely to those with whom we have "strong ties" marked by frequent and often intense interactions. Our "weak ties" with others commonly serve as "bridges" facilitating the flow of new information, innovations, material things, and people, as well, from person to person, clique to clique, place to place.

Weaknesses of our social networks—"going along with the group" can isolate us from others farther afield, and thereby stifle both creativity and our sense of broader social responsibility.

It has long been recognized in the social sciences that we can "belong to" many social cliques at the same time, depending on what we are doing with others at any given time as well as where we are doing it (Scott 2000). It is hardly surprising, therefore, that what is probably most revealing about each of us is who we hang out with most of the time.
As Lieberman has observed, many of us are more than happy to give others at least the impression that we share their beliefs and values so that we can get along with them and perhaps even be liked by those we are living around or working with (Lieberman 2013, 227). Unfortunately, as the network specialists Stephen Borgatti and his colleagues have remarked: "In-group members can easily develop negative attitudes toward out-group members, and networks divided into multiple subgroups can suffer from warring factions" (Borgatti et al. 2013, 182). Some extreme examples of such undesirable effects are terrorist cells, criminal gangs, religious cults, and so on. More down-home examples would be gossip circles, racially exclusive country clubs, and political gerrymandering.

Strengths of fantasy and imagination—our inner fantasy lives may be what is most distinctive about our species.

As we have said throughout this book, despite what Enlightenment scholars wrote, we humans have evolved as a species not to be rational but instead to be clever, ingenious, and inventive.

Weaknesses of fantasy and imagination—history and the latest news stories confirm that one of the defining characteristics of our species is our talent for rationalizing our actions and believing what we want to believe.

As we have also said repeatedly, people often find it easy to forget that the real world with all its dangers and demands is just over there "outside Alice's mirror." Furthermore, we are not only capable of doing ridiculous things, but also skillful at fooling ourselves when we are doing so. The inventiveness of our Alice minds can easily lead us to rationalize our actions and spin tall tales rather than engage in the hard work required to make genuinely rational deductions worthy of Mr. Sherlock Holmes.

Strengths of social collaboration—our talent for combining individual fantasies with productive social participation.

During the Enlightenment, Thomas Hobbes (1588–1679), John Locke (1632–1704), Samuel von Pufendorf (1632–1694), and the later eighteenth-century savant Jean-Jacques Rousseau (1712–1778) wrote skillful, penetrating essays about human nature and civil life defending the claim that regardless of how good-natured or beastly we may all be down deep inside, social life is fashioned to a large degree, and reasonably so, on rational and entirely justifiable self-interest (Terrell 2015, 9–12). The conclusions reached by these influential
authors about liberty, freedom, and the importance of the individual still inform how many of us think about ourselves and see others as human beings. They badly misunderstood, however, what it means to be human. Not only do most of us need others in our lives to make us happy and productive, but evolution has also given most of us at least the basic social talents needed for us to work together more often than not for the common good.

Weaknesses of social collaboration—social violence and the victimization of others are indisputably things that we humans are perfectly capable of doing.

Nobody in their right mind would deny that humans can be foul, nasty, even murderous creatures who are entirely capable of ganging up on others to get their own way. Or prove a point. What is disputable is why. Many would favor the easy explanation that we are a "tribal" species, and so fighting with other tribes is just something we naturally do (Wilson 2014, 23–24, 29–30). This, of course, is no more an explanation for our capacity for nastiness than claiming that doing stupid things is attributable to something inside us that can be labeled simply as "stupidity."

A more insightful assessment of why individually and collectively we are sometimes mean and nasty has been offered by Jonathan Crowe, Professor of Law at Bond University in Australia. The limits to what we are able to know as human beings can keep us from seeing just how much we do not know. Hence our reasoning processes are vulnerable to distortion and bias. It is not surprising, therefore, that people may find it easy to favor other people who are like them over those who are not so clearly cut from the same cloth (Crowe 2016).
Opportunities

People are not particularly skillful at foreseeing the likely consequences of their deeds (or misdeeds). Without implying that the two of us are exceptions to this rule, we want to mention here two prominent opportunities that may shape what it will be like to be human in the future.

Setting aside the frequent warnings nowadays that robots are about to take over the world and replace us, there is no question that advanced ways of artificially assisting our bodies and our brains in the performance of tasks both large and small are gaining ground (Pagliarini and Lund 2017). Robotic devices are currently being manufactured for an astonishing number of applications ranging from health care to entertainment, from military drones to wearable devices helping the lame walk again. So widespread are these uses that Newsweek magazine in October 2016 made this upbeat forecast: "Successful
people in the AI [artificial intelligence] age will focus on work that takes advantage of unique human strengths, like social interaction, creative thinking, decision-making with complex inputs, empathy and questioning" ("How artificial intelligence," n.d.). Kevin Maney, Newsweek's technology columnist, added his own personal take on this forecast: "the most valuable people in an age of push-button answers will be the people who ask the most interesting questions"—an assessment that perhaps unintentionally raises uncomfortable questions about what it will mean to be a "most valuable person" in the future.

What is the other developing opportunity we want to mention here that will possibly be shaping what life is like for our species in the coming centuries? As many of us know personally and well, digital forms of communication are making it possible for people around the globe to be more or less instantaneously in touch with one another through words, pictures, film, and the like. Some see the invasion of politicians (and foreign governments) into this vast world of social media as a sign of the decline of democracy. Others today are more optimistic. So are the two of us.

Brian Loader and his colleagues have studied the reactions of younger citizens (aged 16–29) in three countries (Australia, United Kingdom, and the United States) to the use of social media by politicians. They have found that the overwhelming majority of those asked were open to politicians doing so provided they use social media in ways viewed as appropriate. "The clear implication being that social media could indeed be an effective channel for politicians to connect with young citizens, but that in order to do so they would have to develop more authentic digital personas" (Loader et al. 2016, 415; also Von Drehle 2017).

However true this qualification, it seems likely that both AI and robotics, as well as the Internet and social media, are going to play major roles in the future in how well we deal with one another and the artificial, arbitrary, and diverse world we have been creating for ourselves since long before the dawn of written history. It remains to be seen, of course, how successfully we can reap the benefits of AI, robotics, and social media while avoiding the likely risks involved. The evident Russian interference in Brexit and the US elections in 2016 suggests the risks can be real enough. It may be less obvious that taking too much of the workload of handling the outside world away from Sherlock and George may also not be all sunshine and roses. Offloading the job of staying alive to robots and AI could lead not only to monumental human boredom—Sherlock shudders at the mere mention of such a possibility—but also to honest despair and mischievous behavior, to say the least. Let's not ignore, though, that one outcome could also be truly astronomical sales of computer games designed to give the brain something at least playful to do.
Threats

Scientists, journalists, and yes, even politicians know that the world today faces many sorts of threats: global warming, rising sea levels, pollution, overpopulation, and more. There are, however, two less often voiced threats we want to put on the table for you to consider along with these more familiar ones.

The first of these two dangers is something many of us know already all too well without having to be told about it. It is too easy for all of us to let our habitual George selves rule the roost over our more critical Sherlock minds when what is going on around us seems too boring and predictable.

The second danger is also one that many have talked about before, a danger that reflects both how the human brain works as a thinking machine, and also how all life forms on Earth must contend with fairly predictable real-world problems. What are we talking about?

Humans are clever enough as a species to be able to create worlds for themselves that are not just mostly artificial, but are often also arbitrary in the sense that people around the world have frequently invented different ways to solve more or less the same basic problems of living down here on Earth. For instance, there are many ways to define a family, design a house, or dawdle around wasting time doing something silly just for the fun of it. Consequently, when seen globally, our human ways of doing things, making a living, overcoming boredom, and getting along with one another are not just creative solutions to life's problems, but they are also astonishingly diverse ones.

The downside of this real diversity is that when we are dealing with others of our kind, we may be confronted by their seeming strangeness. We may then have trouble also seeing their sameness with us as human beings. So we are all too easily prone to making too much of the strangeness of others and their ways. It is hardly surprising, therefore, that over the long span of human history we humans have often had a hard time not only handling boredom but also accepting our inherent diversity as a species.
Between Two Worlds

The nineteenth-century English writer Matthew Arnold (1822–1888) was a much-celebrated poet whose words and emotionally charged word pictures captured the spirit and uncertainties of the Victorian era—notably the challenges posed to religious faith by the Enlightenment, Darwinism, and the Industrial Revolution (Fuller 2010; Kahan 2012). First published in Fraser's Magazine in April 1855, the 210 lines of verse in his poem "Stanzas from the Grande Chartreuse" are some of his most expressive. Arnold tells us therein of his journey one autumn to a monastery in the Chartreuse Mountains of southeastern France. He arrives there a bewildered pilgrim feeling, he imagines,
much the way an ancient Greek traveler must have felt long ago on finding himself standing before an antique pagan Runic stone on some far northern European shore (lines 81–90):

Thinking of his own Gods, a Greek
In pity and mournful awe might stand
Before some fallen Runic stone—
For both were faiths, and both are gone.
Wandering between two worlds, one dead,
The other powerless to be born,
With nowhere yet to rest my head,
Like these, on earth I wait forlorn.
Their faith, my tears, the world deride—
I come to shed them at their side.

The SWOT analysis we have sketched out here is certainly not poetic, and we are not foolish enough to think we have captured the spirit of the times we live in. Yet all of the propositions at the heart of this book stand on the premise that all of us are caught in often unexamined ways between two worlds. Unlike the vision of a conflict between earthlings and creatures from Mars in Wells's tale, the war we see down the road on time's horizon is one between our inner evolved selves and the outer world we have fashioned for ourselves, a world promising genuine new opportunities and posing dangerous threats.
Works Cited

14 Free SWOT analysis templates. Retrieved from: www.smartsheet.com/14-free-swot-analysis-templates
A SWOT analysis, with its four elements in a 2×2 matrix. Retrieved from: https://en.wikipedia.org/wiki/SWOT_analysis#/media/File:SWOT_en.svg
Borgatti, Stephen P., Martin G. Everett, and Jeffrey C. Johnson (2013). Analyzing Social Networks. Los Angeles, CA: Sage.
Crowe, Jonathan (2016). Human fallibility and the separation of powers. Policy: A Journal of Public Policy and Ideas 32: 42–46. Retrieved from: https://auspublaw.org/2015/07/human-all-too-human/
Eisenstein, Alex (1976). "The Time Machine" and the end of man. Science Fiction Studies 4: 161–165. Retrieved from: www.jstor.org/stable/4239020
Forgas, Joseph P., Lee Jussim, and Paul A. M. Van Lange (2016). In search for Homo moralis: The social psychology of morality. In Social Psychology and Morality, Joseph P. Forgas, Lee Jussim, and Paul A. M. Van Lange (eds.), pp. 1–18. New York: Psychology Press.
Fuller, Jack (2010). What is happening to news? Daedalus 139, no. 2: 110–118. Retrieved from: www.jstor.org/stable/20749829
Haight, Gordon S. (1958). H. G. Wells's "The Man of the Year Million." Nineteenth-Century Fiction 12: 323–326. Retrieved from: www.jstor.org/stable/3044429
How artificial intelligence and robots will radically transform the economy. Retrieved from: www.newsweek.com/2016/12/09/robot-economy-artificial-intelligence-jobs-happy-ending-526467.html
Kahan, Alan (2012). Arnold, Nietzsche and the aristocratic vision. History of Political Thought 33: 125–143. Retrieved from: www.jstor.org/stable/26225689
Lieberman, Matthew D. (2013). Social: Why Our Brains Are Wired to Connect. New York: Crown Publishers.
Loader, Brian D., Ariadne Vromen, and Michael A. Xenos (2016). Performing for the young networked citizen? Celebrity politics, social networking and the political engagement of young people. Media, Culture & Society 38: 400–419. doi:10.1177/0163443715608261
Nomination database. Retrieved from: www.nobelprize.org/nomination/archive/show_people.php?id=10075
Pagliarini, Luigi and Henrik Hautop Lund (2017). The future of robotics technology. Journal of Robotics, Networking and Artificial Life 3: 270–273. doi:10.2991/jrnal.2017.3.4.12
Partington, John S. (2000). The death of the static: H. G. Wells and the kinetic utopia. Utopian Studies 11: 96–111. Retrieved from: www.jstor.org/stable/20718177
Phillips, Kristine (2017). The world as we know it is about to end—again—if you believe this biblical doomsday claim. Washington Post, September 17, 2017. Retrieved from: https://www.washingtonpost.com/news/acts-of-faith/wp/2017/09/17/the-world-as-we-know-it-is-about-to-end-again-if-you-believe-this-biblical-doomsday-claim/
Scott, John (2000). Social Network Analysis: A Handbook, 2nd ed. Los Angeles, CA: Sage.
Stiles, Anne (2009). Literature in mind: H. G. Wells and the evolution of the mad scientist. Journal of the History of Ideas 70: 317–339. Retrieved from: https://www.jstor.org/stable/40208106
SWOT analysis (strengths, weaknesses, opportunities and threats analysis). Retrieved from: https://searchcio.techtarget.com/definition/SWOT-analysis-strengths-weaknesses-opportunities-and-threats-analysis
Terrell, John Edward (2015). A Talent for Friendship. New York: Oxford University Press.
Thaler, Richard H. and Cass R. Sunstein (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
Von Drehle, David (2017). Steve Jobs gave us President Trump. Washington Post, September 5, 2017. Retrieved from: www.washingtonpost.com/opinions/steve-jobs-gave-us-president-trump/2017/09/05/f4f487e4-9260-11e7-aace-04b862b2b3f3_story.html
Wilson, Edward O. (2014). The Meaning of Human Existence. New York: Liveright.
Wolfe, Alan (2015). Libertarianism's iron cage: From Ayn Rand to Rand Paul. Commonweal 142 (15): 14–18. Retrieved from: https://www.commonwealmagazine.org/libertarianisms-iron-cage
12 MAKING SENSE OF OUR FUTURE PROSPECTS
Are We an Endangered Species?
Ireland's Natural History Museum (Músaem Stair an Dúlra), known locally as the Dead Zoo, is in downtown Dublin near Merrion Square not far from Trinity College. The official website for this public attraction is unusually honest about what it is like to visit this venerable institution.

The building is a "cabinet-style" museum designed to showcase a wide-ranging and comprehensive zoological collection, and has changed little in over a century. Often described as a "museum of a museum," its 10,000 exhibits provide a glimpse of the natural world that has delighted generations of visitors since the doors opened in 1857. ("History & architecture," n.d.)

A new entrance to the museum was added in 1909 on the east end of the building that faces Merrion Street. As the official website notes:

This reversed the direction from which visitors approached the exhibitions and explains why some of the large exhibits still face what appears today to be the back of the building: it was too difficult to turn the whales and elephants around to face the new entrance.

A stone staircase inside the building collapsed in 2007, injuring 11 people in a group of primary school teachers attending a science appreciation course ("11 injured," n.d.). In 2008 the museum was closed to the public and did not reopen until 2010. By 2015 compensation payments to those who had been hurt in the staircase disaster ended up costing the government over €900,000 (more than a million US dollars) (Deegan 2015).
Global economic hard times in recent years on top of generally poor funding during the twentieth century have turned what was once a scholarly institution into a living fossil. As one of the two remaining staff members joked in June 2016: “If you want to experience 1916, come to our museum” (Neylon 2016).
Extinction Happens

No doubt about it. The really big stuffed mammals are in fact looking in the wrong direction when you come into this museum through the 1909 east entrance. It is also true that when we visited the museum in 2017, the backside of the dusty mounted elephant on display was so badly cracked that your heart went out to it. More upsetting, the rhino on display had its horns ripped off, thereby exposing its pure white artificial innards. There was a sign explaining the condition of this unfortunate specimen: "The National Museum of Ireland – Natural History apologises for the condition of this rhino. The horns have been removed due to the risk of theft . . ." Once you got over the shock, there was at least the opportunity to see how rhino hide had been turned into something resembling the living animal a century ago.

As disturbing as all this evidence of neglect may be, the magnificence of what greets you on the Ground Floor (Figure 12.1) makes a visit to this odd museum well worthwhile: three mounted skeletons of the extinct giant
[FIGURE 12.1 Extinct "Irish elks" (Megaloceros giganteus) on display at the Natural History Museum (Músaem Stair an Dúlra) in downtown Dublin, Ireland.]
"Irish elk" (Megaloceros giganteus), an animal that helped make the late Stephen Jay Gould one of the twentieth century's best-known evolutionists. Writing in the inviting style of which he was the master, Gould described the mystery of these astonishing beasts of yore this way:

The gigantic antlers, spreading 10 to 12 feet in large stags, have generated a fund of legend and anecdote. They have served as gateposts to the homes of Irish gentry and as temporary bridges to span rivulets. They have inspired the admiration of kings and the competition of Victorian gentlemen stocking their trophy rooms. But, more important in our context, they have continually confronted evolutionary theory as an outstanding datum demanding some explanation. There is scarcely a textbook in evolutionary biology that does not illustrate some important principle with a well chosen pair of antlers. (Gould 1974, 191)

Being the careful scientist he was, Gould went on to note that this remarkable creature was not an elk at all (but rather a deer), nor had it been exclusively Irish in its geographic distribution, for it had once upon a time roamed widely in Europe, northern Asia, and northern Africa, too.
Extinctions in the Past

Gould died in 2002 when he was only 60. Research since then has continued the effort to explain the demise of Megaloceros giganteus (van der Plicht et al. 2015). Importantly, however, this was not the only species to suffer this fate during the last Ice Age and the 10,000 or so years since then during our current era. Hence asking why this remarkable creature vanished from the face of the earth despite all its magnificence is not a frivolous question. There must be more to the story than just the demise of a type of deer having really, really big antlers.

It is estimated that about half of the mammal species that inhabited prehistoric Europe during the last Ice Age have vanished from the face of the earth in the last 21,000 years or so. Among the deceased are such showcase species as the woolly mammoth (Mammuthus primigenius) and the woolly rhino (Coelodonta antiquitatis), large carnivores such as the cave bear (Ursus spelaeus), and large deer species such as the Irish elk. Most of the species that disappeared became globally extinct, although some have managed to survive in Africa, Asia, or North America (Varela et al. 2015).

The obvious mystery is why these life forms disappeared. Was it climate change? Maybe, but is that all that led to their doom? This was also the time in the earth's history when humankind was starting to expand all over the globe as far as the New World. Our kind had already got to Australia and the islands
of the southwestern Pacific well before then. Could it have been us, therefore, who did these species in? Or a combination of both climate change and human exploitation? Something else entirely, or as well? Nobody is sure today how to properly apportion the blame for these extinctions. Computer modeling of what might have been involved suggests, however, that climate change alone may not have been enough to kill off, for instance, the Irish elk. Generally speaking, modeling of this sort suggests that both global climate change and human impact had a hand in exterminating the species that disappeared. Whatever the mix of reasons, as Sara Varela at Charles University in the Czech Republic and her colleagues recently concluded: “Future conservation efforts can benefit from considering the lessons learned from past extinctions, achieving greater success by understanding the effect of the combination of climate change and direct human pressure” (Varela et al. 2015, 1480).
Current Extinctions

Scientists are warning all of us today that the Earth is currently experiencing a new and dangerous episode of extinction. A recent report (Ceballos et al. 2017) published in the Proceedings of the National Academy of Sciences (PNAS) was uncommonly direct in describing the magnitude of what is evidently happening to our home planet:

Our data indicate that beyond global species extinctions Earth is experiencing a huge episode of population declines and extirpations, which will have negative cascading consequences on ecosystem functioning and services vital to sustaining civilization. We describe this as a "biological annihilation" to highlight the current magnitude of Earth's ongoing sixth major extinction event.

The kicker here is that they are not just talking about species extinctions. They also have in their sights the less conventional observation that focusing only on species that have already gone extinct in recent decades leads to a common misunderstanding of what is really happening:

This view overlooks the current trends of population declines and extinctions. Using a sample of 27,600 terrestrial vertebrate species, and a more detailed analysis of 177 mammal species, we show the extremely high degree of population decay in vertebrates, even in common "species of low concern." Dwindling population sizes and range shrinkages amount to a massive anthropogenic erosion of biodiversity and of the ecosystem services essential to civilization. This "biological annihilation" underlines the seriousness for humanity of Earth's ongoing sixth mass extinction event.
What is behind this massive loss of life, not just of species? Much is debatable, but the immediate causes seem obvious enough: habitat conversion, climate disruption, overexploitation, toxification, species invasions, and disease. Among the overriding causes, however, are assuredly the continuing pace of human population growth and overconsumption by many of us—both of which reflect "the fiction that perpetual growth can occur on a finite planet."

How does this report conclude its message to the world? Here is the main point: "the sixth mass extinction is already here and the window for effective action is very short, probably two or three decades at most."
A Dismal Future?

It would be an understatement to say that the PNAS report offers those willing to read it a dismal picture of the future of all life on Earth, including humankind. Moreover, since it was issued in 2017, possibly even more alarming statistics have been published. In October 2019, for instance, the Audubon Society warned that two-thirds of bird species in North America are at risk of extinction because of rising temperatures, higher seas, heavy rains, and urbanization (Yarnold 2019).

Furthermore, on November 5, 2019, under the banner headline "World Scientists' Warning of a Climate Emergency," 11,258 scientists from 153 countries issued a public statement advising all of us that Planet Earth is clearly and unequivocally facing potentially irreversible climate change that could make huge areas of Earth uninhabitable (Ripple et al. 2019). Despite similar warnings in 1979, 1992, 1997, and the 2015 Paris Agreement, greenhouse gas (GHG) emissions are continuing to rise rapidly with increasingly damaging effects on the Earth's climate. Despite the past 40 years of global climate negotiations, we are still doing business as usual, and have mostly failed to address this crisis seriously. "Especially worrisome are potential irreversible climate tipping points and nature's reinforcing feedbacks (atmospheric, marine, and terrestrial) that could lead to a catastrophic 'hothouse Earth' well beyond the control of humans."

This frankly terrifying report suggests steps that governments, businesses, and the rest of humanity could take to lessen the worst effects of what lies ahead. "Mitigating and adapting to climate change while honoring the diversity of humans entails major transformations in the ways our global society functions and interacts with natural ecosystems." The report finally concludes with a firm appeal to human sanity.

The good news is that such transformative change, with social and economic justice for all, promises far greater human well-being than does business as usual. We believe that the prospects will be greatest if
decision-makers and all of humanity promptly respond to this warning and declaration of a climate emergency and act to sustain life on planet Earth, our only home. (Ripple et al. 2019, 4)

Hopeful words, but how likely is all this to come about? Can something really be done to save ourselves and the rest of life on our shared planet? Or should we now collectively just hide our heads in the sand or issue self-serving decrees protecting our own interest at the expense of everyone else on Earth? Here is one thing the two of us know for sure.
Evolution Cannot Save Us

As we have said before, evolution is blind. Evolution is not foresighted. Evolution has no goals of its own. Nor, despite frequent claims to the contrary, is evolution a story of progress, although conventional wisdom and even current scholarship according to some well-placed academics (Pinker 2018) would have it that things have definitely been getting better and more complex. Sorry folks, and never fun to say it, but this isn't how life on our Earth evolves (Ruse 1993; Shanahan 2012).

The French biologist François Jacob (1920–2013), who won the Nobel Prize in Medicine in 1965, was one of the great architects of modern biology (Shapiro and Losick 2013). In a short and now famous essay published in the journal Science in 1977, he explained with elegant simplicity how evolution never creates anything new totally from scratch. Instead, evolution works like

a tinkerer—a tinkerer who does not know exactly what he is going to produce but uses whatever he finds around him whether it be pieces of string, fragments of wood, or old cardboards; in short it works like a tinkerer who uses everything at his disposal to produce some kind of workable object. (Jacob 1977, 1163)

It is in this piecemeal fashion that evolution makes a wing out of what was once a leg, part of an ear out of what had once been a piece of jaw bone. Incidentally, this is also one of the reasons evolution generally takes such a very long time to get anything done.1

Sadly, the fact that evolution works in such a fumbling, piecemeal fashion also explains why we humans cannot rely on evolution to save us from extinction as a species even though, as the PNAS report observes, during periods of mass extinction in the past at least the spark of life on Earth did somehow manage to survive. New life forms were able to evolve to replace those that had been lost. Yet it is hardly comforting to think that if (or is it when?)
we humans die off entirely, other species may yet evolve sooner or later to take our place. What is the bottom line then? Put crudely, evolution does not give a damn about us. Never has, never will.

So how much are we at risk of extinction as a species? What is the likelihood that we are even now, unknowingly perhaps, on the chopping block and are no longer able to save ourselves? More to the point, however shocking the thought may be, do any of us who are alive today actually care all that much about what the future will be like for our planet and its inhabitants after we ourselves are dead? More worrisome still, what is the likelihood that our species is not even capable of thinking far enough down the road to see why we ought to try?
Four Propositions

In Chapter 1, we introduced three propositions about how the human brain engages with the world around it. Instead of asking you to go back to read them, here they are once more.

1. We live in two different worlds at the same time.
2. The outer world we live in is mostly one of our own making.
3. We believe we live in the real world and only escape occasionally into our fantasies. In truth, however, we live in our own minds all of the time, and must struggle to be in touch with what is actually happening out there that we can see filtering in only through the looking-glass of our minds.
In Chapter 5, we introduced a fourth proposition, although we did not label it then as such. Instead, we called it a mantra. Here again is this fourth proposition:

SENSE—RECOGNIZE—REACT

These three words taken together form the elements of a fundamental proposition about how your brain tries to recognize what your senses are picking up about what is happening outside your body (and inside, too) in light of its own previous experiences with the world and how you have previously seen it working. When this kind of cerebral recognition happens, the brain must then go on to decide whether to (1) react to, or handle, such seemingly informative bodily stimulation, (2) ignore what your senses are relating to your brain, or (3) seek further input from the world and your body's interior circumstances before taking action, should you decide something may need to be done.

As we also commented previously, this simple three-word mantra is more controversial than it might seem. What is perhaps most debatable about the
triplet of terms in this proposition is what the word RECOGNIZE implies, namely the idea that the brain plays an active and often creative role not only in converting the electrochemical signals that your body's senses are sending it into useful information, but also in deciding what, if anything, to make of these signals once they have been converted into useful information inside your brain's cranial vault. Said differently, your brain takes an active role in converting incoming sensations not just into something computer scientists would classify as information, but also into what the rest of us would probably be content to call our sensibly informed ideas, impressions, and beliefs about the world, about other people, and about ourselves. Some of these thoughts are ones we come up with all on our own; others are ones we piece together with the help of others we know, talk with, read about, and so forth. Examples of the latter would be shared ideas about the Big Bang, the Bible, Shakespeare's plays, and the Apostles' Creed.
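For readers who like to see a process spelled out step by step, here is one way the mantra might be sketched in code. This is strictly an illustrative sketch of our own: the function names, the familiarity scores, and the rule for choosing among the three possible responses are placeholders we have invented, not a claim about how the brain actually carries out any of this.

```python
# A minimal, purely illustrative sketch of SENSE -> RECOGNIZE -> REACT.
# "past_experience" stands in for a lifetime of prior encounters, scored
# here by how familiar each kind of sensation is (0 = never seen, 1 = routine).

def recognize(sensation, past_experience):
    """RECOGNIZE: match an incoming sensation against prior experience."""
    return past_experience.get(sensation, 0.0)

def react(sensation, past_experience):
    """REACT: choose among the three responses described in the text."""
    familiarity = recognize(sensation, past_experience)
    if familiarity > 0.8:
        return "ignore it"        # humdrum and predictable: leave it to George
    if familiarity > 0.3:
        return "handle it"        # familiar enough to act on right away
    return "look again"           # unfamiliar: seek further input before acting

past_experience = {"doorbell": 0.95, "smell of smoke": 0.5, "strange noise": 0.1}
for sensation in past_experience:          # SENSE: the incoming signals
    print(sensation, "->", react(sensation, past_experience))
```

Run as written, this toy example ignores the doorbell, handles the smell of smoke, and looks again at the strange noise, which is all it is meant to show: the same three options laid out above, nothing more.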
Can We Save Ourselves?

Making sense of the world, in other words, is what the human brain is all about, with or without the help of our friends, family, teachers, preachers, and let's not forget our enemies, too.

In 2007, Jason G. Matheny at Johns Hopkins University remarked that it has been only in the last century, notably with the invention of nuclear weapons, that catastrophic events leading to the extinction of all humanity could be both caused and prevented by human action. Although such extreme events "may be very improbable, their consequences are so grave that it could be cost effective to prevent them" (Matheny 2007, 1335). Wondering whether keeping our own species alive and kicking would be "cost effective" strikes us as odd and coldly dispassionate. Clearly, however, this is how Matheny wants us to reflect on our possible demise as a life form on Earth. In the very next sentence, he notes clinically that "virtually nothing has been written about the cost effectiveness of reducing human extinction risks." Matheny then goes on to speculate about why this is so.

Maybe this is because human extinction seems impossible, inevitable, or, in either case, beyond our control; maybe human extinction seems inconsequential compared to the other social issues to which cost-effectiveness analysis has been applied; or maybe the methodological and philosophical problems involved seem insuperable.

More recently, Anders Sandberg at Oxford University's Future of Humanity Institute added something more for us to worry about. "Despite its obvious interest and relevance to humans, rigorous study of human extinction [has] so far been relatively rare. There are more academic papers on dung beetles than the fate of H. sapiens" (Sandberg 2018).
As we remarked in Chapter 5, the late Herbert Simon had much to say about the limits of human rationality. Nonetheless, he accepted with seemingly few reservations the conventional wisdom that all of us generally have reasons, however good or bad, for why we behave the way we do:

Everyone agrees that people have reasons for what they do. They have motivations, and they use reason (well or badly) to respond to these motivations and reach their goals. Even much, or most, of the behavior that is called abnormal involves the exercise of thought and reason. Freud was most insistent that there is method in madness, that neuroses and psychoses were patients' solutions—not very satisfactory solutions in the long run—for the problems that troubled them. (Simon 1986, S209)

What Simon is saying here sounds a lot like what many academics have been saying at least since the Enlightenment. Humans generally "exercise thought and reason" when they are tackling issues and concerns.

By now you know we are not convinced. No one has to be an old-fashioned Freudian to be concerned that people sometimes, maybe even often, do not actually think through what they are doing. Nor do we feel it is too cynical to suggest that people often only think of reasons, good or bad, when they suspect they may be held accountable for their actions, their doings. Say, for example, when a worried parent asks why a child, whom they love dearly, has strangely come home very, very late from a party at a friend's house on the other side of town.

So is there something we can do to save our species from extinction?
The Willing Suspension of Belief

In Chapter 3, we brought up Samuel Taylor Coleridge's claim in the early nineteenth century that the willing suspension of disbelief is central to the human enjoyment of poetry, theater, and other forms of entertainment. We hope we have convinced you that, if truth be told, all of us who are human frequently find it easy to accept, believe, enjoy, and even act upon almost anything we set our minds on having, believing, or doing regardless of how improbable, unbelievable, or downright dangerous such thoughts may be. Indeed, we do not see it as being overly cynical to suggest that suspending disbelief is as easy for all of us who belong to the species H. sapiens as falling off a log, forgetting to turn off the stove, or losing our reading glasses.

The theoretical physicist Carlo Rovelli has remarked that it is only by keeping in mind that our beliefs can be wrong, or at least naïve, that it is possible for us to free ourselves from silly ideas and learn something new. Perhaps contrary to the popular notion that scientists like Rovelli tend to be obnoxious know-it-alls,
he adds that science as one of humankind's loftier achievements "is born from this act of humility: not trusting blindly in our past knowledge and our intuition" (Rovelli 2017, 259).

Dictionaries tell us that the opposite of humility is arrogance, egotism, pretentiousness, self-importance, and pride. This seems about right, but self-deception is also in the running. According to the statistician Regina Nuzzo in a short commentary published several years ago in the distinguished science journal Nature,

even an honest person is a master of self-deception. Our brains evolved long ago on the African savannah, where jumping to plausible conclusions about the location of ripe fruit or the presence of a predator was a matter of survival. (Nuzzo 2015, 182)

She may be giving the African savannah of long ago too much credit for making us the kind of animal we are today. We suspect, however, she only means that the human brain is a product of evolution, and evolution isn't about ending up perfect and wonderful to behold.

In any case, Nuzzo goes on to repeat with evident approval what the Nobel Laureate Saul Perlmutter, an astrophysicist at the University of California, Berkeley, has said about the scientific method: "Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves." Perlmutter's observation, too, is surely not intended to be taken too literally. After all, how many of us intentionally want to fool ourselves, even if we are sometimes naughty enough to want to fool others? So again, like Nuzzo, Perlmutter has perhaps merely chosen a dramatic way of underscoring that evolution has not made us as rational as we might want to believe.
Making Sense

In this book we have offered you a model of the human mind peopled by the three well-known storybook characters. We have also argued that the human brain is not a remarkably sophisticated computer, despite the current popularity of the notion that our brains "process information" and are constantly engaged in trying to predict the future (Graziano 2019). Instead, we have suggested that whatever its evolved talents at building skyscrapers, writing poems, or spinning yarns, the human brain is fundamentally a biologically constructed pattern recognition, learning, and making device engaged in trying to resolve basic survival issues such as (Chapter 5): Have I seen this before? Do I need to do something? Should I look again?

The first of these three questions is one we humans are skilled at answering. The second one, however, is a lot harder for us to handle than we suspect most
of us realize. We aren't too bad at making such judgments when what is being called for is immediate action. But the old saying "out of sight, out of mind" need not only be about how easily we can forget people, things, and events as time goes by. These same words can be applied also to how easy it is for us to be shortsighted about the future consequences of our current acts, deeds, and accomplishments.

If we are to succeed at making sense of the future and not just the past, we would suggest that there are two things all of us need to do. First, we must learn to distrust what our own brains are telling us (Chapter 3). Second, we need to remember that there is a dark side to The Great Human High-Five Advantage (Chapters 4 and 11). The strengths we possess as human beings also have their genuine and sometimes dangerous weaknesses.

Our strengths at making sense of the world are undeniable. Unfortunately, it seems all too easy for all of us to forget that evolution has not made us angels. As Linnaeus knew, but others have often overlooked, we are undoubtedly smart enough as a species for each of us to live up to the ancient admonition γνῶθι σεαυτόν, "Know thyself." This does not automatically entitle us to call ourselves Homo sapiens. This is a title we are going to have to earn by what we all do to save ourselves from extinction.
Note

1 It would not be farfetched to say that, like evolution, Alice also works like a tinkerer to come up with new ideas, ways of doing things, and the like.
Works Cited

11 injured as museum staircase collapses. Retrieved from: https://www.rte.ie/news/2007/0705/90868-museum/
Ceballos, Gerardo, Paul R. Ehrlich, and Rodolfo Dirzo (2017). Biological annihilation via the ongoing sixth mass extinction signaled by vertebrate population losses and declines. Proceedings of the National Academy of Sciences 114: E6089–E6096. doi:10.1073/pnas.1704949114
Deegan, Gordon (2015). "Taxpayers' €900k bill over Dead Zoo staircase collapse." Herald.ie October 1, 2015. Retrieved from: www.herald.ie/news/taxpayers-900k-bill-over-dead-zoo-staircase-collapse-31572396.html
Gould, Stephen Jay (1974). The origin and function of "bizarre" structures: Antler size and skull size in the "Irish elk," Megaloceros giganteus. Evolution 28: 191–220. doi:10.2307/2407322
Graziano, Michael S. A. (2019). Rethinking Consciousness: A Scientific Theory of Subjective Experience. New York: W. W. Norton.
History & architecture of Natural History. Retrieved from: www.museum.ie/Corporate-Media/History-of-the-Museum/History-Architecture-(3)
Jacob, François (1977). Evolution and tinkering. Science 196: 1161–1166. doi:10.1126/science.860134
Matheny, Jason G. (2007). Reducing the risk of human extinction. Risk Analysis: An International Journal 27: 1335–1344. doi:10.1111/j.1539-6924.2007.00960.x
Neylon, Laoise (2016). "It's lonely at the Dead Zoo." Dublin InQuirer June 8, 2016. Retrieved from: www.dublininquirer.com/2016/06/08/it-s-lonely-at-the-dead-zoo
Nuzzo, Regina (2015). Fooling ourselves. Nature 526: 182–185. doi:10.1038/526182a
Pinker, Steven (2018). Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York: Viking (Penguin).
Ripple, William J., Christopher Wolf, and Thomas M. Newsome (2019). World scientists' warning of a climate emergency. BioScience, biz088. doi:10.1093/biosci/biz088
Rovelli, Carlo (2017). Reality Is Not What It Seems: The Journey to Quantum Gravity. New York: Riverhead Books.
Ruse, Michael (1993). Evolution and progress. Trends in Ecology and Evolution 8: 55–59. doi:10.1016/0169-5347(93)90159-M
Sandberg, Anders (2018). Human extinction from natural hazard events. Oxford Research Encyclopedia of Natural Hazard Science. doi:10.1093/acrefore/9780199389407.013.293
Shanahan, Timothy (2012). Evolutionary progress: Conceptual issues. In eLS. Chichester: John Wiley & Sons, Ltd. doi:10.1002/9780470015902.a0003459.pub2
Shapiro, Lucy and Richard Losick (2013). François Jacob (1920–2013). Science 340: 939. doi:10.1126/science.1239975
Simon, Herbert (1986). Rationality in psychology and economics. Journal of Business 59: S209–S224. Retrieved from: www.jstor.org/stable/2352757
van der Plicht, Johannes, V. I. Molodin, Yaroslav V. Kuzmin, S. K. Vasiliev, A. V. Postnov, and V. S. Slavinsky (2015). New Holocene refugia of giant deer (Megaloceros giganteus Blum.) in Siberia: Updated extinction patterns. Quaternary Science Reviews 114: 182–188. doi:10.1016/j.quascirev.2015.02.013
Varela, Sara, Matheus Souza Lima-Ribeiro, José Alexandre Felizola Diniz-Filho, and David Storch (2015). Differential effects of temperature change and human impact on European Late Quaternary mammalian extinctions. Global Change Biology 21: 1475–1481. doi:10.1111/gcb.12763
Yarnold, David (2019). "Birds Are Telling Us It's Time to Take Action on Climate." Audubon Magazine, Fall 2019 climate issue. Retrieved from: https://www.audubon.org/magazine/fall-2019/birds-are-telling-us-its-time-take-action-climate
INDEX
Note: Page numbers followed by “n” denote endnotes.
The Act of Creation (Koestler) 48
Adolphs, Ralph 107
Age of Reason 25, 26, 111
Ahmad, Subutai 62n2
Alice (Carroll) 5, 11, 20–21, 22, 49, 78, 81
Alice’s Adventures in Wonderland (Carroll) 8–9, 87, 102–103
The American Economic Review (Simon) 61
American Sociological Review (Merton) 33
arbitrary 46, 107, 134, 135
Arnold, Matthew 135
artificial 8, 46, 107, 134, 135
artificial intelligence (AI) 134
Barrett, Deirdre 84, 98–99
Barrett, Lisa 111
Barth, Fredrik 117, 118–119
Baum, L. Frank: The Wonderful Wizard of Oz 13–14
Beckes, Lane 62n2, 105, 106, 110
behaviorism 59, 60, 61; ruling premise of 59
being human 40, 46, 48, 51, 55, 69, 70, 73; capability 122; demands of 16; fantasy and imagination 49; opportunities 133–134; pros and cons of 125; strengths and weaknesses 129–133; SWOT 126–129, 136; threats 135; vulnerabilities of 28
belief 119; beyond 35–36; suspending 32; willing suspension of 146–147
Beyond Freedom and Dignity (Skinner) 60
bisociation 48, 49
Borgatti, Stephen 132
Bowden, Margaret 48
brain, human 9–10, 22, 56, 70–71
Brooklyn Bridge, selling of as a swindle 103–104
Brubaker, Rogers 117
Carroll, Lewis 20, 49; Alice’s Adventures in Wonderland 8–9, 87–88, 102, 103; Through the Looking-Glass, and What Alice Found There 9
Challenger, George Edward (Doyle) 5, 11, 18–20, 21, 67, 69, 70, 71, 73, 78, 96
Chemero, Anthony 62n2
Chomsky, Noam 60
Chopin, Frédéric 50
Christie, Agatha: “The Disappearance of Mr. Davenheim” 82
climate change 142–143
Coan, James 105, 110
cognitive map 122
Cohen, Gabriel 103–104
Coleridge, Samuel Taylor 32, 146
collaborative human creativity 50
common sense 56, 83, 116, 117, 119; convictions 29, 36; idea of “groups” 118; wisdom 28
conscious awareness 14, 86, 97
conventional wisdom 1, 29, 42, 47, 56, 83–84, 107, 143, 146
Craig, Scott 33
creativity 2, 49, 87, 92–93; collaborative human 50; constructing fantasies 94–95; dreaming 97; essence of fantasy 96–97; human 50, 85; Placebo effect 95–96; quantum reality 91; worrying about things 97–98
Crichton, Michael 18, 19, 21, 67
Crowe, Jonathan 133
Culture and Personality (Wallace) 121
Darwin Awards 28–29
Darwin, Charles 68–71, 111; On the Origin of Species by Means of Natural Selection 69; theory of evolution 42, 46
Darwinian evolution 42
Davidson, Richard J. 105
default network 82
Dehaene, Stanislas 77
Descartes, René 6
descent with modification 69
“The Disappearance of Mr. Davenheim” (Christie) 82
disbelief 32–36, 95, 146
diverse 40, 46, 107, 131, 134, 135
Doyle, Sir Arthur Conan 18, 54–56, 66; “The Adventure of the Greek Interpreter” 54, 66; “The Boscombe Valley Mystery” 20; The Lost World 18, 19, 67; “A Scandal in Bohemia” 29, 80; The Sign of the Four 16, 17, 121; A Study in Scarlet 51
dreams, dreaming 83–84, 87, 97–99
dual process theory 14, 82, 83
Dunbar, Robin 45
Dyson, Freeman 80, 81
emotions 26, 47, 104, 106–107, 110–111, 118; social regulation of 105
Enlightenment, the 25, 26, 59, 74, 111, 130, 132, 135, 146
evolution, evolutionary biology 42, 44, 46–47, 56, 67, 68–70, 140
extinction 139–140, 143–145; current 141–142; human 145; in past 140–141
fantasy 48–50, 67, 81, 98, 103; constructing 94–95; essence of 96–97; and imagination 48–50; problem-solving 84; science 125, 126; strengths and weaknesses of 132
Freudian psychoanalysis 78, 79, 81
Freud, Sigmund 5, 111
Fuentes, Agustín 10, 49, 50
functional magnetic resonance imaging (fMRI) 81, 105
Gantman, Ana P. 62n2
Goodenough, Ward 5
Gould, Stephen Jay 140
Granovetter, Mark 118, 131
Great Human High-Five Advantage 43, 51, 81, 88, 96, 148; brain basics 41–42; fantasy and imagination 48–49; social collaboration 49–50; social learning 45–46; social networks 46–48; social nurturance 43–44; SWOT analysis of 128–133
Grewal, Iqbal S. 30
Gross, James J. 104
habits 14, 19, 20, 75, 80, 83; forming 73–74; instincts versus 67–68; and learning 71; self-serving 21
Hawkins, Jeff 62n2
Hayes, Taylor R. 62n2
Henderson, John M. 62n2
Henly, William Ernest 91–92
Herculano-Houzel, Suzana 41
Hitchcock, Alfred 110
Hobbes, Thomas 132
Holmes, Sherlock (Doyle) 5, 11, 16–18, 19–21, 22, 29–30, 36, 49, 51, 54, 55, 58, 66–67, 70, 71, 72–73, 74, 78, 80, 81, 82, 99, 121, 122, 130, 132
Homo sapiens 26–27, 36, 47, 50, 126–129, 148
homunculus fallacy 88n1
human behavior 26, 110; mainspring of 74; mentalistic explanations for 59; science of 60
human brain 6, 9–10, 14–16, 18, 19, 21, 22, 28, 43, 44, 94, 97, 106, 110, 120, 122, 126, 130, 135, 145, 147; basics 41–42; fantasy and imagination 48–49; mysteries of 10; three propositions 7–9
human conceit 9, 25–26
human creativity 50, 85
human extinction 145
human foolishness 26, 36
human imagination 8, 49
human isolation and loneliness: perils of social media 116; private truths and public realities 121–123; social life 119–121; social networks 117–118; social realities 117
human nature 5, 40, 41, 74, 132; and brain 22, 61; characteristics of 43, 49; mysteries of 11, 21, 36; worries 97
human senses 6, 7, 51, 54–55, 56, 57, 61, 62, 63, 73, 83
Huxley, Thomas Henry 126
imagination 18; fantasy and 48–50; human 8, 49; strengths and weaknesses of 132
instincts 71, 72, 74; versus habits 67–68
The Invisible Man (Wells) 127
The Island of Dr. Moreau (Wells) 127
Jacob, François 143
James, William 42
Kahneman, Daniel 10, 14–15, 16, 79–81, 88, 99; Thinking, Fast and Slow 10, 14, 79
Kandel, Eric 79
Kekulé, August 84
Kelly, Walt 36
Koestler, Arthur: The Act of Creation 48
Krueger, Joel 110
lazy, laziness 79–81, 99
Levins, Richard: on modeling 15–16
Lieberman, Matthew D. 43, 44, 129, 130, 131–132
life’s balancing act 55–56
Linnaeus, Carl 26, 50–51, 148; Systema Naturae 51
Loader, Brian 134
Locke, John 132
The Lost World (Doyle) 18, 19, 67
Lupyan, Gary 56, 57
Maney, Kevin 134
“The Man of the Year Million” (Wells) 126
Matheny, Jason G. 145
mazeway, mazeway equivalence 120–122
Mendeleyev, Dmitri 83
Merton, Robert K.: American Sociological Review 33
mind-body dualism 11n2
mindfulness 16–18
mind wandering 82
mob violence 109, 110
modal model of emotion 62n1
multiple personalities 16
niche construction 94
Nuzzo, Regina 147
objective-reflective-normative thinking 119
obligate social animals 44, 46
obligate social learners 46
Onnela, Jukka-Pekka 116
On the Origin of Species by Means of Natural Selection (Darwin) 69
Parker, George C. 104
Passingham, Richard: Cognitive Neuroscience: A Very Short Introduction 11n1
Pasteur, Louis 86–87
perception 56–58, 61–62, 120
Perlmutter, Saul 147
persecutory delusional disorder 56
PEST analysis 127–128
Pinker, Steven 68, 72
Placebo effect 95–96
Poincaré, Henri 84–86, 87
positron emission tomography (PET) 81
Powers of Two: Finding the Essence of Innovation in Creative Pairs (Shenk) 2
Prager, Robert 107–112, 115, 130
Proceedings of the National Academy of Sciences (PNAS) 141, 142
propositions about the human brain 7–9, 91–92
quantum reality 91
Rand, Ayn 130
rapid eye-movement sleep 97
rational behavior, rationality 8, 20, 21, 22, 24–25, 59, 61, 81, 132, 146
Rousseau, Jean-Jacques 132
Rovelli, Carlo 146–147
Rünger, Dennis 73
Sandberg, Anders 145
A Scandal in Bohemia (Doyle) 29, 80
Schaefer, Hillary S. 105
Schwartz, Earl 109
self-deception 104, 107
senses, human 6, 7, 51, 54–55, 56, 57, 61, 62, 63, 73, 83
shared intentionality 119
Shenk, Joshua Wolf: Powers of Two: Finding the Essence of Innovation in Creative Pairs 2
Siegel, Robert Anthony 95–96
The Sign of the Four (Doyle) 16, 17, 121
Silberstein, Michael J. 62n2
Simon, Herbert A. 8, 59, 61–62, 88, 107, 146; The American Economic Review 61; pattern recognition device 61
six degrees of separation 47
Skinner, B. F. 58–60, 61, 73; Beyond Freedom and Dignity 60
sleep 97
social brain hypothesis 45, 110–111
social cliques 118, 131
social collaboration 49–50; strengths of 132; weakness of 133
social learning 45–46, 48; strength of 130–131; weakness of 131
social life 115, 119–121, 130
social living, human commitment to 130
socially embedded selves 106
socially motivated brains 105–106
social media 34, 116, 134
social network analysis (SNA) 117, 118
social networking skills 48
social networks 46–48, 106, 110, 117–118; strengths of 131; weaknesses of 131–132
social nurturance 43–44, 48; strengths of 129–130; weaknesses of 130
Sociobiology (Wilson) 117
Spencer, Herbert 71
Spiro, Emma 116
Steinbeck, John: Sweet Thursday 98
Stein, Gertrude 103
Stiles, Anne 126–127
Sussman, Robert 41
Sweet Thursday (Steinbeck) 98
SWOT analysis 126–135, 136; analysis of human nature 128–133
Systems (dual process theory) 10, 14–15, 16, 79, 80–82, 88, 99
Systema Naturae (Linnaeus) 51
Szanto, Thomas 110
Table of Periodic Elements 83
Terrell, Gabriel 4–5, 92–93, 94
Terrell, John 42, 98
Thaler, Richard 88
Thinking, Fast and Slow (Kahneman) 10, 14, 79
three-dimensional model of the mind 11, 22
three propositions, human brain 7–9, 91–92, 144
Through the Looking-Glass, and What Alice Found There (Carroll) 9
Tomasello, Michael 119, 120
tribe, tribalism 47, 133
trust 4–6, 24–25, 106
two-dimensional model of the mind 10, 11, 14, 15, 22
unconscious work 86
unexpected insights 84–86
unintended consequences 33
Van Bavel, Jay J. 62n2
Varela, Sara 141
von Pufendorf, Samuel 132
Wallace, Alfred Russel 83
Wallace, Anthony F. C. 119–121, 122, 131; Culture and Personality 121
The War of the Worlds (Wells) 125–126
Watson, John B. 58
Wells, Herbert George 136; The Invisible Man 127; The Island of Dr. Moreau 127; “The Man of the Year Million” 126; The War of the Worlds 125–126, 136
Whitehead, Alfred North 48
willing suspension of belief 32, 146–147
Wilson, Edward O. 47, 68, 72; Sociobiology 117
The Wonderful Wizard of Oz (Baum) 13–14
Wood, Wendy 73
worrying 97
Yellow submarine, the human skull as 2, 5, 7, 51, 73, 77, 122