320 90 7MB
English Pages 264 Year 2019
PERCEP TUAL LE ARNING
PERCEP TUAL LE ARNING The Flexibility of the Senses
Kevin Connolly
1
3 Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and certain other countries. Published in the United States of America by Oxford University Press 198 Madison Avenue, New York, NY 10016, United States of America. © Oxford University Press 2019 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer. CIP data is on file at the Library of Congress ISBN 978–0–19–066289–9 1 3 5 7 9 8 6 4 2 Printed by LSC Communications, United States of America
For Paul Liam, perceptual learner extraordinaire
CONTENTS
Preface
xi
PART I
THE NATURE OF PERCEPTUAL LEARNING
1 . How to Understand Perceptual Learning 1.1 Introduction 1.2 What Is Perceptual Learning? 1.3 A Taxonomy of Perceptual Learning Cases 1.4 The Offloading View of Perceptual Learning 1.5 Looking Ahead
3 3 7 18 28 36
2 . Is Perceptual Learning Genuinely Perceptual? 2.1 Introduction 2.2 Skepticism about Perceptual Learning as Genuinely Perceptual 2.3 Introspective Evidence That Perceptual Learning Is Genuinely Perceptual
38 38 41 45
Contents
2.4 Neuroscientific Evidence That Perceptual Learning Is Genuinely Perceptual 2.5 Behavioral Evidence That Perceptual Learning Is Genuinely Perceptual 2.6 Conclusion
48 57 59
PART II
THE SCOPE OF PERCEPTUAL LEARNING
3 . Learned Attention and the Contents of Perception 3.1 Introduction 3.2 The Phenomenal Contrast Argument 3.3 The Attentional Reply to the Phenomenal Contrast Argument 3.4 The Blind Flailing Model of Perceptual Learning 3.5 A New Attentional Reply to the Phenomenal Contrast Argument 3.6 Learned Attention and the Offloading View
65 65 69 72 76 88 99
4 . Learned Attention II: Sensory Substitution 101 4.1 Introduction 101 4.2 Attentional Weighting in Distal Attribution 103 4.3 Latent Inhibition as a Kind of Learned Attention 109 4.4 Applying Principles of Attentional Training to Sensory Substitution 116 4.5 Perceptual Learning and Perceptual Hacking 121 4.6 An Empirical Test for Determining the Nature of SSD Experience 124 4.7 Conclusion 126
viii
Contents
5. “Chunking” the World through Multisensory Perception 127 5.1 Introduction 127 5.2 The Kind of Conscious Awareness We Have in Multisensory Perception 135 5.3 Unitization as a Perceptual Learning Mechanism 139 5.4 Applying Unitization to Multisensory Cases 142 5.5 Objections and Replies 147 5.6 Unitization and the Offloading View 151 5.7 Conclusion 153 6. Learning to Differentiate Properties: Speech Perception 154 6.1 Introduction 154 6.2 The Phenomenal Contrast Argument for Hearing Meanings 156 6.3 The Argument from Homophones 159 6.4 The Role of Differentiation in Speech Perception 164 6.5 Why Perceptual Learning Does Not Support the View That We Hear Meanings 170 6.6 The Offloading View and Speech Perception 174 6.7 Conclusion 177 7. Learning to Differentiate Objects: The Case of Memory Color 7.1 Introduction 7.2 Memory Color and Cognitive Penetration 7.3 A Brief Survey of Memory Color Studies 7.4 Why Memory Color Is Not a Mechanism for Color Constancy
ix
179 179 181 185 193
Contents
7.5 Memory Color and Perceptual Learning 7.6 Memory Color and the Offloading View 7.7 Conclusion
196 204 207
Conclusion: Perceptual Learning beyond Philosophy of Mind
209
Acknowledgments References Index
219 223 241
x
PREFACE
One day, in the middle of writing this book, I found myself waiting in a medical specialist’s examination room, typing up a section of the manuscript on my laptop. A couple of months earlier, my doctor had suggested I visit a specialist about a small area on my right forearm. My doctor did not seem to think it was a big deal, but he suggested I visit a specialist just to be safe. So I found myself sitting, waiting for the specialist, and typing up this manuscript. The specialist, who looked to be well into his seventies, finally opened the door. I put away my laptop, and he began to look at my arm slowly and meticulously. As he proceeded, it became clear that he was alarmed, much more so than the primary-care doctor had been. He called for a lab test. What the specialist was worried about turned out to be early melanoma, a serious form of skin cancer, caught soon enough that it had not yet spread. Many times during the course of writing this book I have thought back to that specialist. Sometimes I find myself wondering what sorts of prior events had led to his perception at that moment when I was sitting in that room. I imagine the long lines of patients he
P r e fa c e
has seen and the textbooks and journal articles he has read. I think about and I wonder what exactly enabled him to see in an expert way that day, and so many others. This book aims to make progress in our theoretical understanding of perceptual learning, both in terms of its nature and its scope. Discussions of perceptual learning can be found throughout the history of philosophy and psychology. William James (1890), for instance, writes about how a person can become able to differentiate by taste between the upper half and the lower half of a bottle for a particular kind of wine (p. 509). This can be understood as a case of perceptual learning—a long-term change in perception that results from practice or experience. Psychologists have been studying perceptual learning under that name since Eleanor Gibson wrote the first review article on the topic more than a half-century ago. Philosophers do not typically use the term. Yet cases of perceptual learning can be found in the literature from Diogenes Laertius’s third-century discussion of Stoic philosophy, to the work of the fourteenth-century Hindu philosopher Vedānta Deśika, and the work of the eighteenth-century Scottish philosopher Thomas Reid. Much more recently, cases of perceptual learning can be found in the work of Susanna Siegel, Christopher Peacocke, Charles Siewert, Galen Strawson, Berit Brogaard, Casey O’Callaghan, Tim Bayne, and many others. This book catalogs cases of perceptual learning in the philosophical literature for the first time. Why is perceptual learning philosophically significant? One reason is that it says something about the very nature of perception— that perception is more complex than it may seem from the first- person point of view. Specifically, the fact that perceptual learning occurs means that the causes of perceptual states are not just the objects in our immediate environment, as it might seem at first xii
P r e fa c e
glance. Rather, there is a long causal history to our perceptions that involves prior perception. When the expert wine taster tastes the Cabernet Sauvignon, for example, that glass of wine alone is not the sole cause of her perceptual state. Rather, the cause of her perceptual state includes prior wines and prior perceptions of those wines. Although there are some recent exceptions (see, for instance, O’Callaghan, 2011; Bayne, 2009; Brogaard, 2018; Brogaard & Gatzia, 2018; Chudnoff, 2018),1 philosophers have relied largely on their intuitions and on introspection to understand cases of perceptual learning. Arguably, however, psychology and neuroscience are now in a position to weigh in on philosophical claims about perceptual learning. This book offers an empirically informed account of perceptual learning for philosophers. The book also offers a way for philosophers to distinguish between different kinds of perceptual learning. In some cases, perceptual learning involves changes in how one attends; in other cases, it involves a learned ability to differentiate two properties, or to perceive two properties as unified (see Goldstone, 1998; Goldstone & Byrge, 2015). This taxonomy can help to classify cases of perceptual learning in the philosophical literature and to evaluate the philosophical claims drawn from these cases.
1. O’Callaghan (2011) explores the case of hearing speech before and after one learns the relevant language, and understands it as an instance of perceptual learning. He uses empirical evidence to argue that the phenomenal difference in a person’s perception when one learns a language is not because they now hear meanings, but because they now hear the linguistic sounds differently. (For more on this, see c hapter 6.) Bayne (2009) uses the case of associative agnosia, in which patients perceive the form of objects but not their categories. By contrasting this case with the perception of a typical perceiver, he argues that perception comes to represent high-level categories, such as when we come to perceive a tomato as a tomato. In much the same way that I do in this book, Brogaard (2018), Brogaard and Gatzia (2018), and Chudnoff (2018), all draw on the perceptual learning tradition of Eleanor Gibson, as well as more recent perceptual learning experiments in cognitive psychology, in order to support a wide array of conclusions in philosophy of mind and epistemology.
xiii
P r e fa c e
While there is a diverse array of cases of perceptual learning in the philosophical literature, this book also offers a unifying theory. The theory, very roughly, is that perceptual learning serves a function. It embeds into our quick perceptual systems what would be a slower task were it done in a controlled, cognitive manner. This frees up our cognitive resources for other tasks. For instance, a novice wine taster drinking a standard Cabernet Sauvignon might have to think about its features first and then infer that it is a Cabernet Sauvignon. An expert, by contrast, would be able to identify the type of wine immediately. This learned ability frees up cognitive resources for the expert, which enables her to think about other things, such as the vineyard or the vintage of the wine. My account gives us a new way to understand perceptual learning cases in terms of cognitive resources and cognitive economy. Part I of the book focuses on the nature of perceptual learning; Part II focuses on its scope, rethinking several domains in the philosophy of perception, given perceptual learning. To give just one example (which I take to involve attentional learning), some philosophers (most notably, Siegel, 2010) have held that because natural kinds, such as pine trees or wrens, can come to look different to us through perceptual learning, it is evidence that perception can represent such natural kinds (in addition to low-level properties such as colors, shapes, and textures). I argue that what actually happens in such cases is that we come to attend to different low-level properties (such as the prototypical pine-green color of the pine tree or the round shape of the wren). Such cases involve the training of attention. The book begins, however, with an introductory chapter on perceptual learning that answers the following questions: What is perceptual learning? What are the different kinds of perceptual learning? And what function does it serve for us? 
These are the issues we now turn to in chapter 1. xiv
18
9
0
–9
–18 –2
–1
0
1
2
L – M (% cone contrast)
Figure 7.2. Hansen and colleagues (2006) asked participants to make fruit stimuli neutral gray, using a dial. Participants should have dialed the colors to point (0, 0) on the axes. Instead, they overshot, making the fruits closer to their opponent colors. Hansen and colleagues interpreted this result to mean that participants saw the grayed fruits as having more of their prototypical color than the fruits actually had. Source: Hansen et al. (2006).
Figure 7.3. Witzel and colleagues (2011) found the memory color effect for ten out of fourteen of these artificial objects (that is, for everything except the fire extinguisher, heart, Coca-Cola logo, and mouse cartoon figure). Source: Witzel et al. (2011).
(a)
(b)
Figure 7.5. This figure illustrates one major challenge for explaining memory color as a case of color constancy. Starting with the two cube images (a), the blue tiles on the cube on the far left are actually the same shade as the yellow tiles on the cube to the right of it. Yet they are perceived as different shades because of color constancy. Importantly, nearly all humans will experience this effect. By contrast, the discolored Pink Panther (b) on the right will be experienced as pinker than it actually is only if someone has seen the Pink Panther before. Source: (a) Lotto and Purves (2002); (b) The original undiscolored image is from Witzel et al. (2011).
Figure 7.6. This picture shows clusters of bananas in front of a banana plant background. Several psychology studies have provided evidence that under some conditions (such as dim lighting), we see objects that have prototypical colors (such as yellow bananas) as more like their prototypical color. My claim is that this effect enables us to more easily differentiate objects from their backgrounds, such as bananas from the banana plant background in this figure. This is especially relevant in dim lighting situations. Image from http://www. banana-plants.com/index.html.
PART I
THE N ATURE OF PERCEP TUAL LE ARNING
Chapter 1
How to Understand Perceptual Learning
1.1 INTRODUCTION People sometimes say things like the following: Cabernet Sauvignon tastes different to an expert wine taster than to a novice; or, Beethoven’s Ninth Symphony sounds different to a seasoned conductor than it does to someone just hearing it for the first time. Both these examples are cases of perceptual learning, very roughly (to be elaborated on in this chapter), cases of long-term changes in perception that result from practice or experience (see Gibson, 1963, p. 29). Opening examples aside, one need not be an expert to have undergone perceptual learning. Practice or experience with Cabernet Sauvignon or Beethoven’s Ninth Symphony might result in long-term perceptual changes, even if those changes fall short of full-blown expertise. As I mentioned in the preface, philosophers do not typically use the term “perceptual learning.” Yet, in the philosophical literature, there are a great many examples that would seem to count as cases of it.1 Christopher Peacocke (1992), for instance, writes 1. Note that unless otherwise specified, the philosophers that propose the following examples suggest them as cases of perceptual changes, not just extra-perceptual changes. I return to this distinction in section 1.2.
3
T h e N at u r e o f P e r c e p t u a l L e a r n i n g
about what happens perceptually when someone learns to read a language written in Cyrillic script. He claims there is a difference “between the experience of a perceiver completely unfamiliar with Cyrillic script seeing a sentence in that script and the experience of one who understands a language written in that script” (p. 89). Susanna Siegel (2010) describes how pine trees might look visually salient to someone who has learned to recognize them. She motivates this by suggesting that if you were tasked to cut down all and only the pine trees in a particular grove of trees, and you had never seen pine trees before, pine trees might begin to look visually salient to you after a while (p. 100). Similarly, Charles Siewert (1998) writes that after we learn to recognize a sunflower, certain features “ ‘stand out for us as significant’ and ‘go together’ ” (p. 256). He says there is “a difference between the way things look to us when they merely look somehow shaped, colored, and situated, and how they look to us when they look recognizable to us as belonging to certain general types” (p. 256). In the philosophical literature, cases of perceptual learning are not just limited to the last few decades. For instance, in his discussion of Stoic philosophy, the 3rd-century historian of philosophy Diogenes Laertius (1925) writes that “a statue is viewed in a totally different way by the trained eye of a sculptor and by an ordinary man” (p. 161). One way of understanding this claim is that there is a difference in the perception of an expert versus a layperson, when they see a statue. In discussing the perceptual expertise of jewelers, the 14th-century Hindu philosopher Vedānta Deśika writes, “[T]he difference among colours [of a precious stone], which was first concealed by their similarity, is eventually made apparent as something sensual” (translated, Freschi, 2011, pp. 12–13). On Vedānta Deśika’s view, perceptual learning enables the expert to see two colors of a 4
H o w t o U n d e r s ta n d P e r c e p t u a l L e a r n i n g
gem as distinct, where as a novice he saw them as the same colors. Later on, in the 18th-century, Thomas Reid ([1764]1997) famously wrote of how people “acquire by habit many perceptions which they had not originally” (p. 171). In just one of the many examples he gives, Reid writes about how a farmer acquires the ability to see the rough amount of hay in a haystack or corn in a heap (p. 172).2 One way of understanding Reid’s claim is that the farmer has undergone long-term changes in his perception, following experience with things he has encountered in his farm life. Cases of perceptual learning also occur in senses besides vision. Ned Block (1995), for instance, claims, “[T]here is a difference in what it is like to hear sounds in French before and after you have learned the language” (p. 234). As Casey O’Callaghan (2011, pp. 786–787) points out, Galen Strawson (2010, pp. 5–6), Michael Tye (2000, p. 61), Susanna Siegel (2006, p. 490), Jesse Prinz (2006, p. 452), and Tim Bayne (2009, p. 390) each make essentially the same claim as Block about what happens perceptually when we learn to hear a language. This auditory case and the visual cases given by Peacocke, Siegel, Siewert, Reid, Vedānta Deśika, and Diogenes Laertius, can all be understood as cases of perceptual learning: cases of long-term perceptual changes that result from practice or experience. This book develops an account of perceptual learning and its philosophical significance. In the next section, I give a more precise statement about what perceptual learning is, in order to differentiate cases of perceptual learning from cases that are not perceptual learning. In section 1.3, I distinguish three different kinds of
2. There is a recent debate about whether acquired perception for Reid is genuine perception. Copenhaver (2010, 2016) argues that it is; Van Cleve (2004, 2015, 2016) argues that it is not.
5
T h e N at u r e o f P e r c e p t u a l L e a r n i n g
perceptual learning, and I use these distinctions to offer the first taxonomy of cases in the philosophical literature. This taxonomy is important both in this chapter and throughout the book, as it helps to clarify the roles that perceptual learning can legitimately play in the arguments that philosophers have made. In section 1.4, I offer a theory of the function of perceptual learning, which I call the “Offloading View,” a view that I continue to argue for throughout the book. The view is that perceptual learning serves to offload onto our quick perceptual systems what would be a slower and more cognitively taxing task were it to be done in a controlled, deliberate manner. The upshot is that this frees up cognitive resources for other tasks. At the outset, one might wonder why we should think perceptual learning really occurs (and is genuinely perceptual). In c hapter 2, I give an abductive argument that perceptual learning does occur and is perceptual. I draw on three converging bodies of evidence for this, evidence from three different levels of analysis. First, there are the introspective reports just mentioned of long-term changes in perceptual phenomenology due to learning, which philosophers and others have been raising independently from one another for many centuries now. Secondly, there is evidence from neuroscience. In particular, a battery of studies provide evidence that perceptual learning creates neural changes specifically in the primary sensory areas (both visual and non-visual; see Furmanski, Schluppeck, & Engel, 2004; De Weerd et al., 2012; Braun et al., 2000). Thirdly, in line with both the phenomenological and neuroscientific evidence, there is a body of behavioral evidence from psychology, much of which I will introduce in this chapter. The book also offers a further argument that perceptual learning occurs (in the perceptual sense in which I am understanding it). In chapters 3 through 7 I offer independent arguments that perceptual 6
H o w t o U n d e r s ta n d P e r c e p t u a l L e a r n i n g
learning occurs in each of five different perceptual domains: in natural kind recognition, sensory substitution, multisensory perception, speech perception, and color perception. If I am right about those cases, then this is a further argument that perceptual learning really occurs (and is genuinely perceptual).
1.2 WHAT IS PERCEPTUAL LEARNING? What are the common characteristics of Peacocke’s Cyrillic case, Siegel’s pine tree case, Siewert’s sunflower case, Diogenes Laertius’s statue case, Vedānta Deśika’s gemstone case, Reid’s haystack case, and Block’s case of hearing French? Loosely following E. J. Gibson (1963, p. 29), I understand these and other cases of perceptual learning to be cases of long-term changes in perception that are the result of practice or experience. Let me say something about each part of this description, and by doing so, demarcate cases of perceptual learning from cases in the philosophical literature that are similar but do not count as perceptual learning.
Perceptual Learning as Long-Term Perceptual Changes Perceptual learning involves long-term changes in perception. This rules out short-term adaptive effects such as the waterfall illusion (see Gibson, 1963, p. 29; Gold & Watanabe, 2010, p. R46). For instance, if one looks at the trees on a riverbank, and then at an adjacent waterfall for a long time, and then back at the trees, one’s perception of the trees may have changed. In particular, they may appear as if they are moving upward (in the opposite direction of the waterfall’s downward motion). This is a perceptual change, and it is also the result of experience (in particular, the experience of 7
T h e N at u r e o f P e r c e p t u a l L e a r n i n g
looking at the waterfall). However, it is not a long-term perceptual change. The waterfall illusion and other short-term adaptive effects3 fade away after a short period of time, so they are not cases of perceptual learning. Cases like the waterfall illusion are the strongest reason to hold the long-term criterion when it comes to perceptual learning, but in my view, the criterion also captures an important fact about learning. If I just so happen to hit a single backhand in tennis but am unable to hit a backhand ever again, I have not actually learned a tennis backhand. Likewise, in a perceptual case, to be an authentic case of perceptual learning, the perceptual change cannot simply be temporary. It has to be long-term (although cf. Henke, unpublished manuscript). For example, if two very similar gemstones look different for a moment to a budding jeweler, but then appear the same as ever for the rest of her life, then this is not a case of perceptual learning. This is because while there was a short-term perceptual change, the change did not stick, and so it is not a case of learning at all. There are bound to be some difficult cases where it is unclear whether a putative case of perceptual learning counts as a long-term perceptual change or not. It may be that in difficult cases we need to look at the mechanism involved. If the mechanism is a clear perceptual learning mechanism, and meets the other criteria for perceptual learning, then we should count the case as a case of perceptual learning. However, if the mechanism is clearly not a perceptual learning mechanism, we should not count the case as an instance of perceptual learning. In short, the long-term criterion is a good heuristic for adjudicating between a large number of candidate cases 3. Another ordinary case of perceptual adaptation is the case of coming indoors from the snow and needing a short period of time for your eyes to adjust to the new environment.
8
H o w t o U n d e r s ta n d P e r c e p t u a l L e a r n i n g
of perceptual learning, but in some borderline cases, we may need to look at the mechanisms involved to determine whether the case counts as perceptual learning. Perceptual learning is not the same as cognitive penetration, and one reason why is because perceptual learning involves long-term perceptual changes, while cognitive penetration need not. Cases of cognitive penetration are cases in which “the phenomenal character of perceptual experience [are] altered by the states of one’s cognitive system, for example, one’s thoughts or beliefs” (Macpherson, 2012, p. 24).4 To take an example from Siegel (2012), suppose “Jill believes that Jack is angry at her, and this makes her experience his face as expressing anger” (p. 202). This would be a case of cognitive penetration. Yet, it need not be a case of perceptual learning. Cases of perceptual learning are cases of long-term perceptual changes. But the change in the look of Jack’s face need not persist beyond the moments that Jill sees it (assuming, reasonably, that Jill does not always believe that Jack is angry with her). If Jill updates her belief soon after and no longer believes that Jack is angry at her, the cognitive penetration goes away. So at least some cases of cognitive penetration are not cases of perceptual learning, since they are not long-term. Arguably, some cases of perceptual learning, on the other hand, are also not cases of cognitive penetration. Chicken sexers,
4. Following Pylyshyn (1999), Macpherson also often adds a “semantic criterion” to this description of cognitive penetration (see Macpherson, 2012, p. 26; 2017, pp. 9–10). For instance, in her 2017 paper, she considers a case in which “[y]ou believe that aliens might land on Earth. The belief causes you to be stressed. The stress causes a migraine which causes you to experience flashing lights in the sky” (p. 9). Many philosophers will have the intuition that such a case does not count as cognitive penetration. Because of this, Macpherson thinks that the definition of cognitive penetration requires “that there be a causal, semantic link between each of the steps in the chain that lead from the belief to the subsequent perceptual experience” (2017, p. 10).
9
T h e N at u r e o f P e r c e p t u a l L e a r n i n g
for instance, undergo long-lasting attentional shifts as the result of practice sexing chicks (see Biederman & Shiffrar, 1987, as cited in Pylyshyn, 1999, p. 359). As I will argue, this is a case of perceptual learning. It is a long-lasting perceptual change as the result of practice or experience (more on this in section 1.3). In the literature on cognitive penetration, however, attentional shifts are very often not considered to be cases of cognitive penetration (see Macpherson, 2012, p. 28; Deroy, 2013, p. 95; Raftopoulos, 2005, p. 81; and Pylyshyn, 1999, pp. 359, 364; although cf. Stokes, 2014, pp. 28–30; 2018; Cecchi, 2014; Mole, 2015; and Wu, 2017). So the long-term attentional shifts that happen in chicken sexing are taken to count as perceptual learning, but often not as cognitive penetration (see Raftopoulos, 2001, sec. 2.3.2; and Raftopoulos & Zeimbekis, 2015, for more on the relationship between perceptual learning and cognitive penetration).
Perceptual Learning as Perceptual Perceptual learning involves long-term changes in perception. This distinguishes perceptual learning from changes in beliefs and emotions, among other non-perceptual changes that can occur. To see one reason why it is helpful to distinguish long-term perceptual changes from non-perceptual ones, consider a famous thought experiment from Daniel Dennett (1988) involving Mr. Chase and Mr. Sanborn, two coffee tasters at Maxwell House. Their duty is to make sure the coffee stays consistent over time. After six years on the job, Mr. Chase turns to Mr. Sanborn and says, “I hate to admit it, but I’m not enjoying this work anymore. . . . [T]he coffee tastes just the same today as it tasted when I arrived. But, you know, I no longer like it! My tastes have changed. I’ve become a more sophisticated coffee drinker. I no longer like that taste at all” (Dennett, 10
H o w t o U n d e r s ta n d P e r c e p t u a l L e a r n i n g
1988, p. 52). Mr. Chase has not undergone perceptual learning, and the reason is that although he has undergone a long-term change in his experience and that change is the result of practice or experience, it is not a change in perception. His aesthetic tastes have changed, but his perception has not. When it comes to his perception, as he puts it, “[T]he coffee tastes just the same today as it tasted when I arrived” (p. 52). Another reason it is important to understand perceptual learning in terms of long-term changes in perception is that doing this demarcates perceptual learning from changes in behavioral reactions that are based on perception. As Fred Dretske (2015) points out, we could allow a notion of perceptual learning to include improvements in our ability to perform perceptual tasks such as learning to distinguish between triangles and squares (p. 166, n. 6). However, such an account of perceptual learning would be problematic because it would allow in cases that do not involve perceptual changes at all (p. 166).5 In my account, perceptual learning is perceptual, not just behavioral. More generally, there is a difference between perceptual learning itself, and learning that is simply based on perception (such as perceptually based behavioral learning).6
Perceptual Learning as Resulting from Practice or Experience Perceptual learning involves long-term changes in perception that result from practice or experience. This distinguishes perceptual learning from other long-term perceptual changes. One might, for 5. Importantly, this view comes from Dretske’s later work. In c hapter 2, I discuss how Dretske (1995) is skeptical that there are widespread, genuinely perceptual changes due to learning. 6. Cf. Law and Gold (2008); and Chudnoff (2018, sec. 2), who do count mere perception- based learning as perceptual learning, in some cases.
11
T h e N at u r e o f P e r c e p t u a l L e a r n i n g
instance, undergo a long-term change in perception as the result of a sudden eye injury or brain lesion. But these are not cases of perceptual learning, since these long-term perceptual changes were not the result of practice or experience. My claim is that some cases of long-term perceptual changes are not perceptual learning since they are not the result of practice or experience. This is helpful for further understanding Dennett’s thought experiment involving the coffee tasters Chase and Sanborn. In the thought experiment, Mr. Chase says that his aesthetic tastes have changed (as I discussed earlier), but Mr. Sanborn also reports a change, albeit one of a different kind. Like Mr. Chase, Mr. Sanborn reports liking the coffee years ago when he started, but not liking it anymore. Unlike Mr. Chase, however, Mr. Sanborn reports, “[M]y tastes haven’t changed; my . . . tasters have changed. That is, I think something has gone wrong with my taste buds or some other part of my taste-analyzing perceptual machinery” (Dennett, 1988, p. 52). Unlike Mr. Chase, Mr. Sanborn has undergone a perceptual change. Yet the change as Dennett describes it is a mere biological change, a change in his “perceptual machinery,” as Mr. Sanborn puts it. It is not described as a long-term change in perception that is due to practice or experience. Dennett’s famous coffee taster thought experiment may seem at first glance to involve perceptual learning, but if my analysis is correct, it does not. Mr. Chase undergoes a change in aesthetic taste, but not one in perception. And Mr. Sanborn undergoes a change in perception, but it is a change due merely to biological factors, not to practice or experience. So, like the case of Mr. Chase, the case of Mr. Sanborn is not a case of perceptual learning. In perceptual learning, long-term perceptual changes result from practice or experience. But what types of experience? First, 12
H o w t o U n d e r s ta n d P e r c e p t u a l L e a r n i n g
it is important to recognize that pine trees or Cyrillic letters begin to look different to you not just because of experience in general, but because of experience with particular types of stimuli, such as pine trees or Cyrillic letters. At the same time, to count as perceptual learning, the change in the look of, say, a Marsh Wren need not result from prior experience specifically with Marsh Wrens. After all, the change may result from prior experience with wrens more generally.

When thinking about what kind of practice or experience is involved in perceptual learning, another important distinction is between supervised and unsupervised perceptual learning. Some perceptual learning occurs without any direction whatsoever. For instance, though nobody has ever told you what to listen for, you may well be able to tell the difference, by sound alone, between hot water being poured and cold water being poured, and also be able to tell which is which.7 This is a case of unsupervised perceptual learning: through mere exposure to water pouring, you have become able to distinguish different kinds of it by sound alone. Other perceptual learning is supervised. For instance, someone might tell you what to listen for when you are trying to tell two words apart in a language you are learning, or they might tell you which features are important for identifying a pine tree. Unsurprisingly, supervision can speed up the learning process (see, for instance, Biederman & Shiffrar, 1987; Jarodzka et al., 2013). Supervised perceptual learning may involve coaches, trainers, or educators, who try to develop skills in those they are coaching, training, or educating. One such example is the baseball
7. To test yourself, see National Public Radio, What does cold sound like? See if your ear can tell temperature [Radio Segment], All Things Considered, July 5, 2014. Retrieved from http://www.npr.org/2014/07/05/328842704/what-does-cold-sound-like.
or softball coach who constantly tells batters to keep their eye on the ball. Such supervision might tune a batter's attention for the long term.
Other Contrast Classes

1. PERCEPTUAL DEVELOPMENT

I have mentioned several classes so far that contrast with perceptual learning, including cases of cognitive penetration, merely biological perceptual changes, changes in judgments based on perception, and changes in behavioral reactions to perception. One further and important contrast class with perceptual learning is perceptual development. As infants and young children, our perceptual systems develop, and one question is whether this development is the result of learning or not. In the perceptual learning literature, Kellman and Garrigan (2009) consider and dismiss the view that all perceptual development is the result of learning; in particular, they think that recent advances in studying infant perception, including electrophysiological techniques, have provided data that tell against the view:

Although perception becomes more precise with age and experience, basic capacities of all sorts—such as the abilities to perceive objects, faces, motion, three-dimensional space, the directions of sounds, coordinate the senses in perceiving events, and other abilities—arise primarily from innate or early-maturing mechanisms (Bushnell, Sai, and Mullin, 1989; Gibson et al., 1979; Held, 1985; Kellman and Spelke, 1983; Meltzoff and Moore, 1977; and Slater, Mattock, and Brown, 1990). (Kellman & Garrigan, 2009, p. 57)
Put another way, the empirical evidence from the last few decades is clear: many basic perceptual abilities that come out of perceptual development are not learned, so they should not count as perceptual learning. Assuming that Kellman and Garrigan are correct that not all perceptual development is the result of learning, this prompts a need to distinguish between perceptual development, on the one hand, and perceptual learning, on the other. A plausible way to do this is to say that the abilities mentioned by Kellman and Garrigan—such as the abilities to perceive three-dimensional space and the directions of sounds—which arise from innate or early-maturing mechanisms, fall under the category of perceptual maturation. Perceptual maturation contrasts with perceptual learning, and perceptual development then involves both perceptual learning and perceptual maturation.

What exactly is the difference between perceptual learning and perceptual maturation? The distinction is difficult to draw cleanly, but there are several tendencies worth noting. As Fahle (2002) puts it, "unlike learning," maturation "ascribe[s] the main thrust of the changes in a behavior to genetics, not the environment" (p. xi; see also Goldstone, 1998, p. 586). According to Fahle (2002), "[C]hanges in observable behavior are seen as the consequences of the growing and maturation of the organism along a largely predefined path rather than as a consequence of information gathered through interaction with the outside world" (p. xi). Put another way, in perceptual maturation more so than in perceptual learning, the main thrust of the perceptual changes is genetics. While there are bound to be messy cases, the tendency is that in perceptual learning more so than in perceptual maturation, the main thrust of the perceptual changes is interaction with the environment. Furthermore, perceptual maturation is distinct from perceptual learning in that it tends to follow a predefined path. That is to say, perceptual maturation can be thought of as the natural unfurling of perceptual abilities.

2. PERCEPTION-BASED MOTOR SKILLS

A further point of contrast with perceptual learning is motor skill, including even those motor skills that are perception-based. Consider, for instance, a study by Williams and Davids (1998), which reported that when expert soccer players defend against opponents, they focus longer on an opponent's hips than non-experts do. Suppose, quite plausibly, that nobody ever instructed the soccer players to attend in that way. Suppose instead that practice and experience tune the attention of the players, such that when they then defend, they attend more to the opponents' hips. As I will argue, this tuned attention is a long-term change in perception that results from practice or experience. That is, it is an instance of perceptual learning (see section 1.3). At the same time, it serves to enable certain perception-based motor skills. For instance, attending to the hips is part of what enables soccer players to keep an offender in front of them, or to keep an offender from completing an easy and uncontested pass, or to keep an offender from shooting and scoring. That is to say, without the attentional tuning, expert soccer players would not be able to perform as high above baseline as they do.

Perceptual learning can enable perception-based motor skills, yet it is important to distinguish these motor skills from perceptual learning. In fact, arguably, perceptual learning does not in itself give you a skill, properly speaking. One reason for this, drawing on Stanley and Krakauer (2013), is that acquiring a skill quite plausibly requires receiving instruction (at least initially) or observing someone else, whereas the attentional tuning case just described would seem to require none of that (para. 13). Furthermore, arguably, as Stanley and Krakauer put it, "our skilled actions are always under our rational control" (para. 14; see also Stanley & Williamson, 2017, p. 718). Yet there is an important sense in which one cannot control a tuned attentional pattern. Goldstone, for instance, cites a study on attentional tuning by Shiffrin and Schneider (1977). In that study, letters were used first as targets in the experiment but were later used as distractors to be ignored (Goldstone, 1998, p. 589). Because of their prior training with the letters, the participants' attention became automatic with respect to the letters in the scene, even though they were deliberately trying to ignore them. It may seem at first glance that this effect would be short-lived, and so would not count as perceptual learning. However, Qu, Hillyard, and Ding (2016) found that such an effect lasted for at least three to five months. The general lesson is this: after training, it can be difficult to control a tuned attentional pattern, because attention can become automatic toward particular properties. Perceptual learning is not always under our rational control, unlike skills (as Stanley and Krakauer describe them).

3. OTHER CASES

In addition to perceptual development and perception-based motor skills, other points of contrast with perceptual learning include perceptual changes brought about by drunkenness or depression (both mentioned by Siegel, 2010, p. 107), as well as drug-induced perceptual states, and perceptual changes that occur due to time dilation. Changes brought about by drunkenness or drugs may well be perceptual changes. But they are rarely long-term, nor are the particular perceptual changes brought about by practice or experience in a way that would count as learning.8 The same can be said about cases of time dilation—cases where one illusorily perceives objects, properties, or events as slowing down (see, for instance, Phillips, 2013; Lee, 2014, pp. 10–11). Time dilation involves a change in one's perception, but such cases are not long-term, and the perceptual changes are not brought about by practice or experience as part of a learning process. In instances of depression, as Siegel points out, the world is said to look gray. This would be a perceptual change, and it could potentially be long-term. However, the perceptual change is not brought about by practice or experience in a way that would count as learning.

8. An interesting case is the case of hallucinogen-persisting perception disorder (HPPD). People with HPPD continue to see hallucinations long after taking hallucinogenic drugs. For instance, someone might see trails from moving objects years after they have last taken LSD. This is a long-term change in perception, but it does not result from practice or experience in a way that would count as learning.

1.3 A TAXONOMY OF PERCEPTUAL LEARNING CASES

Now that we have a better sense of what perceptual learning is, my goal in what follows is twofold. First, I want to draw some distinctions between different cases of perceptual learning. Second, I want to say what cases of perceptual learning have in common. In this section, I draw some distinctions. I also classify cases of perceptual learning in the philosophical literature for the first time, using these distinctions. In the next section, I give an account of the common feature of cases of perceptual learning.

Cases of perceptual learning can be framed in different ways. We might talk about the perceptual change in a single person over time. In doing so, we are discussing perceptual learning as diachronic and intrapersonal. For instance, we might discuss how wine tastes
different to a person as she becomes an expert wine taster. However, we might also discuss perceptual learning interpersonally, that is, as involving two different people. So, for instance, we might compare the perception of an expert wine taster with the perception of another person who is a novice. We might also think of the perceptual learning process as being slow or fast. All perceptual learning involves long-term changes; however, the period needed to get to the long-term changes may differ. For instance, suppose I show you a picture of black and white patches that you have never seen. Suppose that I then point out to you that some of the patches make up a Dalmatian, and get you to see it in that way.9 This is a case of fast perceptual learning, assuming that it is a long-term change in your perception. However, there are also many instances of slow perceptual learning. For example, it may take months or years of exposure to wine before it tastes different to you due to learning (see Raftopoulos, 2001, p. 443, for further discussion of slow perceptual learning).10
9. See Bach, M., Hidden figures—Dalmatian dog, Visual Phenomena & Optical Illusions, August 11, 2002. Retrieved from http://www.michaelbach.de/ot/cog-Dalmatian/index.html.
10. A helpful point of comparison with slow perceptual learning is change blindness, what Fred Dretske (2004) describes as "difficulty in detecting, visible—sometimes quite conspicuous—differences when the differences are viewed successively" (p. 1). Demonstrations of change blindness, for instance, might have a participant look at a picture of a farm on a computer screen. A cornfield taking up most of the picture is slowly gaining in height, but the participant will not typically detect the change. In many instances, slow perceptual learning is like such a case of change blindness, because in many instances of slow perceptual learning, the learner is unaware of the particular gradual change. This is not to say that the changes in perceptual learning are always entirely gradual, in the way that the perceptual changes when looking at a demonstration of change blindness are. Furthermore, unlike in change blindness, in perceptual learning the blindness is to a feature of one's own changing percept, not to changing features of the world. So while a participant in a change blindness study might not detect the change in the actual height of the cornfield, someone undergoing perceptual learning might not detect a change in the look of a pine tree. Slow perceptual learning also typically happens over a much longer period of time than has been shown in the standard demonstrations of change blindness in the literature. The farm-picture demonstration might take fifteen seconds, whereas the changes in perception during perceptual learning might take several days or months to occur.

In what follows, I want to distinguish three different kinds of perceptual learning: differentiation, unitization, and attentional weighting. These kinds of perceptual learning have been well studied by psychologists (see, for instance, the surveys of Goldstone, 1998; Goldstone, 2003; and Goldstone & Byrge, 2015). They are the three types of perceptual learning described in the most recent survey on the topic (Goldstone & Byrge, 2015).11

Differentiation

In cases of "differentiation," as a result of practice or experience, what previously looked12 to a subject (phenomenally) like a single object, property, or event, later looks to her like two or more objects, properties, or events.13 William James (1890), for instance, writes about how a person can become able to differentiate by taste between the upper and lower half of a bottle, for a particular kind of wine (p. 509). This can be understood in terms of differentiation: what one previously tasted as a single thing, one later tastes as two distinct things, as a result of learning. In another example of differentiation, experimenters trained native Japanese speakers living in the United States, for whom English was their second language, to better distinguish between the phonemes /r/ and /l/, which they originally had a difficult time doing (Logan, Lively, & Pisoni, 1991).14 Another potential example is the case of dressmakers, who develop fine-grained hand-eye skills through their stitching and sewing, and who have been found to have better stereoscopic acuity than non-dressmakers (Chopin, Levi, & Bavelier, 2017).15 That is to say, they can better differentiate between different distances using binocular vision.

The concept of differentiation can help us to better understand Block's (1995) claim that "there is a difference in what it is like to hear sounds in French before and after you have learned the language" (p. 234). Siegel (2010) describes the prior experience as an experience in which one "can't parse into words and sentences" (p. 99). Casey O'Callaghan (2011) makes the same point: "[H]earing foreign language is like hearing a mostly unbroken sound stream. Speech in your native language, however, auditorily seems segmented into units separated by gaps and pauses" (p. 801; O'Callaghan cites Barry C. Smith, 2009, p. 185, and Galen Strawson, 2010, p. 6, as making the same point; Jack Lyons does as well in Lyons, 2009, p. 104). Put another way, in hearing a language one cannot understand, one largely does not hear differentiated words or sentences. On the other hand, in hearing a language one can understand, one does hear differentiated words and sentences. The concept of differentiation helps us better understand the case.

11. See also Goldstone, Braithwaite, and Byrge (2012) and Goldstone and Byrge (2015). Goldstone (1998) also lists a fourth mechanism of perceptual learning called stimulus imprinting, in which "detectors (also called receptors) are developed that are specialized for stimuli or parts of stimuli" (p. 591). As an example, Goldstone highlights the fact that cells in the inferior temporal cortex can have a heightened response to particular familiar faces (Perrett et al., 1984, cited in Goldstone, 1998, p. 594). I do not treat the case of stimulus imprinting in this book. For one, the recent Goldstone and Byrge (2015) survey on perceptual learning does not include it, nor does their 2012 survey (Goldstone, Braithwaite, & Byrge, 2012). Also, I am not entirely convinced that the effects of stimulus imprinting are truly perceptual, instead of just involving an increase in speed and accuracy (that is, a change only in response rather than in perception). For more on stimulus imprinting, see Goldstone (1998, pp. 591–596) and Goldstone (2003, pp. 239–241).
12. I use the term "looks" in this section, but I do not mean to imply by it that the types of perceptual learning I describe are strictly visual. Feel free to substitute "feels," "smells," "tastes," etc., as appropriate. Also, in addition to learning effects on the ways objects, properties, or events look, feel, smell, taste, etc., my view is that there are also multisensory learning effects. For instance, there are changes in the way one experiences the visual jolt and the auditory clang of a cymbal as coupled, due to the fact that one has learned through experience that the jolt and the clang are part of the same event. In chapter 5, I argue for this view and expand upon it.
13. The view that underwrites my project is that there are intentional objects (objects, properties, events, relations), and that they change due to learning. There is no need to posit looks in addition to intentional objects on my view. When I say that the look of an object changes, I do not mean that this is something else changing in addition to the change in intentional objects.
14. Following O'Callaghan (2015), I use the right-side-up "r" rather than the upside-down "r" for readability throughout this book. Importantly, this deviates from the way it is used in the International Phonetic Alphabet.
15. The authors note that it is always possible that dressmakers go into dressmaking because they have this ability, rather than the ability being a product of their training and experience. They also note that the ability could be a mixture of both factors (see Chopin, Levi, & Bavelier, 2017, p. 3).
Unitization

In cases of "unitization," as a result of practice or experience, what previously looked to a subject (phenomenally) like two or more objects, properties, or events, later looks to her like distinct parts of a single complex object, property, or event. To understand unitization, consider the results of a study done by Gauthier and Tarr (1997). On a computer, they constructed a set of photorealistic action-figure-like objects called "Greebles" (see Figure 1.1). Each Greeble has a particular body type and four appendages. The family of each Greeble is determined by the shape of the large central part of the body, rather than by the appendages (Gauthier et al., 1999, p. 569). Each Greeble also has a sex, which is defined by the direction of the appendages (as either all up or all down; Gauthier et al., 1998, p. 2403). Individual Greebles of the same family and sex differ only in the shape of the appendages (Gauthier et al., 1999, p. 569).

In Gauthier and Tarr (1997), participants were tasked with identifying Greebles. At first, the participants would slowly locate the important features of a particular Greeble, and then infer that it was of a particular family and sex. After practice, however, the participants would begin to see the Greebles not as collections of features but as single units. This showed up in the fact that those trained with the Greebles performed much better than novices did on speed and accuracy tests.

Figure 1.1. Gauthier and Tarr (1997) used "Greebles" to demonstrate perceptual learning. At first, participants would have to detect several features of a particular Greeble and then infer its sex and family. After repeated exposure to the Greebles, however, participants were able to detect their sex and family immediately and accurately. Psychologists call this "unitization," when "a task that originally required detection of several parts can be accomplished by detecting a single unit" (Goldstone, 1998, p. 602). The two rows in Figure 1.1 divide the Greebles into two different sexes, while the five columns divide them into five different families. Source: Behrmann et al. (2005).

Unitization occurs for several categories of things in addition to Greebles. For instance, due to practice or experience, we come to unitize words (as opposed to non-words), seeing them as single units. Different words are, of course, qualitatively distinct from one another. Nonetheless, when we see a word, we standardly see it as
unitized. This is part of what it is to see a member of the word category. By contrast, we do not typically unitize non-words (for a discussion of this literature, see Goldstone, 1998, p. 602). Unitization also happens for human faces, and for dog breeds when perceived by dog experts (although not typically by non-experts) (see Goldstone, 1998, p. 603). The concept of unitization helps us to better understand Siewert's (1998) claim that when we recognize kinds (such as sunflowers), certain "aspects of shape, color, and position . . . 'go together'" (p. 256). Just as the novice who looks at Greebles has to slowly run through a checklist of features, since she is unable to see that certain features go together, so too is the person who has never seen a sunflower before unable to see that certain aspects of shape, color, and position go together. Through practice or experience, however, each person is able to see certain features of the Greeble or the sunflower as going together.
Attentional Weighting

In cases of "attentional weighting" (also called "attentional tuning"), the phenomenal look of an object, property, event, or spatial region changes as a result of learning to attend (or learning not to attend) to that object, property, event, or spatial region. Attentional weighting effects have been demonstrated at length in sports science, among other domains. I have mentioned a 1998 study on expert soccer players, which found that when they defend against opponents, they focus longer on an opponent's hips than non-experts do (Williams & Davids, 1998). A 2002 study on expert goalkeepers found that during penalty kicks, they fixate longer on the non-kicking leg, while non-experts fixate longer on the trunk area (Savelsbergh et al., 2002). As well, a 2010 study on expert fencers found that they focus longer on the upper trunk area of their opponents than non-experts do (Hagemann et al., 2010).

All of these cases involve attentional weighting, a kind of perceptual learning whereby "perception becomes adapted to tasks and environments . . . by increasing the attention paid to perceptual dimensions and features that are important, and/or by decreasing attention to irrelevant dimensions and features" (Goldstone, 1998, p. 588; Goldstone & Byrge, 2015, p. 819, give the same definition). For instance, the expert soccer goalkeeper better performs the task of defending a penalty kick by attending more to the non-kicking leg of the shooter and attending less to the trunk area (more on the relevant notion of attention later in this section).

As I detail at greater length in chapter 3, my view is that long-term shifts in attention should be thought of as involving changes in perception. Very roughly, this is because there is evidence that where you attend alters your perception of all sorts of features, including a color's saturation (Blaser, Sperling, & Lu, 1999), the size of a gap (Gobell & Carrasco, 2005), spatial frequency (Abrams, Barbot, & Carrasco, 2010), and contrast (Carrasco, Ling, & Read, 2004). Ned Block, who was the first philosopher to highlight these experiments, sums up that they "provide strong evidence for the claim that the phenomenal appearance of a thing depends on how much attention is allocated to it" (2010, p. 34). I agree with Block's evaluation.

The concept of attentional weighting helps us to better understand Peacocke's Cyrillic case. As Siegel interprets the case:

When you are first learning to read the script of a language that is new to you, you have to attend to each word, and perhaps to each letter, separately. In contrast, once you can easily read it, it takes a special effort to attend to the shapes of the script separately from its semantic properties. You become disposed to attend to the semantic properties of the words in the text, and less disposed to attend visually to the orthographic ones. (2010, p. 100)
If Siegel's interpretation of the case is correct, then it is a case of attentional weighting. One starts by attending separately to each word (or maybe even to each letter), but as one becomes familiar with the script, one decreases attention to the orthographic features.

To get more precise about the relevant notion of "attention" in attentional weighting, consider the common (but subtle) distinction between bottom-up and top-down attention. Wayne Wu (2014) puts the distinction as follows. Bottom-up attention is "roughly understood as attention which is engaged due purely to sensory input. . . . [I]t is defined as attention whose occurrence does not depend on non-perceptual representations such as the subject's goals (e.g., an intention to attend to an object)" (p. 281). When a sudden flashing light or loud bang immediately grabs your attention, this is a standard case of bottom-up attention. By contrast, top-down attention is "attention whose occurrence depends on a non-perceptual state, such as an intention to attend to a specific object" (p. 285). When you start to look for the exit because you intend to leave a building, this is a standard case of top-down attention.

As Wu describes it, the top-down versus bottom-up distinction is about "how attention gets initiated, i.e., whether the subject is passive or active in that initiation" (2014, p. 34). After the initiation of attention, a further distinction becomes relevant: the distinction between controlled and automatic attention. As Wu puts it, controlled attention is attention that is consistent with one's intention (p. 33). If I continue to look around for the exit because I intend to leave the building, this is controlled attention. Automatic attention is the absence of control. To complicate things slightly, on Wu's account, one might ask whether attention is controlled or automatic with respect to many different features of the process of attention: "where attention is directed and in what sequence, how long it is sustained, to what specific features in the scene, and so on" (p. 34).

The attention involved in attentional weighting is by and large (if not always) top-down attention. It is not the sort of attention drawn suddenly by flashing lights or loud bangs. However, especially as attention becomes weighted, certain features of the attentional process can become automatic. As an example, consider the Shiffrin and Schneider (1977) study, where letters were used first as targets in the experiment, but later as distractors to be ignored. In the later phase, while the participants initiated their own attention (with the intention of spotting the targets), their attention was automatic with respect to the letters in the scene, even though they were deliberately trying to ignore the letters. In other words, over the course of the study, the participants' attention had become weighted toward the letters. Throughout the study, their attention stayed top-down, since it was still initiated by the intention of finding the target. However, their attention began as controlled and became automatic with respect to the letters.
In addition to the cases I have mentioned so far, in which philosophers offer examples of perceptual learning, other philosophers deny that perceptual learning occurs in particular cases (although this is consistent with admitting it in others). A. D. Smith (2002), for instance, has us compare our perception of a typewriter with that of someone from a very different background who has never seen typewriters before. Smith writes, "The differences between the two of you are, of course, highly significant in relation to possible actions, reactions, beliefs, desires, intentions, and evaluations; but there is, or need be, no perceptual difference" (pp. 96–97).
Given the distinctions made in this section, we can spell out the conditions that must be met for Smith’s claim to be true. First, it cannot be the case that in one person’s perception, certain features of the typewriter go together (i.e., are unitized), while in the other person’s they do not. Second, there cannot be a difference in the way the two people differentiate the features of the typewriter or in the way they differentiate the typewriter from its background. Third, there can be no difference in their attention. Seen in this way, Smith has committed himself to a questionable view: that despite very different backgrounds between you and the other person, there is nonetheless not a single difference in how he attends to what to him (but not to you) is a novel object, nor is there a difference in how he differentiates its parts, nor a difference in how he sees (or fails to see) certain parts as going together.
1.4 THE OFFLOADING VIEW OF PERCEPTUAL LEARNING So far, I have drawn distinctions between particular cases of perceptual learning, and applied these distinctions to cases in the philosophical literature. I now want to say more about what these cases have in common. In particular, while philosophers have said that these cases occur, what has been overlooked is why such perceptual changes happen. What I want to say is that cases of perceptual learning do not just occur in different domains, explanatorily isolated from one another. Rather, each case happens for the same general reason. In what follows, I offer an account of perceptual learning that explains their functional role, showing what purpose these cases serve. To illustrate my view, consider the president of a country, and the aides that serve the president, such as the speechwriter and the 28
press secretary. These aides do tasks that the president could otherwise do. Yet the tasks are taken on by the aides in order to free up the president to do more important tasks, such as responding to foreign and domestic crises. What I want to argue is that perceptual learning works in much the same way (though there are some points of disanalogy). In perceptual learning, tasks get offloaded onto one’s quick perceptual system. This frees up cognitive resources for other tasks.16 Call this the “Offloading View” of perceptual learning. (For more on the idea of perceptual learning serving to free up cognitive resources, also see Schneider and Shiffrin, 1977; Shiffrin & Schneider, 1977; Shiffrin, 1988; Kellman, 2002, pp. 266–267; Kellman & Garrigan, 2009, pp. 63–64; Kellman & Massey, 2013, p. 120; Goldstone, de Leeuw, & Landy, 2015, pp. 24, 26.)

Why should we think that perceptual learning offloads tasks onto our perceptual system, thereby freeing up cognitive resources for other tasks? The answer to this question spans the entire book. To start, one piece of evidence comes from a battery of studies on experts. In a founding study in the literature, Bryan and Harter (1899) examined the process by which attention changes as one learns to understand telegraph messages. One of the authors, Harter, was himself a telegraph expert, and they had access to seven experts, twenty-two telegraphers of average ability, and eight below-average telegraphers (Bryan & Harter, 1897, p. 27). Bryan and Harter asked the telegraphers, “To what is attention mainly directed at different stages of progress?” The authors reported the following response:

The answers agreed entirely, and were as follows: (a) At the outset one ‘hustles for the letters.’ (b) Later one is ‘after words.’ (c) The fair operator is not held so closely to words. He can take in several words at a mouthful, a phrase or even a short sentence. (d) The real expert has all the details of the language with such automatic perfection that he gives them practically no attention at all. He can give his attention freely to the sense of the message, or, if the message is sent accurately and distinctly, he can transcribe it upon the typewriter while his mind is running upon things wholly apart. (1899, p. 352, italics added)

16. As Kellman and colleagues (2008) put it, there are two ways to reduce cognitive load in order to avoid load limits. One is by modifying the task. For instance, Chandler and Sweller (1991) found an increase in cognitive load when instructions used text and diagrams that referred to the same thing (such as building a cabinet), but where the text and diagrams were separate, and not integrated with one another. This required a person to mentally integrate the information. Cognitive load was reduced by integrating the text and diagrams in the instructions, such that when a person is, say, building a cabinet, the diagram includes the text within it. In addition to modifying the task, in order to reduce cognitive load, Kellman and colleagues suggest changing the learner through the process of perceptual learning (p. 364).
In learning telegraphy, receiving a message starts out as a slow, deliberate, and cognitively taxing task, but later gets offloaded onto the perceptual system. The benefit of this offloading process is that it frees up cognitive resources for other tasks, such as thinking about the meaning and implications of the message.

Some presidents (e.g., former US president Barack Obama) start off as elected officials at a much more local level. There they write their own speeches and deal with the press themselves. But as they move to higher offices, such tasks typically get shifted onto aides. Such systems of intelligence—ones that shift tasks as they grow—are ubiquitous. Think about the professor who starts off by grading her own papers and doing all of her own research labor. As she moves up in the academic system, these tasks may get shifted onto other people as she gets paper graders and research assistants. Think of a foreman in a construction company who started off at an entry-level job with the company, removing debris and putting up
drywall. As he moves up in the company, those tasks are offloaded to other people, freeing him up to do managerial tasks.

In domains of perceptual expertise, such as bird watching, the expert develops through a process of offloading more and more onto the perceptual system. A novice birdwatcher, for instance, might start with a list of features for recognizing a wren, checking them off one-by-one as she sees a bird. As a result of practice, however, her perceptual system processes them in such a way that enables her to identify wrens quickly and automatically. This same process of offloading onto her perceptual system happens over and over again as the birdwatcher develops her skills. Because of the previous offloading, the person is now in a better position to identify a House Wren and a Marsh Wren, and the same offloading process begins anew for identifying those particular types of wrens. More generally, perceptual learning often involves the process of repetitive offloading (see Kellman, 2002, p. 267; Kellman & Garrigan, 2009, pp. 63–64; Kellman & Massey, 2013, p. 120). This is evident in Bryan and Harter’s (1897, 1899) studies on telegraphy. At first, the offloading process enables the identification of letters. Then it enables the identification of words. Next, it enables the identification of phrases. Over time, the process of repetitive offloading makes it such that the telegrapher does not have to think about the sounds, and can just focus on the meaning and implications of the message.

The offloading process that takes place in perceptual learning allows experts in certain domains to better juggle a large number of tasks at the same time. Think about a jeweler in a busy shop trying to do several things at once. Because tasks get offloaded onto the jeweler’s perceptual system, she does not have to think as much about each feature of a jewel she is appraising. This enables her to think about what the value of a stone is, take calls on the phone,
or build rapport with a customer. The jeweler may often need to do many things well in a short time so as to make more money for the shop.

As the case of the jeweler attests, a major benefit of freeing up cognitive resources occurs in cases where identification needs to be done quickly. This potentially has significant consequences. To take an ecological case, imagine someone who has become good at distinguishing animal tracks and who needs to identify newly sighted tracks quickly in case a predator notices them. In such cases, freeing up a small amount of cognitive resources can yield a significant advantage.

Offloading is a general phenomenon that can occur in many different domains (including non-perceptual ones), and we can distinguish different types of offloading by where tasks get offloaded. Sometimes tasks get offloaded externally. The president of a country offloads tasks onto government aides, while the professor offloads tasks to her graduate assistants, and the foreman offloads tasks onto the construction laborers. Tasks can get offloaded not just onto other people, but onto external objects, such as a notebook in the case of Otto, the Alzheimer’s sufferer who writes everything down in his notebook to help him in his life (Clark & Chalmers, 1998, pp. 12–16). Importantly, however, perceptual learning is distinct from all these types of offloading because it is offloading onto the perceptual system, rather than onto something external. Moreover, unlike in Otto’s case, where the offloading occurs through writing, the offloading in perceptual learning occurs in a particular way, through practice or experience.

One untested but promising empirical hypothesis is that the need for cognitive resources in fact predicts perceptual learning, at least within some constraints. Suppose experimenters take a participant who is already pretty good at a particular task (say, at
identifying Greebles) because of perceptual learning. Then suppose experimenters add a challenging but just barely doable cognitive task for that participant to perform at the same time she is identifying Greebles. Perhaps this would make the perceptual learning more automatic. It might force the person to become able to identify the Greebles while using fewer cognitive resources. Of course, if experimenters just bombarded participants with a large cognitive load, that would likely make it harder for perceptual learning to occur for them, so there are some constraints.

To my knowledge, experimenters have never tested whether the need for cognitive resources in fact predicts perceptual learning. However, there is some evidence for this hypothesis from other domains of learning. For instance, in a study by Spelke, Hirst, and Neisser (1976), two participants silently read short stories while lists of words were dictated to them. The participants were tasked with writing down the words as they were reading. At first, the participants were unable to do the writing while reading at their normal speed (p. 219). However, after several weeks, they became able to write down the words with accuracy while also reading at their normal speed. (To ensure that the participants really were reading, they were graded on reading comprehension questions afterward.) One way to interpret this study is that the need for cognitive resources induced the automatization of the writing task. It remains an open empirical question whether perceptual learning can be induced in a similar way, by strategically starving participants of cognitive resources.

It might seem at first glance that perceptual learning simply involves quicker cognition, rather than an offloading of a task onto the perceptual system. On such a view, we would understand perceptual learning as being like the following kind of learning done by racecar drivers.
Racecar drivers frequently look at gauges, and they learn to make very quick inferences about what those
gauges monitor. They could not always do this, so it is learned. Furthermore, their learned ability frees up cognitive resources to do other things. However, note that if perceptual learning only worked in such a way, it would not be as beneficial to us. This is because cognitive processes that are trained to be quicker still use up cognitive resources. So, when a racecar driver looks at the gauge, he is still using up cognitive resources while he makes a quick perceptual inference. On the other hand, offloading a task onto the perceptual system frees up cognitive resources without using them at the same time. Whatever additional resources perception might use as the result of having a task offloaded onto it (if any), the important point is that the task offloaded to the perceptual system no longer competes for cognitive resources. As Goldstone, de Leeuw, and Landy (2015) put it, perceptual learning enables “an individual to become tuned to frequently recurring patterns in its specific local environment that are pertinent to its goals without requiring costly executive control resources to be deployed,” where executive control resources are understood as cognitive in nature (p. 24, italics added).

Even if perceptual learning does result in the use of additional perceptual resources (and, again, I disagree),17 the trade-off for freeing up cognitive resources has significant advantages. In particular, freeing up cognitive resources greatly enhances the prospect of future learning. It “paves the way for the discovery of even more complex relations and finer detail, which in turn becomes progressively easier to process” (Kellman & Massey, 2013, p. 120). Put another way, freeing up cognitive resources enables a person to engage more in the process of discovery. And, as Kellman and Massey describe, this can create a “positive feedback loop” in which “[i]mprovements in information extraction lead to even more improvements in information extraction” (p. 120).

17. For instance, if someone learning a language struggles at first to tell apart two different phonemes, but her perception later becomes tuned to tell them apart, is this taking up more perceptual resources? Perhaps it is not that the processing uses more perceptual resources, but just that the processing becomes different.

Finally, by understanding perceptual learning in terms of offloading, we now have a more precise way to describe cases of perceptual learning in the philosophical literature, an account of why these cases occur, not just that they occur. The diverse array of cases, which occur in several distinct perceptual domains, all happen for the same general reason. Recall Peacocke’s example in which Cyrillic letters look different to someone who knows the language. As in Bryan and Harter’s telegraphy study, at first, recognizing the letters and words might be cognitively demanding. Through practice and experience, however, the perceptual learning process eventually enables you to identify the letters quickly and automatically, freeing up cognitive resources for other tasks, like trying to understand the implications of the passage you are reading. Block’s case of hearing French is similar. At first, you may have to struggle to differentiate the words. Through practice, however, you become able to do so perceptually, freeing up cognitive resources to focus on the meanings of the words. In Vedānta Deśika’s case, which we can now classify as a case of differentiation, an expert jeweler is able to see two or more colors of a gem as different, where as a novice she saw them as the same. Because the perceptual system has offloaded this task, the jeweler is better able to juggle other tasks as she does a new appraisal, whether it is by asking about the origin of the gemstone or thinking about the price. In Reid’s case, a farmer acquires the ability to see the rough amount of hay in a haystack or corn in a heap. This offloading of a task enables him to think about what to do with the haystack or heap of corn, or about his next chore on the farm.
T h e N at u r e o f P e r c e p t u a l L e a r n i n g
1.5 LOOKING AHEAD

The goal of this chapter has been to explain what perceptual learning is and to detail the Offloading View of perceptual learning. According to this view, perceptual learning offloads onto our quick perceptual systems tasks that would be more cognitively taxing and slower were they done in a controlled, deliberate manner, thereby freeing up cognitive resources for other tasks. Along the way, I also distinguished a variety of different types of perceptual learning, and classified cases of perceptual learning in the philosophical literature according to those types.

But applying these concepts to cases of perceptual learning in the philosophical literature is just a start. The real work to be done is in applying these concepts to domains in the philosophy of perception in order to re-frame those domains. Later on, in Part II of the book, each chapter takes a mechanism of perceptual learning (attentional weighting, unitization, or differentiation) and applies that mechanism to a particular issue in the philosophy of perception. Whether it is debates about the contents of perception, sensory substitution, multisensory perception, speech perception, or color perception, viewing these issues through the lens of perceptual learning can help us to see whole domains in the philosophy of perception anew. There and elsewhere, I use a scientific understanding of perceptual learning to clarify the roles that perceptual learning can legitimately play in philosophical arguments.

Before we move on to Part II, however, I will address skepticism about perceptual learning. Some philosophers (including John McDowell and A. D. Smith) deny in specific cases that perceptual learning occurs, at least in the sense in which I have been understanding it. Specifically, they deny in specific cases that there are perceptual changes due to learning. Of course, denying perceptual
learning in specific cases is consistent with admitting it in others. However, other philosophers are more generally skeptical of perceptual changes due to learning. Fred Dretske (1995), for instance, writes, “Through learning, I can change what I believe when I see k, but I can’t much change the way k looks (phenomenally) to me, the kind of visual experience k produces in me” (p. 15). Dretske’s view is that putative cases of perceptual learning involve cognitive changes, not perceptual changes. In the next chapter, however, I argue that the weight of the evidence—behavioral, neuroscientific, and introspective—supports the view that perceptual learning is genuinely perceptual.
Chapter 2
Is Perceptual Learning Genuinely Perceptual?
2.1 INTRODUCTION

Chapter 1 began with some claims people commonly make: that Cabernet Sauvignon tastes different to an expert wine taster, or that Beethoven’s Ninth Symphony sounds different to a seasoned conductor. If such claims are true, learning can literally change our percepts, and perceptual learning (in the sense in which I have been understanding it) occurs. That is, there are genuine perceptual changes due to learning.

The view just described, which is the view I will defend in this chapter, is often met with an objection. The objection gives an alternative account of expertise. According to the alternative story, Cabernet Sauvignon doesn’t taste different to an expert. Instead, it tastes the same. However, there is a post-perceptual difference between the expert and the non-expert. Perhaps the expert simply has different concepts for the wine, and these concepts allow her to take the wine to be a Cabernet Sauvignon, even if it doesn’t taste any different to her than it does to the novice. Similarly, according to the alternative story, Beethoven’s Ninth Symphony doesn’t
sound different to the expert than it does to a novice. Perhaps it’s just that the expert and the novice differ in their concepts, and so the expert is able to take the second movement to be a scherzo, whereas the non-expert is not so able. This idea amounts to a kind of skepticism about perceptual learning as I have characterized it. It is skepticism of the claim that there are in fact long-term changes in perception that result from practice or experience. On the skeptical view, putative cases of perceptual learning are not genuinely perceptual.

The skeptical view is correct about at least some putative cases of perceptual learning. In a 2008 study, for instance, Law and Gold trained two rhesus macaque monkeys on a pattern of dots presented for one second or less. As a group, the dots either moved left or moved right. However, the coherence strength of the dot group could vary. If 99% of the dots moved left, that would be a very strong coherence strength, and somewhat easy to detect. However, if only 25% of the dots moved left while the others remained relatively stationary, that would be more difficult to detect. Over time, the monkeys got better at detecting the direction of the more-difficult-to-detect dot patterns, and became able to do so even when the dots were presented for a shorter duration. Importantly, however, Law and Gold found no changes in the visual areas of the brain from before to after the monkeys were trained, while they did find changes in sensory-motor areas responsible for decision-making.

So, Law and Gold’s (2008) study does seem to indicate that the skeptical view is right about at least some putative cases of perceptual learning: they are not genuinely perceptual. However, as I will argue, the skeptical view is incorrect about a battery of cases in the perceptual learning literature. Based on the evidence, the Law and Gold study seems to be the exception, not the rule.
To sharpen what is at issue here, consider two different views of what is happening in a case of, say, wine expertise. According to the “Perceptual View,” the wine literally tastes different to the expert than it tastes to the layperson. According to the “Cognitive View,” the wine tastes the same to both, but the expert is able to judge that the wine is a Cabernet Sauvignon, while the non-expert cannot. I call the contrasting view to the Perceptual View the Cognitive View here, but really, the contrast could be with any post-perceptual interpretation of perceptual learning, such as a view that says that the changes occurring in perceptual learning are just sensory-motor changes. My claim is that the Perceptual View is preferable, based on the total evidence from philosophical introspection, neuroscience, and psychology. My plan in this chapter is to outline the Cognitive View, and then argue that there is better evidence for the Perceptual View.

Let me clarify something about the dialectic here. I am arguing that perceptual learning is genuinely perceptual. This is consistent with allowing that in a case like wine expertise, there are both perceptual differences and cognitive differences between the expert and the non-expert. As long as there are perceptual differences (that are long-term and the result of a learning process), perceptual learning occurs, in the sense in which I have been understanding it. The Cognitive View, as I am understanding it, and as held by several of the philosophers mentioned in this chapter, claims that in cases like wine expertise, there are cognitive differences between the expert and the non-expert, but not perceptual differences.

In chapter 1, I gave an account of perceptual learning that had three parts. First, perceptual learning involves long-lasting changes. Second, these changes are changes in perception. Third, the changes result from practice or experience.
This chapter provides support for the second part of this account: that cases of perceptual learning are genuinely perceptual changes. I examine the
issue at several different levels of analysis1 and make an abductive argument. The argument draws on philosophical introspection spanning several hundred years and many different places, as well as evidence from psychology and neuroscience. Before I get to this evidence, however, let me explain in more depth why certain philosophers are skeptical that cases of perceptual learning are genuinely perceptual.
1. I have been influenced by the “levels of analysis” approach found in Marr (1982) and then Kosslyn (2007, pp. 10–11), and as described in parts of Kosslyn and Nelson (2017), especially in Kosslyn (2017) and in Genone and Van Buskirk (2017).

2.2 SKEPTICISM ABOUT PERCEPTUAL LEARNING AS GENUINELY PERCEPTUAL

Several philosophers have registered skepticism about whether particular putative cases of perceptual learning are genuinely perceptual. John McDowell (2008), for instance, discusses a case where he possesses a concept of a cardinal, but an acquaintance of his does not. McDowell says that he will differ from her in what he takes the bird to be, since he will take it to be a cardinal while she will not. However, McDowell claims, there need not be a perceptual difference between them. Their perceptual experiences might be exactly the same in how they make the bird visually present to them (2008, p. 3).

Similarly to McDowell, as I mentioned in chapter 1, A. D. Smith discusses a case of two different people from very different backgrounds, one who has the concept of a typewriter, and one who has never seen a typewriter before. Smith (2002) says that the two people will differ in their actions with regard to the typewriter and in the beliefs they have about it. However, he says, there need not be a perceptual difference between them (pp. 96–97).

Both McDowell and Smith are denying perceptual learning in particular cases. McDowell denies that perceptual learning plays a role, or at least needs to play a role, in distinguishing him from his acquaintance as they both look at the cardinal. Likewise, Smith denies that perceptual learning needs to play a role in distinguishing two people from very different backgrounds both looking at a typewriter.

A second kind of skepticism focuses not on particular cases, but on offering general reasons to be skeptical about the occurrence of perceptual learning (in the perceptual sense in which I have been understanding it). Dretske (1995) and Tye (2000) have both suggested that because of the evolutionary history of our perceptual systems, we are not much able to change our perception through learning. The argument runs as follows: First, our perceptual phenomenology, at a basic level, is fixed by our evolutionary history. As Dretske (1995) puts it, “The quality of a sensory state—how things look, sound, and feel at the most basic (phenomenal) level—is . . . determined phylogenetically” (p. 15; see also Tye, 2000, p. 56). Our perceptual systems are hardwired to deliver all humans the same basic perceptual phenomenology, phenomenology that is the result of hundreds of thousands of years of evolution. Dretske and Tye do not deny that there are changes in phenomenology due to maturation. However, their view is that after an early period in our life, our perception remains relatively fixed. As Dretske puts it, we cannot easily change our perceptual phenomenology, since “we inherit our sensory systems, [and] since they are (at a fairly early age, anyway) hard-wired” (p. 15; see also Tye, 2000, p. 56). Next, the argument continues, because our perceptual phenomenology is fixed at a basic level by our evolutionary history, this restricts the
way that learning can affect our perception. In particular, as Dretske puts it, “Through learning, I can change what I believe when I see k, but I can’t much change the way k looks (phenomenally) to me” (p. 15; see also Tye, 2000, p. 56). According to the view just outlined, perception-based learning occurs, but it occurs at the level of belief, not at the sensory level. There is something intuitive about this idea. Perception affects all sorts of things that can be downstream from it, including our behavior and our beliefs. So, it is reasonable that our brain would have a stable foundation in perception, which then feeds into these other systems that can learn and adapt as necessary.

The view outlined by Dretske and by Tye can still accommodate cases of learning and expertise. Start with Dretske’s statement, “Through learning, I can change what I believe when I see k, but I can’t much change the way k looks (phenomenally) to me” (1995, p. 15). Now consider the claims from the beginning of the chapter that Cabernet Sauvignon tastes different to an expert wine taster, or that Beethoven’s Ninth Symphony sounds different to a seasoned conductor. On Dretske’s view, both the expert wine taster and the non-expert taste a Cabernet Sauvignon. It may well taste the same to them, at least given that there are no major genetic differences in their gustatory systems. However, there is an important difference between them. The expert has concepts that the non-expert lacks. So, while they both taste a Cabernet Sauvignon, the expert is able to taste that it is a Napa Valley Cabernet Sauvignon, while the non-expert cannot. The two have the same basic sensory phenomenology, as fixed by their sensory systems. However, they have different concepts, which are acquired through learning. One way that Dretske (1995) puts this is as follows: “We can change what we see something as—what we, upon seeing it, take it to be—even if we cannot, not in the same way, change what we see” (p. 15).
On this
view, our sensory phenomenology remains relatively stable through learning, but we can change what we take things to be upon seeing them. This is a kind of cognitive learning that does not involve a sensory change.

Turning from the wine case to the symphony case, both the expert and the non-expert hear Beethoven’s Ninth Symphony. It may well sound the same to them, at least given their genetic similarity (such as that the non-expert is not tone-deaf), and given a relatively similar listening position. But the expert has concepts that the non-expert lacks. So while both hear the scherzo in Beethoven’s Ninth Symphony, only the expert hears that it is a scherzo. As Dretske sometimes puts it, both are aware of the scherzo, but only the expert is aware that it is a scherzo. When it comes to the expert, through learning, she has changed what she believes when she hears the scherzo, even if she cannot much change the way it sounds to her.

Recall from section 2.1 that I call the view just outlined the Cognitive View, as opposed to the Perceptual View that I will be defending. According to the Cognitive View, the perceptual phenomenology of the expert and non-expert alike is the same, at least given similar genetics, a similar angle of perceiving, and so on, but they differ in the cognitive inferences that they make. According to the Perceptual View, by contrast, the perceptual phenomenology of the expert can differ from that of the non-expert, even given similar genetics, a similar angle of perceiving, etc.

One important point, in my view, is the following: the Cognitive View illustrates just how unclear it is from behavior alone whether certain learned abilities are perceptual in nature or just cognitive in nature. This is important because a very significant number of studies on perceptual learning are behavioral studies. What the Cognitive View illustrates is that it is unclear in
these studies whether the evidence supports the Perceptual View, since it could instead support the Cognitive View. Take the studies on Greebles mentioned in chapter 1. In these studies, participants become quicker and more accurate at identifying action-figure-like objects (Greebles) created for lab experiments. However, it is unclear in what exactly the trained participant’s learning consists. When asked to identify the sex and family of a given Greeble, does the participant just quickly infer those things from what she sees? Or is it instead that through learning, the participant sees the Greeble differently, and quickly judges the sex and family of the Greeble based on this different perception? The Cognitive View shows just how difficult it is to determine an answer here based on behavior alone.

As Kent Bach points out in a review of Dretske’s 1995 book, Dretske’s account precludes cases of perceptual learning (in the way that I have been understanding it; Bach, 1997, p. 461). The view allows for perception-based changes. However, these changes are cognitive changes, and specifically, changes in belief. Put another way, Dretske’s view allows for perception-based changes, but without sensory changes. So, it rejects perceptual learning as I am understanding it, that is, as involving sensory changes.

In this section, I have outlined the Cognitive View and applied it to wine and musical expertise. I now want to argue that there is better evidence for a Perceptual View in cases such as these.
2.3 INTROSPECTIVE EVIDENCE THAT PERCEPTUAL LEARNING IS GENUINELY PERCEPTUAL

Why think that cases like the wine and music cases involve genuinely sensory changes? In my view, the best way of arguing for
The Nature of Perceptual Learning
this is to build different bodies of evidence for its conclusion. This is converging evidence that comes from different levels of analysis: from philosophical introspection, neuroscience, and psychology. Starting with introspective evidence, first, there are individual cases that often elicit pro-perceptual learning intuitions. Siegel (2006, 2010), for instance, asks us to suppose we’ve been tasked to cut down all and only the pine trees in a particular grove of trees (2010, p. 100). After a while, she says, pine trees might look different to you. Specifically, she says that there is a change in your sensory phenomenology. Many others have shared this intuition. This shows that, as we start to build evidence for sensory changes, philosophical introspection can yield intuitions in its favor. I mentioned earlier that there are some philosophers (such as McDowell and Smith) who are skeptical that particular cases are genuine instances of perceptual learning (in the sensory way in which I have characterized perceptual learning). If there were only dissenters, this would give us some reason for rejecting that sensory changes occur due to learning. However, there are a multiplicity of philosophers from different times and places who independently argue, based on introspection, that perceptual learning (as I have been understanding it) occurs. This includes the fourteenth-century Hindu philosopher Vedānta Deśika, the eighteenth-century Scottish philosopher Thomas Reid,2 and the contemporary philosophers Charles Siewert, Susanna Siegel, Casey O’Callaghan, and Berit Brogaard, among others. The multiplicity of philosophers (again, from different times and places) provides some prima facie evidence that perceptual learning

2. Again, here I am understanding Reid under the Copenhaver (2010, 2016) interpretation, and not the Van Cleve (2004, 2015, 2016) interpretation.
occurs. At the same time, those who dissent from sensory changes in particular learning cases offer a very limited dissent. Denying perceptual learning in each of McDowell’s and Smith’s cases is consistent with admitting perceptual learning in others. In fact, strictly speaking, both Smith’s and McDowell’s claims are consistent with perceptual learning happening in the very examples that they discuss. After all, McDowell (2008) simply says of the other person viewing the cardinal, “Her experience might be just like mine” (p. 3, italics added). Similarly, Smith (2002) writes of the difference between the two people viewing the typewriter: “[T]here is, or need be, no perceptual difference” (p. 97, italics added). So, Smith’s and McDowell’s skepticism of perceptual learning turns out to be quite limited. As the preceding paragraph attests, there is some defeasible evidence based on historical cases of introspection for the existence of perceptual learning in general. At the same time, there is also defeasible evidence, again based on introspection, that specific cases of perceptual learning occur (again, in the perceptual sense in which I have been understanding it). For instance, as O’Callaghan (2011, pp. 786–787) points out, no fewer than six philosophers over the span of a decade and a half have written that a language sounds different once a person learns to speak it than it did before they learned the language. O’Callaghan cites Block (1995, p. 234), Strawson (2010, pp. 5–6), Tye (2000, p. 61), Siegel (2006, p. 490), Prinz (2006, p. 452), and Bayne (2009, p. 390) as all making essentially this same claim. It is notable that even Tye agrees in this particular case, despite his general reasons for rejecting perceptual learning. So there is some prima facie evidence, based on introspection, that this particular case of perceptual learning occurs. And, of course, if that particular case of perceptual learning occurs, then perceptual learning occurs.
Introspective evidence provides support for the claim that perceptual learning occurs (in the perceptual sense that I have been understanding it). At the same time, it is not the only source of evidence for perceptual learning. We can make the case for perceptual learning stronger by adding bodies of converging evidence from other levels of analysis, namely psychology and neuroscience. In the next section, I look at some evidence from neuroscience, followed by behavioral evidence from psychology in the section after that.
2.4 NEUROSCIENTIFIC EVIDENCE THAT PERCEPTUAL LEARNING IS GENUINELY PERCEPTUAL

Many leading scientists working on perceptual learning are well-aware of the view of putative cases of perceptual learning as cognitive. On the balance of the evidence, however, these scientists think that cases of perceptual learning are in fact perceptual, largely due to neuroscientific evidence from the last few decades (see Goldstone, 2003, p. 238; Fahle, 2002, p. xii). In particular, they give a great deal of weight to neuroscientific evidence that perceptual learning modifies the primary sensory cortices. It is understandable why scientists opt for the Perceptual View of perceptual learning cases rather than a cognitive one, on account of learning-induced changes in the adult primary sensory cortices. After all, if the Cognitive View of putative cases of perceptual learning were right, the changes in adult primary sensory cortices would be a curious data point. If putative cases of perceptual learning are really cognitive changes, why would there be changes in the primary sensory areas at all? If the Perceptual View of perceptual learning is right, however, perceptual changes in adult primary
sensory cortices make sense. The idea is that perceptual learning involves changes in perception, so accompanying changes in the areas of the brain responsible for perception are to be expected. In neuroscience, the standard view on perceptual learning has evolved. While it was previously thought that cortical plasticity ended when the critical period for perceptual development was over, this is no longer thought to be the case. Why was it once thought that cortical plasticity ended after the critical period? One major reason is the specific results of a 1963 study by David Hubel and Torsten Wiesel (for a very brief summary, see Hubel & Wiesel, 1977, p. 46). Wiesel and Hubel (1963) took seven kittens and one adult cat and deprived them each of vision in one of their eyes for one to four months. They did this either by closing an eyelid or by covering an eye. This deprived the animals of shape and (to a lesser extent) illumination information. Wiesel and Hubel (1963) monitored how this deprivation affected cells in the primary visual cortex. They found that the outcome depended on the age of the cat. The kittens were deprived within the first few months of life. With them, the deprivation led to many more cells in the primary visual cortex being activated by visual stimulation of the non-deprived eye, and many fewer cells being activated by visual stimulation of the deprived eye (pp. 1009–1010). By contrast, when the adult cat was deprived of visual stimulation for three months, there was no effect on the activation of the cells that Wiesel and Hubel monitored in the primary visual cortex (pp. 1010–1011). As Garraghty and Kass (1992, p. 522) and Gilbert and Li (2012, p. 250) have reported, throughout the 1960s and 1970s, this outcome from Wiesel and Hubel was taken to show that the primary visual cortex is malleable during a critical period in early development, but not afterward. In more recent years, however, the view has shifted. 
Whereas it was previously thought that cortical changes ended with the end of the
critical period, the view now is that experience-dependent changes extend well into adulthood (see Garraghty & Kass, 1992, p. 522; Gilbert, 1996, p. 269; Gilbert & Li, 2012, p. 250; Goldstone, 2003, p. 239; Fahle, 2002, p. x; Watanabe & Sasaki, 2015, pp. 198, 200; Sagi, 2011, pp. 1552–1553). Recall the view from Dretske and Tye that although perception changes during the critical period in early human development, after this period, our perception remains relatively fixed (Dretske, 1995, p. 15; Tye, 2000, p. 56). That view is consistent with an older conception of plasticity in neuroscience, but not with the more recent conception of it. A new understanding of cortical plasticity as something that happens throughout life has replaced the classical understanding of it, according to which cortical plasticity occurs only during a short period in early life. Why exactly has the standard view of primary sensory cortices changed from Wiesel and Hubel’s day, so that now they are considered to be malleable even after the critical period? In the rest of this section, I survey several studies that provide evidence for today’s standard view (for good surveys in this area on vision, see Sagi, 2011; Gilbert & Li, 2012). Consider a study by Furmanski, Schluppeck, and Engel (2004). The study speaks to the following question: When you train people to consistently tell apart two things that are very similar visually, does this alter early visual processing in their brains? In the study, participants were trained to improve their detection of a 45-degree-angled stimulus (see Figure 2.1). The participants trained for 25 to 40 minutes a day for 29 days. During the training, there were two 300-millisecond intervals, only one of which contained a stimulus, and participants were forced to choose which one of the two temporal intervals contained the stimulus (there was a 1300-ms gap between the two intervals). At various times throughout the trial, the
Figure 2.1. Furmanski, Schluppeck, and Engel (2004) trained participants to improve their detection of a 45-degree stimulus for low-contrast (i.e., faded-looking and hard to detect) presentations of the stimulus. Using functional magnetic resonance imaging (fMRI), the experimenters scanned participants’ brains before and after the training. They found increased activation in the primary visual cortex after learning when, and only when, the participants were shown the trained 45-degree stimulus. Source: Furmanski, Schluppeck, and Engel (2004).
stimulus was harder to detect (due to various lower-contrast values of the stimulus, which essentially make the stimulus look more and more faded). Throughout the training, participants improved at being able to detect lower-contrast presentations of the stimulus. The crucial part of the study for our purposes involved functional magnetic resonance imaging (fMRI) scanning before and after training. While they were being scanned, the participants performed the following task: They were forced to choose which of two stimuli was more clockwise. One of the stimuli was the 45-degree stimulus involved in the training. The other stimulus was 45 degrees plus some increment. Each stimulus was presented in its own 300-millisecond interval, and there was a 1300-millisecond gap between the intervals, as well as a 1100-millisecond response period afterward. Before and after training, the participants performed this task as they underwent fMRI scanning. The fMRI scanning revealed that there was an increased activation in the primary visual cortex after learning. Specifically, the amplitude of the fMRI signal
increased by 39% on average in response to the 45-degree stimulus on which they had been trained, versus control stimuli. Furmanski, Schluppeck, and Engel (2004) is just one study, but I want to get my argument out on the table now, after which I will run through other similar studies. The idea is this: there is neuroscientific evidence for changes very early in visual processing after learning. This evidence lends support to the view that perceptual learning is genuinely perceptual rather than cognitive (or otherwise post-perceptual). The argument runs as follows:

P1: If the Perceptual View were correct, then we would expect changes in visual processing after perceptual learning.
P2: If the Cognitive View were correct, then we would not expect changes in visual processing after perceptual learning.
P3: There is neuroscientific evidence for changes in visual processing after perceptual learning.
C: There is neuroscientific evidence for the Perceptual View over the Cognitive View.
In this way, studies like Furmanski, Schluppeck, and Engel (2004) support the Perceptual View. A study by De Weerd and colleagues (2012) provides a further piece of evidence that perceptual learning modulates the primary visual cortex. Very roughly, the study ran as follows: About 45 minutes after training the participants in a discrimination task, the experimenters shot magnetic pulses into the relevant trained part of the primary visual cortex. The result was decreased performance during testing the next day. This provides evidence that perceptual learning crucially modulates the primary visual cortex and that when this modulation process is disrupted, so too is the learning.
Here is the De Weerd et al. 2012 study in more detail. While the participants were looking at a dot in the middle of the screen, they were presented for 500 milliseconds with a sinusoidal grating (see Figure 2.2). The participants were tasked with judging whether the grating was more clockwise or more counterclockwise than 135 degrees (an orientation chosen by the experimenters), and then were given feedback as to whether their judgments were right or wrong. The stimuli were presented in two different quadrants on a screen: for 30 minutes in the lower-left quadrant, and for 30 minutes in the upper-right quadrant. Participants had 1000 milliseconds to make a judgment after each stimulus was presented. Crucially, prior to the training, 7 of the 13 participants had undergone fMRI scans while looking at stimuli in the lower-left quadrant. From the fMRI scans, the experimenters determined the area in V1 (the primary visual cortex) that the lower-left stimulus
Figure 2.2. De Weerd and colleagues trained participants to detect whether sinusoidal gratings presented for 500 milliseconds were more clockwise or more counterclockwise than 135 degrees, giving them feedback on whether their guesses were right or wrong. They did this in two different quadrants on a screen, the lower-left quadrant and the upper-right quadrant. Using transcranial magnetic stimulation, they then targeted the neural area in V1 that corresponded to the training on the lower-left stimulus. They found that administering transcranial magnetic stimulation in that area disrupted the learning process for the stimulus, providing evidence that V1 plays a crucial role in perceptual learning, specifically with regard to the consolidation of prior learning. Source: De Weerd et al. (2012).
activated. About 45 minutes after the participants had trained, the experimenters then targeted that area in V1 with transcranial magnetic stimulation. The experimenters found that this targeting of the area in V1 disrupted the consolidation of prior learning.3 Specifically, the next day after transcranial magnetic stimulation, the experimenters looked at how good participants were at judging whether a grating was more clockwise or more counterclockwise than 135 degrees. The participants who underwent transcranial magnetic stimulation did not improve as much at that task with the lower-left stimulus as they did with the upper-right stimulus. Furthermore, when compared with a control group, the participants who underwent transcranial magnetic stimulation did not improve as much with the lower-left stimulus. In short, by showing that the transcranial magnetic stimulation of V1 can disrupt learning consolidation, the study provides evidence that V1 is crucially involved in perceptual learning (specifically, in learning consolidation). The studies by Furmanski, Schluppeck, and Engel (2004) and De Weerd et al. (2012) both used sinusoidal gratings as stimuli. However, Tanaka and Curran (2001) ran an electroencephalography (EEG) study using stimuli that were pictures of common types of dogs and birds. The participants in the study were bird experts and dog experts, where expertise was determined by membership in local bird and dog organizations. Tanaka and Curran found differences in early visual processing (at 164 ms) when bird experts looked at pictures of birds versus when dog experts looked at pictures of birds, and also when dog experts looked at pictures of dogs versus when bird experts looked at pictures of dogs. Tanaka and Curran (2001) take this to show, along with other studies,

3. Interestingly, sleep deprivation can also disrupt the consolidation of perceptual learning, and sleep can contribute to consolidation (see Sasaki & Watanabe, 2015, esp. p. 346).
that “the pattern of neural activity associated with the early stages of object perception can be modified by real-world experience and learning” (p. 47). Two philosophical upshots follow from the three studies I have discussed so far. First, recall that both Dretske and Tye held that although perceptual changes occur in early human development, after this critical period, our perception remains relatively fixed. By contrast, all three of the previous studies suggest that perceptual changes happen well into adulthood. Second, recall that Dretske’s and Tye’s view of perceptual expertise locates perception-based learning at the level of belief rather than the level of perception. However, all three of the previous studies suggest that perceptual expertise involves learning at the level of perception. I want to clarify one further point. After presenting introspective evidence for perceptual learning in section 2.3, I have summarized three neuroscientific studies that provide evidence that there are changes in the primary visual cortex after perceptual learning. However, I do not mean to imply that the changes happening in primary visual cortices are the sole neural correlates responsible for changes in sensory phenomenology due to learning. That would be a mistake. After all, the changes happening in primary visual cortices could be part of a larger correlate, or there could just be a more complicated story to be told about how these changes yield the sensory phenomenal changes. The neuroscientific studies I have presented were looking specifically at changes happening in the primary visual cortex, but that says nothing about other changes. Nonetheless, the changes in primary visual cortices are surprising and difficult for the Cognitive View (or any post-perceptual view) to explain. Why would there be changes in primary visual cortices if perceptual learning actually occurs post-perceptually?
The studies I have mentioned so far in this section all involve vision. Yet studies on other sense modalities have yielded similar results. In touch, for instance, Braun and colleagues (2000) stimulated a participant’s thumb and pinky finger at the same time, for an hour a day for 20 days, and did so for five participants. Using EEG, after connecting electrodes to each participant’s scalp, they monitored activity in the primary somatosensory cortex before and after training. They found that after training, when the thumb and pinky were simultaneously stimulated, the area in the primary somatosensory cortex activated by thumb stimulation and the area activated by pinky stimulation were closer together than before training (see Figure 2.3). The study provides evidence that learning can modulate the primary somatosensory cortex, not just the primary visual cortex. In addition to the primary somatosensory cortex, the same
Figure 2.3. Braun and colleagues (2000) stimulated a participant’s thumb and pinky finger at the same time for an hour a day for 20 days and did so for five participants. They monitored the participants’ brains as their thumb and pinky were simultaneously stimulated. They found that the area in the primary somatosensory cortex activated by thumb stimulation (D1) and the area activated by pinky stimulation (D5) were closer together after training than prior to training. This provides evidence that perceptual learning modulates the primary somatosensory cortex. Source: Braun et al. (2000).
principle seems to hold for the primary auditory cortex as well. As Gilbert and Li (2012) have noted, “[A]nimals trained on an auditory frequency discrimination task have a larger representation of the trained frequency in primary auditory cortex (Recanzone et al., [1993])” (p. 255). Perceptual learning in sense modalities other than vision thus also seems to modulate primary sensory cortices.
2.5 BEHAVIORAL EVIDENCE THAT PERCEPTUAL LEARNING IS GENUINELY PERCEPTUAL

In this chapter, the question is whether perceptual learning is genuinely perceptual. The previous section discussed neuroscientific grounds for thinking that it is. In particular, the main idea was that perceptual learning modulates adult primary sensory cortices, and this is evidence that it should be considered to be a perceptual phenomenon. Besides the neuroscientific evidence, there is also behavioral evidence for the same conclusion: that perceptual learning modulates adult primary sensory cortices. The argument runs as follows: One outcome of many behavioral studies on perceptual learning is that perceptual learning often fails to generalize. For illustration purposes, suppose a participant in a study shuts her left eye, and with her right eye alone, she learns to detect which of two intervals (of, say, 300 ms each with a 1300-ms gap in between) contains a low-contrast sinusoidal grating. Similar experiments have provided evidence that such learning might not generalize to her left eye. So, if we test her behavior when she sees the stimulus with her left eye alone, she may be unable to reliably judge which of the two intervals contains the grating, even though she can do so with her right eye. The
learning has failed to generalize. As Sinha and Poggio detail, failed generalization has been shown across many different domains: [L]earning often fails to generalize across stimulus orientation (Ramachandran and Braddick 1973; Berardi and Fiorentini 1987), retinal position (Nazir and O’Regan 1990; Dill and Fahle 1999; [Dill 2002]), retinal size (Ahissar and Hochstein, 1993, 1995, 1996), and even across two eyes (Karni and Sagi 1991, 1993; Polat and Sagi 1994; [Zenger and Sagi 2002]). (Sinha & Poggio, 2002, p. 275)
Sinha and Poggio also report a common conclusion that has been drawn from the fact that failed generalization has been shown across so many domains: “These results have led investigators to infer that the locus of visual learning is relatively early along the visual processing pathway, perhaps in cortical area V1” (p. 275). The reasoning is as follows: The studies showing a lack of generalization provide evidence that the locus of perceptual learning occurs early in visual processing because the neurons in V1 are unique in their level of selectivity (including for each eye). So, the behavioral results provide evidence that neurons in V1 are involved in the learning. As Fahle puts it, behavioral studies on perceptual learning indicate that through learning, our perceptual systems become sensitive to certain features such as stimulus orientation. Such sensitivity does not happen as early on in information processing as the retina, and so it must be happening later on in information processing. At the same time, the behavioral studies indicate that the learning can be sensitive to which eye is trained. This does not fit the profile of cortical areas beyond the primary visual cortex. However, it does fit the profile of neurons in the primary visual cortex (Fahle, 2002, p. xii; see also Gilbert & Li, 2012, p. 255).
2.6 CONCLUSION

This chapter has argued that perceptual learning is genuinely perceptual, and is fairly widespread (for instance, occurring in different sense modalities and for different stimulus types). The weight of the evidence—philosophical, neuroscientific, and behavioral—tells against Dretske’s (1995) claim that we cannot much change the way things look to us (p. 15). The philosophical evidence consisted of introspective evidence from philosophers, largely in support of the Perceptual View. The neuroscientific evidence can be summed up as follows: An older conception of neural plasticity, according to which the sensory cortices are fixed after childhood, supports the Cognitive View, but a newer conception supports the Perceptual View. In terms of the behavioral evidence, most of the behavioral evidence seems neutral on the two views. However, some behavioral evidence—namely, the evidence mentioned in section 2.5—supports the Perceptual View. I mentioned Dretske’s (1995) claim that we cannot much change the way things look to us (p. 15). It is worth mentioning that in an article published 20 years later, Dretske (2015) seems to have modified his stance on perceptual learning, and for some of the same empirical reasons mentioned in this chapter. In that article, he writes that he does not want to “deny the possibility of genuine perceptual learning” (p. 166). Dretske lists a number of putative cases of perceptual learning and acknowledges that there is evidence that perceptual learning is genuinely perceptual: Yes, perhaps a person’s perceptual experiences can be enriched by exposure to and prolonged training with relevant stimuli. Perhaps after long experience tasting and comparing wines, the connoisseur actually begins to taste things (the hint of tannin)
he didn’t taste before. Maybe musicians hear things (a change of key?) novices never hear. Maybe prolonged exposure or practice ‘tunes’ earlier processes in the sensory pathways to make them more sensitive to subtle differences important to specialists. If this is so—and there is certainly evidence that it is so—these improvements in perceptual acuity will result in greater discriminatory power. (2015, p. 166)
The Dretske (2015) view acknowledges the evidence for perceptual learning (in the perceptual sense in which I have been understanding it). What seems to have happened in the two decades between Naturalizing the Mind and this article is that he became aware, in particular, of the empirical evidence for perceptual learning. Specifically, Dretske (2015) references some of the same neuroscientific evidence to which I appeal in this chapter: how “prolonged exposure or practice ‘tunes’ earlier processes in the sensory pathways to make them more sensitive to subtle differences important to specialists” (p. 166).
We now turn from Part I of the book to Part II, from the nature of perceptual learning to its scope. In Part II, I show the role of perceptual learning in several domains in the philosophy of perception: in natural kind recognition, sensory substitution, multisensory perception, speech perception, and color perception. As I mentioned in chapter 1, in each of the subsequent chapters, I take one of the three mechanisms of perceptual learning (attentional weighting, unitization, or differentiation), and I apply it to one of the domains in philosophy of perception. For instance, in chapter 3, I show how taking attentional weighting into account illuminates debates about the contents of perception. In the book’s conclusion, I then extend
the scope of perceptual learning even further, to domains outside of philosophy of mind. The argument in Part II also supplements the argument made in this chapter. As I just mentioned, I argue in Part II that perceptual learning occurs in several different domains of perception. If perceptual learning occurs in those perceptual domains, however, then perceptual learning occurs, which is what I have been arguing for in this chapter. In Part II, I also clarify the different dialectical roles for perceptual learning cases in philosophy. Cases of perceptual learning have long played an important role in philosophy of mind and epistemology, from Vedānta Deśika to Thomas Reid, and to contemporary philosophers such as Charles Siewert, Ned Block, Susanna Siegel, Casey O’Callaghan, Berit Brogaard, and Christopher Peacocke. My account clarifies the roles that perceptual learning can legitimately play in the arguments of these philosophers. For instance, Siegel (2006, 2010) uses perceptual learning cases to argue that natural kind properties can be presented in perception. But in the next chapter, I show that a fuller, systematic account of the nature of perceptual learning suggests that, at most, it results in tuned attention to different low-level properties.
PART II
THE SCOPE OF PERCEPTUAL LEARNING
Chapter 3
Learned Attention and the Contents of Perception
3.1 INTRODUCTION

In the past decade, the case of perceptual learning that has received by far the most attention in philosophy is Susanna Siegel’s (2006, 2010) pine tree case. Siegel asks us to suppose that we have never seen a pine tree before, but are tasked to cut down all and only the pine trees in a particular grove of trees. After several weeks pass, she says, pine trees might begin to look different to us.1 I agree with Siegel’s description to this point, but I disagree with the philosophical conclusions that she goes on to draw from the case. Let me explain. Some philosophers hold that we perceive high-level kind properties (in addition to low-level properties such as colors, shapes, size, orientation, illumination, textures, and bare sounds).
1. Eleanor Gibson (1969) imagines a similar case about a psychologist who is unfamiliar with goats but is tasked with running experiments involving a large herd of goats. She describes what happens: “The goats, at first, look to him as almost identical. . . . But after a few months’ acquaintance, he can spot his goat in the herd at a moment’s notice and even from a fair distance” (p. 82). For many similar examples described in rich detail, see Siewert (1998, sec. 7.9).
High-level kind properties include natural kind properties like being a pine tree and artificial kind properties like being a table.2 In the history of philosophy, it is a matter of controversy whether we perceive high-level kind properties. Heidegger (1960), for instance, holds that we do not just hear “bare sounds”; instead, “we hear the storm whistling in the chimney, we hear the three-motored plane, we hear the Mercedes in immediate distinction from the Volkswagen” (p. 156). Contrast Heidegger’s view with Hume’s view. On Hume’s view, perception does not represent types of objects. Following Berkeley, Hume holds that it doesn’t even represent distances. As he puts it: [A]ll bodies which discover themselves to the eye, appear as if painted on a plain surface . . . their different degrees of remoteness from ourselves are discovered more by reason than by the senses. (Hume, [1738]1951, 1.2.5)
According to Hume, visual perception represents colors and locations, but not distances. It is not the senses, but reason, that detects distances. Similar to Hume, Berkeley holds that in vision, what is strictly seen is nothing more than “light, colours, and figures” ([1713]1949, p. 175). For him, we perceive only low-level properties, strictly speaking.

Siegel (2006, 2010) presents an argument (henceforth, the “Phenomenal Contrast Argument”) for the conclusion that high-level kind properties are represented in perception. Roughly and briefly, the idea is as follows: Tables or pine trees can look phenomenally different to people once they become disposed to recognize them, and the best explanation for this is that the properties of being a table and being a pine tree can become represented in perception.3 In this chapter, I detail an alternative explanation for the phenomenal difference: a shift in the weight of one’s attention onto other low-level properties. In supporting my alternative explanation, I draw on a model of perceptual learning (first sketched in Goldstone, Landy, & Brunel, 2011) called the Blind Flailing Model of perceptual learning, whereby we come to recognize new kinds by attending to them in all sorts of ways (that is, by flailing our attention) until our attention settles on the low-level properties that make that kind novel to us. This creates, I will argue, a genuine perceptual difference in what we see. On my view, contrary to Siegel’s, we need not posit that we perceive high-level kind properties. Instead, what happens in cases like the pine tree case is that we learn to attend to different low-level properties from the ones to which we are currently attending. In the words of Eleanor Gibson (1969), “[A]s perception develops the organism comes to detect properties of stimulation not previously detected even though they may have been present” (p. 76). Low-level properties—such as the triangular shape of the pine tree—are there all along, but we may not have previously focused on them.

Why is it important in the first place to determine which properties we perceive (high-level kinds, or just low-level properties)? One reason is that which properties we perceive determines which perceptions count as misperceptions and which beliefs count as false beliefs (Siegel, 2006, p. 483; 2007, p. 129). Suppose that the object you are attending to is a wax pine tree, not a real pine tree. If perception represents high-level kind properties like the property of being a pine tree, then your perception is a misperception. You incorrectly perceive that it is a pine tree when the object is in fact a wax pine tree. On the other hand, if perception represents only low-level properties, and not high-level kind properties, then your perception is not a misperception. You correctly perceive the particular color and shape of the wax pine tree, and although you are still in error, your error is a false belief. You have a veridical perception of a certain color and shape. Then you infer the false belief that the object is a pine tree.

My interest in Siegel’s pine tree case is not just with which properties we perceive. That is, I am not just concerned with what to conclude from the pine tree case. My interest is also in the details of the pine tree case itself and, more generally, in what happens perceptually when we learn to recognize a kind property. I want to work out the finer details of what is occurring in the pine tree case and cases like it. One reason it is important to work out the details of cases of perceptual learning is for the Offloading View of perceptual learning that I laid out in chapter 1. According to the Offloading View, perceptual learning serves to offload tasks onto one’s quick perceptual system, tasks that would be slower and more cognitively taxing were they done in a controlled, deliberate manner. This offloading frees up cognitive resources for other tasks. By getting clearer on the details of cases of perceptual learning, we can better understand how exactly the offloading process occurs. In this chapter, I show how the right shifts in our attention offload onto our perceptual system the task of recognizing high-level natural kinds.

2. Arguably, some low-level properties, such as “green,” are kinds as well (see, for instance, Soames, 2007, p. 329). By using the term “high-level kinds” I simply mean to rule out kinds that are low-level properties.
3. “Disposed to recognize” and “recognitional disposition” are Siegel’s locutions. I adopt this language throughout in presenting her argument and in replying to it.
Learned Attention and the Contents of Perception
3.2 THE PHENOMENAL CONTRAST ARGUMENT

Siegel describes kinds such as tables or pine trees coming to look phenomenally different to someone, once that person becomes disposed to recognize them.4 In such cases, she argues that there is a phenomenal difference in the sensory phenomenology. As she puts it, this is a difference in the phenomenology pertaining to the properties that sensory experience represents, properties such as colors and shapes and, perhaps, also high-level kind properties (2006, p. 485). Siegel contrasts sensory phenomenology with phenomenology associated with imagination, with emotions, with bodily sensation, with background phenomenology (as with drunkenness or depression), and with nonsensory cognitive functions (as with a feeling of familiarity; p. 492). There may be changes in those kinds of phenomenology as well, but Siegel’s concern is just with the sensory phenomenology. Very roughly, Siegel seems to have in mind that for vision, the sensory phenomenology is the visual phenomenology that typically changes when you move your head from side to side (excluding, say, the proprioceptive changes that also occur). And one way of understanding what Siegel is doing here, put in my words, is that she is clarifying that her case of perceptual learning is genuinely perceptual (see my discussion of this in chapter 1, sec. 1.2, and in chapter 2).

Now consider Siegel’s core argument that perception represents high-level kind properties: the Phenomenal Contrast Argument. (Note that Siegel uses pine trees as her example, while I am using wrens, and that she calls such properties as pine trees and wrens “K-properties,” while I am calling them “high-level kind properties.”) The argument runs as follows: Suppose someone acquires a recognitional disposition for wrens. Contrast her perceptions before and after she gains that disposition. Plausibly, after she gains the disposition, even if she looks at exactly the same scene of a wren, the sensory phenomenology of her perception has changed. Given that the perceptions differ in their sensory phenomenology, the argument continues, they differ in their content, that is, in what the perceptions represent. Specifically, they differ with respect to the high-level property that she is now disposed to recognize—namely, the property of being a wren. The argument generalizes, mutatis mutandis, to other high-level kind properties. High-level kind properties can be represented in perception.

Here is Siegel’s more formal expression of the argument. To preface the argument, she writes:

Suppose you have never seen a pine tree before, and are hired to cut down all the pine trees in a grove containing trees of many different sorts. Someone points out to you which trees are pine trees. Some weeks pass, and your disposition to distinguish the pine trees from the others improves. Eventually, you can spot the pine trees immediately: they become visually salient to you. Like the recognitional disposition you gain, the salience of the trees emerges gradually. Gaining this recognitional disposition is reflected in a phenomenological difference between the visual experiences had before and after the recognitional disposition was fully developed. (2010, p. 100)

4. Note that Jack Lyons (2005, p. 197) criticizes Siegel’s use of pine trees as an example, arguing that it is doubtful that we can perceptually represent the property of being a pine tree because of how heterogeneous the class is perceptually. Lyons says that being a mature Scotch pine is a better example. I return to this point later in the discussion.
To formalize her argument, she starts by labeling each of the visual experiences in the previous example:
Let E1 be the visual experience had by a subject S who is seeing the pine trees before learning to recognize them, and let E2 be the visual experience had by S when S sees the pine trees after learning to recognize them. E1 and E2 are visual parts of S’s overall experiences at each of these times. The overall experience of which E1 is a part is the contrasting experience, and the overall experience of which E2 is a part is the target experience. (p. 100)
The Phenomenal Contrast Argument then runs as follows:

(0) The target experience differs in its phenomenology from the contrasting experience.
(1) If the target experience differs in its phenomenology from the contrasting experience, then there is a phenomenological difference between E1 and E2.
(2) If there is a phenomenological difference between E1 and E2, then E1 and E2 differ in content.
(3) If there is a difference in content between E1 and E2, it is a difference with respect to K-properties represented in E1 and E2. (2010, p. 101)
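The logical skeleton of (0) through (3) is a simple chain of conditionals: grant the opening intuition and the three conditionals, and the K-property conclusion follows by repeated modus ponens. As a minimal propositional sketch (the proposition letters are my own shorthand, not Siegel's notation), the chain can be type-checked in Lean:

```lean
-- T : the target experience differs phenomenally from the contrasting one
-- P : E1 and E2 differ in their phenomenology
-- C : E1 and E2 differ in content
-- K : that content difference concerns K-properties
example (T P C K : Prop)
    (h0 : T)        -- premise (0), the opening intuition
    (h1 : T → P)    -- premise (1)
    (h2 : P → C)    -- premise (2)
    (h3 : C → K)    -- premise (3)
    : K :=
  h3 (h2 (h1 h0))  -- modus ponens, three times
```

The validity of the chain is trivial; the philosophical dispute in this chapter targets premise (3), since an attentional reply locates the content difference in low-level properties rather than K-properties.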
Siegel’s final summation of the argument is helpful: “I’ve argued that gaining a disposition to recognize K-properties can make a difference to visual phenomenology, and that this difference is accompanied by a representation of K-properties in visual experience” (p. 113). One note on the opening premise of the argument: Siegel (2010) writes, “Claim (0) is supposed to be an intuition. It is the minimal intuition one has to have for the argument to get off of the ground” (p. 101). However, the argument I made in chapter 2 lends
further support to Siegel’s opening premise. In particular, evidence on perceptual learning from neuroscientific and behavioral studies lends support to the claim that there are changes in sensory phenomenology due to learning. It gives us reason to think that the target experience might differ in its sensory phenomenology from the contrasting experience. Note also that while her argument employs what Siegel (2007, 2010) has called the “method of phenomenal contrast,” her appeal is not just to any old phenomenal contrast, but to one that is specifically due to learning. There are many other kinds of perceptual phenomenal contrasts, even if we restrict ourselves to sensory phenomenal contrasts as Siegel does. Two people might differ in their sensory phenomenology because one is colorblind while the other is not, or because one has 20/20 vision while the other does not. Or a person’s sensory phenomenology might differ from before due to their vision getting worse from old age. Furthermore, a person might exhibit a phenomenal contrast between before and after she suffers an eye injury or develops a brain lesion. Yet the argument that Siegel is making employs a case of phenomenal contrast that is unlike any of these cases. The difference is that in Siegel’s argument, the phenomenal contrast between before and after you learn to recognize pine trees is specifically the result of practice or experience. Her case is a case of perceptual learning, while many other sensory phenomenal contrasts are not.
3.3 THE ATTENTIONAL REPLY TO THE PHENOMENAL CONTRAST ARGUMENT

Richard Price (2009) replies to the Phenomenal Contrast Argument in the following way:
After one learns to recognize pine trees, one starts to attend to those features of pine trees that distinguish them from other trees, for instance, the colour or thickness of the bark. Acquiring a recognitional disposition for pine trees will cause one’s patterns of attention to shift when one looks at a grove containing pine trees and other sorts of trees. (p. 516)
Price’s overall view is perhaps best put as follows: The phenomenal difference in the pine tree case is explicable in terms of a shift in one’s attentional pattern onto other low-level properties, and so, contrary to what Siegel concludes, we do not need to appeal to high-level kind properties (see also Macpherson, 2012, p. 37).

Price’s attentional reply makes the following distinction relevant. Do our perceptions get enriched through perceptual learning, or do they just get more specific? (Gibson & Gibson, 1955; Gibson, 1969, pp. 77–81). According to the enrichment view, our perceptions start out as informationally sparse, and get enriched through learning. On this view, the percepts themselves change over time by becoming informationally richer (Gibson & Gibson, 1955, p. 34). According to the specificity view, by contrast, our perceptions start out as informationally rich and simply get more specific through learning. Eleanor Gibson (1969) puts the specificity view as follows:

In the normal environment there is always more information than the organism is capable of registering. There is a limit to the attentive powers of even the best educated human perceiver. But he is also limited with respect to the complex variables of stimulation by his stage of development and his education. Perception in man’s rich environment begins as only crudely differentiated and grossly selective. But as perception develops the organism comes to detect properties of stimulation not previously
detected even though they may have been present. With growth and continued exposure to the world of stimulation, perception becomes better differentiated and more precise. (pp. 77–78).
Price’s attentional reply echoes the specificity view that Eleanor Gibson describes here. Our percepts do not become informationally richer as we learn to recognize pine trees. Instead, our perception just gets more specific, as we learn to attend to the color or thickness of the bark. Color and thickness are both features that were always present, but to which we are only now selectively attending.

While Price’s attentional reply to Siegel’s argument is brief, in what follows I will provide a robust account of the attentional difference that occurs in the pine tree case and cases like it. My account draws on the psychology of attention and learning, and I will show that it provides a compelling explanation of changes in expert perception, without an appeal to high-level kind properties. One way in which my view is novel is the following: Price writes, “After one learns to recognize pine trees, one starts to attend to those features of pine trees that distinguish them from other trees, for instance, the colour or thickness of the bark” (2009, p. 516, italics added). A major difference between my view and Price’s is that on my view it is not after, but before one learns to recognize pine trees that one starts to attend to the distinguishing features of pine trees (like the color or thickness of the bark). On my account, attending to those features for the first time is part of the very process that enables you to develop a recognitional disposition.

My account has application beyond the Phenomenal Contrast Argument. I mentioned the notion of cognitive penetration in chapter 1. Zenon Pylyshyn (1999) defends the cognitive impenetrability of visual perception. As he puts it, it’s the view that “an
important part of visual perception, corresponding to what some people have called early vision, is prohibited from accessing relevant expectations, knowledge, and utilities in determining the function it computes” (p. 341). Pylyshyn defends cognitive impenetrability in part by arguing that some putative counterexamples are explicable just in terms of “the allocation of attention to certain locations or certain properties prior to the operation of early vision” (p. 344). Take the case of chicken sexers, for instance. Biederman and Shiffrar (1987) conducted interviews with expert chicken sexers. These experts estimated that it took between two and six years of training to be able to accurately identify the sex of day-old chicks at roughly 99% accuracy (p. 643). Pylyshyn argues that the case of chicken sexers is not a case where their knowledge directly affects the content of what they see. Instead, their case is explicable in terms of the way they allocate their attention (p. 359). The model of perceptual learning that I will offer can help to illuminate how exactly the allocation of attention occurs in such cases, thereby building on the attentional strategy that Pylyshyn has offered to defend the cognitive impenetrability of perception.

Siegel (2010) herself anticipates and rules out the possibility of an attentional reply to the Phenomenal Contrast Argument. She says that in the pine tree case and cases like it, “There need not be any difference in . . . focal attention (you might be staring hard at the face trying to remember who this person is, when suddenly it dawns on you that it’s Franco)” (p. 159). The face-recognition case Siegel mentions does capture a powerful intuition. It seems at first glance that we can hold our attention fixed while experiencing a sensory phenomenal change as we recognize Franco.
However, once we think more about the details of the case, I think it becomes less clear that our intuitions are actually tracking a case that could occur in our everyday experience.
In particular, the case underestimates just how difficult it is in practice to keep focal attention fixed for even a short period. In fact, in order to accomplish this, experimenters tend to use a chin rest to stabilize the head. To fix attention, they need to keep constant both the visual angle from the eyes to the object and the distance between the eyes and the object (see, for instance, Wang & Mitchell, 2011, p. 438). And this is all just for the head placement. Perhaps the bigger difficulty in fixing attention is that our eyes are constantly moving. As Bart Krekelberg (2011) puts it, “Essentially, our eyes never stay still; we normally scan our environment with about two to three large saccades every second, and even when we think we are fixating a single location, our eyes move” (R416). Again, when we reconsider the Franco case in light of empirical facts about attention, it is less clear that the case could actually occur. So, it is less clear what our intuitions about the case are actually tracking.
3.4 THE BLIND FLAILING MODEL OF PERCEPTUAL LEARNING

Siegel’s pine tree case involves a relatively permanent change in one’s perception of pine trees following experience with them. But one question that Siegel overlooks is why exactly these changes occur in the first place. What purpose does a change in one’s perception serve? The psychology literature provides an answer to this question: Perceptual changes occur so that we can better perform the cognitive tasks that we need to do. To ideally perform cognitive tasks, it is better for perceptual systems to be flexible rather than hardwired. As Robert Goldstone (2010) explains:
One might feel that the early perceptual system ought to be hardwired—it is better not to mess with it if it is going to be depended upon by all processes later in the information processing stream. There is something right with this intuition, but it implicitly buys into a “stable foundations make strong foundations” assumption that is appropriate for houses of cards, but probably not for flexible cognitive systems. For better models of cognition, we might turn to Birkenstock shoes and suspension bridges, which provide good foundations for their respective feet and cars by flexibly deforming to their charges. Just as a suspension bridge provides better support for cars by conforming to the weight loads, perception supports problem solving and reasoning by conforming to these tasks. (p. v)
As Goldstone puts it, perceptual systems are flexible rather than hardwired so that they can better support cognitive tasks. In chapter 1, I suggested how exactly perceptual systems do this—by having cognitively taxing tasks offloaded onto them, thereby freeing up cognition to do other things. In what follows, I provide an account of how this offloading occurs in cases like the pine tree case.

Recall Peacocke’s Cyrillic case from chapter 1. Peacocke (1992) says that there is a difference “between the experience of a perceiver completely unfamiliar with Cyrillic script seeing a sentence in that script and the experience of one who understands a language written in that script” (p. 89). One important feature of Siegel’s pine tree case and Peacocke’s Cyrillic case is that as the authors describe them, they are cases where the perceptual changes arise organically rather than through overt intervention. Consider the fact that there are all sorts of ways in which we can intervene in our perceptual systems to improve our perception. We cup our ears to hear better, or we slosh the wine around in our mouth so that it covers more taste buds
(Goldstone, Landy, & Brunel, 2011, p. 6). Yet in the pine tree and Cyrillic cases, none of the perceptual changes result from overt intervention. Instead, the perceptual changes with regard to the pine tree and the Cyrillic letters arise organically. In the pine tree case, for instance, nobody tells you how to identify a pine tree. Likewise, in the Cyrillic case, while someone may have to tell you the names of the letters, and while a person or a dictionary may have to tell you what the words mean, nobody tells you how to attend to Cyrillic letters. And in neither the pine tree case nor the Cyrillic case do the subjects themselves strategically intervene in their perceptual systems, causing perceptual changes to occur. Instead, the changes arise organically, through mere exposure to the trees or to the letters. One paradigm case of perceptual learning through mere exposure is from a 1956 study by Eleanor Gibson and Richard Walk. Gibson and Walk (1956) took two groups of albino rats. One group (the experimental group) was raised from birth in cages that had four shapes—two circles and two equilateral triangles—spread out on the walls of each cage. A control group was raised from birth in cages that had no shapes on the walls. When the rats were about 90 days old, they were then trained to discriminate a circle from an equilateral triangle. The rats raised in cages with the shapes in them learned the discrimination in fewer days and with fewer errors during the learning process. As the Gibson and Walk (1956) study suggests, the perceptual learning process is sometimes a low-level process, in which, as Goldstone, Landy, and Brunel (2011) put it, “our perceptual abilities are altered naturally through an automatic, non-conscious process” (p. 5). One key feature of the perceptual learning process is that the learning often occurs through random changes. 
As Goldstone, Landy, and Brunel put it, “If a random change causes important discriminations to be made with increasing efficiency, then
the changes can be preserved and extended. If not, the changes will not be made permanent” (p. 5). They advocate a simple model of perceptual learning, by which perceptual learning occurs through a process of random variation, followed by reinforcement. This process is called “blind flailing,” and they draw an analogy with infant motor learning. Infants flail their arms randomly during the process of learning motor control. As Goldstone, Landy, and Brunel (2011) summarize the infant motor learning process (drawing from Smith & Thelen, 1993), “The flails that are relatively effective in moving the arms where desired are reinforced, allowing an infant to gradually fine-tune their motor control” (p. 5). Analogously, in perceptual learning, those random changes that cause important perceptual discriminations to be made are reinforced and selected, allowing us to fine-tune our perceptual systems.

It is important to understand that, although the Blind Flailing Model draws an analogy with infant motor movement, the Blind Flailing Model itself is designed for mature human perception, not developmental perception in infants. The purpose of the analogy with infant motor learning is simply to explain the power of an unsupervised process where helpful random changes get reinforced and selected. The fact that the motor learning case is an infant developmental case is not the point of the analogy.

Applying the Blind Flailing Model to the case of learning to recognize wrens, suppose you are not disposed to recognize wrens, but a wren is in your visual field. You attend to the wren in various ways. Some ways of attending are selected and reinforced, while other ways of attending are not. This process fine-tunes our perceptual systems.5

Which ways of attending are selected and reinforced? A natural answer is that ways of attending that produce novelty get selected and reinforced, while the ways of attending that do not produce novelty get discarded. This is because attending in a way that produces novelty gets rewarded. As Gottlieb and colleagues (2013) put it in a recent paper, “Sensory novelty, defined as a small number of stimulus exposures, is known to enhance neural responses throughout the visual, frontal, and temporal areas and activate reward-responsive dopaminergic areas. This is consistent with the theoretical notion that novelty acts as an intrinsic reward for actions and states that had not been recently explored” (p. 590).6 This provides some evidence that if a way of attending allows you to see a new feature or set of features, that way of attending will be preserved and extended. For instance, if you attend to the shape of the wren’s body, and that way of attending allows you to see a new feature that you haven’t seen before (wrens have a somewhat distinctive round shape, for instance), then that way of attending will be reinforced. On the other hand, many other ways of attending do not allow you to see new features, since wrens share many features in common with other birds. In that case, those ways of attending will be discarded. The same sort of story can be told for Siegel’s pine tree case. If you attend to the shape of the tree, and that way of attending allows you to see a new feature that you haven’t seen before (since pine trees very often have a distinctive triangular shape), then that way of attending is reinforced, while other ways of attending are not.

5. One upshot of this view is that perceptual learning occurs in a way similar to how natural selection occurs. Where the process of natural selection selects a trait, the blind flailing process selects a way of attending. In the case of perceptual learning, at first your attentional pattern is varied. Those ways of attending that are helpful get selected and preserved, while those ways of attending that are unhelpful are discarded. Like natural selection, the process begins with random variation, and ends with the selection of something useful.
6. For a research program that puts pressure on the idea that dopamine is a reward, or at least clarifies the type of reward that it is, see the work of Kent Berridge and colleagues (e.g., Smith, Mahler, Pecina, & Berridge, 2010).
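Since the model describes a concrete loop (random variation of an attentional pattern, followed by reinforcement of whatever picks up novelty), it can be illustrated with a toy simulation. This is only a sketch under my own assumptions: the wren features and their "novelty" scores are invented for the example, and attention is crudely modeled as a normalized weight vector, which is far simpler than anything in Goldstone, Landy, and Brunel (2011).

```python
import random

def blind_flail(features, novelty, steps=500, noise=0.2, seed=0):
    """Toy 'blind flailing': randomly perturb attention weights over
    low-level features, keeping a perturbation only if it increases
    the novelty signal the current pattern picks up."""
    rng = random.Random(seed)
    weights = {f: 1.0 / len(features) for f in features}  # start diffuse

    def signal(w):
        # novelty picked up = weighted sum of per-feature novelty
        return sum(w[f] * novelty[f] for f in features)

    for _ in range(steps):
        trial = dict(weights)
        f = rng.choice(features)                  # flail: one random tweak
        trial[f] = max(0.0, trial[f] + rng.uniform(-noise, noise))
        total = sum(trial.values()) or 1.0
        trial = {k: v / total for k, v in trial.items()}
        if signal(trial) > signal(weights):       # reinforce helpful flails...
            weights = trial                       # ...and discard the rest
    return weights

# Hypothetical low-level features of a wren; the round body shape is
# stipulated to be the novel (distinctive) one.
features = ["round_shape", "brown_color", "two_wings", "small_size"]
novelty = {"round_shape": 0.9, "brown_color": 0.3,
           "two_wings": 0.05, "small_size": 0.2}

learned = blind_flail(features, novelty)
# Attention ends up weighted toward the most novel feature,
# without anyone telling the system where to attend.
assert max(learned, key=learned.get) == "round_shape"
```

The point of the sketch is just the structure Gibson emphasizes: nothing external supervises the process, yet attention settles on the feature that distinguishes the kind.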
Here is the next important point: an attentional pattern that gets selected because it helps you to see novel features of an object is an attentional pattern that helps you to recognize that object. This is because if you are looking at something you have never seen before, and you home in on its novel features rather than its shared features, these novel features will help you to recognize the new object, since those features distinguish it from other objects you have seen. Attending to the distinctive round shape of a wren, for instance, helps you to recognize wrens. Attending to the distinctive triangular shape of a pine tree helps you to recognize pine trees. In short, a kind of blind flailing that selects for novelty will enable recognition.

Let me clarify that when I talk about “the distinctive round shape of a wren” or “the distinctive triangular shape of a pine tree,” I do not mean to take this notion of distinctiveness too far, because there are cases in which we can be fooled perceptually by an adequate fake. So, for instance, a very good fake of a wren might share that precise round shape that makes the wren distinctive from other birds. I think Jack Lyons (2005) is right when he points out that distinctiveness in these cases “depends on what else is out there” (p. 202). The visual features of water, for instance, are not entirely distinctive (visually) because of the existence of paint thinner (p. 202). In at least some cases, the features that are visually distinctive of a kind cannot distinguish it from fakes, or from other kinds that are visually identical to it (as in the water and paint thinner case). At the same time, the term “distinctive” admits of degrees. So, while the roundness of a wren might not be a fully distinctive feature of a wren (due to the possibility of fakes), it can still be a distinctive feature of a wren.

On the Blind Flailing Model, perceptual learning happens through the mere exposure to stimuli. Progress is rewarded, but
with an internal (dopamine) reward, not any external reward.7 This way of thinking about perceptual learning is consistent with Eleanor Gibson’s conception of it. As she writes, “Perceptual learning is self- regulating, in the sense that modification occurs without the necessity of external reinforcement” (1969, p. 4). Nobody needs to tell us how to come to recognize a pine tree. Our perceptual systems can do it for us. The blind flailing process involves the random variation of attentional patterns plus the selection of a useful pattern. One attends to a wren in all sorts of ways, and the useful attentional pattern gets selected and reinforced. The upshot of this process is a shift in one’s attentional pattern. As I discussed in c hapter 1, psychologists refer to this as a shift in “attentional weighting.” The idea is that the weight of attention can shift by “increasing the attention paid to perceptual dimensions and features that are important, and/or by decreasing attention to irrelevant dimensions and features” (Goldstone, 1998, p. 588). For instance, your attention might become weighted over time to the round shape of the wren or the triangular shape of the pine tree. If so, your original attentional weighting has shifted. Since there are many different kinds of attention, shifts in attentional weighting can be quite sophisticated. Visual attention can be focal, acting like a spotlight (or several spotlights) on a stage, but it can also be distributed or diffuse at times, not simply centered on single points (see De Brigard & Prinz, 2010, p. 52; Prinz, 2010, p. 318; , 2012, p. 95; Cohen & Dennett, 2011, p. 360; Prettyman, 2018, p. 26). When we look at the starry sky, for instance, we need not only attend to single stars. We can have diffuse attention to large portions of the sky. Furthermore, attention can follow eye 7. Gold and Watanabe (2010) similarly suggest that dopamine plays a role in perceptual learning reward (p. R47).
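The variation-plus-selection picture described above lends itself to a toy simulation. The sketch below is purely illustrative and not from the book: the dimension names, the diagnosticity values, and the hill-climbing update rule are all stipulated for the example. Attention is modeled as weights over stimulus dimensions; a random perturbation to the weights is kept only when it improves downstream discrimination, so the learner never inspects the weights themselves:

```python
import random

random.seed(0)

# Stipulated for illustration: how diagnostic each attendable dimension
# is for recognizing the target kind (say, a wren). These numbers are
# hypothetical, not drawn from the text or the psychology literature.
DIAGNOSTICITY = {
    "round_shape": 1.0,   # prototypical wren feature (highly diagnostic)
    "wing_bar": 0.6,      # somewhat diagnostic
    "tail_angle": 0.0,    # irrelevant
    "background": 0.0,    # irrelevant
}

def score(weights):
    """Discrimination success for an attentional pattern. Attention is a
    limited resource, so weights are normalized; attention spent on
    diagnostic dimensions helps, attention elsewhere is wasted."""
    total = sum(weights.values())
    return sum((w / total) * DIAGNOSTICITY[d] for d, w in weights.items())

def blind_flailing(steps=2000, jitter=0.2):
    # Start with undifferentiated attention: equal weight on everything.
    weights = {d: 1.0 for d in DIAGNOSTICITY}
    for _ in range(steps):
        # Random variation: nudge attention to one randomly chosen dimension.
        d = random.choice(list(weights))
        trial = dict(weights)
        trial[d] = max(0.01, trial[d] + random.uniform(-jitter, jitter))
        # Selection: keep the new pattern only if discrimination improves.
        # The learner never looks at the weights, only at downstream
        # success -- matching the claim that subjects can be unaware of
        # the attentional shifts doing the work.
        if score(trial) > score(weights):
            weights = trial
    return weights

final = blind_flailing()
best = max(final, key=final.get)
print(best)
```

On this toy model, weight on the diagnostic dimension (the wren’s round shape) comes to dominate while weight on irrelevant dimensions decays, even though the update rule never inspects the weights directly. That is the sense in which the flailing is blind: selection operates only on results.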
Learned Attention and the Contents of Perception
movements, but it does not always follow eye movements (for example, see Carrasco, Ling, & Read, 2004, on the perceptual effects of covert attention without eye movement). Visual attention can be overt and follow eye movements, or it can be covert and not follow eye movements. The fact that there are many kinds of attention makes it more plausible that changes in perceptual phenomenology can be explained in terms of shifts in the way one attends.

One prediction of the Blind Flailing Model is that subjects will typically be unaware of the shifts that are occurring at the level of attention (see Goldstone, Landy, & Brunel, p. 5). This is because the way that attentional shifts get selected is through a result that happens in a different domain: at the phenomenal level. If attentional shifts yield novelty at the phenomenal level, they are preserved. If not, they are discarded. Since attentional shifts are selected at the level of phenomenology, one can notice a phenomenal change without noticing that the source of that change is an attentional shift.

With this in mind, recall Pylyshyn’s (1999) strategy of explaining some putative cases of cognitive penetration by arguing that the cases are explicable just in terms of the allocation of attention prior to perception (p. 344). One such case is the case of chicken sexers. In the Biederman and Shiffrar (1987) study I mentioned before, the researchers interviewed expert chicken sexers. The experts estimated that it took from two to six years of training to be able to accurately identify the sex of day-old chicks at roughly 99% accuracy, at a rate of 960 birds per hour (p. 643). But as Pylyshyn (1999) explains, research indicates that the case of chicken sexers is explicable in terms of the way they allocate their attention (p. 359). Interestingly, however, chicken sexers are wholly unaware of this fact. They are unaware of what has happened at the level of attention (p. 359).
The case of chicken sexers would seem to be a paradigm case of
blind flailing. They attend in all sorts of ways, and their attentional patterns get honed by whether they yield important discriminations between the sexes of chicks. The helpful patterns are selected and reinforced. And this entire process happens without any knowledge of the attentional shifts that are occurring.

While Siegel’s pine tree case itself is fictional, it is realistic and plausible, and we can speculate about what kinds of mechanisms might underlie such a case. Like the case of chicken sexers, the pine tree case has many of the features of a standard blind flailing example. Recall Siegel’s description of the case:

Suppose you have never seen a pine tree before, and are hired to cut down all the pine trees in a grove containing trees of many different sorts. Someone points out to you which trees are pine trees. Some weeks pass, and your disposition to distinguish the pine trees from the others improves. Eventually, you can spot the pine trees immediately: they become visually salient to you. (2010, p. 100)
Here are three features that suggest an interpretation of the pine tree case as a case of blind flailing. First, in Siegel’s description, someone points out the pine trees to you, but then you are left to your own devices. There is no overt direction. This is consistent with a blind flailing process. Second, the phenomenal change occurs over a long time frame (Siegel says “some weeks”). As evidenced by the chicken-sexing case, some attentional shifts take time when they occur without someone overtly directing you on how to attend. For a difficult task like chicken sexing, chicken sexers estimate that it takes about 2.4 months for them to attend in a way that allows them to identify the sexes at a 95% success rate (Biederman & Shiffrar, 1987, p. 643). Reasonably, an attentional shift for pine trees might
take some weeks, as Siegel’s description stipulates.8 Third, while the subject in the pine tree case is aware of a change at the phenomenal level, there is no awareness of the source of that change. This is standard for blind flailing, where subjects are unaware of the shifts that are occurring at the level of attention.

The fact that one can be entirely unaware of attentional shifts may go some way toward explaining why the attentional explanation is not the first explanation to come to mind in cases like the pine tree case. We may remember what something used to look like, but it is rare that we remember how we used to attend to it. We often do not notice how we are attending at all, let alone notice it, remember it, and compare it to how we are attending at a later time.

My claim is that in cases like the pine tree case and the wren case, we come to increase our attention to the prototypical features of objects, like the round shape of the wren or the triangular shape of the pine tree. Siegel herself responds to a view similar to mine involving the idea of a pine-tree-shape gestalt. I say that the view is only “similar” to mine because the view that she responds to makes no mention whatsoever of attention, and attention is an essential part of my view. At the same time, I do sympathize with the shape-gestalt view. For one, the view shares my argument strategy: that the best response to the Phenomenal Contrast Argument is to accept a difference in phenomenology and in content, but to deny that the difference in content is that kind properties become represented in perception. According to the shape-gestalt view of the pine tree case, “your experience comes to represent a complex of shapes—leaf shape,
8. Similarly, in Eleanor Gibson’s case of a psychologist unfamiliar with goats but tasked with running experiments on a large herd of them, after a “few months’ acquaintance,” he can immediately spot his goat in the herd (1969, p. 82). See footnote 1 in section 3.1.
trunk shape, branch shape, and overall pine-tree shape” (Siegel, 2010, p. 111). Siegel’s reply to this view, as I understand her, is that the complex of shapes will need to be invariant across a wide array of pine trees, but that if the complex is made that invariant, non–pine trees might fit the same complex of shapes as well. Furthermore, if the complex of shapes is so general, it will be implausible that we represent those shapes only after we come to recognize pine trees, and not before as well.

Siegel’s reply to the shape-gestalt view hinges on the idea that the complex of shapes will need to be invariant across a wide array of pine trees, and that this will raise problems for the view. However, I think the real problem here can be found in a point that Jack Lyons makes. According to Lyons (2005), it is unlikely that we can perceptually represent the property of being a pine tree at all because of how heterogeneous the class is perceptually. He argues that a more reasonable class than pine trees would be the class of mature scotch pines (p. 197). If Lyons is right, then the difficulty Siegel raises for the shape-gestalt view is generated only by the particular pine tree example—an example that (if Lyons is right) also does not get Siegel her desired conclusion, since perception cannot represent a kind as diverse as pine trees. If we chose another example, like mature scotch pines, that restricts the invariance of the class, then the shape-gestalt view would not need to make the complex of shapes so general that it will let in members outside the class.

Siegel gives another argument in addition to the first one, an argument that applies just as much to my own view as it does to the shape-gestalt view. Suppose you learn to recognize when a particular person and his kin are expressing doubt. At first, you do not recognize it, but after observing them you become able to do so when they look a particular way.
Siegel says, “it seems implausible to suppose that there must be a change in which color and shape
properties are represented before and after one learns that it is doubt that the face so contorted expresses” (2010, p. 112). In reply to Siegel, I follow Bence Nanay in thinking that the role of attention is undervalued in the example. One of Nanay’s (2011) points is this: “[W]e have no reason to suppose, for example, that we attend to the same features of a face before and after learning that the face expresses doubt” (p. 309). In fact, to add to Nanay’s point, studies on the visual perception of emotions provide some evidence that we would attend differently in such a case. For instance, Schurgin and colleagues (2014) ran an eye-tracking study and found that we fixate differentially on different facial features for the emotions of joy, disgust, fear, anger, sadness, and shame. As they put it, “Participants preferentially fixate certain regions that they, at least implicitly, think are more diagnostic for recognition of different types of emotions” (p. 11). Given this, it would not be surprising if someone’s attention to a face changed after she recognized an emotion she did not recognize before, causing her to represent different color and shape properties.

Furthermore, eye-tracking studies on the recognition of emotion seem to indicate a link between the failure to recognize a certain type of emotion and a failure to attend to the right features.9 Such studies have been done most extensively on people with autism spectrum disorders, who are often less accurate at recognizing emotions when compared with their peers (Wagner et al., 2013, p. 188). Studies here have found that participants fix their gaze less on the eyes, nose, and mouth areas than those in a control group (de Wit, Falck-Ytter, & von Hofsten, 2008; Pelphrey et al., 2002). Given such evidence, it would not be surprising if people do shift their attention as they come to

9. See Rehder and Hoffman (2005, pp. 2–3), for support for using eye movements as a proxy for attention.
recognize an emotion they did not recognize before, and thereby represent different color and shape properties.
3.5 A NEW ATTENTIONAL REPLY TO THE PHENOMENAL CONTRAST ARGUMENT

The Phenomenal Contrast Argument trades on a contrast between your perceptions before and after you acquire a recognitional disposition (say, for pine trees). But when we understand the pine tree case in terms of the Blind Flailing Model, it is not your disposition to recognize pine trees that improves your perception. Rather, your perception of pine trees improves through a random change, and that improvement enables you to become disposed to recognize them.

Notice now that we have a novel attentional reply to the Phenomenal Contrast Argument—one that is unique when compared with Price’s original reply. His claim was that “[a]fter one learns to recognize pine trees, one starts to attend to those features of pine trees that distinguish them from other trees” (2009, p. 516, italics added). But according to the Blind Flailing Model, one starts to attend to those distinguishing features of pine trees before one learns to recognize pine trees. In fact, this is part of the very process that enables one to recognize pine trees perceptually in the first place. Your attentional pattern to pine trees shifts. This gives the pine tree a new look to you. And the new look of the pine tree is part of what enables you to become disposed to recognize pine trees. Before you had never much noticed them. But now that they have a new look, this helps you to become disposed to recognize them.

What does it mean for pine trees to have a new look to you? I think the best way to understand this is that you begin to focus
on the prototypical features of the pine tree such as the distinctive triangular shape or the standard pine-green color. Let me be clear: I am not saying that you never saw those features before, just that you are now focusing your attention on those features, whereas you were not before. Why think that attention to those features affects your sensory phenomenology? One major piece of evidence for this is from psychology studies showing that where you attend alters your perception of all sorts of low-level features: a color’s saturation (Blaser, Sperling, & Lu, 1999), the size of a gap (Gobell & Carrasco, 2005), contrast (Carrasco, Ling, & Read, 2004), and spatial frequency (Abrams, Barbot, & Carrasco, 2010).10 As I mentioned in chapter 1, Ned Block (2010) was the first philosopher to highlight these experiments, and he argues that they “provide strong evidence for the claim that the phenomenal appearance of a thing depends on how much attention is allocated to it” (p. 34).11 Furthermore, Block points out that in the empirical literature “it has been settled beyond any reasonable doubt that the effect is a genuine perceptual effect
10. To understand what spatial frequency is, consider Block’s (2010) explanation that “more stripes in a Gabor patch of a given size constitutes a higher spatial frequency” (p. 36). Gabor patches are popular stimuli in vision psychology experiments (see Figure 3.1, which shows two Gabor patches). They are typically circular and often have varying amounts of black and white stripes through them. Several of the experiments mentioned in chapter 2 involved Gabor patches.

11. There are various ways of understanding the evidence that attention affects how low-level properties are perceived. One way is to understand these changes in terms of the level of determinacy at which a property is represented (as in Nanay, 2010; Stazicker, 2011). Others, however, think that the effect of attention raises problems for a representational account of phenomenology (see Block, 2010). Still others have raised objections to that response (see Ganson & Bronner, 2013; Brewer, 2013; Watzl, in press). Note that in replying to the Phenomenal Contrast Argument, I am accepting its premise (for the sake of argument) that if there is a change in phenomenology, there is a change in content (although note that Siegel relies on a somewhat restricted version of this premise; see Siegel, 2010, p. 109; and chapter 6, sec. 6.2, of this book). This is because my main contention is with the argument’s conclusion: that the best explanation for the change in content is in terms of high-level kind properties. That is the target of my argument.
rather than any kind of cognitive effect” (p. 37; see also Ganson & Bronner, 2013, pp. 408–409; Prettyman, 2017, sec. 2). In fact, anyone who doubts this can try the demo shown in Figure 3.1 for themselves. To be clear, while I think there is an important phenomenal shift occurring before one learns to recognize pine trees, I do think that something important also happens after one learns to recognize pine trees. Prior to having a recognitional disposition for pine trees, through the process of blind flailing you might attend to the pine tree in a way that helps you to distinguish it from other trees. However, having a recognitional disposition for pine trees gives you something more: It enables you to attend to those features quickly and consistently. What this means is that the same attentional pattern can be cued in more than one way. A recognitional disposition is one way to cue a particular attentional pattern. If you have a recognitional disposition for pine trees, for instance, this might cue you to attend to pine trees in a particular way. But that is not to say that the same attentional pattern cannot be cued through some other means. The pattern arises first through blind flailing. It gets selected because it is useful, enabling you to recognize a pine tree.
[Figure 3.1 shows two Gabor patches, at 22% and 28% contrast.]

Figure 3.1. Fix your attention on the black square in the middle, and then without moving your eyes, covertly attend to the patch on the left. The two patches may look the same to you. Now, remove your attention from the black square and attend directly to each of the patches. A 6% difference in contrast between the patches is clear. Source: Carrasco, Ling, and Read (2004, p. 310).
Then you use that recognitional disposition as a shorthand cue to redeploy that attentional pattern. Just as the same thought can be cued in two different ways, through a long-winded way such as “the teacher of Alexander the Great,” or through a short-winded way like “Aristotle,” the same attentional pattern can be cued through the long process of blind flailing, or through the shorter process of deploying a recognitional disposition.

Here is the important upshot. If wrens (or pine trees) look a new way to us first, and that look enables us to then become disposed to recognize them, then this spells trouble for the Phenomenal Contrast Argument. This is because the Phenomenal Contrast Argument tries to explain the new way that a wren looks in terms of the recognitional disposition. After all, it is the perceiver’s possession of that recognitional disposition that provides the compelling reason to conclude that the perception represents the property of being a wren. As Siegel herself summarizes, “I’ve argued that gaining a disposition to recognize K-properties can make a difference to visual phenomenology, and that this difference is accompanied by a representation of K-properties in visual experience” (2010, p. 113). But if a perceiver could have that same type-perception without having a recognitional disposition for wrens, then we would need another explanation for the new look of the wren.

Now consider Peacocke’s (1992) analysis of what happens in his Cyrillic case:

Once a thinker has acquired a perceptually individuated concept, his possession of that concept can causally influence what contents his experiences possess. If this were not so, we would be unable to account for differences which manifestly exist. One such difference, for example, is that between the experience of a perceiver completely unfamiliar with Cyrillic script
seeing a sentence in that script and the experience of one who understands a language written in that script. (pp. 89–90)
According to Peacocke, the possession of a concept can influence one’s perception, both in terms of its phenomenology and in terms of its content. While I think this is right as far as it goes, Peacocke goes on to take it too far. He says that if we do not invoke the fact that the subject in the Cyrillic case possesses certain concepts, we will be unable to explain the phenomenal and content differences between that subject’s perception and the perception of someone who is entirely unfamiliar with Cyrillic script. But if what I have said is right, we might explain the phenomenal and content differences without appealing to concepts. According to the Blind Flailing Model, as you are learning the language, you attend to the Cyrillic script in all sorts of ways. Those ways of attending that are helpful for recognizing a letter or a word get selected, while those ways that aren’t helpful get discarded. The letters and words start to look different to you. And this perceptual change is part of what allows you to recognize them in the first place. So, it is not that you learn all of these concepts, and then the script looks different to you. That is where Peacocke is mistaken. However, after you learn all of those concepts, they might provide a shortcut to cue your attention in a particular way. So, Peacocke is partly right: The concepts that one possesses can have influence on the phenomenology and content of perception. It’s just that you need not have the concepts in order to have that phenomenology.

When you have a recognitional disposition for pine trees (or a concept of a Cyrillic letter), it might seem that the exercise of that disposition (or concept) is constitutive of your sensory-phenomenal character when you look at a pine tree (or a Cyrillic letter). But I want to suggest that this is only because there is a
strong correlation between the two. The idea is this: typically, you get that phenomenal character only when you exercise the right recognitional disposition. But in fact, the exercise of that disposition is only contingently related to your phenomenal character. After all, you can get that phenomenal character through the blind flailing process, even without having the relevant recognitional disposition. The exercise of the recognitional disposition is not necessary for that phenomenal character since you can get that phenomenal character by attending in the right way, even without having the relevant recognitional disposition.

Consider the following analogy: Suppose the existence of some hypothetical factor x, which is involved in the development of lung cancer. Suppose that all causal chains involving smoking go through factor x on their way to lung cancer. Furthermore, suppose that there is a causal chain to lung cancer that does not involve smoking, but does involve factor x. In such a case, it only appears that smoking is directly causing lung cancer, since there is a strong correlation between the two. But actually, smoking causes lung cancer only indirectly, through factor x. Smoking is not necessary for developing lung cancer since you can get lung cancer through factor x, even without smoking.

Analogously, after you learn to recognize wrens, it might seem at first glance that your recognitional disposition is directly causing your phenomenal character. But this is only because there is a strong correlation between the two. In fact, that recognitional disposition causes your phenomenal character only indirectly, through a way of attending. The recognitional disposition is not necessary for that phenomenal character. Again, you can get that phenomenal character by blindly flailing into it, even without having the relevant recognitional disposition. What happens, then, when you acquire a recognitional disposition for wrens?
A different analogy is appropriate here. Consider
the case of learning a backhand in tennis. According to one plausible account, learning a backhand creates a new capacity in you. It gives you a new ability to hit backhands. That might seem reasonable enough. But it stands in contrast to a second account. According to this second account, you don’t actually gain a new ability. Rather, you already have the ability to hit backhands. Learning a backhand just selects for and reinforces it. One reason for holding this second account is that before you learn how to hit a backhand, you might accidentally get it right. If we rapidly hit tennis balls at you and give you a tennis racquet to defend yourself, you might possibly hit a backhand without ever properly learning the skill. Plausibly, this indicates that you already have the ability to hit backhands.12 Learning just selects for that ability and enables you to repeat it.

In the way that I am using the terms, there is an important distinction between having an ability and having a skill. On my view, a one-off performance of a task, such as hitting a backhand, qualifies as an ability. In such a case, you are able to hit a backhand; put another way, you have the capability to hit a backhand. Still, there is a difference between a one-off performance and repeated performance. A skill is an ability that has been reinforced. For instance, the competence to hit backhands consistently is a skill.

Now consider two different accounts of what happens when you acquire a recognitional disposition for pine trees. According to

12. My notion of an ability dovetails with David Lewis’s notion, which highlights the fact that one can have an ability to do something even if that thing never happens (in fact, even if it cannot happen). Lewis (1976) makes the point using the example of a time traveler named Tim, who goes back in time in an attempt to kill his grandfather. Tim “has what it takes” and “[c]onditions are perfect in every way” (p. 149).
He has the ability to kill his grandfather, even though it cannot happen given that his grandfather ended up living. Just as Tim had the ability without ever in fact actualizing it, so too in the tennis case, you might have an ability without ever actualizing it.
one account, acquiring that disposition gives one a new perceptual ability. It enables you to attend to pine trees in a new way. What I want to suggest is that while a recognitional disposition might guide your attention when you see a pine tree, you could have attended in that way without it, just as you might accidentally hit a backhand without ever properly learning the skill.

My positive proposal, then, is as follows: Acquiring a recognitional disposition does not give you a new ability to attend to pine trees in a particular way. It just selects an ability that you already have. This is the ability to attend to the prototypical features of the pine tree (such as the triangular shape or the pine-green color). These are features that you likely saw before, but on which you did not focus. Acquiring a recognitional disposition for pine trees, however, takes this ability to attend to the pine tree in that way, and it enables you to use that ability repeatedly. It creates a skill.13

The claim that I am making is intuitive in its own right. Attending in the same way as a pine-tree recognizer does not entail that you have a recognitional disposition. In my terms, you can have an ability to attend in that way, but not yet have the skill and be able to repeatedly attend in that way. Someone who arrives at that attentional pattern for the first time through blind flailing has shown an ability to attend in that way, but she does not yet have the skill enabling her to repeatedly attend in that way.14

13. Here I am departing from the way that Stanley and Krakauer (2013) and Stanley and Williamson (2017) talk about skills (see chapter 1, sec. 1.2), and am drawing on a notion more in line with Mohan Matthen’s (2014, 2015) work on perceptual skill.

14. In Reference and Consciousness, John Campbell (2002) tells a similar story about the role that sortal concepts play in attention.
On his view, while a sortal concept might be the cause of your attending to an object, there could have been a different cause (if someone had pointed at it, for instance, or if you had just become interested in the object spontaneously; p. 76). This is not to deny the important role that sortal concepts play in attention. For, just as I argued that a recognitional disposition might guide your attention when you see a wren, Campbell argues that a sortal concept might guide your attention to single out
Through the process of blind flailing, you might exhibit the ability to see pine trees as a pine tree recognizer would, but not yet have the full-blown skill to see pine trees in that way. In such a case, you would lack a recognitional disposition for pine trees, but still see pine trees as a pine-tree recognizer would through the process of blind flailing. In this case, it is not the high-level kind property that explains your new phenomenology. After all, you lack a recognitional disposition for pine trees. Instead, what has happened is that you have blindly flailed into an attentional pattern that is useful for recognition. That attentional pattern allows you to see the pine tree in a new way. Again, this is not to say that you are seeing entirely new features that you had not seen before, but that you are focusing for the first time on particular features on which you had not focused before. Seeing the pine tree in a new way, then, allows you to form a recognitional disposition for the first time. Once you have that recognitional disposition, you no longer need to blindly flail into that attentional pattern but can cue it repeatedly and at will. You have a skill.

Finally, let me be clear about what my argument shows and does not show. My argument does not block the possibility that we perceive high-level kind properties. At most, it just blocks one argument for that conclusion. The Phenomenal Contrast Argument purports to show that at least some learning-induced changes in our perceptual phenomenology are best explained by high-level kind properties becoming represented in perception. I argued that there is a better explanation for the phenomenal changes: a shift in one’s

one thing rather than another (p. 77). But also just as I argued that the recognitional disposition is dispensable, since you could have attended in that way without it, so too Campbell argues that a sortal concept is dispensable for singling out one thing rather than another.
As he puts it, “You could in principle have your attention oriented towards that object by some other cause” (p. 77).
attention to low-level properties to which one was not previously attending. So, a pine tree looks different to you not because the property of being a pine tree is represented in your perception, but because you have begun to attend to its pine-green color or its prototypical triangular shape. However, my argument does not entail that only low-level properties are represented in perception; it simply rejects one argument for the view that high-level kind properties are represented in perception.
Phenomenal Similarity Arguments

The argument that I have been targeting in this chapter is Siegel’s Phenomenal Contrast Argument. Given two perceptual experiences of a certain kind, which intuitively differ phenomenally, the idea is that we can infer that the best explanation for that phenomenal difference is that one of the perceptual experiences represents a high-level kind property, while the other does not. The Phenomenal Contrast Argument has received a lot of attention in philosophy of mind (see Logue, 2013; Brogaard, 2013; Nanay, 2011, 2012; Byrne, 2009, pp. 449–451, for some of the more detailed treatments). But to my knowledge, nobody has pointed out that philosophers have also sometimes appealed not just to phenomenal contrast, but also to phenomenal similarity in order to ground their arguments. In a phenomenal similarity argument, a philosopher takes two perceptual experiences of a certain kind, which intuitively are the same phenomenally, and then infers that the best explanation for that intuitive similarity is sameness in the content of the two perceptual experiences.

In chapters 1 and 2, we saw an example of a phenomenal similarity argument. A. D. Smith has us compare two perceptions of a typewriter, one had by you who are familiar with typewriters,
and the other had by someone from a very different background who lacks the concept of a typewriter altogether. Smith intuits that “there is, or need be, no perceptual difference” between the two of you (2002, pp. 96–97). Smith goes on to infer that the best explanation for such a case is that there is a sameness of content that is shared between the two perceptions, even though the other person lacks a concept that you possess. Such content, according to Smith, is nonconceptual content.

In a second phenomenal similarity argument, mentioned in chapter 2, John McDowell (2008) writes:

Suppose I have a bird in plain view, and that puts me in a position to know noninferentially that it is a cardinal. It is not that I infer that what I see is a cardinal from the way it looks, as when I identify a bird’s species by comparing what I see with a photograph in a field guide. I can immediately recognize cardinals if the viewing conditions are good enough. . . . Consider an experience had, in matching circumstances, by someone who cannot immediately identify what she sees as a cardinal. Perhaps she does not even have the concept of a cardinal. Her experience might be just like mine in how it makes the bird visually present to her. (p. 3)
In this case, McDowell asks us to suppose that one person has a concept of a cardinal, while a second person lacks it. He intuits that if they are looking at the same scene of a cardinal, viewing the bird from the same angle, under the same conditions, their perceptions may be the same. McDowell is here considering two perceptual experiences and intuiting that they are the same phenomenally. He then infers from the case that the two perceptual experiences have the same content, what he later refers to in Kantian terms as “intuitional content” (p. 4). These cases from McDowell and Smith show
Learned Attention and the Contents of Perception
that philosophers use phenomenal similarity arguments, not just phenomenal contrast arguments.
3.6 LEARNED ATTENTION AND THE OFFLOADING VIEW

Finally, let’s return to the Offloading View of perceptual learning, according to which the function of perceptual learning is to offload onto one’s quick perceptual system tasks that would be slower and more cognitively taxing were they done in a controlled, deliberate manner. The upshot, I argued, was that offloading frees up cognition to do other tasks, something that can be extremely beneficial in situations of duress, or simply to move more quickly through ordinary daily activities. This chapter has been about attentional weighting, where the phenomenal look of an object, property, event, or spatial region changes as a result of learning to attend (or learning not to attend) to that object, property, event, or spatial region. The Offloading View applies to attentional weighting in the case of natural kind recognition. Instead of having to waste cognitive resources trying to see whether a bird is a wren, your perceptual system makes the process easier for you. Based on your past experience, the process of attentional weighting can weight your attention to the round shape of the wren, for instance, enabling quicker identification. More generally, the offloading that takes place as one learns to recognize a kind, such as a wren, happens through a shift in one’s attention toward features that are distinctive of that kind, such as its round shape. This frees up cognitive resources so that you can ask further questions, such

15. Again, “smell,” “taste,” “sound,” “feel,” etc., can be substituted for “look” here.
as whether it is a House Wren or a Marsh Wren. Instead of having to seek out features of a kind in order to identify it, your perceptual system weights your attention to those very features. Since you are not occupied in the task of seeking out these features, you are better able to perform other tasks. I rejected one argument that high-level kinds become represented in perception. But even if they do become represented in perception, this would be consistent with the Offloading View. The idea would be that instead of having to infer that something is a particular high-level kind, one immediately sees it as such. As Siegel put it in her description of the pine tree case, “Some weeks pass, and your disposition to distinguish the pine trees from the others improves. Eventually, you can spot the pine trees immediately: they become visually salient to you” (2010, p. 100). Her description might as well be a description of how a slow, cognitively taxing task gets offloaded onto the perceptual system. We now move on from the role that attention plays in one perceptual domain to the role that it plays in another. This chapter focused on how attention gets trained as a person learns to recognize a natural kind. The next chapter focuses on how attention gets trained as one learns to use a sensory substitution device.
Chapter 4
Learned Attention II Sensory Substitution
4.1 INTRODUCTION

Chapter 3 was about cases of attentional weighting—again, cases where the phenomenal look of an object, property, event, or spatial region changes as a result of learning to attend (or learning not to attend) to that object, property, event, or spatial region. More specifically, the chapter focused predominantly on learning to attend to properties, not away from them. In particular, the idea was that we come to recognize new high-level kinds by attending to them in various ways until our attention settles on those properties that make that kind novel to us. For instance, we might come to recognize pine trees by learning to attend to their distinctive triangular shape. This chapter focuses both on cases where we learn predominantly to attend to properties, and on cases where we learn predominantly to attend away from properties. I explore both kinds of cases within the context of the learning that occurs with sensory substitution devices (SSDs). These are devices that deliver information about
1. Again, “smell,” “taste,” “sound,” “feel,” etc., can be substituted for “look” here.
the environment by converting the information that is normally received through one sense modality into information for another sense modality. Typically, SSDs are used by the blind to help them better navigate the world. Such devices consist of a video camera that feeds visual information into a converter that transforms it into auditory or tactile stimuli, which the blind person can perceive. In such a case, audition or touch acts as the “substituting modality” for vision, the “substituted modality.” When a user integrates an SSD into her life, the process involves changes to her perceptual system (see, for instance, research on sensory substitution and brain plasticity, especially Ptito et al., 2005; Kupers & Ptito, 2004). These changes allow the user of an SSD to better respond to stimuli in her environment. A blind person, for instance, can use an SSD to respond to the colors of objects, where without the device she could not. In this chapter, I show both how sensory substitution helps to explain perceptual learning, and how perceptual learning helps to explain sensory substitution. In recent years, there has been some illuminating work on perceptual learning and sensory substitution. For instance, Proulx and colleagues (2014) use perceptual learning research to make sense of the plasticity in the brain that occurs in sensory substitution; Briscoe (2018) uses perceptual learning research to better understand the bodily actions of SSD users. My focus here is different. I explore the relationship between SSDs and the training of attention. Specifically, I explore two questions. First,
2. The rock climber Erik Weihenmayer, who is blind, even uses an SSD for rock climbing. The SSD that Weihenmayer uses is called “BrainPort” and consists of a video camera and a converter that transforms the visual information into tactile stimuli, which he can then feel on his tongue through a lollipop-like contraption that he holds in his mouth. Weihenmayer is able to rock climb without BrainPort, but when he uses the device, he is able to climb more deliberately (Twilley, 2017).
how can the test case of sensory substitution help to illuminate how the training of attention works? Second, how does knowledge of the way attention is trained in perceptual learning help us to better understand sensory substitution? My discussion in this chapter is ultimately in service of a philosophical point. Much of the current philosophical debate surrounding SSDs is about the nature of one’s perceptual experience once one has fully integrated an SSD. More specifically, the question is whether one’s perceptual experience should be classified in the substituted modality (as vision), in the substituting modality (as auditory or tactile), or in a new modality (see Hurley & Noë, 2003; Block, 2003; Noë, 2004; O’Regan, 2011; Kiverstein, Farina, & Clark, 2015; Martin & Le Corre, 2015; Deroy & Auvray, 2015). I use the lessons learned in this chapter to devise an empirical test that can make progress toward resolving this philosophical debate.
4.2 ATTENTIONAL WEIGHTING IN DISTAL ATTRIBUTION

In this section, I articulate one way in which sensory substitution devices can help to illuminate how the training of attention works. Start by considering an SSD called the “vOICe.” The vOICe is “a visual-auditory substitution device that works by transforming images from a digital camera embedded in a pair of sunglasses into auditory frequencies (‘soundscapes’), which the user hears through headphones” (Kiverstein, Farina, & Clark, 2015, p. 661). The vOICe represents all sorts of information about objects in a user’s environment in terms of sound. For instance, it represents the vertical spatial location of an object by using higher or lower pitched tones (the higher the tone, the higher the spatial location of the object).
It represents horizontal spatial location by using left and right headphones. If the spatial location of an object is to the left (or right), the vOICe plays a tone out of the left (or right) headphone. The vOICe represents the brightness of an object with volume (the brighter the object, the louder the volume). One surprising thing that happens as a user becomes fluent with a sensory substitution device like the vOICe is the following: Early on in training they perceive the sounds as being near their ears (since they are wearing headphones), but after training, they come to project the sounds onto the objects that the sounds represent. This is called “distal attribution.” Put more generally (to apply the concept of distal attribution to other sensory substitution devices besides the vOICe), distal attribution is something that happens at a point during the sensory substitution training process “when the sensations felt on the skin or in the ears are projected onto a distant perceived object, which should correspond to the one captured by the camera” (Deroy & Auvray, 2015, p. 333). At first, the evidence for distal attribution was largely based on reports from users. Users would simply say that they were projecting sounds or tactile sensations onto objects. For instance, Siegle and Warren (2010) write, in a discussion of Bach-y-Rita (1972), “After sufficient training, participants in these studies claimed to ignore the stimulation on their backs while becoming directly aware of objects detected by the camera” (p. 209). Such reports are fairly widespread among fluent users of SSDs, providing some evidence that distal attribution genuinely occurs. It is an important question whether or not distal attribution genuinely occurs, because it weighs on whether the changes that occur during the sensory substitution learning process are genuinely perceptual. 
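The vOICe’s encoding conventions just described (pitch for vertical position, stereo panning for horizontal position, volume for brightness) can be sketched schematically. The sketch below is only an illustration of the mapping principle, not the device’s actual algorithm; every numeric value in it (the frequency range, the grid size) is invented for the example.

```python
# Schematic sketch of a vOICe-style visual-to-auditory encoding.
# All constants are illustrative, not taken from the actual device.

def encode_pixel(row, col, brightness, n_rows=8, n_cols=8):
    """Map one pixel to an auditory cue: (frequency_hz, pan, volume).

    - Rows nearer the top of the image -> higher pitch.
    - Column position -> left/right stereo pan (-1.0 = left, 1.0 = right).
    - Brightness (0.0-1.0) -> louder volume.
    """
    low_hz, high_hz = 200.0, 2000.0            # invented frequency range
    height_frac = 1.0 - row / (n_rows - 1)     # row 0 = top of image
    frequency = low_hz + height_frac * (high_hz - low_hz)
    pan = 2.0 * col / (n_cols - 1) - 1.0
    volume = brightness                        # brighter = louder
    return frequency, pan, volume

def encode_column(image, col):
    """One time-slice of a 'soundscape': every pixel in a column sounds
    at once; the device sweeps left to right across columns over time."""
    return [encode_pixel(r, col, image[r][col], len(image), len(image[0]))
            for r in range(len(image))]
```

On this scheme, a bright object in the upper left of the camera’s view would come out as a loud, high-pitched tone in the left headphone.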
As Siegle and Warren point out, in ordinary perception, we tend to be directly aware of objects in our environment, as opposed
to the proximal stimulation on the surfaces of our sensory receptors. Similarly, in sensory substitution, we become aware of the objects in our environment. However, it is still an open question whether we become aware of them by perceiving the proximal stimuli and then by making inferences to the nature of the distal stimuli, or whether we are simply immediately aware of the distal stimuli through our perception (Siegle & Warren, 2010, p. 208). The issue of whether or not distal attribution genuinely occurs in sensory substitution, in turn, weighs heavily on whether perceptual learning genuinely occurs in sensory substitution. If users of SSDs just perceive the proximal stimuli, and then make inferences to the nature of the distal stimuli, then sensory substitution is not a case of perceptual learning, since the locus of learning is in the learned inference. The learning would be an inference based on perception, but not perceptual learning. However, if SSD users become directly aware of objects in their environments through learning, whereas before they were simply aware of proximal stimulation on the surfaces of their sensory receptors, then sensory substitution is more plausibly a case of perceptual learning. For a long time, the evidence for distal attribution in sensory substitution consisted primarily in the verbal reports of research participants, but Siegle and Warren (2010) have more recently provided empirical evidence for distal attribution. Specifically, they have offered evidence that distal attribution involves an attentional shift from proximal stimuli to distal stimuli for users of SSDs. To show this, Siegle and Warren used a very basic SSD. Participants

3. I take a “stimulus” here to be an objective property, object, or event in the external world. I use “proximal stimulus” to refer to the property, object, or event in the substituting modality (audition or touch), such as a particular tone delivered by a visual-to-audio SSD.
By “distal stimulus” I mean the property, object, or event that the proximal stimulus represents, such as the brightness represented by the tone.
wore a contraption that looked somewhat like a finger splint on their index finger. Mounted on this splint-like apparatus was a “photodiode,” a device that looks like a small camera, but in fact takes light as an input and converts it into an electrical current. So when (and only when) the photodiode is pointed at a light, it outputs an electrical current. The electrical current travels through a wire from the photodiode, and the current is ultimately fed into a motor worn on the back of each participant. The motor is active when, and only when, it is fed by an electrical current, and the motor could be felt by the participants as a tactile vibration on their backs. The experimenters used a light source as a target. The participants were blindfolded, and, again, they could feel the vibration on their back when, and only when, their finger (with the photodiode on it) was pointed toward the light source. The experimenters placed the target light source randomly at one of ten different distances on a 193-centimeter track, which was raised 72 centimeters off of the ground. Participants were instructed to hold the arm (with the photodiode on it) parallel to the track. They were told to sweep that arm back and forth at that same height, and then to bend the arm and sweep it again. They were not allowed to stop the arm’s movement when it passed over the target light source. The target light source was then removed, followed by the blindfolds. Participants were then instructed to move an object to the place from which they thought the light source was coming. Before the experiment, the experimenters had divided the participants into two groups, twenty into a “distal attention” group, and eleven into a “proximal attention” group. The distal-attention group had been instructed to attend to the target light source, and to ignore the position of their arms while blindfolded. 
The proximal-attention group had been instructed to attend to the position of their arms and to the tactile vibration on their backs while blindfolded
(Siegle & Warren, 2010, p. 212). After two hours of practice, the distal-attention group showed significant improvement in performing the task, while the proximal-attention group showed no improvement. That is to say, when the blindfolds and the target light source were removed, and the participants had to move an object to the place from which they thought the light source was coming, the distal-attention group improved in their task performance over time, but the proximal-attention group did not. Siegle and Warren’s experiment provides some evidence that distal attribution genuinely occurs, and that the changes that occur during the sensory substitution learning process are genuinely perceptual rather than based in a cognitive inference. First, the data the experimenters collected showed a reduction in errors for the distal-attention group, which the experimenters took as being “consistent with perceiving the target at a more definite distal location” (Siegle & Warren, 2010, p. 220). Second, though it is always possible that the distal-attention group was using a cognitive inference to make their distance judgments instead of directly perceiving the distance of the target light, the experimenters noted that the proximal-attention group, who were instructed to use a cognitive inference to make their distance judgments, showed no improvement in those judgments (p. 220). Again, this provides some evidence that the changes occurring during the sensory substitution learning process for the distal-attention group were genuinely perceptual rather than based in a cognitive inference. Understanding sensory substitution training as involving an attentional shift from the proximal to the distal can help to illuminate other cases of perceptual learning, especially those involving reading. For some of the cases mentioned in chapter 1, philosophers have introspected that there is a shift in attention from one class of properties to another in these cases.
Recall Peacocke’s case of
learning to read a new language in an unfamiliar Cyrillic script (1992, p. 89). Siegel has interpreted this case to involve decreased attention to the letters over time, and increased attention to the semantic properties of the words (2010, p. 100). The evidence on distal attribution lends some plausibility to this claim because it is a documented case of an attentional shift from one class of properties (the proximal ones) to another class of properties (the distal ones) after learning. In this way, the attentional shift involved in distal attribution is unlike the shift that occurs during natural kind recognition, which I described in chapter 3. That attentional shift was from one set of low-level properties to another set of low-level properties, whereas the attentional shift involved in distal attribution is from one class of properties to a class of properties of a different kind. Similarly to the shift that occurs in distal attribution, an attentional shift from one class of properties to another can help to illuminate the Bryan and Harter (1899) experiments on telegraphers (which I discussed in chapter 1). According to the verbal reports of telegraphers, the trained telegrapher pays less attention to the individual beeps over time, and starts attending to the words, word phrases, or even short sentences (p. 352). Again, the evidence on distal attribution lends some plausibility to the claim that there can be attentional shifts from one class of properties to another after learning. The cases mentioned in the previous paragraph support the Offloading View of perceptual learning. Instead of having to painstakingly attend to each beep of the SSD, each letter of a language, or each click of the telegraph, the perceptual learning process shifts your attention to more significant features, whether it is the object in the world, the word, word phrase, short sentence, or meaning of the words. This reduces the cognitive load that would be there if you

4. Whether or not we can come to perceive the meanings of words is the topic of chapter 6.
had to attend to the beeps, letters, or clicks and then draw an inference about the distal object. It enables the discovery of new features (such as precise location) and relations (such as relative location), thereby better honing the skill one is learning. In this section, I articulated one way in which sensory substitution helps to explain the training of attention. Sections 4.3 and 4.4 as a whole aim at detailing how facts about the training of attention can help to explain and potentially improve sensory substitution. I start section 4.3 by explicating at greater length how attention gets trained.
4.3 LATENT INHIBITION AS A KIND OF LEARNED ATTENTION

A prominent model of perceptual learning is Ian McLaren and Nicholas Mackintosh’s associative learning model (see McLaren, Kaye, & Mackintosh, 1989; McLaren & Mackintosh, 2000). Their model of perceptual learning is specially designed to account for how perceptual learning can occur passively, without overt instruction. As McLaren and Mackintosh (2000) put it, just about any worthwhile theory of perceptual learning can explain perceptual learning in cases of overt instruction, but the true theoretical challenge for a good theory is to explain how perceptual learning can occur passively through mere exposure (p. 225). Suppose you are presented with two similar perceptual stimuli (X and Y) the same number of times. Being similar stimuli, X and Y share many features. When a feature is shared, you are exposed to it twice as much as you are a nonshared feature (once when presented with X, and a second time when presented with Y). According to the McLaren and Mackintosh model, you become more habituated
to the shared features, which results in those features becoming less likely to enter into associations (and, as such, less likely to be good candidates for use in learning). At the same time, you are exposed to the nonshared features only half as much as the shared features. So you become less habituated to the nonshared features. This results in the nonshared features becoming more likely to enter into associations (and so more likely to be good candidates for use in learning). To apply the view more concretely, suppose that you drink a Merlot and a Cabernet Franc the same number of times. You are exposed to their shared features, such as the tannins, twice as much as you are exposed to the features unique to each of them. You become more habituated to their shared features than to their unique ones, resulting in the shared features becoming less likely to enter into associations. As such, the shared features are poorer candidates for use in learning. By contrast, you become less habituated to the unique features of each wine. So the unique features, such as the fruitiness of the Merlot, are more likely to enter into associations. As such, the unique features are better candidates for use in learning. McLaren and Mackintosh’s model is somewhat counterintuitive, despite being well-supported by experimental evidence. On their view, it is easier to learn from unfamiliar stimuli and harder to learn from familiar stimuli. It might seem at first glance that it would be the other way around. And in fact, for several decades in the history of psychology, that intuitive idea was dogma. In the late 1950s, psychologists were concerned with the question: Can learning take place without reinforcement? There was evidence that it could. One important experiment was conducted by E. C. Tolman and H. C. Honzik in 1930. The experiment involved

5. My exposition in this section is based largely on Lubow and Weiner (2010).
rats and mazes, and it had two stages. In the first stage, rats were allowed to explore a maze without reinforcement (that is, there was no reward at the end of the maze). In the second stage, the same rats were rewarded with food at the end of the maze. These rats were more quickly able to navigate the maze and receive the reward than a control group that did not take part in the first stage. The idea was that the rats had learned about the routes of the maze during the first stage, even without a reward, and that learning became evident only in the second stage when a reward was offered. These rats experienced what was called “latent learning,” a kind of learning that occurs before reinforcement, but is evident only after reinforcement (Tolman & Honzik, 1930, p. 257). In 1959, R. E. Lubow and A. U. Moore were attempting a simple demonstration of latent learning, not with rats, but with sheep and goats (see Lubow & Moore, 1959). They divided the animals into two groups. In the first stage of the experiment, they presented each member of group one with a flashing light 10 times, and they presented each member of group two with a rotor that turned 10 times. None of the 10 presentations were reinforced for either group. In the second stage of the experiment, both groups received alternated presentations of the flashing light and the turning rotor. For each presentation, the flash or the turn was followed by a mild shock to each animal’s front right leg. Once the animals had built an association between the flash and the shock (or, alternatively, between the turning rotor and the shock), they exhibited an anticipatory response—what’s called a “leg flexion,” a reflexive flexing and lifting of the front right leg in anticipation of the shock. The experimenters could tell that the animal had built an association if it exhibited a leg flexion at the right time. Lubow and Moore were expecting their results to provide an example of latent learning. 
Specifically, they thought that it would
be easier to condition the animals with the stimulus that they had been pre-exposed to in stage one—the familiar stimulus. They hypothesized that just as the rats in Tolman and Honzik’s experiment had learned to navigate the maze even without reinforcement (something that became evident only during reinforcement in stage two), so too would the sheep and goats learn about the stimuli they had been exposed to in stage one. They predicted that this learning would become evident during reinforcement in stage two. So, for instance, if the animal had been presented with a flashing light 10 times in stage one, Lubow and Moore predicted that in stage two, the animal would build an association between the flashing light and the shock more quickly than between the turning rotor and the shock. What Lubow and Moore actually found was exactly the opposite of what they had predicted. If the animal was pre-exposed to the flashing light, then it built an association between the turning rotor and the shock more quickly. If the animal was pre-exposed to the turning rotor, then it built an association between the flashing light and the shock more quickly. To contrast the effect they had found with latent learning, Lubow and Moore called it “latent inhibition.” In latent inhibition, pre-exposure to a stimulus type inhibits learning on that stimulus type. Nowadays, as Lubow (1989) describes it, latent inhibition “is considered by many theorists to be a reflection of attentional processes” (p. 7; see Oades & Sartory, 1997; Kruschke, 2003, as examples of such theorists). In particular, in the words of Westbrook and Bouton (2010, p. 24), “[L]atent inhibition has commonly been
6. Fifty years later, Lubow is surprised that the original experiment worked at all, given that the number of stimulus pre-exposures was so small (2010, p. 4). Typically, latent inhibition requires many more than 10 exposures.
explained as a decrease in attention” (italics added). The idea is that “non-reinforced stimulus preexposures retard subsequent conditioning to that stimulus because the subject learns not to attend to the irrelevant stimulus” (Lubow, 2010, p. 8). Describing latent inhibition as a “decrease in attention,” as Westbrook and Bouton do, certainly sounds like a claim about a change in attentional weighting. And indeed, Goldstone (1998) has described latent inhibition as not just involving attentional processes but also as an instance of attentional weighting in particular. He writes about latent inhibition under that heading in his well-cited review article on perceptual learning:

Attention can be selectively directed toward important stimulus aspects at several different stages in information processing. . . . In addition to important dimensions acquiring distinctiveness, irrelevant dimensions also acquire equivalence, becoming less distinguishable (Honey & Hall, 1989). For example, in a phenomenon called “latent inhibition,” stimuli that are originally varied independently of reward are harder to later associate with

7. By using an eye tracker, Wang and Mitchell (2011) have provided support for the attentional interpretation of latent inhibition—that due to latent inhibition, we attend less to pre-exposed features. Their experiment was done on human participants and employed checkerboard stimuli. In the first phase of the experiment, 21 participants were presented with the same checkerboard stimulus 120 times (p. 439). From that stimulus, two further stimuli were constructed. Both of the constructed stimuli were different from the previous stimulus only in that each had one small portion of the checkerboard that was unique to it. In the test phase of the experiment, participants were asked to determine whether two checkerboards were the same or different when presented one after the other (each for 900 ms with an 880-ms gap between them).
The experimenters measured the amount of time participants looked at the unique portions of the two constructed stimuli. They compared this with the amount of time participants looked at the unique portions of two control checkerboards, which shared the same background with each other (but not with the pre-exposed stimulus). The experimenters found that the amount of time spent looking at the unique portions of the constructed stimuli was longer than the time spent looking at the unique portions of the two control checkerboards (p. 439).
reward than those that are not initially presented at all (Lubow & Kaplan 1997, Pearce 1987). (Goldstone, 1998, pp. 588–589)
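The retarded conditioning that defines latent inhibition can be illustrated with a toy associative model. The sketch below uses a simplified error-correction learner in the style of Rescorla and Wagner, in which non-reinforced pre-exposure lowers a stimulus’s associability (its attentional weight). It is an illustration of the effect, not Lubow and Moore’s analysis, and all parameter values are invented for the example.

```python
# Toy illustration of latent inhibition: non-reinforced pre-exposure
# lowers a stimulus's associability ("attention"), so later
# stimulus-shock conditioning proceeds more slowly.
# Parameter values are invented for the example.

def condition(pre_exposures, pairings, decay=0.9, rate=0.5):
    """Return associative strength (0-1) after `pairings` reinforced
    trials, given `pre_exposures` unreinforced presentations first."""
    alpha = decay ** pre_exposures   # associability shrinks with pre-exposure
    strength = 0.0
    for _ in range(pairings):
        strength += rate * alpha * (1.0 - strength)  # error-correction update
    return strength

novel = condition(pre_exposures=0, pairings=5)
familiar = condition(pre_exposures=10, pairings=5)
# The pre-exposed (familiar) stimulus conditions more slowly, as Lubow
# and Moore found with their sheep and goats.
assert familiar < novel
```

Here the familiar stimulus ends the same five reinforced trials with a much weaker association than the novel one, mirroring the finding that pre-exposure to the flashing light made the turning rotor the easier stimulus to condition.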
On Goldstone’s view, latent inhibition is a case of attentional weighting, a kind of learned inattention toward certain objects or properties. Lubow considers latent inhibition to be a fortunate effect. Since at just about every moment we are bombarded with countless sensory stimuli, it is essential that we are able to separate relevant stimuli from irrelevant stimuli. Lubow (1989) quotes the art historian Bernard Berenson: “To save us from the contagious madness of this cosmic tarantella, instinct and intelligence have provided us with stout insensibility and inexorable habits of inattention, thanks to which we stalk through the universe tunneled in” (p. 9).

8. As I discussed in chapter 1, attentional weighting by and large (if not always) involves top-down attention. Latent inhibition is no exception. The kind of attention involved in latent inhibition is not the kind of attention that is initiated by loud bangs or flashing lights, as in bottom-up attention. Furthermore, recall that in attentional weighting, as attention becomes weighted, certain features of the attentional process can become automatic. This happens in latent inhibition as well. For instance, in the Merlot and Cabernet Franc case, due to learning, your attention becomes automatic toward the unique features of the wines, and away from the common features of the wines.

9. The account of latent inhibition given here is in line with the predictive coding approach to perception. Very roughly, according to the predictive coding approach, the brain is a prediction machine that aims to predict sensory inputs. For instance, imagine (very plausibly) that when we are listening to what someone is saying, our brains are trying to predict the next words in the sentence (Clark, 2015, p. 19). In such cases, sometimes the brain will make errors in its predictions. Perhaps the person you are listening to says something entirely unexpected or makes a surprising grammatical error.
In that case, there would be a discrepancy between what the brain predicts and the auditory input. A major goal of the brain (if not the biggest goal), however, is to minimize such prediction errors (Hohwy, 2012, p. 1). The brain minimizes prediction errors, in large part, by learning. It updates its predictions based on any unexpected perceptual inputs it receives. Because of this, in an important sense, unexpected perceptual inputs have more significance than the expected ones (in line with the phenomenon of latent inhibition), since the brain can update its predictions from the unexpected inputs, but not as much from the expected inputs. (See Kok & de Lange, 2015, sec. 11.3.1, for a survey of empirical data suggesting that even the neural responses to expected stimuli are suppressed, which is in line with what predictive coding predicts).
Learned Attention II: Sensory Substitution
On McLaren and Mackintosh’s model of perceptual learning (which is supported by facts about latent inhibition), repeated exposure enables you to tell two similar things apart more easily. Returning to our previous example, suppose you are exposed to a Merlot and a Cabernet Franc the same number of times. Some features (such as the tannins) are shared by the two wines, while other features (such as the fruitiness of the Merlot) are unique to the particular wine. Latent inhibition affects the shared features more than the unique features, since you are exposed to the shared features twice as many times—once when you drink the Merlot and a second time when you drink the Cabernet Franc. The result is that the features unique to the wines stand out as more salient than the features they share. Essentially, you have learned not to attend to the features that are shared by the Merlot and the Cabernet Franc. More generally, latent inhibition is a kind of learned inattention that enables you to tell two similar things apart more easily. As this case attests, latent inhibition offloads onto our quick perceptual systems tasks that would be slower and more cognitively taxing were they done in a controlled, deliberate manner. Without latent inhibition, one can tell a Merlot and a Cabernet Franc apart, but only with a greater effort. With latent inhibition, however, the features shared between the two wines are less salient, making each wine’s unique features stand out by comparison. Your attention becomes automatic with respect to those unique features, and away from the common ones. One upshot is that this frees up cognition to do other tasks. Since one is able to tell the Merlot and the Cabernet Franc apart more easily, one can focus on other questions, such as the vintage or the vineyard of the wines.
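The exposure-counting logic of the wine case can be made concrete with a toy sketch. The decay rule below is an illustrative simplification I am introducing, not McLaren and Mackintosh’s actual model: assume each pre-exposure to a feature reduces its remaining salience by a fixed proportion, so shared features, encountered twice as often, end up less salient than unique ones.

```python
# Toy sketch of latent inhibition as learned inattention. Assumption (mine,
# not McLaren & Mackintosh's actual model): each exposure to a feature
# multiplies its remaining salience (associability) by a fixed decay factor.

def salience_after_exposures(n_exposures, decay=0.8, initial=1.0):
    """Salience of a feature after repeated pre-exposure."""
    return initial * decay ** n_exposures

# Ten tastings of each wine: shared features (e.g., the tannins) are
# encountered on every tasting of both wines; unique features (e.g., the
# Merlot's fruitiness) only on tastings of one wine.
shared_salience = salience_after_exposures(20)  # exposed twice as often
unique_salience = salience_after_exposures(10)

# The unique features now stand out relative to the shared ones.
print(f"shared: {shared_salience:.4f}, unique: {unique_salience:.4f}")
```

Whatever the exact decay rate, the qualitative result is the same: doubling the exposure count drives the shared features' salience down faster, which is the sense in which the unique features come to stand out by comparison.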
The Scope of Perceptual Learning
4.4 APPLYING PRINCIPLES OF ATTENTIONAL TRAINING TO SENSORY SUBSTITUTION

Section 4.3 detailed how attention is trained through latent inhibition. I now aim to explicate how these facts about the training of attention can help to explain and potentially improve the learning that happens with sensory substitution devices. One lesson to be taken from the literature on latent inhibition is the following: Since perceptual learning is quicker on novel stimuli because of latent inhibition, programming more novel stimuli into SSDs might increase the speed of the learning process with SSDs. One way to determine whether stimuli are novel to research participants is to examine their brain responses. Novel stimuli activate networks in the brain that have a signature and are detectable by using electrophysiological recordings of scalp and intracranial “event-related potentials” (ERPs). These signatures have been specifically shown to exist for novel auditory and tactile stimuli (Knight, 1996). ERPs are not the only option, however. For cortically blind or for sighted participants, another option might be to examine pupil response to determine novel stimuli. Naber and colleagues (2013) have provided evidence that the pupil dilates more for novel stimuli, and there is evidence that pupil dilation measures novelty not just for visual stimuli, but for other modalities as well (p. 2). This could provide a helpful test. As Naber and colleagues point out, pupil size is “a simple and outwardly accessible physiological measure” (p. 2). Measuring pupil dilation or using ERPs can determine which stimuli are novel to participants. SSDs could then be better programmed to deliver those stimuli. (Note that this point will be important at the end of the chapter, when we engage with the philosophical debate over
whether SSD perception should be classified as visual, auditory, tactile, or as part of a new modality.) Recall how the SSD the vOICe works. A user wears a pair of sunglasses on which a small video camera is mounted. The camera feeds visual information into a converter that transforms it into sounds that the user hears through headphones. Higher pitches convey that an object is high up in the camera’s field of view, and lower pitches convey that an object is low in the camera’s field of view. If an object is spatially located in the left part of the camera’s field of view, the user hears the pitch through the left headphone; if an object is spatially located in the right part of the camera’s field of view, the user hears the pitch through the right headphone. The brightness of the object is encoded as volume (the brighter the object, the louder the volume). Consider a person using the vOICe. Such a person has already been exposed to auditory stimuli, prior to her first experience with the SSD. Contrast this with an infant. For an infant, there is a time in which the sounds of birds chirping, voices talking, and fans whirring are all new to him. A user of the vOICe, on the other hand, has already heard all these sounds and many others. Auditory stimuli are not new to her. SSD users have been pre-exposed not just to auditory or tactile stimuli in general, but also to some of the specific proximal stimuli types delivered by these devices. Consider the EyeMusic device, an SSD that, like the vOICe, converts visual images into sounds, but is also unique in delivering information about color. EyeMusic represents five colors with notes from five ordinary instruments, such as a piano (white) and a marimba (blue) (Levy-Tzedek et al., 2012a, p. 316). In other words, the device delivers proximal stimuli with which the user is undoubtedly familiar: the notes of a piano and a marimba. Because of latent inhibition, however, prior exposure
interferes with new learning. So new learning might be harder to accomplish with piano and marimba notes. To take a different example, consider TVSS, which stands for tactile-vision sensory substitution. TVSS devices convert visual information picked up by a camera into tactile information, which is delivered to the skin (to the tongue or the back, in particular). As Bach-y-Rita and Kercel (2003) describe it, however, the information delivered by TVSS is “the usual cutaneous information (pressure, tickle, wetness, etc.)” (p. 543). But certainly, pressure, tickles, and wetness are familiar stimuli, which have been experienced before in other contexts. Once again, however, research on latent inhibition shows that prior exposure interferes with new learning. So it might be harder to train someone with TVSS, given that the device delivers pressure, tickles, and wetness—all familiar stimuli. Even though prior exposure interferes with new learning, there are still ways in which SSDs might be programmed to minimize or avoid prior exposure. One idea would be to start by determining which stimuli are novel to SSD users by using ERPs or by measuring pupil dilation. This could be a complicated process, since a token stimulus can fall under multiple types. An mbira note, for instance, might be novel as an mbira note, but familiar as the note C-sharp. In such a case, the goal would be to carefully determine which feature is novel in order to construct the most novel stimulus. After determining such stimuli, SSDs can be programmed such that the devices deliver them. Since EyeMusic represents colors with notes from familiar instruments—for instance, a piano and a marimba— these could be switched to notes from less familiar instruments (the notes of an mbira might be a good substitute for piano notes, for example). Perhaps one reason a piano and a marimba were chosen in the first place is because both instruments were familiar to the researchers who selected them. 
But novelty, not familiarity,
facilitates perceptual learning. To speed up the training, more obscure instruments are a better choice.10 Consider a second way in which we can apply the principles of attentional training to sensory substitution. The first way involved programming SSDs to better accord with how we attend. However, attention itself might be trained to better facilitate SSD learning. To understand how, consider a study on perceptual learning in which participants were trained to classify different kinds of fish locomotion (Jarodzka et al., 2013). Three groups were shown the same four videos, each with a single fish swimming a different locomotion pattern. Each video was accompanied by an audio instruction from a professor of marine zoology explaining the relevant features of the locomotion pattern (Jarodzka et al., 2013, p. 65). When the professor was delivering his instructions, he was also watching each video, and his eye movements were tracked. A control group was shown the four videos with the accompanying audios. A second group was given the four videos with the audios, but in addition, each video had a dot that tracked the spot where the professor’s eye movements were focusing. A third group was given the videos and the audios, along with a spotlight that both highlighted the spot
10. EyeMusic also uses tones that make up a pentatonic scale (Levy-Tzedek et al., 2012b, p. 3). This makes sense, because it means that the tones will sound consonant. However, it is also at odds with efficient perceptual learning. After all, a chord composed from the pentatonic scale will likely be a familiar chord. A scale that is less familiar to most people, such as the Locrian scale, might deliver more efficient perceptual learning than the pentatonic scale. Of course, it is important to strike a balance between user-friendliness and efficient perceptual learning. The pentatonic scale is certainly pleasurable to listen to, and that may be why it was chosen. To balance efficient perceptual learning with user-friendliness, however, SSDs might be programmed with a pleasant-sounding scale that is also unfamiliar to the majority of users. This issue, however, suggests a more general point: SSD design will often need to strike a balance between several different desiderata (including efficient engineering), and some SSDs may be in conflict with efficient perceptual learning.
where the professor’s eye movements were focusing and filtered out the other distracting details. In the testing phase of the experiment, participants were shown four new videos, one for each locomotion pattern, and then were scored for how quickly they fixated on one of the identifying features of the locomotion pattern and for how long they fixated there. Those in the dot group fixated on the identifying features more quickly and focused there longer than the control group. Those in the spotlight group were quicker still, and they fixated even longer. Both the dot and the spotlight group were also better than the controls at interpreting what they saw when responding to multiple-choice questions after viewing videos. The fish locomotion study suggests a novel way to enhance the learning process with sensory substitution devices: attentional cueing might serve a useful function during training. More specifically, cues could shift the weight of attention onto the more task-relevant stimuli delivered by the SSD, and away from task-irrelevant stimuli. Suppose you are training participants with the vOICe or EyeMusic, and the audio is correlated with a complicated pattern on a computer screen. An attention-grabbing auditory cue such as a beep might be placed just before stimuli that need to be highlighted. This would cause the viewer to focus on the subsequent stimulus. Just as the dot drew the attention of participants in the fish locomotion study, the beep could draw the auditory attention of participants when training on SSDs. Importantly, however, the results of the fish locomotion study suggest that an auditory spotlight would be even better. Suppose again that you are training participants with the vOICe or EyeMusic and that the audio is correlated with a complicated pattern on a computer screen. White noise could be played over distracting details to dampen them, but not over important stimuli. Just as the spotlight highlighted
important features of fish locomotion, the lack of white noise could highlight important features in a sensory substitution training session. In short, the research by Jarodzka and colleagues on perceptual learning suggests some novel attentional training techniques that can be harnessed to teach someone how to use an SSD. These “sensory substitution training wheels” might give users a better training experience with the device and make them more proficient with SSDs once the training wheels are removed.
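The vOICe encoding recalled earlier in this section (vertical position as pitch, horizontal position as left/right headphone, brightness as volume) can be sketched in a few lines. The frequency range and the function name below are placeholders of my own, not the device’s actual parameters.

```python
# Minimal sketch of a vOICe-style image-to-sound mapping: higher rows map to
# higher pitches, left/right columns to the left/right headphone, brightness
# to volume. The 500-5000 Hz range is an arbitrary placeholder, not the
# device's actual specification.

def pixel_to_sound(row, col, brightness, n_rows, n_cols,
                   f_low=500.0, f_high=5000.0):
    """Map one pixel to (frequency_hz, pan, volume).

    Row 0 is the top of the camera's field of view; pan runs from -1.0
    (left headphone) to +1.0 (right headphone); brightness and volume
    are both in [0, 1].
    """
    height = 1.0 - row / (n_rows - 1)              # 1.0 = top of the image
    frequency = f_low + (f_high - f_low) * height  # higher object -> higher pitch
    pan = 2.0 * col / (n_cols - 1) - 1.0           # left vs. right headphone
    volume = brightness                            # brighter -> louder
    return frequency, pan, volume

# A bright object at the top-left of a 64 x 64 camera image:
print(pixel_to_sound(row=0, col=0, brightness=0.9, n_rows=64, n_cols=64))
# -> (5000.0, -1.0, 0.9)
```

The sketch makes vivid why the proximal stimuli are familiar in kind: whatever the image, the output is always a pitched, panned, volume-scaled sound of the sort any hearing user has encountered before.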
4.5 PERCEPTUAL LEARNING AND PERCEPTUAL HACKING

The 2013 study by Jarodzka and colleagues used an instructional video illustrating where an expert perceiver looked at fish locomotion patterns in order to make novices quicker and more accurate at recognizing those locomotion patterns. Such a video aims at “perceptual hacking,” the tweaking of a person’s perceptual system with the intention of getting that person to better perform the tasks he or she needs to do (Goldstone, Landy, & Brunel, 2011, p. 6). Perceptual hacking sometimes falls under the category of perceptual learning, as it seems to in the study by Jarodzka and colleagues (2013). There were no follow-up studies to see how long the perceptual changes lasted in that study. Still, the novices did develop changes in their perceptions as the result of practice with fish locomotion patterns that lasted at least the duration of the study (about 45 minutes), and likely lasted longer than that. More testing is needed, but the study seems likely to have yielded somewhat long-term perceptual changes. However, perceptual hacking need not involve long-term perceptual changes at all, and so some
cases of perceptual hacking do not count as perceptual learning. Goldstone, Landy, and Brunel (2011), for instance, describe the simple perceptual hack of cupping one’s hands behind one’s ears in order to hear better (p. 6). This is a way of tweaking one’s perceptual system with the intention of better performing the task one needs to do. Yet it does not result in a long-term perceptual change. So it is not a case of perceptual learning.11 Recall Fred Dretske’s claim from chapter 2: “Through learning, I can change what I believe when I see k, but I can’t much change the way k looks (phenomenally) to me, the kind of visual experience k produces in me” (1995, p. 15). Perceptual hacking is a way in which one can change the way k looks (sounds, tastes, smells, or feels) to one. Imagine a young person who is learning a new language and cannot hear the difference between distinct phonemes in two different words that sound the same to her. For a little while every day, she plays those two words one after the other through her headphones, until she is finally able to hear the difference between them. She has tweaked her perceptual system with the intention of better performing the task she needed to do. In doing so, she has changed the way something sounded (phenomenally) to her. As the previous case shows, perceptual hacking is not just limited to squinting our eyes to see better or cupping one of our hands behind an ear to hear better. These are ways in which we alter our perceptual machinery (our ears, eyes, hands, etc.) in order to change the way properties, objects, or events look (smell, feel, etc.) to us. However, perceptual hacking goes well beyond just altering our perceptual machinery. Take the study by Jarodzka and colleagues

11. Another example of perceptual hacking is the case of the Moken children, sea nomads of Southeast Asia, who can constrict the size of their pupils, giving them better visual acuity as they dive underwater. Gislén et al.
(2006) found that European children can be trained to constrict the size of their pupils as well.
(2013) as another example. That case shows that we can deliberately change the weighting of our attention by devising training regimens that train us to attend like experts, such as instructional videos. Training regimens for perceptual learning are not just limited to instructional videos of the kind used by Jarodzka et al. (2013). Recently, perceptual hacking technology has become a fast-growing industry. Researchers have developed instruments that train baseball players to track baseballs better (Clark et al., 2012; Deveau, Over, & Seitz, 2014). There is also software that enables students to do algebra more efficiently and accurately (Goldstone et al., 2016); medical residents, to better detect heart problems (Romito et al., 2016); and soldiers, to better detect improvised explosive devices (Yang, McCauley, & Masotti, 2014). The consensus among these researchers is that we can intervene in our perceptual systems in order to better perform the tasks we need to do. Perceptual hacking can be a special way in which we offload tasks onto our perceptual systems, freeing up cognitive resources in the process. Specifically, some instances of perceptual hacking are deliberate attempts to offload a task onto our perceptual system. As deliberate offloading attempts, these cases of perceptual hacking contrast with many other instances of perceptual learning. For instance, when one’s way of attending to pine trees shifts over time as a result of simple exposure to many different instances of pine trees, it is not a deliberate attempt to shift one’s attention. Contrast such cases with the training described in Jarodzka et al. (2013), where the goal is to deliberately change the weight of a novice’s attention such that he or she would attend to fish locomotion patterns as an expert would. One potential use of perceptual hacking is to better train users of SSDs. For instance, I suggested in section 4.4 that during
training, when a user is learning to recognize a complicated pattern on a screen, trainers might play white noise over distracting details to dampen them, but not over important details. This would be a way of tweaking the users’ perceptions so as to allow them to better perform the recognition tasks that they need to do.
4.6 AN EMPIRICAL TEST FOR DETERMINING THE NATURE OF SSD EXPERIENCE

The discussion in this chapter yields a philosophical upshot. A lot of the philosophical discussion of SSDs has revolved around the nature of one’s perceptual experience once one has fully integrated an SSD. In other words, after perceptual learning takes place, how should we classify the resultant perceptual experiences? Should that perceptual experience be classified in the substituting modality (as auditory or tactile), in the substituted modality (as visual), or in a new modality? Enactivists such as Noë, Hurley, and O’Regan have argued that the perceptual experience should be classified as visual (see Hurley & Noë, 2003; Noë, 2004; O’Regan, 2011). Block has argued that it should be classified as auditory or tactile (2003). And Kiverstein, Farina, and Clark (2015) have argued that the perceptual experience should be classified as part of a new modality. If perceptual experience with an SSD is in the substituted modality (vision) or in a new hybrid modality, then think about the proximal stimuli involved in that experience. Assuming that the SSD user is blind from birth, if the perceptual experience is visual, then there is a strong sense in which the stimuli are novel to the user (they are visual stimuli, which the user experiences for the first time). Likewise, if the perceptual experience is in a
new hybrid modality, there is also a strong sense in which the stimuli are novel to the user (they are hybrid stimuli, which the user experiences for the first time). But remember that there are ways to test the degree of stimulus novelty to a study participant. One option is to examine brain responses. Novel stimuli activate networks in the brain that have a signature and are detectable. Researchers detect them by using electrophysiological recordings of scalp and intracranial event-related potentials (ERPs) (see Knight, 1996). Another option, for cortically blind participants at least, is to examine the pupil response, since the pupil dilates more for more-novel stimuli (Naber et al., 2013), and there is evidence that this occurs even for nonvisual perceptual stimuli (Naber et al. 2013, p. 2). Using ERPs or measuring pupil dilation can help to determine whether the stimuli are novel to the user (either as visual or hybrid stimuli, which the user experiences for the first time). Finding this would provide evidence that the perceptual experience with an SSD is best classified either in the substituted modality (as visual) or in a new hybrid modality. If, however, the stimuli are familiar, then we have evidence that the perceptual experience remains in the substituting modality (as auditory or tactile). Successfully carrying out the empirical test I am suggesting will require some care. Pupils might dilate or ERP novelty signatures might occur just because of other circumstances happening around the same time. One circumstance under which they might occur is at the start of distal attribution, when proximal sensations first get projected onto their corresponding distal objects.12 To carry out the empirical test that I am suggesting, it will be important to ensure
12. Thanks to Robert Briscoe for raising this issue.
that any pupil dilations or ERP novelty signatures are linked to visual or hybrid stimuli, not just to distal attribution.
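The pupil-based screening step in the test I am proposing can be sketched as follows. The 10% criterion, the function name, and the example measurements are invented placeholders for illustration, not values from Naber et al. (2013).

```python
# Hypothetical sketch of the novelty-screening step: label a stimulus
# "novel" for a participant when the pupil response to it exceeds the
# participant's baseline by some criterion. The 10% criterion and the
# measurements below are invented placeholders, not values from Naber
# et al. (2013).

def flag_novel_stimuli(baseline_mm, responses_mm, criterion=0.10):
    """Return labels of stimuli whose mean pupil diameter exceeds baseline."""
    threshold = baseline_mm * (1.0 + criterion)
    return [label for label, diameter in responses_mm.items()
            if diameter > threshold]

responses = {"piano note": 3.1, "marimba note": 3.2, "mbira note": 3.9}
print(flag_novel_stimuli(baseline_mm=3.0, responses_mm=responses))
# -> ['mbira note']
```

In an actual experiment the criterion would have to be calibrated per participant, and, as noted above, responses timed to coincide with the onset of distal attribution would need to be excluded before drawing any conclusion about modality.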
4.7 CONCLUSION

Recall that the goal of this book is both to introduce readers to the topic of perceptual learning and to apply the three mechanisms of perceptual learning to different domains in the philosophy of perception in order to better understand those domains. This chapter and chapter 3 have focused on attention and learning. The goal was to apply the perceptual learning mechanism of attentional weighting so as to better understand (a) the contents of perception and (b) sensory substitution. Next, we move on to a second mechanism of perceptual learning: unitization. Remember that in cases of unitization, as a result of practice or experience, what previously looked to a subject (phenomenally) like two or more objects, properties, or events, later looks to her like distinct parts of a single complex object, property, or event. In the next chapter, I use the concept of unitization to better understand cases of multisensory perception, where we perceive objects, properties, or events with more than one sense modality at a time.
Chapter 5
“Chunking” the World through Multisensory Perception
5.1 INTRODUCTION

Suppose that you are at a live jazz show. The drummer begins a solo. You see the cymbal jolt and you hear the clang. This is a case where your perception of an object or event involves more than one sense modality (vision and audition), and as such it is a case of multisensory perception.1 My claim in this chapter is that multisensory perception is learned. Why does it matter whether multisensory perception is learned? One reason has to do with the implications the issue has for a long-standing philosophical problem. Molyneux’s question asks whether a man born blind, who can distinguish a cube and sphere by touch, could distinguish those shapes by sight upon having his sight restored. If multisensory perception is learned, then we have a straightforward no answer to Molyneux’s question. You see the cube for the first time. No learning has taken place between sight

1. See Matthen (2017) for an argument that perception is typically multisensory. See Fulkerson (2014) and Connolly (2014a) for the connection between multisensory perception and the contents of perception, a topic that relates this chapter to chapters 3 and 6.
and touch. So, no, you don’t recognize which is the cube and which is the sphere. How we answer Molyneux’s question will, in turn, have ramifications for debates between nativists or rationalists, on the one hand, and empiricists on the other hand. On Molyneux’s question, nativists have traditionally held that the association between the felt cube and the seen cube is innate (see Leibniz, 1765), while empiricists have held that it is learned (see Locke, 1690). If the association between the felt cube and the seen cube is learned, therefore yielding a no answer to Molyneux’s question, then this gives us an empiricist answer to Molyneux’s question rather than a nativist one.2 A second reason it matters whether multisensory perception is learned is that, as Casey O’Callaghan (2017b) has pointed out, one of the most important discoveries in the cognitive science of perception in the past two decades is that the senses involve extensive interaction and coordination (p. 156). We want to understand how this works. Depending on whether multisensory perception is learned or not, we will have a different account of an important recent discovery in the cognitive science of perception. A third reason the debate is important is that a view that makes multisensory perception a flexible, learned process fits more naturally with the emerging view that perception is a more active process

2. Recent experimental evidence lends support to the claim that the answer to Molyneux’s question is no. A study conducted by Held and colleagues (2011) tested whether participants who had just undergone cataract-removal surgery for sight restoration would be able to identify previously felt Legos by sight. In the study, participants first were given one Lego to touch. Next, they were visually presented with two distinctly shaped Legos and were asked which of the two they had previously been touching. The result was that participants performed at near-chance levels in answering this question.
Held and colleagues interpret this result to mean that the answer to Molyneux’s question is likely no, since participants born blind who could distinguish between shapes by touch could not distinguish those shapes by sight upon having their sight restored (however, for a debate about the experimental design in Held et al., 2011, see Schwenkler, 2012, 2013; Connolly, 2013; Cheng, 2015; Clarke, 2016).
“Chunking” by Way of Multisensory Perception
than it has typically been taken to be. That is to say, it fits with a view of perception in which the perceiver works to construct the world through learning and exploration instead of just passively receiving inputs that get transformed into a representation of the world. Suppose again that you are at a live jazz show. The drummer begins a solo. You see the cymbal jolt and you hear the clang. But in addition to seeing the cymbal jolt and hearing the clang, you are also aware that the jolt and the clang are part of the same event. O’Callaghan (2014a) calls this awareness “intermodal feature binding awareness.” It is intermodal, meaning that it occurs between two or more sense modalities. It is feature binding in that the features are perceived as jointly bound to the same object or event. And it is awareness, because you are conscious of the features as bound to the object or event in this way. I agree that we can have awareness that the jolt and the clang are part of the same event. I will argue that we develop a single learned perceptual unit for the cymbal’s jolt and clang. This view is in contrast to the standard view, which is that there is an automatic feature binding mechanism that binds features like the jolt and the clang together. My view, by contrast, is that when you experience the jolt and the clang as part of the same event, this is the result of an associative learning process. More generally, my claim is that multisensory cases involve learned associations, and I will outline the specific learning process in perception whereby we come to “chunk” the world into multisensory units. Note that it is one thing to say that features x and y are associated, but quite another to give a detailed account (drawing on an established perceptual learning process) of how exactly that association happens. 
A central contribution of the chapter is this: I explain how exactly multisensory associations occur by appealing to the mechanism of unitization, which no one else has discussed in the multisensory perception literature to date.
My view is in contrast to the standard view, on which there is a feature binding mechanism that binds certain features together automatically. When I say a feature binding mechanism is “automatic,” however, I mean that it is automatic once certain underlying conditions are met. For instance, as O’Callaghan (2008) points out, there are certain “unity assumptions” for cases like the cymbal case; in particular, “temporal coincidence and spatial proximity are part of what regulates which auditory and visual features belong together” (p. 326). When I say that feature binding is “automatic” on the binding view, I mean that it is automatic once the features are roughly temporally coincident3 and spatially proximate. Secondly, there is evidence that under certain conditions, top-down attention is necessary in order for multisensory stimuli to be coupled (Talsma et al., 2010, pp. 402–403). Again, when I say feature binding is “automatic” on the binding view, I mean that it is automatic after certain attentional conditions are met. Thirdly, as Stevenson and colleagues (2014) put it, “The ability to perceptually bind sensory information is notably impaired in a number of clinical populations, including those with autism spectrum disorders (ASD)” (p. 379). When I say that feature binding is “automatic” on the binding view, I mean that it is automatic in populations in which the ability for multisensory integration is not impaired. It can be difficult to tease apart the difference between an account of multisensory perception based on intermodal feature binding (the standard view) and an account based on associative learning (my view). For now, the key question to ask is how features, such as a jolt and a clang, come to be coupled. Specifically, did the
3. Known as the “temporal window of integration,” the temporal window in which multisensory stimuli will likely be grouped together by the adult perceptual system is approximately 150 milliseconds (Donohue, Green, & Woldorff, 2015, p. 2).
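The temporal unity assumption in the footnote can be illustrated with a minimal check: treat an auditory and a visual onset as candidates for grouping only when they fall within the roughly 150-millisecond window. This simple threshold test is an illustration of the constraint, not a model of the integration process itself.

```python
# Minimal illustration of the "temporal window of integration": an auditory
# and a visual onset are candidates for grouping only if they fall within
# roughly 150 ms of each other (the approximate adult window). A simple
# threshold test, not a model of the integration process itself.

WINDOW_MS = 150

def within_integration_window(visual_onset_ms, auditory_onset_ms,
                              window_ms=WINDOW_MS):
    """True if the two onsets are close enough to be grouped together."""
    return abs(visual_onset_ms - auditory_onset_ms) <= window_ms

# The cymbal's jolt is seen at t = 1000 ms; the clang is heard 100 ms later.
print(within_integration_window(1000, 1100))  # -> True
print(within_integration_window(1000, 1300))  # -> False
```

On both views discussed in this chapter, a check like this only sets a precondition: it says which feature pairs could be coupled, not whether the coupling itself happens by automatic binding or by prior associative learning.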
coupling happen just prior to your current experience, or did it happen further back in past experience? If you experience a jolt and a clang as part of the same event, for instance, is it due to those features getting coupled in past experience (see Figure 5.1), or did the coupling of the jolt and the clang occur just prior to your current experience of them (see Figure 5.2)? The differences between these two views are further summarized in Table 5.1.

If feature binding awareness does not involve feature binding—which is what I will argue—then this flies in the face of how many working on multisensory perception have been thinking about these cases. Consider four such representative passages highlighted by O’Callaghan (2014a, pp. 80–81; all italics added by O’Callaghan):

When presented with two stimuli, one auditory and the other visual, an observer can perceive them either as referring to the same unitary audiovisual event or as referring to two separate unimodal events . . . . There appear to be specific mechanisms in the human perceptual system involved in the binding of spatially and temporally aligned sensory stimuli. (Vatakis & Spence, 2007, pp. 744, 754)

As an example of such privileged binding, we will examine the relation between visible impacts and percussive sounds, which allows for a particularly powerful form of binding that produces audio-visual objects. (Kubovy & Schutz, 2010, p. 42)

In a natural habitat information is acquired continuously and simultaneously through the different sensory systems. As some of these inputs have the same distal source (such as the sight of a fire, but also the smell of smoke and the sensation of heat) it is reasonable to suppose that the organism should be able to
[Figure 5.1. Multisensory Perception as an Associative Learning Process. Two information channels (jolt information and clang information) interact across past and present conscious experiences; the learned association yields unitized jolt-clang information.]

[Figure 5.2. Multisensory Perception as an Automatic Feature Binding Process. Jolt information and clang information feed a subpersonal binding mechanism; the resulting jolt-clang information gives rise to intermodal feature binding awareness.]
bundle or bind information across sensory modalities and not only just within sensory modalities. For one such area where intermodal binding (IB) seems important, that of concurrently seeing and hearing affect, behavioural studies have shown that indeed intermodal binding takes place during perception. (Pourtois et al., 2000, p. 1329)

[T]here is undeniable evidence that the visual and auditory aspects of speech, when available, contribute to an integrated perception of spoken language. . . . The binding of AV speech streams seems to be, in fact, so strong that we are less sensitive to AV asynchrony when perceiving speech than when perceiving other stimuli. (Navarra et al., 2012, p. 447)4
4. O’Callaghan adds to this list: Bushara et al. (2003); Bertelson and de Gelder (2004); Spence and Driver (2004); Spence (2007); and Stein (2012). See also Deroy, Chen, and Spence (2014); Spence and Bayne (2015); and Deroy and Spence (2016).

The Scope of Perceptual Learning

Table 5.1 Multisensory Perception: Associative Learning versus Intermodal Feature Binding

                                           Associative Learning    Intermodal Feature Binding
When does the coupling happen?             In past experience      Just prior to present experience
Can the theory accommodate intermodal
feature binding awareness?                 Yes (see sec. 5.4)      Yes
How does the theory answer
Molyneux’s question?                       Definitive no           Either yes or no*

* Note: If intermodal feature binding awareness is the result of intermodal feature binding, then the answer to Molyneux’s question is still open: it may be yes or no. By design, when the newly sighted man tries to distinguish the cube from the sphere by sight, he is not touching the shapes at the same time. To do so would spoil the whole point of the task, since touching the shapes would ensure that the man knew which were which. Since the man is only seeing the objects but not touching them, however, this means that intermodal feature binding is not activated by the test. The task is a recognitional task. And it remains an open question whether the man recognizes by sight which is the cube and which is the sphere.

The traditional view is that multisensory perception at the conscious level is the result of intermodal feature binding at the subpersonal level in all the ways just mentioned, whether it is with spatially and temporally aligned stimuli, audio-visual objects, facial expression and tone of voice, or audio-visual speech streams. I will argue that this view is mistaken.

My plan for the rest of the chapter is as follows. In section 5.2, I briefly explain O’Callaghan’s notion of intermodal feature binding awareness—an account that details what is happening at the conscious level in cases that many psychologists have taken to involve intermodal feature binding at the subpersonal level. In section 5.3, I offer my alternative to intermodal feature binding: unitization. In section 5.4, I apply unitization to multisensory cases and show how the phenomenon is consistent with O’Callaghan’s main argument for intermodal feature binding awareness. In section 5.5, I reply to some objections. Finally, in section 5.6, I illustrate how exactly unitization supports the Offloading View of perceptual learning.
5.2 THE KIND OF CONSCIOUS AWARENESS WE HAVE IN MULTISENSORY PERCEPTION

When you are listening to the live jazz drum solo, see the cymbal jolt, hear the clang, and are also aware that the jolt and the clang are part of the same event, this is a case of intermodal feature binding awareness. More generally, intermodal feature binding awareness occurs when you consciously perceive multiple features from more than one sense modality jointly to belong to the same object or event.

Why is intermodal feature binding awareness of philosophical significance? One reason is that it propels an argument, made by O’Callaghan (2014b), that not all perceptual experience is modality specific—that is to say, that there are cases of multisensory perception that cannot be broken down into just seeing, hearing, touching, tasting, and smelling happening at the same time. As he puts it, perceptual awareness is not just “minimally multimodal.” It is not exhausted by perceptual awareness in each of the sense modalities happening at the same time (see also O’Callaghan, 2017a, chap. 8).

Why think that there is intermodal feature binding awareness in the first place? O’Callaghan’s argument for intermodal feature binding awareness runs as follows: Consider the difference between the following cases one and two. In case one, when the drummer
begins a solo, you see the cymbal jolt and hear the clang, and you are aware that the jolt and the clang are part of the same event. In case two, you see the jolt and hear the clang, but you are not aware that the jolt and the clang are part of the same event. Perhaps you have never seen a cymbal before and are unaware of the sound that it makes. According to O’Callaghan, there may be a phenomenal difference between case one and case two.5 This difference is explicable in terms of intermodal feature binding awareness: case one involves such an awareness, while case two does not. O’Callaghan (2014b) generalizes the point: “[A] perceptual experience as of something’s being F and G may differ in phenomenal character from an otherwise equivalent perceptual experience as of something F and something G, where F and G are features perceptually experienced through different modalities” (p. 140). This is just to say that in the cymbal example and others like it, case one differs from case two in terms of its phenomenology. O’Callaghan explains this difference by holding that the former, but not the latter, case involves intermodal feature binding awareness.

Everything said so far is about feature binding awareness. This is something that happens at the conscious level. However, psychologists often talk about feature binding, and there they are referring to a subpersonal process. But what is the connection between feature binding awareness and the feature binding process? The assumption in the empirical literature is that cases like the cymbal case depend on feature binding at the subpersonal level—an assumption that I will argue is mistaken. Roughly and briefly, on my view, cases like the cymbal case are best explained through the process of unitization, whereby features that were once detected separately (such as the jolt and the clang) are later detected as a single unit. For example, while someone who has never seen a cymbal before might plausibly experience the jolt and the clang not as part of the same event, others unitize those features into the same event, due to learning.

5. Note that this is another instance of a phenomenal contrast argument, in addition to the arguments of chapters 3 and 6. This argument is focused on phenomenology rather than content.

O’Callaghan’s (2014a) own argument is about feature binding awareness, which he describes as likely related to—but not the same as—feature binding itself. O’Callaghan explains the connection: “Feature binding awareness presumably depends upon feature binding processes. I say ‘presumably’ because a feature binding process . . . may require that features are detected or analyzed separately by subpersonal perceptual mechanisms” (p. 75). At the same time, O’Callaghan distances himself from feature binding processes. He allows that “it is possible that what I have characterized as feature binding awareness could occur without such a feature binding process” (p. 75). So, on O’Callaghan’s view, the existence of feature binding awareness does not imply a feature binding process. O’Callaghan’s account of feature binding awareness is consistent with my view, since I deny a feature binding process, and his view does not imply such a process. But one place where O’Callaghan and I differ is on the term feature binding awareness. If there is an associative process involved rather than a feature binding mechanism, and the result of the associative process manifests itself at the conscious level (as I will argue), it is hard to see why we should call the conscious upshot “feature binding awareness” instead of something else.6

6. I thank Casey O’Callaghan and Diana Raffman for clarifying the relationship between O’Callaghan’s position and my own.
At the same time, O’Callaghan and I are united in our departure from Spence and Bayne (2015), who say the following: But are features belonging to different modalities bound together in the form of MPOs [multimodal perceptual objects]? . . . [W]e think it is debatable whether the “unity of the event” really is internal to one’s experience in these cases, or whether it involves a certain amount of post-perceptual processing (or inference). In other words, it seems to us to be an open question whether, in these situations, one’s experience is of a MPO or whether instead it is structured in terms of multiple instances of unimodal perceptual objects. (Spence & Bayne, 2015, pp. 117, 119, quoted in O’Callaghan, 2014a, p. 77)
On Spence and Bayne’s account, it is debatable whether intermodal feature binding awareness occurs at all. So in the cymbal case, where O’Callaghan and I think that you can see the cymbal jolt and hear the clang and be aware that the jolt and the clang are part of the same event, Spence and Bayne think that this is debatable. One alternative, they might say, is that you see the jolt of the cymbal, hear the clang, and infer that they are both associated with the same object. And on their view, it is an open question whether such an alternative is correct.

O’Callaghan’s account of intermodal feature binding awareness is restricted to the conscious level. But we can ask what the subpersonal processes are that produce it. Psychologists have assumed that intermodal feature binding is responsible for it, but I will now explore an alternative to intermodal feature binding—unitization.
5.3 UNITIZATION AS A PERCEPTUAL LEARNING MECHANISM

Unitization, as Goldstone (1998) puts it, “involves the construction of single functional units that can be triggered when a complex configuration arises. Via unitization, a task that originally required detection of several parts can be accomplished by detecting a single unit. . . . [U]nitization integrates parts into single wholes” (p. 602).7 For example, consider someone who is developing an expertise in wine tasting and is learning to detect Beaujolais. Detecting it might at first involve detecting several features, such as the sweetness, tartness, and texture. But detecting the Beaujolais is later accomplished just by detecting it as a single unit. Since the Beaujolais gets unitized by your perceptual system, this allows you to quickly and accurately recognize it when you taste it.

According to Goldstone and Byrge (2015), unitization in perception is akin to “chunking” in memory (p. 821). As they point out, normally, we are only able to commit 7 plus or minus 2 items into short-term memory. Yet we are easily able to do much better with the following string of 18 letters by chunking them:

M O N T U E W E D J A N F E B M A R

We can chunk the first nine letters as abbreviations for days of the week and the next nine as abbreviations for the first three months on the calendar. Chunking is the building of new units that help to enable memory. Similarly, in perception, unitization allows us to encode complex information that we might be unable to encode without unitization. Suppose, for instance, that you are drinking an extremely complex Beaujolais that you have never tasted before. Your perceptual system might unitize that type of wine, allowing you to recognize it as a Beaujolais, despite the fact that it is extremely complex.

7. I will be drawing very closely on Goldstone (1998, pp. 602–604) and Goldstone and Byrge (2015, pp. 821–824) in explaining unitization.

A whole host of objects have been shown to be processed first as distinct parts, and later as a unit. Goldstone and Byrge (2015) offer the following diverse list: “birds, words, grids of lines, random wire structures, fingerprints, artificial blobs, and 3-D creatures made from simple geometric components” (p. 823). Many of these cases involve parts being treated as wholes after unitization, as when parts of a 3-D creature get treated as a whole unit (as in the case of the Greebles mentioned in chapter 1). However, there are also cases in which attributes or properties become treated as units. For instance, a study by Shiffrin and Lightfoot (1997) found that participants were able to unitize the angular properties (i.e., horizontality, verticality, or diagonality) of a set of line segments.8

8. This study involved sets of three discrete line segments; each of the segments was angled either horizontally, vertically, or diagonally. Participants were given a target set. Say, for instance, that the target set had two horizontal and one vertical line segments. Given that target, the participants were asked to pick out matching targets (that is, all and only sets involving two horizontal and one vertical line segment) and to ignore distractors (such as a set of two vertical and one horizontal line segments or of three horizontal line segments, among others). Through training, participants became very quick and accurate at this task, indicating that they had unitized the angular attributes of each of the three line segments.

As objects become unitized, the whole becomes easier to process perceptually than the part. This is the case in face perception, for instance (see Goldstone, 1998, p. 603; and Goldstone & Byrge, 2015, p. 822). One interesting feature of face perception is that inverting a face disrupts the unitization process. This means that faces are harder to recognize when they are presented upside-down than when they are presented right-side up (Diamond & Carey, 1986). In fact, this inversion effect seems to be a mark of unitization more generally. For instance, it occurs not just for faces, but for Greebles as well (Ashworth et al., 2008). One interesting variation of this is that if you distort the features of a face, the distortions are quite apparent when the face is right-side up, but much less apparent when the face is upside-down. This effect, called the Thatcher effect, seems to show something important about the phenomenology of a unitized object. Specifically, what it is like to experience the upside-down distorted face is not simply what it is like to experience the right-side up distorted face plus inversion. Rather, there is something that it is like to experience, say, a distorted nose and lips in a unitized face, and that is different from what it is like to experience a distorted nose and lips in a nonunitized face.

At this point, one might wonder how exactly unitization differs from binding. This issue is especially pertinent because in articulating what binding is, Anne Treisman (1996) discusses the case of “part binding,” which seems at first glance to be very similar to unitization. In part binding, Treisman says, “the parts of an object must be segregated from the background, and bound together, sometimes across discontinuities resulting from partial occlusion” (p. 171). Treisman (1999) offers an example that helps to illuminate the challenge that part binding needs to meet: “[D]ifferent parts, like the arms and legs of a child or the two colors of her shirt and pants, occupy different locations and, if she is partly occluded, may not even be spatially linked in the retinal image” (p. 105). Part binding unites parts that occupy different locations, sometimes locations that are separated due to partial occlusion.

How should we distinguish between part binding and unitization? A good point of illustration is the case of the Greebles. When a participant first sees a Greeble, prior to any training with Greebles, part binding is occurring.
Just as part binding unites (in our perception) the arms of a child with her body, even though
they occupy different locations, so too does part binding unite the appendages and the body of the Greebles (in our perception). This means that part binding precedes unitization in the case of perceiving Greebles, because it is only after practice with the Greebles that unitization occurs. Specifically, after practice with Greebles, participants begin to see them not as collections of features but as single units. However, part binding is required just to see a Greeble in order to practice with it. So, while unitization and part binding are similar, they do not do the same thing.
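The chunking analogy from earlier in this section — compressing the 18-letter string into six familiar units — can be sketched in a few lines of code. This is purely illustrative: the “learned vocabulary” and the function are my own stand-ins, not anything from the perceptual learning literature.

```python
# Illustrative sketch of chunking: a learned vocabulary of familiar
# three-letter units turns an 18-letter string into six chunks,
# comfortably inside the 7 plus-or-minus 2 span of short-term memory.

LEARNED_CHUNKS = {"MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN",
                  "JAN", "FEB", "MAR", "APR", "MAY", "JUN"}

def chunk(letters, size=3):
    """Group a letter string into learned units of a fixed size."""
    units = [letters[i:i + size] for i in range(0, len(letters), size)]
    if not all(u in LEARNED_CHUNKS for u in units):
        raise ValueError("string contains unfamiliar units; no chunking possible")
    return units

letters = "MONTUEWEDJANFEBMAR"
print(len(letters))         # 18 raw items to remember without chunking
print(chunk(letters))       # ['MON', 'TUE', 'WED', 'JAN', 'FEB', 'MAR']
print(len(chunk(letters)))  # 6 units to remember after chunking
```

The point of the sketch is the same as Goldstone and Byrge’s: the raw items are unchanged, but prior learning supplies larger functional units, so fewer units need to be handled. On my view, unitization plays an analogous role in perception.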
5.4 APPLYING UNITIZATION TO MULTISENSORY CASES

My claim is that we unitize things, sometimes unimodally, as in the case of faces, birds, grids of lines, random wire structures, artificial blobs, and fingerprints. But sometimes unitization occurs multimodally as well. As Goldstone writes, “Neural mechanisms for developing configural units with experience are located in the superior colliculus and inferior temporal regions. Cells in the superior colliculus of several species receive inputs from many sensory modalities (e.g., visual, auditory, and somatosensory), and differences in their activities reflect learned associations across these modalities (Stein & Wallace, 1996)” (Goldstone, 1998).

Consider the difference between cases one and two of the cymbal example mentioned earlier. In case one, you see the jolt of the cymbal, hear the clang, and are aware that the jolt and the clang are part of the same event. In case two, you see the jolt and hear the clang, but are not aware that they are part of the same event. My claim is that in case one, the jolt and the clang are unitized in the same event, whereas in case two they are not. Interestingly enough,
one reason why case two might occur in the first place is if you have never seen a cymbal before, and so you have not built the association between what a cymbal looks like when it has been struck and what it sounds like.

O’Callaghan (2014a) argues more generally for intermodal feature binding awareness by distinguishing between the following:

(1) Perceiving a thing’s being both F and G (where F and G are features that are perceived through different sense modalities).
(2) Perceiving a thing’s being F and a thing’s being G. (pp. 87–88)

His idea is that (1) involves intermodal feature binding awareness, while (2) does not. But what I am saying is that the difference between (1) and (2) does not entail that intermodal feature binding has occurred (the conclusion for which psychologists have argued). We can distinguish between (1) and (2) phenomenally without appealing to intermodal feature binding. If (1) involves unitization, while (2) does not, then the phenomenal difference between them is that in (1), F and G are unitized in the thing, while in (2), they are not unitized in the thing.

When Vatakis and Spence (2007), Kubovy and Schutz (2010), Pourtois et al. (2000), Navarra et al. (2012), and others assume that intermodal feature binding occurs, their background reasoning is perhaps something like the following: We know that intramodal binding occurs—that is, that features detected by a single sense modality get bound together, as when the shape and color of a cup get bound to the cup in our perception. We know that multisensory perception occurs. So we can take binding and extrapolate from the intramodal case to apply it to multisensory perception. The
overall argument that I am making is structurally similar. We know that unitization occurs, and we know that multisensory perception occurs. So I am taking unitization and extrapolating from the unimodal case to apply it to multisensory perception. But how do we know which point is the right starting point? How do we know whether we should start with binding or start with unitization? I now want to turn to a few cases that I think are potentially difficult for the intermodal binding view to handle, but easy for the unitization view. Start by considering a case of illusory lip-syncing—a case where someone appears to be lip-syncing, but is actually singing. Such a case might occur due to a mismatch in association between the audio and the visual. In 2009, for instance, a Scottish singer named Susan Boyle gained worldwide fame from her appearance on the TV show Britain’s Got Talent.9 Her performance was captivating to many people because to them she did not look as if she would have such a beautiful singing voice. They did not associate that sound with that look. And part of the good that came out of her case was that people broke their previous false association. Now imagine that you are in the audience as Susan Boyle steps on stage and sings. Plausibly, this would be a confusing experience. At first, you might not localize the sound at Susan Boyle’s moving lips. In your experience, it might be a case of illusory lip-syncing. You might experience the sound as coming from elsewhere, even though it is actually coming from Susan Boyle. Cases where vocal sounds are incongruous with the visual might be most vivid with pets, and amusing videos have been made
9. For a video of her initial performance, which has been viewed over 200 million times, see Leyland, Davy, Susan Boyle—Britain’s Got Talent 2009 Episode 1 [Video File], April 11, 2009. Retrieved from http://www.youtube.com/watch?v=RxPZh4AnWyk.
documenting the results, showing animals that sound like human beings or like fire engine sirens. Consider one such example. Suppose you are listening to your radio with your dog nearby. A song comes on the radio that you haven’t heard before. You happen to glance over at your dog, who appears to be moving its mouth in sync with the vocal track. Then you realize that what you thought were the vocals are actually coming from your dog.10 By appealing to learned associations, the singing dog case (and others like it) makes sense—the radio’s location and the dog’s sound get unitized. This happens because through experience, your perceptual system associates the sound that the dog makes with the radio. That sound is the kind of sound that would typically come from a radio. When the radio’s location and the dog’s sound get unitized, this is a misperception. The sound came from the dog and not from the radio. However, the misperception is understandable, given the fact that that type of sound typically comes from a radio and not from a dog. We can apply the lesson of this case more generally. Past associations (between, say, types of sounds and types of things) determine the specific multisensory units that we experience. It is unclear what psychologists who advocate intermodal feature binding would say about these sorts of cases. The dog’s mouth movement and the sound have happened at exactly the same time, and from the same spatial location, but fail to be bound. But if binding were an automatic mechanism, wouldn’t intermodal binding just bind the dog’s voice to the dog’s mouth? One option for the defender of binding is to hold that binding need not be automatic, but can be modulated by cognitive factors
10. The following appears to be a case of a dog doing illusory lip-synching: C. K., Illusory Lip-Synching [Video File], July 18, 2014. Retrieved from https://www.youtube.com/watch?v=KECwXu6qz8I.
like whether or not the noise is the sound that a dog can make. For example, O’Callaghan (2014a, p. 86) quotes Vatakis and Spence (2007), who claim that binding need not depend just on “low-level (i.e., stimulus-driven) factors, such as the spatial and temporal co-occurrence of the stimuli,” but can depend on “higher level (i.e., cognitive) factors, such as whether or not the participant assumes that the stimuli should ‘go together’ ” (p. 744). If Vatakis and Spence and O’Callaghan are right, then binding need not be automatic, since it can be modulated by cognitive factors.

Suppose that binding need not be automatic, but can be modulated by cognitive factors. My claim was that a view on which binding is automatic gets cases like the dog case wrong, since the view would predict that the dog’s voice gets bound to the dog’s mouth, which is not what happens. Yet if theorists defending an intermodal binding process can just weaken the automaticity requirement, then it may be that they can accommodate cases like the dog case into their model. More experiments may be needed. However, at this point my view does seem more parsimonious. Given that it is difficult currently to empirically pull apart the associative account from the intermodal binding account, an appeal to the theoretical virtues of each view is warranted. Forgetting about multisensory cases for a moment, this much is true: we need an account of binding for unimodal cases just as we need an associative account (in terms of unitization) to handle certain unisensory cases of perceptual learning. Both accounts would need to be made more theoretically robust in order to handle cases of intermodal feature binding awareness.
However, if an associative account can handle all purported cases of intermodal binding, but an intermodal binding account cannot handle all cases without appealing to a learning mechanism, as Vatakis and Spence do (in order to deal with cases where plausibility plays a role in whether two features are coupled), this means
that the associative account delivers more parsimony to our overall theory of multisensory perception. Since we have to make one of the two dueling accounts more robust to handle the intermodal cases, we should make more robust the one that can handle all of the cases at the intermodal level. Of course, there may be other theoretical virtues to take into account when examining both views, as well as other empirical considerations, but it seems at the very least that parsimony tells in favor of an associative account on this front.
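To make vivid how the two accounts come apart on cases like the singing dog, here is a toy contrast. This is my own caricature, not an implemented model from the literature: the association weights, the congruence flag, and all the names are stipulated for illustration only.

```python
# Toy contrast between the two explanations summarized in Table 5.1,
# applied to the singing dog case. All values are hypothetical.

# Learned associations between sound types and source types
# (built up, on the associative view, through past experience).
ASSOCIATION = {("human_voice", "radio"): 0.9,
               ("human_voice", "dog"): 0.1,
               ("bark", "dog"): 0.9}

def binding_prediction(sound, sources):
    """Automatic binding: couple the sound to any temporally and spatially
    congruent source. The identity of the sound is ignored by design,
    since binding here is purely stimulus-driven."""
    congruent = [s for s in sources if s["congruent_with_sound"]]
    return congruent[0]["kind"] if congruent else None

def association_prediction(sound, sources):
    """Associative learning: couple the sound to the source type it has
    been paired with most often in past experience."""
    return max(sources,
               key=lambda s: ASSOCIATION.get((sound, s["kind"]), 0.0))["kind"]

# The scene: the dog's mouth moves in sync with the sound; the radio does not.
scene = [{"kind": "dog", "congruent_with_sound": True},
         {"kind": "radio", "congruent_with_sound": False}]

print(binding_prediction("human_voice", scene))      # dog: the wrong percept
print(association_prediction("human_voice", scene))  # radio: the reported illusion
```

The sketch is only a caricature, but it isolates the disagreement: an automatic, congruence-driven coupling rule predicts that the voice is heard at the dog’s mouth, whereas a rule driven by learned associations predicts the illusory localization at the radio that the case actually involves.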
5.5 OBJECTIONS AND REPLIES

My claim is that appealing to learned associations (such as the association between the dog’s sound and the radio’s location) makes sense of cases like the singing dog case. But one might object that there are other equally good or better ways of making sense of such cases. One alternative is that the singing dog case is just a straightforward crossmodal illusion, like the ventriloquist effect.11 The idea is that just as in the ventriloquist effect, the auditory location of the sound gets bound to the moving lips, so too in the singing dog case, the sound gets bound to the location of the radio. In both cases, the experience is illusory. Just as the sound is not coming from the ventriloquist dummy in the ventriloquist case, so too is it not coming from the radio in the singing dog case.

The singing dog case is different from straightforward crossmodal illusions like the ventriloquist effect in the following way, however: In the ventriloquist effect, unlike in the singing dog case, both binding and association are viable explanations, at least on the face of it. For the associative explanation of the ventriloquist effect, it could be that we build an association between the sound of a voice and the movement of lips. On the other hand, an explanation just in terms of binding is equally plausible. It could be that we bind sounds with congruent movements together. By contrast, in the singing dog case, only an explanation in terms of association will suffice, by itself, to explain the case. The associative explanation is that we build an association between voice sounds and radios, and so when the dog makes a voice sound, that sound gets unitized with the radio. An explanation just in terms of binding, however, gives the wrong prediction for the singing dog case. If we bind sounds with congruent movements together, then the dog’s sound should be bound to the congruent movement of the dog’s mouth. In short, there are important differences between the singing dog case and straightforward crossmodal illusions like the ventriloquist effect.

Consider a second objection that there is another equally good or better way of making sense of cases like the singing dog case. According to this objection, feature binding can be guided by categorical perception. The idea is that in the singing dog case and cases like it, the categories that you have (of dog voices and radio sounds, for instance) influence what gets bound to what. So there is a story to be told about the selection of features with regard to which features get bound together. And it is natural to suppose that categorical learning might have a role to play in which features get selected and thus bound together. Traditionally, the literature on binding has been very much concerned with sensory primitives like colors and shapes, and there is a question about whether higher-level perceptual features get bound in that same way. According to this objection, we do not need to choose between feature binding and learned associations because they can play a role together.12

11. Thanks to Casey O’Callaghan for raising this possibility.
12. Thanks to Tim Bayne for this objection.
"Chunking" by Way of Multisensory Perception
I find this objection plausible, yet currently unsubstantiated. To the best of my knowledge, there is no empirical evidence demonstrating the claim that categorical perception can guide feature binding. I take it to still be a plausible hypothesis, however, because there is some evidence that learning connections between sensory primitives like colors and shapes can influence the binding process (see Colzato, Raffone, & Hommel, 2006). But as far as I know, this same influence has not been demonstrated for higher-level perceptual features. The objection is right in that it remains a live option that feature binding can be guided by categorical perception. Still, if the goal of the objection is to establish that there is another equally good or better way of making sense of cases like the singing dog case, in the absence of empirical evidence to ground this alternative, the alternative is not a better explanation. There is empirical evidence, due to studies on unitization, to ground the explanation of the singing dog case in terms of learned association. So, barring empirical evidence to ground the explanation of that case in terms of categorical perception guiding feature binding, this explanation is not equal to or better than the explanation in terms of learned association. A third objection is to the idea that unitization can explain multisensory cases. According to this objection, unitization implies that there was something there before to unitize. But in certain cases of multisensory perception, this seems implausible. Take the case of flavor perception. Flavor is a combination of taste, touch, and retronasal (inward-directed) smell (see Smith, 2013). Yet, flavors are always just given to you as single unified perceptions. You are never given just the parts. You do not start by having a retronasal smell experience, taste, and touch, and then unitize those things.13

13. I owe this objection to Barry C. Smith.
I think this objection points to an exception to the argument I am making. Flavor perception is a special case of multisensory perception where a unitization account does not apply. This might seem ad hoc, but at the same time, it is well recognized that flavor perception is a special case of multisensory perception in general. Flavor is special because, as O'Callaghan (2014b) points out, it is a "type of feature whose instances are perceptible only multimodally" (p. 158). That is to say, whereas in the cymbal case one can experience the jolt and the clang either together or separately (if one were to close one's eyes or shut one's ears, for instance), flavor properties are perceptible only through taste, touch, and retronasal smell. Given that, it should not be surprising that flavor receives special treatment. A fourth objection builds on the third but focuses on speech perception rather than flavor perception. According to this objection, there are documented cases of infant speech perception where an infant has a coupling without ever being exposed to either of the coupling's components. For instance, before eleven months, Spanish infants can match /ba/ and /va/ sounds with corresponding images of someone unambiguously saying /ba/ and /va/ (Pons et al., 2009). Spanish itself does not make a distinction between /ba/ and /va/. Even if an infant is not surrounded by English speakers, for instance, the infant before eleven months can still match audio and visual English phonemes. But how can this be through association when the infant herself was not surrounded by English speakers? Why are infants able to match the sounds with the images, and how can an associative account explain it?14 This objection presents a difficult but not insurmountable challenge for the unitization view of multisensory perception. In the

14. I owe this objection to Barry C. Smith and Janet Werker.
study in question (Pons et al., 2009), all infants initially underwent two 21-second trials in which they were presented with silent video clips of a bilingual speaker of Spanish and English, repeatedly producing a /ba/ syllable on one side of the screen and a /va/ syllable on the other side. So, while it is right to say that the Spanish infants had not been surrounded by English speakers, they had been exposed to English speakers. And it remains a possibility that this exposure was sufficient for matching audio and visual English phonemes through association.
5.6 UNITIZATION AND THE OFFLOADING VIEW

I argued that unitization would enable us to perceive a thing's being both F and G (where F and G are features that are perceived through different sense modalities), rather than just perceiving a thing's being F and a thing's being G. But what kind of advantage might it give us to have the former experience rather than just the latter one? The answer rests in the Offloading View of perceptual learning I sketched in chapter 1. Instead of having to see the jolt of the cymbal, hear the clang, and judge that they are both associated with the same object, the unitization process does this efficiently for you. It would take longer to have to see the jolt, hear the clang, and judge that they are part of the same object. Unitization is a way of offloading that task onto our quick perceptual system. We get the same information—that the jolt and the clang are part of the same event—without having to make inferences to get there. This frees up cognition to make other, more sophisticated inferences.
The same story can be told of unitization more generally (even outside cases of multisensory perception). When a face is unitized, for instance, instead of having to identify the features of the face one-by-one (the color of the eyes, hair color, the shape of the nose, etc.) and then infer the identity of the person, your perceptual system processes the face such that you are immediately able to identify it. What would ordinarily be a time-consuming, cognitively taxing task becomes quick due to unitization. To sum up, my account in this chapter sides with O'Callaghan in one respect and against the dominant view in psychology in another respect. With O'Callaghan, I accept that perceptual awareness is not just "minimally multimodal." It is not just exhausted by the perceptual awareness that is happening in each of the sense modalities at the same time. The cymbal case shows this. There is something that it is like to be aware that the jolt of the cymbal and the clang are part of the same event. And this is different from what it is like to just see the jolt and hear the clang. In holding this view, I depart from Spence and Bayne (2015), who find it debatable that it is part of one's experience that the jolt and the clang are part of the same event, rather than part of post-perceptual processing or some kind of inference the subject makes. Contrary to what Spence and Bayne have indicated, multisensory cases are instances of perceptual learning and, as such, are genuinely perceptual. According to the dominant view in psychology (including Vatakis & Spence, 2007; Kubovy & Schutz, 2010; Pourtois et al., 2000; Navarra et al., 2012), multisensory experiences result from an intermodal feature binding process. Against this dominant view, however, I am a skeptic about intermodal feature binding. This is because I think that an associative process, rather than a binding mechanism, best explains multisensory perceptions.
To show this, I outlined a specific associative process in the perceptual learning literature that can explain multisensory perceptions: unitization. I argued, for instance, that unitization best explains what it is like to be aware that the jolt of the cymbal and the clang are part of the same event. The jolt and the clang are unitized in that event.
5.7 CONCLUSION

The goal of this book is to illustrate both the nature and the scope of perceptual learning. With regard to the scope of perceptual learning, the goal is to show both that perceptual learning occurs in several domains of perception and how it occurs in those domains. In this chapter I showed how perceptual learning occurs in multisensory perception, and connected this fact to several issues that philosophers care about, including Molyneux's question. One significant upshot of the discussion in this chapter is that since multisensory perception is learned, the answer to Molyneux's question is no. That is, a man born blind, who can distinguish a cube and a sphere by touch, would not be able to distinguish those shapes by sight upon having his sight restored. I drew on the perceptual learning mechanism of unitization to show how exactly perceptual learning occurs in multisensory perception. In the next two chapters, I turn to a different perceptual learning mechanism: differentiation. In particular, I show how we learn to differentiate the phonological and phonemic properties involved in speech perception (chapter 6) and how our trained color perception enables us to better differentiate objects from their backgrounds (chapter 7).
Chapter 6
Learning to Differentiate Properties: Speech Perception
6.1 INTRODUCTION

Several cases of perceptual learning in the philosophical literature involve language learning, in terms of both written and spoken language. As I discussed in chapters 3 and 4, one example from written language comes from Christopher Peacocke (1992), who writes that there is a difference "between the experience of a perceiver completely unfamiliar with Cyrillic script seeing a sentence in that script and the experience of one who understands a language written in that script" (p. 89). With regard to spoken language, as Casey O'Callaghan (2011) points out, several philosophers have made the claim that after a person learns a spoken language, sounds in that language come to sound different to them.1 Ned Block (1995), for instance, writes, "[T]here is a difference in what it is like to hear sounds in French before and after you have learned the language" (p. 234). It is tempting to think that this difference

1. O'Callaghan cites Block (1995, p. 234); Strawson (2010, pp. 5–6); Tye (2000, p. 61); Siegel (2006, p. 490); Prinz (2006, p. 452); and Bayne (2009, p. 390). See also Pettit (2010, p. 13); Reiland (2015, p. 483); and Brogaard (2018).
is explicable in terms of the fact that, after learning a language, a person hears the meanings of the words, whereas before learning the language she does not. On such a view, meanings would be part of the contents of auditory perception. However, O'Callaghan (2011) denies this (see also O'Callaghan, 2010, 2015; and Reiland, 2015). O'Callaghan argues that the difference is in fact due to a kind of perceptual learning. Specifically, through learning we come to hear language-specific sounds, such as the phonological features specific to the new language. As O'Callaghan argues, these features, not the meanings, explain what it's like to hear a new language. By contrast, Brogaard (2018) argues that meanings are in fact part of the content of perception (see also Pettit, 2010). After offering arguments against the opposing view, she relies on evidence about perceptual learning to help make the positive case for her view. In particular, she uses evidence about perceptual learning to rebut the view that we use background information about context and combine it with what we hear in order to get meanings. Instead, she argues, language learning is perceptual in nature. In line with the discussion of unitization in chapter 5, she points to changes in how we perceive utterances (more in chunks rather than in parts) as a result of learning. Background information directly influences what we hear, she argues, altering how language sounds to us. I argue that although Brogaard has some successful criticisms of O'Callaghan's view, on balance, the evidence from perceptual learning does not support her conclusion that through perceptual learning we come to hear meanings. I should note here that Brogaard does offer some other arguments, ones that are not based on perceptual learning, which I do not address here. My focus in this chapter is just on the arguments that pertain to perceptual learning.
In section 6.2, I introduce the standard phenomenal contrast argument for the conclusion that through learning we come to hear meanings in
speech perception (I call this the "Phenomenal Contrast Argument for Hearing Meanings"). In section 6.3, I introduce and provide a critical assessment of O'Callaghan's "Argument from Homophones," an argument against the Phenomenal Contrast Argument for Hearing Meanings. In section 6.4, I detail the role that the perceptual learning mechanism of differentiation plays in speech perception. Through differentiation, we come to parse features of a language, including phonemes that we had not previously differentiated. In section 6.5, I present and reply to Brogaard's argument that through perceptual learning we come to hear meanings. As I mentioned, according to that argument, through perceptual learning we come to "chunk" words in a new language after we come to know the meanings of those words. I argue, however, that the chunking of words does not in fact track the fact that the words are meaningful to us; it just tracks how prevalent the word units are for us. In section 6.6, I argue that regardless of whether we hear meanings or not, both O'Callaghan's and Brogaard's views support the Offloading View of perceptual learning, whereby through perceptual learning we offload onto perception tasks that would otherwise be done in a cognitive manner, such as the task of disambiguating speech sounds that we hear.
6.2 THE PHENOMENAL CONTRAST ARGUMENT FOR HEARING MEANINGS

Chapter 3 discussed the Phenomenal Contrast Argument using the case of learning to recognize pine trees (see Siegel, 2006, 2010). The idea was that there is a difference in a person's sensory phenomenology before and after they come to recognize pine trees, and that the best explanation for this difference is that pine trees come to be represented in perception. There is a similar argument with
respect to language learning for the conclusion that meanings come to be represented in perception (see, for instance, Siegel, 2010, pp. 99–100). According to the Phenomenal Contrast Argument for Hearing Meanings, a person's auditory phenomenology differs when she hears a language that she knows versus when she hears that same language prior to knowing it. This is the case, according to the argument, even if she is hearing the same sentence from the same angle, from the same speaker, at the same volume, and so on. Since there is a difference in her auditory phenomenology, the argument continues, there is a difference in the content of her auditory perception. That is, there is a difference in what her auditory perception represents. According to the argument, the best explanation for this difference is that meanings come to be represented in her auditory perception after learning the language. That is, before she understands the language, meanings are not part of the content of her auditory perception when she hears the utterance; but after she understands the language, they are a part of the content when she hears the utterance. A few comments on the argument. First, consider the opening premise that there is an auditory phenomenal difference before and after learning a language when one hears that same language. This is an intuition, albeit a somewhat widespread one among philosophers in the literature, given the number of them that endorse it (see the list in the introduction to this chapter). Chapter 2 also lent some empirical plausibility to this intuition, since it provided evidence that there are indeed widespread phenomenal differences between people due to learning. Next, consider the second premise, that since there is a difference in auditory phenomenology, there is a difference in the auditory content. This premise follows from intentionalism, the view that "there can be
no difference in phenomenal character without a difference in content" (Byrne, 2001, p. 204). However, as Siegel points out, all that is required for a phenomenal contrast argument to succeed is a much more limited claim about the relation between phenomenology and content in the case under consideration (2010, p. 109). There may be counterexamples to the general thesis of intentionalism that involve differences in phenomenal character without differences in content. All that is needed for a phenomenal contrast argument to succeed is that this particular case, involving a change in auditory phenomenology when one comes to understand a spoken language, is not open to such counterexamples. Third, take the final premise—that the best explanation for the difference in content is that meanings have come to be represented in auditory perception. This is the premise on which much of the discussion in this chapter will rest. Is this the best explanation for the difference in content, or is there a better explanation? Brogaard (2018) thinks that this is the best explanation, whereas O'Callaghan (2010, 2011, 2015) argues that there is a better explanation. The Phenomenal Contrast Argument for Hearing Meanings is the focus of this chapter, but it is worth noting that the argument has a visual analogue, a case that I mentioned at the beginning of this chapter. Recall Peacocke's (1992) exact words about the difference "between the experience of a perceiver completely unfamiliar with Cyrillic script seeing a sentence in that script and the experience of one who understands a language written in that script" (p. 89). Peacocke mentions that the difference is in the "experience of a perceiver." Assuming (fairly uncontroversially, I think) that the experience Peacocke is referring to is visual experience in particular, one can create a Phenomenal Contrast Argument for Seeing Meanings from this initial intuition that Peacocke has.
The argument would continue from the initial intuition as follows: Since
there is a difference in visual phenomenology, there is a difference in visual content. The best explanation for this difference, the argument continues, is that meanings come to be represented in the perceiver’s visual experience after learning a language.
6.3 THE ARGUMENT FROM HOMOPHONES

The debate about whether or not we hear meanings is perhaps the debate in philosophy in which perceptual learning has been invoked the most. In order to get to those arguments, it is important to understand the main line of reply to the Phenomenal Contrast Argument for Hearing Meanings in the literature: Casey O'Callaghan's argument that the Phenomenal Contrast Argument for Hearing Meanings fails due to the case of homophones (see O'Callaghan, 2010, 2011, 2015). In this section, I critically assess that argument. In the case of homophones, two or more words have different meanings but the same pronunciation. For instance, the words "vain," "vein," and "vane" each have a different meaning, the first word having to do with pride, the second with blood circulation, and the third with the weather. Nonetheless, all three words have the same pronunciation. The homophones in this example are spelled differently, but that is not always the case. For instance, "bank" and "bank" are homophones when the former refers to a financial institution and the latter to the edge of a river, or vice versa. Despite the different meanings, both words are pronounced the same way. According to O'Callaghan, the case of homophones undermines the Phenomenal Contrast Argument for Hearing Meanings. His reasoning is as follows: If we want to test whether or not the awareness of meanings is truly what makes the difference to one's auditory phenomenology when hearing spoken language, we should look at
the following case: We need a case where we fix the sounds and vary the meaning, and what we need to look for, then, is whether there is any difference in the auditory phenomenology (2011, p. 796). The case of homophones is such a case. Consider the following three sentences: "The politician is vain," "The nurse located the vein," and "The farmhouse had a weather vane." With the homophones vain/vein/vane in these sentences, we fix the sounds and vary the meanings of the words. When we hear the sentences, what we need to listen for is whether there is a difference in the auditory phenomenology of the homophones. O'Callaghan claims, very reasonably, that there is no difference. Brogaard (2018) offers a reply to the Argument from Homophones. She considers sentences like the vain/vein/vane sentences above (using O'Callaghan's original example of pole/pole/poll). O'Callaghan claims that there is not a difference in our auditory phenomenology when we hear the homophones. However, Brogaard argues that the meanings of homophones do make a difference to our phenomenology. She considers O'Callaghan's sentences: "Ernest used the pole to vault over the high bar," "Last year Mac visited the southern pole of Earth," and "Bubb won the greatest number of votes in our latest poll" (O'Callaghan, 2011, p. 797). Brogaard next imagines a case where the same "poll" or "pole" sound is heard in a foreign language. She gives the example of the Danish sentence "Giv dukken til Poll." Brogaard (2018) claims that the experience of the word "Poll" is different. In particular, she says, "We would have no impression of experiencing a meaning" (p. 2971). I think this is reasonable on its face. Note that what O'Callaghan is denying, however, is a difference specifically in auditory phenomenology. It is less clear to me that the impression of experiencing a meaning versus not experiencing a meaning involves a difference in auditory phenomenology specifically.
Even assuming that it does, however, there is still
a problem with Brogaard's argument. To be fair, O'Callaghan sometimes states his argument as a general claim about homophones, writing, for instance, "Homophonic utterances do not apparently cause perceptual experiences that differ in phenomenal character" (2015, p. 480). However, O'Callaghan does not need to say that every pair of homophonic utterances sound the same phenomenally when we hear them. O'Callaghan is arguing against the claim that the awareness of meanings makes a difference to one's auditory phenomenology when hearing spoken language. He presents a number of standard cases of homophones where the meanings do not seem to affect one's auditory phenomenology. To show that there is one case (or even several cases) of homophones that differ in terms of their auditory phenomenology (apparently due to meanings) would not undermine O'Callaghan's winning point: that there is a very substantial number of cases of homophones where meanings do not affect one's auditory phenomenology. That is all he needs to undermine the Phenomenal Contrast Argument for Hearing Meanings. I have a different reservation about the Argument from Homophones, one that supports Brogaard's objection by undercutting the reply that I said O'Callaghan could give to her. In his argument, O'Callaghan seeks to evaluate whether "hearing specific meanings makes a distinctive phenomenal difference" when one hears speech (2011, p. 796). It is not clear to me, however, that the proponent of the view that we hear meanings has to hold that hearing specific meanings must make a distinctive phenomenal difference, just that hearing specific meanings can (or perhaps typically does) make a distinctive phenomenal difference. The latter claim successfully yields the conclusion that meanings come to be represented in perception. So the proponent of the view that we hear meanings does not need to hold that hearing specific meanings must make a distinctive phenomenal difference. Perhaps there are
contexts in which hearing specific meanings does not make a distinctive phenomenal difference. One potential context is when the words involved are homophones presented closely together. Perhaps in such a context, the phenomenal sameness of the sounds presented side by side overrides any auditory phenomenal difference that would be due to differing meanings. In any case, the general point is that there might be contexts in which hearing specific meanings does not make an auditory phenomenal difference. That is perfectly consistent with the Phenomenal Contrast Argument for Hearing Meanings. If the homophone case counts as one such context, then the Argument from Homophones does not undermine the Phenomenal Contrast Argument for Hearing Meanings. On closer examination, however, the homophone case arguably does not actually count as a case where hearing specific meanings would fail to make a distinctive phenomenal difference. Typically, when you present two similar things adjacently, either spatially or temporally, it highlights perceptual differences that exist between them instead of eliminating those differences. So, for instance, imagine that you have known identical twins for a while, and then you see them side by side for the first time. Plausibly, this will make the differences between them more salient than they were before, especially when you are looking for those differences. Now imagine that you hear the following sentences one right after the other: "The politician is vain," "The nurse located the vein," and "The farmhouse had a weather vane." If there is a difference in our auditory phenomenology when we hear the homophones (due to their different meanings), presenting them one after the other should probably highlight such a difference, not mitigate it. This is especially the case when you are tasked to listen for a distinctive difference in auditory phenomenology.
So, even if there are contexts in which hearing specific meanings does not make a distinctive
phenomenal difference, the homophone case probably does not present such a context. Brogaard (2018) suggests that when evaluating the Argument from Homophones, we should evaluate not just the phenomenology of the individual homophones, but the phenomenology of the whole sentence fragment of which the homophones are a part (pp. 2980–2981). She argues that even if our auditory phenomenology of individual homophones is the same across sentences, it does not mean the phenomenology of the whole sentence fragment is the same. In fact, she argues, there is evidence for differences "from cases in which we fill in unheard or ignored gaps in speech through top-down influences" (p. 2981). I think Brogaard makes a very reasonable suggestion here, and it implies that O'Callaghan is evaluating the phenomenology of sentences containing homophones in the wrong way. The idea is that his focus needs to be on the auditory phenomenology of the whole sentence, not just the homophone. However, imagine hearing the following sentences, in which the first mention of "bank" refers to the financial institution, and the second mention of "bank" refers to the sloping area at the edge of a river: "I went to the bank," and "I went to the bank." Even if we evaluate not just the phenomenology of the individual homophones, but also the phenomenology of the whole sentence fragment, is there really a difference in auditory phenomenology between the two sentences? Granted, there may be a difference in the total phenomenology of the experience, but why think that there is a difference in the auditory phenomenology in particular? Furthermore, even if we evaluate the phenomenology of the whole sentence, as Brogaard suggests, recall that all O'Callaghan needs to undermine the Phenomenal Contrast Argument for Hearing Meanings is to show that there are cases of homophones where meanings do not affect one's auditory phenomenology. He does not need to show
that in all cases of homophones, meanings do not affect one’s auditory phenomenology. The “I went to the bank” case would seem to suffice for this.
6.4 THE ROLE OF DIFFERENTIATION IN SPEECH PERCEPTION

In addition to his Argument from Homophones, which is an argument against the view that we hear meanings, O'Callaghan offers a positive argument. The positive argument is designed to explain the intuition, which he has in common with the Phenomenal Contrast Argument for Hearing Meanings, that there is an auditory phenomenal difference between hearing a language before and after you know the language. Once again, my argument from chapter 2 lends some empirical support to this intuition. In this section, I use the perceptual learning literature to pinpoint the kind of perceptual learning involved when you learn a spoken language. According to O'Callaghan, several factors contribute to the auditory phenomenal difference between hearing a language before versus after you know the language. First, when you hear a language that you know, the speech sounds segmented to you, and you are able to hear the gaps between the words. By contrast, when you hear a language that you do not know, the language sounds more like a continuous stream than like sentences segmented into words. So one factor contributing to the auditory phenomenal difference upon learning a language is that the language becomes segmented into words for you (see O'Callaghan, 2010, p. 320; 2011, p. 801; 2015, p. 478. O'Callaghan, 2011, p. 801, also cites Smith, 2009, p. 185, and Strawson, 2010, p. 6, as making the same observation, and Lyons, 2009, p. 104, does as well). Second, when you learn a
language and develop some fluency in it, you become able to hear sounds in that language in more detail, giving you an ability to detect subtle differences that you could not detect when you lacked fluency. As an example, O'Callaghan (2015) notes how those who have mastered English come to hear "the difference between 's' and 'z,' or the dropped 'g' or 't' of certain accents," whereas those who have not mastered English do not hear those differences, and perhaps cannot hear those differences (p. 478). Third, when you hear a language that you know, you hear phonemes that are specific to that language. By contrast, when you hear a language that you do not know, you typically fail to hear some of the phonemes of that language. For instance, a monoglot Japanese speaker who is learning English will typically be unable to hear the difference between the phonemes /r/ and /l/. As she learns the language, however, her auditory phenomenology may change as she becomes able to hear the difference between /r/ and /l/. The same holds true for a monoglot English speaker learning Mandarin as she becomes able to hear the difference between the phonemes /p/ and /ph/.2 O'Callaghan argues that learning to understand another spoken language requires a significant amount of perceptual learning. Specifically, it requires a lot of exposure to sounds in the new language, which then modifies how you hear the language. O'Callaghan (2015) makes this point specifically about the language-specific features of a language, such as the phonemes (p. 483). (He leaves open the possibility that we might also come to hear other language-specific features, such as morphemes and lexemes.) Since phonemes differ across languages, we need exposure to the language we are learning in order to develop the ability to hear the phonemes (p. 481). While it might seem at first glance that learning a language

2. I owe these examples to O'Callaghan (2015, p. 481).
165
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
simply requires a person to develop a mapping between the sounds in that language and the language’s meanings, learning a language in fact also requires learning to hear the sounds themselves, something that can be accomplished only through a lot of exposure to that language (p. 491). On O’Callaghan’s view, extensive perceptual learning is required in order to understand another spoken language. But which mechanisms of perceptual learning are implicated in the language- learning process, in particular? Recall the perceptual learning mechanism of differentiation from c hapter 1. In cases of differentiation, as a result of practice or experience, what previously looked (sounded, felt, tasted, etc.) to a subject (phenomenally) like a single object, property, or event, later looks (sounds, feels, tastes, etc.) to her like two or more objects, properties, or events. In chapter 1, I mentioned a case from William James (1890) involving differentiation. In James’s case, when given a glass of a particular kind of wine, a man had become able to tell, by taste, whether the wine came from the upper half or the lower half of the wine bottle (p. 509). One way to understand this is as follows: Whereas before the wine tasted the same to the man, regardless of where it came from in the bottle, now the wine tastes different to him if it comes from the upper half than it does if it comes from the bottom half. I mentioned that O’Callaghan lists three factors that contribute to the auditory difference in phenomenology that arises after one learns a new spoken language. Differentiation plays a distinctive role in each of these three factors. First of all, one of the standard examples used to explicate differentiation itself is the case of coming to hear language-specific features of a language upon learning it, which you could not hear before (see, for instance, Goldstone, 1998, p. 598). 
For the monoglot Japanese speaker who is learning English, for instance, part of learning the language is coming to
differentiate between the phonemes /r/ and /l/, just as for the monoglot English speaker learning Mandarin, part of learning the language is coming to differentiate between the phonemes /p/ and /pʰ/. Secondly, when a language that you have learned no longer sounds like a continuous stream of sounds, but like sentences segmented into words, this is a case of learned differentiation as well. Your ability to differentiate between words allows you to hear the gaps between the words now, where you could not before. Thirdly, differentiation is also involved when someone who has mastered a language becomes able to detect subtle differences in the language, such as “the difference between ‘s’ and ‘z,’ or the dropped ‘g’ or ‘t’ of certain accents” (O’Callaghan, 2015, p. 478). If O’Callaghan is right, all these factors contribute to the auditory phenomenal differences that accompany learning to understand a new spoken language. Interestingly, all these factors involve the perceptual learning mechanism of differentiation. In the definition of differentiation mentioned in the previous paragraph, differentiation helps you to hear two or more objects, properties, or events as distinct. There are different kinds of differentiation based on whether what is differentiated involves objects or properties. Goldstone and Byrge (2015) distinguish between the differentiation of whole stimuli, on the one hand, and the differentiation of attributes, on the other. Goldstone (1998) counts William James’s case of learning to differentiate between the upper and lower halves of a bottle of a particular kind of wine as a case of whole stimulus differentiation (p. 596). The case of chicken sexing—when chicken sexers learn to differentiate male and female chicks—also falls in this camp (p. 596). In both cases, you are differentiating between two distinct stimuli, be they glasses of wine or chicks.
By contrast, consider a case like the following, in which mere attributes of stimuli are differentiated, rather than entire stimuli: It is typically
difficult for people to distinguish the brightness of a color from the saturation of a color. As Goldstone and Byrge (2015) put it, the two attributes are “psychologically fused” to most people (p. 823). Goldstone (1994), however, found that he could enable study participants to differentiate between brightness and saturation with the right kind of training. Goldstone and Byrge (2015) suggest that an especially efficient way to do this is to “repeatedly alternate training where saturation is task-relevant with training where brightness is task-relevant” (p. 823). This is an instance of attribute differentiation because it is a case of subjects becoming able to differentiate between two attributes that they could not differentiate before. As I have mentioned, O’Callaghan argues that one thing that helps account for the auditory phenomenal difference between hearing a language before and after you know the language is that we come to hear new phonemes during the process of learning the language. I follow O’Callaghan (2011) in holding that phonemes are not objects themselves, but rather are best construed as properties or attributes of utterances (p. 802; see also O’Callaghan, 2010, p. 325; 2015, p. 488). Given this, the case of learning to hear new phonemes as one learns a language is a case of attribute differentiation rather than object differentiation. (Chapter 7 will explore a case of object differentiation in depth.) Attribute differentiation occurs during the process of learning a spoken language, but it is not the only kind of perceptual learning that occurs. In addition to attribute differentiation, unitization also occurs. Recall that in cases of unitization, as a result of practice or experience, what previously looked (sounded, felt, tasted, etc.) to a subject (phenomenally) like two or more objects, properties, or events, later looks (sounds, feels, tastes, etc.) to her like distinct parts of a single complex object, property, or event. Unitization
is the converse of differentiation, in that it involves treating once-distinct items as the same, whereas differentiation involves treating once-unified items as distinct. However, Goldstone and Byrge (2015) think of the relationship between unitization and differentiation as “flip sides of the same coin.” This is because both unitization and differentiation “involve creating perceptual units,” even though differentiation involves distinguishing previously united perceptual units, and unitization involves uniting previously distinct perceptual units (p. 823). When we learn a spoken language, the process involves both differentiation and unitization. As O’Callaghan writes:

In some cases, you effortlessly make, and must make if you are to understand what is spoken, discriminations you did not make and, in many cases, could not make before learning the language. In other cases, you cease to make and, in many cases, can no longer make, discriminations you did make before learning the language. (2010, pp. 312–313)
In this passage, the first sentence summarizes the role of differentiation in language learning. In language learning, you come to differentiate features of utterances that you did not differentiate before, perhaps because you were unable to do so before. The second sentence of the passage summarizes the role of unitization in language learning: sometimes you stop differentiating features of utterances, and start to treat these features as a new single unit. O’Callaghan (2015) elaborates on this for the case of phonemes, where “users of a given language commonly treat certain crucial pairs of sounds or utterances as perceptibly equivalent, while those who do not know that language treat them as perceptibly distinct” (p. 481). He gives the example of the “t” sound as heard in utterances
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
of “ton” and “stun.” Monoglot English speakers hear the “t” sound in those utterances as perceptually equivalent, while the sound is different in the two utterances to monoglot Mandarin speakers (2015, p. 481). Put another way, monoglot English speakers unitize the “t” sound, while monoglot Mandarin speakers differentiate it. To summarize, the process of learning a spoken language involves the perceptual learning mechanisms of both unitization and differentiation. One way in which differentiation is involved is in the way in which we come to distinguish phonemes when learning a spoken language. There the differentiation involved is a special kind in which attributes (as opposed to whole stimuli) are differentiated. Following O’Callaghan, phonemes are attributes of utterances, and in the process of learning a spoken language, we come to differentiate phonemes that previously we had not differentiated.
6.5 WHY PERCEPTUAL LEARNING DOES NOT SUPPORT THE VIEW THAT WE HEAR MEANINGS

Does the evidence from perceptual learning support the view that we hear meanings? I argue that the evidence does not support that view. This includes new evidence offered by Brogaard (2018), which I present in this section. On O’Callaghan’s view, the perceptual learning that occurs during language learning tells against the view that we hear meanings. Instead, he argues, we can explain the auditory phenomenal change in terms of low-level features, as well as middle-level features such as phonemes, without needing to hold that we perceive high-level properties such as meanings. Brogaard (2018) argues that the kind
of perceptual learning that takes place during language learning in fact supports the view that we hear meanings. Brogaard (2018) distinguishes between two different views: the Inferential View of language comprehension and the Perceptual View of language comprehension. According to the Inferential View, we hear an utterance and infer (likely unconsciously) what the speaker means (p. 2968). By contrast, on the Perceptual View, through language learning we come to perceive meanings directly (p. 2968). Brogaard argues against the Inferential View, and for the Perceptual View, based in part on evidence about perceptual learning in the language learning process. What this perceptual learning shows, Brogaard (2018) argues, is that we do not combine background information with what we hear, and then infer what is said (p. 2977). Rather, through perceptual learning, “Background information directly influences the formation of visual and auditory appearances” (p. 2977). Brogaard (2018) begins her discussion of the role of perceptual learning in language learning by considering the case of expert chess players, which she takes to be informative. Expert chess players are able to “chunk” whole chess configurations into long-term memory, whereas novice chess players commit only individual chess pieces into long-term memory. These “chunks” consist in a “configuration of pieces that are frequently encountered together and that are related by type, color, role, and position” (p. 2977). Just as expert chess players are able to “chunk” whole chess configurations into long-term memory, Brogaard argues, so too are readers who are fluent in a language able to “chunk” words. The reader new to a language starts out seeing random strings of letters, but as she develops fluency in the language, she starts to see them as the chunked letters of words. (Since her argument here is in service
of the conclusion that we hear meanings, presumably, Brogaard also thinks, very reasonably, that “chunking” happens when we hear words, too, not just when we see them.) Brogaard cites empirical evidence to argue that “chunking” happens after people learn the meaning of new words:

When learning to read a new language, the brain transitions from a process of recognizing words as random strings of letters to a process of visually representing them in chunks, where a “chunk” can be considered a kind of visual object. This was shown in a study where researchers recruited a group of college students to learn the meaning of 150 nonsense words (Glezer et al. 2015). Before they learned the meaning of the words, their brain registered them as a jumble of symbols. But after they learned their meaning, their brain dedicated a circuit of neurons to each word in the visual word form area in the temporal cortex, an area which stores a visual representation of known words. The results indicate that once the volunteers learned to comprehend the words, they began to see them as units rather than sets of random letters, making it possible to read and comprehend at a faster pace. (Brogaard, 2018, pp. 2977–2978)
Brogaard summarizes that for the participants just beginning the study, their brains processed pseudo-words such as “poat” as mere jumbles of letters, as the participants had not yet learned the meaning of the words. However, after the participants had learned the meaning of the words, their brains dedicated a circuit of neurons to each word. This is evidence, according to Brogaard, that we come to perceive meanings. Brogaard is right that we come to chunk words after being repeatedly exposed to them, but I think she is mistaken that meaning
plays the crucial role here. It is not entirely clear to me what Brogaard has in mind when she says that the participants learned the meaning of the words. She writes, for instance, that participants “learned to comprehend the words.” Yet the study was built on “behavioral studies showing that people can learn novel words rapidly and in the absence of semantic information” (Glezer et al., 2015, p. 4966, italics added). The experimenters in the study did not give participants meanings for the novel words they were learning. In fact, when the authors write that the participants are learning a novel word, what they mean is that the participants are learning to recognize a novel string of letters (such as “poat”), not that they are also learning a new meaning for the novel word (such as that poat means X). So, when participants began to see the nonsense words as units, it was in the absence of learning meanings for those words, though it was after the participants had been exposed to the nonsense words repeatedly. Perhaps by “learning the meaning” of the words, Brogaard simply means that the participants in the study learned that the words were significant, whereas they had not been before. I think this is a more reasonable view, given the details of the study. On this interpretation, the participants chunked the words after they had acquired significance for them, but not before. Other words that had not acquired significance would not be chunked. If this is what Brogaard means, however, I do not think her conclusion that perception represents meanings follows. It would show at best that we represent perceptually that certain words are significant for us, not that we represent the meanings of those words in perception. Supposing still that by “learning the meaning” of the words, Brogaard simply means that the participants in the study learned that the words were significant, I think there is a more plausible alternative reading of the study.
Why did participants in the study come to chunk the nonsense words over time? The alternative
explanation is that what matters for chunking is the prevalence of the particular string of letters to us. On this view, it is the regularity of the letter combination that makes the difference for chunking, not the participant’s knowledge of the meaning of the word or the word’s significance to the participant. In fact, it has been found that in certain tasks for which chunking would be advantageous, “the advantage of words over nonwords . . . can be eliminated by repetitively exposing participants to the stimuli” (Goldstone, 1998, p. 602, referencing Salasoo, Shiffrin, & Feustel, 1985). Perhaps the repeatedly exposed words acquire significance for the participants, and perhaps they do not. All we know from such studies is that chunking can occur through repeated exposure. These studies do not speak to whether the words have significance to the participants. The alternative explanation that I am proposing is also in line with the literature on perceptual learning more generally, whereby perceptual learning can occur through mere exposure, without the participant even knowing that the relevant stimuli are significant (see, for instance, the discussion of unsupervised learning in chapter 1, sec. 1.2, or the discussion of Gibson & Walk, 1956, in chap. 3, sec. 3.4).
6.6 THE OFFLOADING VIEW AND SPEECH PERCEPTION

So far, the debate has been about whether or not we come to hear meanings. My view is that the evidence from perceptual learning cited in the philosophy literature does not support the conclusion that we hear meanings. However, regardless of whether we hear meanings or not, it is important to recognize that both views support the Offloading View of perceptual learning. Recall that according to the Offloading View, perceptual learning offloads tasks
onto one’s quick perceptual system, tasks that would be slower and more cognitively taxing if they were done in a controlled, deliberate manner. For instance, when a budding telegrapher begins training, receiving a message starts out as a slow, deliberate, and cognitively taxing process for her. Through time, however, the task gets offloaded onto the perceptual system, thereby freeing up cognitive resources for other tasks, such as thinking about the meaning and implications of the message. Brogaard’s account, on which we do hear meanings, supports the Offloading View of perceptual learning. In her argument for the view that we come to hear meanings, for instance, she describes how the perceptual learning that happens during language learning serves to free up cognitive resources:

Language learning evidently also proceeds via perceptual learning. During the initial stages of second language learning, for example, speakers use controlled processes with focal attention to task demands (McLaughlin et al. 1983). They may consciously employ grammatical rules when producing sentences and use translation when reading. At more advanced stages automatic processes are employed, and the attention demands decrease. (Brogaard, 2018, p. 2977)
On Brogaard’s account, language learning begins as a controlled and attention-demanding process. However, as the language learner becomes more fluent, attentional demands decrease. As the language learner begins to understand the language, she starts to see words as units, not just random strings of letters, thereby “making it possible to read and comprehend at a faster pace” (2018, p. 2978). The language learner no longer has to use attentional resources to read each letter, since words are now chunked by her perceptual
system. This frees up cognitive resources for quicker comprehension. Brogaard’s examples of language learning in the passage just quoted focus on speech production and reading. However, the same point can be made about learning to understand spoken language. The initial stages require a large amount of cognitive resources, as the language learner needs to attend in a controlled manner to the words of the speaker, among other things. After one is fluent in a language, however, automatic processes are employed, freeing up one’s cognitive resources to do other tasks. Just as Brogaard’s account supports the Offloading View of perceptual learning, O’Callaghan’s account also supports the Offloading View, despite being opposed to Brogaard’s account in many other respects. O’Callaghan (2011) argues that through perceptual learning, we come to hear the language-specific features of a language we are learning, such as the phonemes in that language. Hearing these new language-specific features, as the result of the perceptual learning process, is part of what enables us to understand the utterances of a new language (p. 784). As O’Callaghan (2010) puts it, you come to make new discriminations that you “must make if you are to understand what is spoken” (p. 312). So, the perceptual learning process of differentiation helps a language learner to understand the language. Importantly, it enables a language learner to understand what is said more quickly and automatically. Consider the following indicative case: A monolingual Japanese speaker who is just beginning to learn English will likely have trouble differentiating between the phonemes /r/ and /l/. This makes the process of understanding English utterances more difficult. In many spoken sentences, you must differentiate the phonemes /r/ and /l/ in order to understand the meaning of the sentence.
For the new English speaker, the lack of differentiation between the phonemes /r/ and /l/ makes understanding English more cognitively taxing
because it creates ambiguities that would not be present for a native English speaker. The language learner then has to spend valuable cognitive resources in sorting out the ambiguities. However, as a result of perceptual learning, there are fewer ambiguities when the speaker hears English. This enables the language learner to understand the utterances of the new language more quickly and automatically. Perceptual learning offloads onto the perceptual system the once-cognitive task of disambiguating words, thereby freeing up cognitive resources for other tasks. Brogaard and O’Callaghan agree that perceptual learning occurs in language learning. But they disagree about whether through perceptual learning we come to hear meanings in spoken language. However, both agree that language learning helps us to grasp meanings. They just differ on whether we grasp them in the content of perception, or whether we grasp them post-perceptually. Yet both can agree that the perceptual learning that occurs in language learning helps us to more quickly and automatically grasp those meanings, regardless of whether the grasping is perceptual or post-perceptual. On O’Callaghan’s view, perceptual learning enables us to more quickly grasp meanings because it helps us to differentiate language-specific sounds. When a person comes to hear the language-specific sounds, it allows her to more quickly understand what is said. On Brogaard’s (2018) view, perceptual learning enables us to “comprehend at a faster pace” (p. 2978). Meanings come to be represented in the perception itself, according to Brogaard.
6.7 CONCLUSION

In this chapter, I argued that the perceptual learning mechanism of differentiation plays a significant role in learning to hear a spoken
language. Differentiation often works in tandem with the mechanism of unitization, helping us to better hear a new language. This enables us to more quickly grasp the language’s meanings. However, I argued that the evidence from perceptual learning cited in the philosophy literature does not support the view that we literally hear meanings. This chapter focused on one particular kind of differentiation: attribute differentiation. Through perceptual learning, we come to differentiate language-specific features, such as phonemes. Phonemes are not objects themselves, but attributes of objects. So, when you come to differentiate phonemes while learning a new language, this is a kind of attribute differentiation. In the next chapter, we will turn to a second kind of differentiation: object differentiation. The chapter focuses on perceptual learning with regard to color. It may seem at first glance that this is just another kind of attribute differentiation. However, the particular case of color perception I explore involves differentiation as it applies to objects. To be more specific, for types of objects with which we associate a prototypical color (such as for stop signs), researchers have found that such objects are sometimes perceived as closer to the prototypical color than they actually are, under low lighting conditions. This effect helps us to better differentiate those objects from their backgrounds, or so I will argue in chapter 7.
Chapter 7
Learning to Differentiate Objects
The Case of Memory Color
7.1 INTRODUCTION

In the past 100 years, several psychology studies have reported a bizarre color phenomenon. Discolored hearts sometimes appear redder to us than they actually are; discolored bananas, yellower; Smurfs, bluer; and oranges, more orange. In general, faded types of objects with which we associate a prototypical color are sometimes perceived as closer to that color than they actually are. For example, we associate a distinct yellow with bananas. There is evidence that if we see a banana under dim lighting, we may well perceive it as being closer to its prototypical yellowness than the discolored banana actually is. The late nineteenth- and early twentieth-century psychologist Ewald Hering called this an effect of memory color. On Hering’s ([1920] 1964) view, memory color is pervasive. As he put it, “All objects that are already known to us from experience, or that we regard as familiar by their color, we see through the spectacles of memory color, and on that account quite differently from the way we would otherwise see them” (pp. 7–8). Bananas look yellower to
us, in Hering’s words, because we see them through the spectacles of memory color. In this chapter, my goal is to understand memory color in terms of perceptual learning. More specifically, my claim is that memory color enhances perceptual learning by enabling us to better differentiate objects from their backgrounds, especially under low lighting conditions. In understanding cases of memory color in this way, I part ways with an influential philosophical interpretation of such cases (first presented by Fiona Macpherson in 2012): that they should be understood as cases of cognitive penetration. I also part ways with the standard interpretation of memory color cases in psychology: that they should be understood as cases of color constancy (see Hering, [1920] 1964; Duncker, 1939; Olkkonen, Hansen, & Gegenfurtner, 2012). If I am right to understand memory color in terms of perceptual learning, this will again expand the scope of perceptual learning, showing that we can use perceptual learning to understand another perceptual domain. The plan for the chapter is as follows: In section 7.2, I present the standard view of memory color cases in philosophy: that they are cases of cognitive penetration. In section 7.3, I turn to psychology. I survey the evidence for the existence of memory color, which spans from well over a half-century ago to the present day. Cases of memory color are not uncontroversial, and some have wondered whether the memory color effect really exists and is genuinely perceptual. I address some recent criticism of such cases in section 7.3. In section 7.4, I then raise a worry for the standard explanation of this effect: that memory color cases are instances of color constancy—that is, cases that serve to make the color of objects less variant across
Differentiating Objects via Memory Color
different viewing conditions. The problem with this is that, given the right scene, color constancy occurs nearly universally across humans, while memory color occurs only if the person has seen the types of objects in that scene. In section 7.5, I argue that there is a better explanation of memory color cases than is found in the views that many philosophers and psychologists have been offering: memory color enhances perceptual learning, enabling us to better differentiate objects, such as fruits, from their environmental backgrounds. That is to say, it is helpful for figure-ground separation. As I have been arguing throughout this book, perceptual learning benefits us by offloading onto perception tasks that would be slower and more cognitively taxing were they done in a controlled, deliberate manner, thereby freeing up cognition to do other tasks. In section 7.6, I show how this applies to cases of memory color. Memory color enhances our ability to perceptually differentiate objects from their backgrounds. To give just one example, because of memory color, we can better differentiate a cluster of bananas from a banana plant in poor lighting. The same holds for a whole range of objects, and the cumulative result is that this frees up cognitive resources in our everyday lives.
7.2 MEMORY COLOR AND COGNITIVE PENETRATION

Philosophers often do not use the term “memory color” when discussing the phenomenon (although some exceptions are Silins, 2016; Siegel, 2017). But the standard philosophical issue regarding cases of memory color, which was first raised by Fiona Macpherson (2012), is whether cases of memory color are cases of cognitive
penetration (see also Deroy, 2013; Zeimbekis, 2013; Bitter, 2014; Orlandi, 2014, sec. 4.8; Brogaard & Gatzia, 2016; Silins, 2016). Recall cases of cognitive penetration as cases in which “the phenomenal character of perceptual experience [is] altered by the states of one’s cognitive system, for example, one’s thoughts or beliefs” (Macpherson, 2012, p. 24). Focusing especially on one particular case of memory color (an experiment conducted by Delk and Fillenbaum in 1965), Macpherson (2012) argues that memory color is an instance of cognitive penetration (at least in the Delk and Fillenbaum case). Others have questioned or denied that cases of memory color are any such thing (see Deroy, 2013; Zeimbekis, 2013; Bitter, 2014; Orlandi, 2014, sec. 4.8; and Brogaard & Gatzia, 2016). As Macpherson points out, it is more difficult to refute the claim of cognitive penetration in memory color cases than it is in many other putative cases of cognitive penetration. In part, this is due to the fact that in many other putative cases of cognitive penetration, the change in phenomenal character can be explained in terms of a shift in spatial attention, whereas memory color cases cannot be explained in that way. For instance, Macpherson (2012, p. 35) takes Siegel’s pine tree case, which I discussed in chapter 3, to be a putative case of cognitive penetration. Macpherson argues that the pine tree case is one in which the change in phenomenal character after one learns to recognize a pine tree might be explained in terms of a shift in spatial attention. Since many do not consider such shifts in attention to be true cases of cognitive penetration, the pine tree case would not be a genuine case of cognitive penetration (Macpherson, 2012, p. 37). In memory color cases, however, the same attentional explanation is unavailable. One reason is that many memory color studies use uniformly colored stimuli and aim to evenly illuminate the stimuli (see Delk & Fillenbaum, 1965, p. 
291; Witzel et al., 2011,
pp. 32–33). So spatially attending to a different part of a stimulus is unlikely to yield a different color phenomenology (Macpherson, 2012, p. 43). For this reason, cases of memory color are likelier candidates for cognitive penetration. One role of perceptual learning in the philosophical literature has been to explain away putative cases of cognitive penetration. For instance, it may seem at first glance that Siegel’s pine tree case is one in which one’s newly acquired concept of a pine tree influences one’s perception and, as such, is a case of cognitive penetration. As I argued in chapter 3, however, the best way to understand Siegel’s pine tree case is through perceptual learning (see also, Connolly, 2014c; Arstila, 2016). My claim was that one’s newly acquired concept of a pine tree is not directly responsible for the new look of the pine tree. Rather, a shift in the weight of one’s attention is responsible for it. One reason why perceptual learning is a good instrument for explaining away putative cases of cognitive penetration is the following: In cases of perceptual learning, it is the external environment that drives the perceptual changes. As Raftopoulos (2001) puts it, “[P]erceptual learning does not necessarily involve cognitive top-down penetrability but only data-driven processes” (p. 493). For putative cases of cognitive penetration, the strategy for the perceptual learning theorist is to show how the perceptual changes involved may have been driven by data, the result of repeated exposure to stimuli, instead of driven from the top-down by cognition. Several philosophers have used this strategy at times, including Pylyshyn (1999, sec. 6.3), Brogaard and Gatzia (2015, p. 2), Stokes (2015, p. 94), and Deroy (2013).1
1. One exception to the trend of explaining away putative cases of cognitive penetration in terms of perceptual learning is Cecchi (2014). Cecchi argues that a particular case of perceptual learning—that found in Schwartz, Maquet, and Frith (2002)—should count as a case
183
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
In this chapter, my focus is not just on whether memory color is a case of cognitive penetration, but also on why memory color cases happen in the first place. This second issue is something that philosophers interested in memory color can countenance, regardless of where they stand in the cognitive penetration debate.2 My view, very roughly, is that cases of memory color happen because they enable us to better differentiate objects from their environmental backgrounds, especially under low lighting conditions. This helps us to identify objects more quickly and accurately. In an ecological context, this means that we are better able to pick out fruits like bananas and oranges when foraging for them. For those who hold that memory color is an instance of cognitive penetration, my view can offer them an account of why some cases of cognitive penetration occur. Those interested in memory color who deny that it is an instance of cognitive penetration can still improve their understanding of memory color by gaining an account of why it occurs. In section 7.3, I plan to survey the psychology literature on memory color. Before I continue, however, since color is the subject of this chapter, let me make one point about theories of color. I have been talking about the memory color effect as an effect that of cognitive penetration. The study in question found changes in the primary visual cortex after learning, and also that these changes were brought about by higher areas in the brain influencing the primary visual cortex. Cecchi argues that because the perceptual changes were the result of top-down influence, this case of perceptual learning should count as a case of cognitive penetration. One worry I have is that the study Cecchi cites may not actually involve a case of conscious perception. In the study, stimuli were presented for 13 milliseconds, which is potentially below the threshold of conscious detection. 
If the study did not involve conscious perception, then it would not seem to qualify as a case of cognitive penetration, at least as Macpherson describes it. As she puts it, cognitive penetration involves cases in which “the phenomenal character of perceptual experience [is] altered by the states of one’s cognitive system, for example, one’s thoughts or beliefs” (Macpherson, 2012, p. 24, italics added). That would require conscious perception. 2. One exception is Zeimbekis (2013), who denies that memory color is a perceptual effect. See my reply to him in section 7.3.
184
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
makes objects look more like their prototypical color than they actually are. This distinction between the actual color and the look of the color may strike some as assuming an objectivist notion of color. According to objectivists, a color is defined as a physical prop erty in the world, such as a surface spectral reflectance, and this can be separated from the way a color looks (see, for instance, Byrne & Hilbert, 2003). I am not endorsing objectivism, however. When I distinguish between the way a color looks and the way it actually is, a different but equally good way of understanding this is as the distinction between how the color of an object looks to you under the effect of memory color and how it would look to you under those same circumstances without the memory color effect. The latter formulation is consistent with both subjectivist (e.g., Hardin, 1993) and relationalist (e.g., Thompson, 1995; Matthen, 2005; Hatfield, 2007; Cohen, 2009; Chirimuuta, 2015) notions of color. In fact, understanding memory color with a relationalist theory of color can yield an interesting position. On the relationalist view, “colors are constituted in terms of relations between subjects and objects” (Cohen, 2010, p. 13). Taking into account memory color may mean that the subject side of the relation needs to include the subject’s history.
7.3 A BRIEF SURVEY OF MEMORY COLOR STUDIES In a study on memory color, Karl Duncker (1939) took several artificial green leaves and then cut and patched some of them into the shape of a donkey. He left another artificial green leaf intact (see Figure 7.1). He then had 11 participants look at either the leaf or the donkey under red lighting. Under the red lighting, neither the 185
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g (a)
(b)
Figure 7.1. Duncker placed an artificial green leaf (A) and a donkey shape composed of the same green leaf material (B) under red lighting to neutralize the green color of the stimuli. Eight of eleven participants found the leaf to look greener even though the donkey was made of the same colored material. Source: Duncker (1939).
leaf nor the donkey looked green anymore. The participants were next asked to match the color of the donkey and then the color of the leaf (both still presented under the red lighting) with one of the colors on a nearby color wheel (the wheel was not under the red lighting). An experimenter was in charge of the color wheel, and the participants directed him toward the color that matched the stimulus. The results were that 8 out of 11 participants found the leaf to be greener than the donkey, despite the fact that the two were made from exactly the same green material and shown under exactly the same lighting conditions. A 1965 experiment by John Delk and Samuel Fillenbaum added to Duncker’s result, finding the memory color effect for a more diverse array of stimuli. Delk and Fillenbaum (1965) used three classes of stimuli, all cut from orange-red cardboard. The first class of stimuli (consisting of a heart, an apple, and a pair of lips) were shapes of objects that are normally associated with red. The second class of stimuli (consisting of an oval, a circle, an ellipse, and a square) were shapes of objects that have no particular color association. The third class (a front-facing horse head, a bell, and a mushroom) were 186
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
shapes of objects that are typically associated with colors other than red. For each of 60 participants, an experimenter placed each stimulus, one-by-one, in front of a colored background. The participants were tasked with directing the experimenter to change the color of the background until it matched the color of the stimulus. Despite the fact that each stimulus was cut from the same orange-red cardboard, participants made the background redder for the class of objects normally associated with red (the heart, apple, and lips). As in Duncker’s (1939) study, Delk and Fillenbaum (1965) found evidence for a memory color effect. They found evidence that under certain conditions, objects with prototypical colors appear more like those colors than they actually are. More recent studies on memory color have improved on past experimental designs. Hansen et al. (2006) made two important advances (see Olkkonen, Hansen, & Gegenfurtner, 2012, p. 187). First, they allowed participants to control the color matching themselves instead of directing the experimenters to do it for them. The worry was that since communication between the participant and experimenter had allowed language into the experiment, the results might have been due to semantic effects rather than perceptual ones. For instance, it may have been that the reason participants asked an experimenter to make a heart redder was that they had just verbalized to the experimenter that it was a heart, not that it actually looked redder to them. Second, instead of using cutout stimuli that captured only shape outlines, computer technology enabled Hansen and colleagues (2006) to use realistic-looking stimuli that could capture not just shape outlines, but three-dimensional characteristics of the entire object. Using the improvements just mentioned, Hansen and colleagues (2006) ran a memory color experiment as follows: They presented pictures of seven fruits and vegetables one-by-one on 187
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
a computer screen to each of fourteen participants. They were realistic-looking photos of a cucumber, a bunch of grapes, a head of lettuce, a lemon, a banana, an orange, and a carrot (see Figure 7.2). Each fruit or vegetable was presented against a gray background on the computer screen. The initial color of each fruit or vegetable was randomized. So a participant might initially see a green carrot or a blue lemon on the screen. Participants were then asked to adjust the color of the fruit or vegetable until it appeared gray. The result was that participants overadjusted the colors of the fruits and vegetables. And they did so in a uniform way. To illustrate with one example, they made the banana bluer than it should have been. This is significant because blue is the opponent color to yellow. The experimenters concluded that when the banana was actually
18
9
0
–9
–18 –2
–1
0
1
2
L – M (% cone contrast)
Figure 7.2. Hansen and colleagues (2006) asked participants to make fruit stimuli neutral gray, using a dial. Participants should have dialed the colors to point (0, 0) on the axes. Instead, they overshot, making the fruits closer to their opponent colors. Hansen and colleagues interpreted this result to mean that participants saw the grayed fruits as having more of their prototypical color than the fruits actually had. Source: Hansen et al. (2006). 188
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
perfectly gray, it still appeared slightly yellow to the participants, and so the participants continued to adjust it past the gray and into the blue so that it would appear gray to them. Put another way, to the participants, the banana appeared to have a yellow tinge to it, even when it was fully grey and there was no yellow in it, and this tinge did not go away until it they had actually made the banana slightly blue, in order to cancel out the yellow. The experimenters concluded more generally from their data that when the fruits and vegetables were in fact gray, they appeared more like their prototypical colors than they actually were. The Hansen et al. (2006) study used stimuli of natural objects like bananas, oranges, carrots, and grapes. A 2011 study by Witzel and colleagues, by contrast, tested 25 participants and found the same effect for artificial objects, such as German mailboxes (the study was conducted in Germany), Smurfs, and the Pink Panther (see Figure 7.3). Past color associations appear to affect one’s perception of color for both natural and artificial objects. At this point, one might object that the authors of the two memory color experiments actually misinterpreted their results. The standard interpretation of memory color experiments is that they illustrate how past associations affect a subject’s color experiences. Objects with prototypical colors appear more like those prototypical colors than they actually are, at least under certain conditions. By contrast, one might argue (as John Zeimbekis does), that the experiments do not show a perceptual effect at all. Zeimbekis (2013) argues that “the way subjects in the experiments think of the objects could affect their color judgments without altering their color experiences” (p. 168). Zeimbekis makes a strong case for this conclusion. Following Diana Raffman (1994, pp. 
51–53), he draws on work by the renowned psychologists Amos Tversky and Daniel Kahneman (1974) reporting that under conditions of uncertainty, 189
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
Figure 7.3. Witzel and colleagues (2011) found the memory color effect for ten out of fourteen of these artificial objects (that is, for everything except the fire extinguisher, heart, Coca-Cola logo, and mouse cartoon figure). Source: Witzel et al. (2011).
study participants start with an initial value (in this case, perhaps the equivalent is “that a heart is red” or “that a banana is yellow”), and then adjust their performances or judgments accordingly. So, according to Zeimbekis, it is not that participants base their judgments on the colors that they are seeing. Rather, they base their judgments on what they initially think the color ought to be. This can account for the results of the experiments, without saying that participants actually perceived the banana as yellower (or the heart as redder) than it actually was. 190
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
In my view, however, Zeimbekis’s argument does not provide a sufficient explanation of memory color. Consider a somewhat controversial, but I think ultimately convincing case involving memory color and race in a study by Levin and Banaji (2006) (see Firestone & Scholl, 2015, 2016; Baker & Levin, 2016, for a debate about the merits of the study). The study used the grayed face of a Black person and the grayed face of a White person. Even though the faces had exactly the same reflectance, participants routinely matched the face of the Black person to a darker shade of gray, and the face of the White person to a lighter shade of gray. And when the two faces are shown side by side (see Figure 7.4), it seems that Zeimbekis (a)
(b)
Figure 7.4. Levin and Banaji (2006) found that participants matched the face of a Black person (A) to a darker shade of gray than the face of a White person (B), despite the fact that the two faces were matched “for both mean luminance and contrast” (p. 503). Viewing the two faces side by side, many people observe an illusory difference in lightness even though both faces have exactly the same reflectance. This provides evidence that our past associations between entities and their typical colors affect our color experience, not just our color judgments, contrary to Zeimbekis (2013). Source: Levin and Banaji (2006). 191
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
cannot be right about memory color. Zeimbekis says that “the way subjects in the experiments think of the objects could affect their color judgments without altering their color experiences” (2013, p. 168). But in viewing the two faces side-by-side, it is common for people to report that they do not simply judge the faces to have a different color. Rather, they perceive the colors of the faces to have a different color. As Firestone and Scholl (2016) put it, “[T]he difference in lightness is clearly apparent upon looking at the stimuli” (p. 12).3
3. Firestone and Scholl (2015, 2016) argue that the difference in how we perceive the Black face and the White face is explicable just in terms of low-level differences between the two faces, such as the glossiness of the Black face but not the White one. Their argument relies on the results of their 2015 study, in which they blurred the Black and White faces, so that participants were unable to identify the races, but could still see a lightness difference. Baker and Levin (2016) have replied to that study with new empirical results, and Firestone and Scholl (2016, pp. 60–61) have replied back to them. I have a different line of argument against Firestone and Scholl’s 2015 paper, however. In the blurred faces that they presented to participants, the 3-D shapes of the faces are still somewhat visible as are depth and texture cues. Since their participants have been exposed to Black and White faces already in ordinary life, there is a confound in the study. It could be that the perceived difference in lightness between the faces is due to low-level differences between the faces (which is what Firestone and Scholl argue). Or, it could be instead that prior exposure to the right conjunction of colors, on the one hand, and 3-D shape, depth, texture, and potentially other low-level cues, on the other hand, modulates the subsequent perceptions of colors experienced with those types of 3-D shapes (assuming they have similar enough depth, texture, and other low-level cues). While Firestone and Scholl’s explanation of the difference in lightness between the blurred faces is simpler than the alternative hypothesis I have proposed, my alternative hypothesis would seem to have more explanatory power in at least one respect, since it can also provide a plausible explanation for the other memory color cases I have described. 
The explanation of memory color would be that we build up associations between colors, on the one hand, and 3-D shape, depth, texture, and potentially other low-level cues on the other hand. The idea is that these associations modulate our perception of the colors of subsequent 3-D shape, depth, and texture cues of that type (at least under certain conditions). The faded banana looks yellower than it actually is because you have built up an association between yellowness, on the one hand, and the 3-D shape, texture, and depth cues of a typical banana. I return to this idea in section 7.5.
192
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
7.4 WHY MEMORY COLOR IS NOT A MECHANISM FOR COLOR CONSTANCY Why does a color effect occur that makes bananas look yellower and Smurfs look bluer under certain conditions? The traditional explanation (offered in Hering, [1920]1964; Duncker, 1939; Olkkonen, Hansen, & Gegenfurtner, 2012) is that the memory color effect is a mechanism for color constancy—that is, for enabling us to see colors more stably through variations in illumination (see Hatfield, 2009, p. 283).4 Very roughly, the idea is that the memory color effect helps subjects to perceive a banana and a Smurf as their true color (or closer to it) when those objects are viewed under less than favorable lighting conditions. As Olkkonen, Hansen, and Gegenfurtner (2012) put it, “[K]nowledge about the colors of familiar objects interacts with sensory evidence to increase the stability of perceived colors in uncertain situations, such as when the illumination is variable” (p. 194). I agree with this explanation so far. If color constancy just refers to how we see objects as the same color despite differences in illumination or viewing conditions, then memory color enables color constancy. The problem, however, is this: Color constancy occurs nearly universally across humans given the right scene, and this feature of it is inconsistent with the memory color effect. Let me explain. Color constancy is nearly universal among humans in the following sense: If viewing a particular scene implicates the mechanisms for color constancy in one person, it will almost certainly implicate those same mechanisms for another. A color constancy illusion
4. For a recent review of color constancy, see Brown (2017).
193
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
(such as the two images in Figure 7.5a) will work on everyone (at least, given that they are not color blind, have adequate visual acuity, and so on). The memory color effect, on the other hand, varies from person to person. Whether or not the mechanisms for memory color are implicated, given a particular scene, will depend on whether or not someone has seen the relevant type of object before. If someone has seen the cartoon Pink Panther before, the memory color effect might occur, but if someone has not seen the Pink Panther it will not (see Figure 7.5b). Although color constancy is nearly universal in humans, let me be clear that there are indeed cases of color constancy when the specific effect one experiences is not universal. A recent case of this involved a photograph of a dress that looked blue and black to some observers and white and gold to others.5 In both the dress case and in cases of memory color, different people may perceive different colors. At the same time, in the dress case, although the particular effect one experiences is not universal, everyone who sees the dress experiences constancy (given that they are not color blind, have adequate visual acuity, and so on). It’s just that sometimes, due to color constancy, the dress appears white and gold, and at other times, again due to color constancy, the dress appears blue and black. But in all cases, given the dress scene, the mechanisms for color constancy are implicated. My claim was that cases of color constancy are importantly different from cases of memory color because humans experience the former cases nearly universally,
5. See Holderness, C., What colors are this dress? Buzzfeed, 2016, February 26, 2016. Retrieved from https://www.buzzfeed.com/catesish/help-am-i-going-insane-its-definitely-blue. For another similar example, see Holderness, C., & Vergara, B. S., What color is this jacket? BuzzFeed, February 26, 2016. Retrieved from https://www.buzzfeed.com/catesish/ what-color-is-this-jacket.
194
(a)
(b)
Figure 7.5. This figure illustrates one major challenge for explaining memory color as a case of color constancy. Starting with the two cube images (a), the blue tiles on the cube on the far left are actually the same shade as the yellow tiles on the cube to the right of it. Yet they are perceived as different shades because of color constancy. Importantly, nearly all humans will experience this effect. By contrast, the discolored Pink Panther (b) on the right will be experienced as pinker than it actually is only if someone has seen the Pink Panther before. Source: (a) Lotto and Purves (2002); (b) The original undiscolored image is from Witzel et al. (2011).
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
whereas the latter cases differ from person to person. In this way, even the dress case differs from cases of memory color. Viewing the dress scene implicates the mechanisms for color constancy universally. By contrast, whether or not the mechanisms for memory color are implicated, given a relevant scene, depends on whether or not the viewer has seen the relevant type of object before.
7.5 MEMORY COLOR AND PERCEPTUAL LEARNING One further point against the color constancy explanation of the memory color effect is that there is a good alternative explanation for the effect. The alternative hypothesis, which I will defend, is that memory color is explicable in terms of perceptual learning.6 The idea is that following repeated experiences with bananas, for instance, we develop a relatively permanent and consistent change in our perception of them. Namely, in cases of poor illumination, they look more yellow than they actually are. This allows us to more quickly and accurately differentiate bananas from their backgrounds. At the same time, note that there is one rare case where memory color may actually thwart differentiation. If the background is the same color as the prototypical color of the object, and the object is faded just so that 6. Despite some denials that memory color is a perceptual effect, for example, Zeimbekis (2013) and Firestone and Scholl (2016), it should not come as a surprise more generally that there is some variance in color perception. We already know that some people are colorblind, even in extreme ways, such as in the condition achromatopsia, where people literally see the world in black and white (for an account of this, see Sacks, 1995). Furthermore, in the dress case mentioned earlier, some people see the dress as blue and black while other people see it as white and gold, illustrating that under the right conditions, color perception can vary widely even among normal-sighted color perceivers. My claim is that cases of memory color are just one more instance of variance in color perception—one that is due to learning.
196
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
the memory color effect makes it the same color as the background, memory color may actually make the object blend into the background. So if the background is banana-yellow, and a banana in front of that background is faded just such that the memory color effect makes it banana-yellow, this may make it blend into the background rather than stand out. Another circumstance that may worsen differentiation, although not thwart it, would be with the same or a similar background color, but where the memory color effect does not make the banana fully blend into the background, and instead just makes it closer to the background color. These are very specific sets of circumstances, however. More typically, especially under low lighting conditions, memory color enables better figure-ground separation. Recall that perceptual learning includes cases of “differentiation,” where perceptual features that were once treated by the perceptual system as unified are later treated by the system as distinct (Goldstone, 1998, p. 596). As I have mentioned, one illustrative case is of a man who became able to “distinguish by taste between the upper and lower half of a bottle of old Madeira” ( James, 1890, p. 509). One way to understand this case is that the man’s gustatory system previously treated the bottle of wine as a single unit, but later began to treat it as two distinct units. A more recent case of differentiation involves speech perception: Logan, Lively, and Pisoni (1991) found that they could train native Japanese speakers living in the United States, for whom English was their second language, to better distinguish between the phonemes /r/and /l/. Memory color enables better differentiation. Consider an ecological example. Imagine someone foraging for bananas who has never seen bananas before. Suppose furthermore that there is a cluster of bananas in dappled lighting on a banana plant in front of them. 
Given that they have never seen bananas before, the memory color effect will not be operative. Because of this, it might be 197
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
difficult for them to differentiate the bananas from the background, given the poor lighting. After they have seen many bananas, however, the memory color effect modulates the banana’s color in this setting. They see the shaded bananas as more like their prototypical yellow than they actually are. This might enable them to easily differentiate the shaded cluster of bananas from the green banana plant background (see Figure 7.6). Given the effect of memory color,
Figure 7.6. This picture shows clusters of bananas in front of a banana plant background. Several psychology studies have provided evidence that under some conditions (such as dim lighting), we see objects that have prototypical colors (such as yellow bananas) as more like their prototypical color. My claim is that this effect enables us to more easily differentiate objects from their backgrounds, such as bananas from the banana plant background in this figure. Again, this is especially relevant in dim lighting situations. Image from http:// www.banana-plants.com/Goldfinger.html. 198
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
the person looks at the plant and is able to identify the bananas immediately. I have been focusing on the case of memory color in the identification of fruit, specifically bananas. However, memory color is helpful for identifying all different kinds of objects under low lighting conditions. The studies that I cite in this chapter offer evidence that the memory color effect goes well beyond fruit. Witzel and colleagues (2011) alone provide evidence that memory color occurs for Smurfs, the Pink Panther, German mailboxes (recall that the study participants were from a German university), Nivea Creme tins, and a particular German road sign. And these are just objects that have been tested and found to have a memory color effect; there remain many objects that have never been tested. In short, it seems very likely that memory color helps us to better see a whole range of objects against their backgrounds. Since memory color allows us to better see a whole range of objects against their backgrounds, memory color enhances a particular kind of differentiation: whole s timulus differentiation. As a point of contrast, when Logan, Lively, and Pisoni (1991) trained native Japanese speakers living in the United States for whom English was their second language to better distinguish between the phonemes /r/and /l/, this was a case of differentiation between attributes. Memory color, on the other hand, helps us in poor lighting to distinguish between objects, whether it is the cluster of bananas from the banana plant or the stop sign from the darkness at night. It is important to clarify one point on my account. While the memory color effect will be useful during difficult-to-discriminate circumstances (such as when a cluster of bananas is difficult to see because of poor lighting), the perceptual learning process will often happen during easy-to-discriminate circumstances (such as when 199
T h e Sc o p e o f P e r c e p t u a l L e a r n i n g
you see a banana under normal lighting conditions). Consider the experiment from Hansen et al. (2006), for instance. That experiment provides evidence that our ordinary experience with bananas makes it such that grayscale bananas look yellower. What gave rise to this effect? It was not exposure to grayscale bananas. Rather, it was exposure to normal bananas in easy-to-discriminate circumstances. It was those normal bananas, seen in easy to discriminate circumstances, that made the grayscale bananas look yellower. This is to say that the context of the perceptual learning process is different from the context where the effect of that process is useful. If this were not the case, then it would be difficult for the memory color effect to arise in the first place. One hypothesis that I am ruling out is that one needs to recognize the object first in order to experience the memory color effect. On my view, memory color enables one to recognize types of objects (such as a banana), so my view would be false if recognition of an object-type were necessary for the memory color effect. But if recognition of an object-type is not necessary for the memory color effect, which cues are responsible for it? On my view, it is not just shape cues that are sufficient for the memory color effect. Olkkonen, Hansen, and Gegenfurtner (2008) have provided evidence that shape cues alone (even 3-D ones) are insufficient for explaining the memory color effect. Rather, I think it is likely a combination of low-level cues such as 3-D shape, depth, texture, and potentially some others which are sufficient for the effect. At the same time, I take it to be an empirical hypothesis which cues are necessary and sufficient for the memory color effect. And there might be ways in which this could be tested empirically. One option would be to separate the low-level cues from the object-type. 
For instance, one might get an irregularly shaped banana and spur research participants into recognizing it as a banana (perhaps by 200
D i f f e r e n t i at i n g Obj e c t s vi a M e m o r y C o l o r
directing their attention), and then test for the effect. If participants experience memory color under those circumstances, then this would provide evidence that prototypical shape is not necessary for the memory color effect. Similarly, one might put the texture of a lemon on a banana, and spur participants into recognizing it as a banana. If participants experience the memory color effect under those circumstances, then prototypical texture is not necessary for the memory color effect. On the other hand, one might get a croissant with very similar shape and depth cues to a banana, but which is not recognizable as a banana, and test for a banana-like memory color effect. If participants experience a banana-like memory color effect under those circumstances, then this would provide evidence that recognizing it as a banana is not necessary for the memory color effect. The traditional explanation of memory color is that it enhances color constancy—that is, the memory color effect helps us to see objects as the same color despite differences in illumination or the viewing conditions. As Olkkonen, Hansen, and Gegenfurtner (2012) put it, memory color functions to “stabilize color appearance” (p. 194). Yet, explaining memory color as something that enhances perceptual learning can also explain how the cases stabilize color appearance. In the banana case, for instance, the idea is that you first see a lot of bananas. This results in a long-term change in your perception of bananas. When you now see bananas under bad lighting, you see them as more prototypically colored than they actually are. This stabilizes the color appearance of bananas, since there is less variation in color across different illuminations. In section 7.3, I pointed to a difficulty for explaining memory color as a case of color constancy. 
A stimulus that invokes a color constancy effect will do so for nearly all human observers, while one that invokes a memory color effect will do so only if one has
prior experience with that kind of object. An explanation in terms of color constancy does not explain this highly subjective difference between viewers, but an explanation in terms of perceptual learning is designed to handle these subjective differences. The story is simple. Some subjects have undergone perceptual learning on those objects, while others have not. Those who have undergone perceptual learning on those objects are disposed to experience them as more like their prototypical color due to learning, while those who have not undergone perceptual learning are not so disposed. If memory color is a case of color constancy, then it is an odd instance of it—one that deviates from the well-studied standard cases. Olkkonen, Hansen, and Gegenfurtner (2012) seem to recognize this point, saying, "Much is known about the sensory mechanisms of color constancy, whereas the role of higher-level mechanisms is far less understood" (p. 179); and "[t]he mechanisms at lower levels of processing are relatively well understood, but we are only beginning to understand how higher-level processes could also play a role in color constancy" (p. 193). While memory color would count as an odd case of color constancy, it has a straightforward explanation in terms of perceptual learning. Memory color cases serve to enhance differentiation, a standard mechanism of perceptual learning.

Recall that the standard use of memory color in the philosophical literature is as a case of cognitive penetration, where "the phenomenal character of perceptual experience [is] altered by the states of one's cognitive system, for example, one's thoughts or beliefs" (Macpherson, 2012, p. 24). As I mentioned, one major role of perceptual learning in the philosophical literature has been to explain away putative cases of cognitive penetration.
This is because in cases of perceptual learning, it is the external environment that drives the perceptual changes, not the influence of cognitive states on perception.
My own view is that since memory color is explicable in terms of perceptual learning, there is no need to hold that memory color cases involve cognitive penetration.7 That is to say, since we can explain memory color just in terms of prior exposure to certain kinds of objects and their prototypical colors, we need not posit that cognitive states such as one’s thoughts or beliefs are playing a role in influencing that perception. Furthermore, there are other good reasons to think that perceptual learning in this case does not count as cognitive penetration. Consider Lyons (2011), who offers a similar hypothesis to mine as one of three potential explanations of memory color cases. Lyons writes that one option is that “subjects’ history with yellow bananas has produced an associative connection whereby low-level perceptual features of bananas (and not the banana identification itself) prime yellow in the color detection system, producing a lateral, rather than top-down, effect on the perceptual state” (p. 302). As Raftopoulos and Zeimbekis (2015) point out, this would not count as a case of cognitive penetration (p. 47). One obvious reason is that to count as a case of cognitive penetration, the influence on perception has to come from cognition and not from some other place. While I do not think that memory color cases count as instances of cognitive penetration, to my mind the most plausible cognitive penetration interpretation of memory color cases begins with the distinction made by Jerry Fodor (1984) between synchronic penetration and diachronic penetration. In diachronic penetration, but
7. Firestone and Scholl (2016) argue similarly that the memory color results need not imply cognitive penetration, since the memory color results could simply involve perceptual learning. However, they also wonder whether the memory color effect is even a perceptual effect at all (p. 66). I agree with them that memory color need not imply cognitive penetration, since it may instead involve perceptual learning. However, I think that the memory color effect is genuinely perceptual (see sec. 7.3).
not in synchronic penetration, “experience and training can affect the accessibility of background theory to perceptual mechanisms” (p. 39). An example of synchronic penetration is Siegel’s (2012) case of Jack and Jill, where Jill perceives that Jack is angry only because she believes that Jack is angry. This is a case of synchronic penetration because the penetration does not involve experience and training, but rather a belief that Jill has in the moment. However, for the proponent of a cognitive penetration understanding of memory color cases, some cases of perceptual learning might more plausibly fit into the category of diachronic penetration, where the penetration involves experience and training and where cognition still plays a role in influencing the perception.
7.6 MEMORY COLOR AND THE OFFLOADING VIEW

As I have been arguing throughout the book, in perceptual learning, tasks that were previously done in a controlled, deliberate manner get offloaded onto perception, thereby freeing up cognition to do other tasks. Take the case of identifying Greebles—the computer-generated 3-D figures created by Gauthier and Tarr for an experiment on object recognition. When you are first introduced to one of these figures, you have to deliberately try to locate its identifying features so that you can determine the gender and family of the Greeble. This is a controlled, cognitively taxing process. You might even have a checklist in mind, and attend to some particular set of Greeble features until you have checked off each of those features. After a while, however, when you have had sufficient training with the Greebles, your perceptual system processes them in such a way
that enables you to identify the gender and family of a Greeble immediately, thereby freeing you up to do other tasks. The Greebles case is an instance of unitization, but the Offloading View applies just as well to cases of differentiation. Consider William James's case of a person learning to differentiate between the upper and lower half of a particular type of wine. One way to understand this is as follows: The first time a person is exposed to that type of wine, differentiating its upper and lower halves would be a difficult task involving both time and cognitive resources. Through experience, however, the expert's perceptual system becomes able to differentiate the upper and lower halves of the wine immediately. This frees up cognitive resources so that the expert can make further inferences about the vineyard or vintage of the wine.

My claim was that the memory color effect enhances differentiation. Specifically, it enhances our ability to perceptually differentiate objects, such as a cluster of bananas from a banana plant in poor lighting. That is, it helps with figure-ground separation. We can further understand the memory color effect by treating it as a case that involves an offloading of a task from cognition to perception, freeing up cognition to do other tasks. Again, imagine the ecological case of a person foraging for bananas. Without the memory color effect, bananas under dim lighting may be difficult to discriminate from the background. After the person has seen a lot of bananas, however, her perceptual system adjusts the color for them. Given the memory color effect, a bushel of bananas that is seen under less-than-ideal lighting conditions may appear more like the prototypical color of bananas. This enables the person to identify the banana more quickly, without having to stop and wonder whether it is a cluster of bananas or not. This saves the person time, effort, and the cognitive resources that would have been spent if she had to examine the bushel more closely, which in
turn enables her to focus on other things, such as finding the trail again or being on the lookout for predators. According to the offloading account, the memory color effect confers an advantage on us because it frees up cognitive resources. It may seem at first glance that memory color does not free up very many cognitive resources. But note three things. First, memory color appears to be widespread. It occurs on such diverse objects as hearts, lips, apples, Smurfs, bananas, oranges, and the Pink Panther. It even occurs on objects found just in the local culture (such as German mailboxes and road signs). Furthermore, only a limited number of objects have been tested for the memory color effect to date, and the effect is likely to include a wide range of objects that have not yet been tested. Given the apparent widespread nature of memory color, the cumulative effect will be to free up quite a large amount of cognitive resources. Secondly, the memory color effect will confer an advantage, especially in cases when something requires quick identification. Imagine an ecological case in which someone needs to quickly identify fruits while foraging before a predator notices them. Given that the memory color effect frees up cognitive resources, this will enable the person to quickly spot the food and, at the same time, allow her to focus on fleeing before the predator notices. Since memory color is helping us to better see all sorts of common objects around us under poor lighting conditions, this means the following: When this effect happens, we do not have to wonder as much what kind of thing it is (e.g., whether it’s a stop sign or another kind of sign). This frees up cognitive resources in our everyday lives, potentially for a wide range of objects. Thirdly, if the color of a particular type of object is stable over time, this will likely help you to reidentify it more quickly and easily over time (see Mendelovici, 2013, p. 428). 
For example, if your coat were to change colors frequently, it would be harder to identify in the coat
rack than it actually is. Yet memory color makes the colors of types of objects more stable over time. In turn, this helps with visual search tasks. It is easier and quicker to reidentify hearts, lips, apples, Smurfs, bananas, oranges, the Pink Panther, and numerous other kinds of objects, given the existence of memory color. Since you can more quickly and easily reidentify objects, this frees up cognitive resources for other tasks.
7.7 CONCLUSION

In the philosophical literature, the main interest in cases of memory color to date has been in whether or not they are cases of cognitive penetration. I think cases of memory color are not cases of cognitive penetration, but I suggested what I thought was the most plausible way for cognitive penetration advocates to understand memory color. Understanding it as diachronic penetration would take into account experience and training. This chapter has also focused on the more neutral question of why cases of memory color occur in the first place. This question may be of interest to both sides in the cognitive penetration debate; however, it may be of special interest to the side that thinks cognitive penetration occurs. Let me explain. If cases of memory color are cases of cognitive penetration, then understanding why they occur may illuminate why cognitive penetration occurs more generally. In particular, my claim was that memory color enables tasks that were previously done in a controlled, cognitive manner to get offloaded to perception, thereby freeing cognition to do other tasks. Plausibly, for those who accept cognitive penetration, this can speak more generally to why cognitive penetration cases occur. Why might "the phenomenal character of perceptual experience be altered by the
states of one’s cognitive system, for example, one’s thoughts or beliefs?” (Macpherson, 2012, p. 24). The answer may well be that these alterations of one’s perception serve a purpose: to free up cognition to do other things.
In Part I of this book, I argued that perceptual learning genuinely occurs (in the perceptual sense in which I have been understanding it throughout the book). This chapter, along with the previous four, lends support to that argument. In chapters 3 through 7, I argued that perceptual learning occurs in natural kind recognition, sensory substitution, multisensory perception, speech perception, and color perception. And if perceptual learning occurs in any single one of those perceptual domains, then perceptual learning occurs in the perceptual sense in which I have been understanding it. Chapter 2 relied on introspective, neuroscientific, and behavioral evidence to argue that perceptual learning genuinely occurs (and is genuinely perceptual). The arguments of chapters 3 through 7 provide further evidence for this conclusion.
Conclusion
Perceptual Learning beyond Philosophy of Mind
Imagine a NASA astronomer and a layperson looking at the same pictures of Pluto, or a cell biologist and a nonbiologist looking at the same plant cells under a microscope. I have argued that there is empirical evidence that such experts and novices may see the same things differently because of what they have learned. This is because the experts are more likely to have undergone perceptual learning—long-term changes in perception that result from practice or experience. This book focused on the nature and scope of perceptual learning. I explored the significance of perceptual learning for several domains in the philosophy of perception, including natural kind recognition, sensory substitution, multisensory perception, speech perception, and color perception. The book drew on psychology and neuroscience to distinguish between different kinds of perceptual learning, from changes in how we attend to changes in how we differentiate or unitize what we see or hear. I also argued that despite the different kinds of it, perceptual learning is unified by a single function: relieving cognitive load. For instance, a novice wine taster drinking a Cabernet Sauvignon
may have to think about its features first and then infer the type of wine, whereas an expert would be able to identify it immediately. This learned ability to immediately identify the wine enables the expert to think about other things, such as the vineyard or the vintage of the wine. Cases of perceptual learning have played an important part in the history of philosophy, from Diogenes Laertius's third-century discussion of Stoic philosophy to the work of the contemporary philosophers Susanna Siegel, Berit Brogaard, Casey O'Callaghan, Charles Siewert, and many others. One of the main goals of the book has been to clarify the appropriate role of perceptual learning in these philosophical arguments. To draw on just one example from the book, Brogaard (2016) uses perceptual learning cases to argue that meanings can become represented in perception. In chapter 6, however, I argued that the perceptual learning evidence that she appeals to does not in fact show this. Part I focused on the nature of perceptual learning: on what perceptual learning is, how we can know it occurs, how we can distinguish between different types of it, and what function it serves. Part II focused on the scope of perceptual learning and argued that perceptual learning occurs in all sorts of domains in the philosophy and psychology of perception. In what follows, I show how perceptual learning has relevance for philosophy far beyond philosophy of mind—in epistemology, philosophy of science, and social philosophy, among other domains. Here are some initial sketches of ways in which we can apply knowledge of perceptual learning to those domains:

1. Epistemology (a). If our perceptions are different from one another's due to learning, as I have argued, then how can we arrive at common knowledge of the world? As a point of
contrast, imagine that every person's perceptions were the same. If this were the case, then we would have less difficulty syncing our perceptions with the perceptions of other people. However, perceptual learning suggests that every person's perception is not the same, due to learning. Now, there are all sorts of other reasons why two people's perceptions might not be the same, even assuming that the two are perceiving from the same angle, under the same conditions, and so on. For instance, maybe one person is colorblind but the other is not, or one person has 20/20 vision but the other person does not. These differences between people can all create impediments to arriving at common knowledge of the world. However, such differences are often known. A person who is colorblind, for instance, often (though not always) knows this. Someone who has poor vision also often knows it. Those who know that they are colorblind or have poor vision can bring that information to bear when they try to sync what they see with what other people see. Other perceptual differences between people are less known, as evidenced by the surprise felt by many people in the dress case (see sec. 7.4). People were surprised that they saw a dress as blue and black, when others saw the same dress as white and gold, or vice versa. Perceptual differences due to perceptual learning seem to fall more into this camp, where people are less aware that the perceptual differences even exist. This raises a special impediment for arriving at common knowledge of the world in cases where there are perceptual differences between people that are due to learning. Take the evidence from memory color studies in chapter 7. The evidence suggests that given prior experience with bananas, Smurfs, and oranges, discolored bananas sometimes
appear yellower to us than they actually are, discolored Smurfs bluer, and oranges more orange. In general, faded types of objects with which we associate a prototypical color are sometimes perceived as closer to that color than they actually are. One important upshot is that if I have seen bananas and you have not, your yellow might be different from mine when looking at a banana, and we might never even know it. But then how can we arrive at common knowledge of the colors (and other properties) of things given these subjective differences? Perhaps part of the battle here is knowing that there are such subjective differences and taking these differences into account when attempting to arrive at shared knowledge of the world through perception.

Epistemology (b). In The Rationality of Perception, Susanna Siegel (2017) presents what she calls "the problem of hijacked experience" (p. 6). Recall Siegel's case of Jack and Jill. Jill believes that Jack is angry. Because of her belief, Jill sees Jack's face as expressing anger, even though it is not. Jill's experience has been "hijacked" by her belief. Here is the question, though: Is it rational for Jill to believe her eyes? On the one hand, as Siegel notes, it does seem rational. Jill has no evidence that her perception is untrustworthy, and Jack really does look angry to her. As Siegel puts it, "what else could Jill reasonably believe about his emotional state, other than that he is angry?" (p. 6). On the other hand, it seems irrational for Jill to believe her eyes. She sees Jack's face as expressing anger only because she believes that Jack is angry. But Jack is not angry. So Jill's perception ends up strengthening her initial belief, which was false. As Siegel puts it, "[Jill] seems to have moved illicitly from her starting suspicion to a strengthening of it, via her experience" (p. 6).
So, is it rational for Jill to believe her eyes? Siegel ultimately argues that it is not rational. Roughly and briefly, Siegel's novel idea is that both perceptions and the processes that give rise to them can be rational or irrational (2017, p. 15). It is irrational for Jill to believe her eyes because Jill's perception arose through an irrational process (p. 14). Since Jill's perception arose through irrational means, it would then be unreasonable for her to believe that Jack is angry. The reasonable thing to do would be to suspend judgment (p. 14). (Siegel defends this picture in several chapters of her book.) On Siegel's view, the processes through which perception arises can be rational or irrational. But which processes? Siegel focuses mostly on the process that occurs temporally in between the stimulation of the senses and conscious perception. For instance, in Jill's case, her perception is hijacked at some point between when the light bouncing off of Jack hits her retina and when she sees Jack's face as expressing anger. In addition to this, Siegel also writes about the ways in which certain patterns of attention might be rational or irrational (2017, pp. 159–161). My account provides a natural way to extend Siegel's argument. The processes through which perception arises can be extended to include the practice or experience that typically goes into perceptual learning. These perceptual learning processes can be thought of as rational or irrational. In turn, these processes give rise to rational or irrational perceptions. In many cases, the learning process is rational, as when a radiologist in training is exposed to a large and varied group of different X-rays during the long learning process during which she becomes an expert. When she then sees a new X-ray, her prior practice and experience are part of what gives
rise to her perception. She then forms rational beliefs about the X-ray on the basis of her perception. In such a case, it might be tempting to speak of the radiologist's perceptual process just in terms of the process between the stimulation of the senses and conscious perception. However, that would not be a complete account. After all, it is the radiologist's prior practice and experience that explains why the process between the stimulation of the senses and conscious perception is rational. If she did not undergo the training, the process between the stimulation of her eye and her conscious perception of the X-ray may not have been rational. In addition to rational perceptual learning processes, irrational processes of perceptual learning can also occur. Imagine a poorly trained radiologist, who has been exposed to a large, but unvaried, group of X-rays. When she then sees a new X-ray, her prior practice and experience are part of what gives rise to her perception. Since the perceptual learning process is irrational for her, this can lead to irrational perceptions of X-rays, and irrational perceptual beliefs about them. If we extend Siegel's account such that the rational or irrational processes include perceptual learning, as I am suggesting, then a person's perceptual education, upbringing, and training become quite important epistemologically. This is because the things that a person is exposed to perceptually can have an impact on whether that person's subsequent perceptions are rational or irrational. Whether the perceptual learning process went right or wrong can be the difference between knowledge and false belief.

2. Philosophy of Science. If people really do see the world differently due to learning, as I have argued, what does this mean for the empirical study of perception? In designing perception
experiments for the lab, many researchers seem to assume that it is possible to fix the participant's percept through careful control of the experimental environment and stimulus. If, however, participants see the world differently due to long-term changes from past experience, then no amount of careful control in the lab will enable the experimenter to fully control that participant's perceptual experience. Instead, what the participant sees will be determined partly by what the experimenter presents and partly by the participant's past experiences and expertise. Indeed, given perceptual learning, when two participants are presented with the same stimulus in the same experimental conditions, they may have different percepts due to stable changes from past perceptual learning. What ought perception science to do to overcome this problem? Is it enough to expect that any effects of perceptual differences will simply come out in the wash? The issue at hand is whether subjective differences due to perceptual learning create a difficulty for the empirical study of perception. This is not to say that the difficulty is insurmountable, just that it should be accounted for in lab contexts. Perhaps there should be more prescreening for those who are outliers due to their learning and expertise. It may be that researchers need to be more upfront that the results of certain perception experiments are indicative of one particular perceptual group, and that these results might not be indicative for other perceptual groups. All of this is especially timely in the face of recent failures in the replication of psychology studies (see Open Science Collaboration, 2015). If failure of replication is in fact a serious problem, one question worth exploring is whether subjective differences between groups play some role in it.
3. Social Philosophy. Philosophical cases of perceptual learning to date have focused largely on the individual, such as the expert jeweler or the sculptor. However, one unexplored area is how perceptual learning gets implemented on a group level. We frequently have common perceptual inputs, shared among many other people. That is, our perceptual systems often get tuned as a group, through advertising, media, and other institutions. In what way does perceptual learning tune our perceptual systems through such means? For instance, does this harmfully distort our perceptions of members of social groups, and if so, what can be done about it? Perhaps the only way to avoid the harmful effects of such perceptual learning is to change our environment (see Huebner, 2016). Discussions of perceptual learning often focus on cases that involve social good, such as the case of the radiologist looking at X-rays in order to help her patients. Perceptual learning is not always the hero, however. Sometimes, it has consequences in the lives of people for the worse. It is one thing for the doctor's perceptual system to be tuned by the X-rays she has seen. It is another thing for our perceptual systems to be tuned by advertising, media, and other of our institutions, sometimes without our consent. When, for instance, perceptual learning tunes our perceptual systems through such means in ways that harmfully distort our perceptions of members of social groups, it acts as the villain. A well-cited study from 2001 by Keith Payne reported that research participants were more likely to falsely identify a tool as a gun after being primed with a Black face than after being primed with a White face. Assuming, not uncontroversially, that the participants literally saw the tool as a gun in those cases, rather than just falsely judged the object to be a gun,
this raises a question. Why would these participants—undergraduate students at Washington University in Saint Louis—be more likely to see a tool as a gun when primed by a Black face? Have these study participants really been exposed, in real life, to more Black people with guns than White people with guns? It seems more likely that their perceptions have been tuned through other means, by advertising, media, and other institutions, and, at some level, tuned against their will. An important area of future study is how perceptual learning occurs at the group level, and what consequences this has for good and for bad.
The preface began with a case of perceptual learning from my own life: a doctor who was able to detect skin cancer through years of practice and training. Perceptual learning often works for good. Through perceptual learning, experts in all sorts of domains from biology to astronomy to mathematics perform their jobs better, often yielding good for society. Furthermore, the process of perceptual learning can be a source of joy, as when it occupies our hobbies (e.g., music listening) and our social times (e.g., wine or beer tasting). Perceptual learning is often leisurely and fun. At the same time, perceptual learning has the ability to cause social harm, particularly in cases in which people's perceptual systems are tuned to harmfully distort their perceptions of members of social groups. In these cases, as I have mentioned, perceptual learning is not the hero, but the villain. All of this is to say that there are different ends for which perceptual learning can be harnessed, some good and some bad. One hope of mine is that this book can contribute a little bit to harnessing it for the good.
ACKNOWLEDGMENTS
The ideas for this book started to take shape in 2012. I had been thinking a lot about Susanna Siegel's case of pine trees looking different to someone after being exposed to them for a long time. I wondered what psychology had to say about such cases, and stumbled into the vast literature on perceptual learning. Fortunately for me, I was a postdoctoral fellow with the Network for Sensory Research run by Mohan Matthen. The goal of the network was to run interdisciplinary workshops that would bring together philosophers, psychologists, and neuroscientists to talk about perception. That year with the network, I helped to organize two workshops on perceptual learning, where I got to meet a number of leading researchers working in the area. I am very grateful to Mohan for that opportunity. I continued the perceptual learning project as a Mellon Postdoctoral Fellow at the University of Pennsylvania in 2014, working with Gary Hatfield on a project on perceptual learning in color perception. At the start of that year, the book was just a series of papers. By the end, thanks in large part to conversations
with Gary, I had a partial book draft. A while after the project had fully turned into a book project, I was fortunate to attend a National Endowment for the Humanities Summer Institute on Perception and Presupposition at Cornell University, organized by Susanna Siegel and Nico Silins. I learned a great deal from the other participants there, both from their presentations and in the roundtable discussions. I also benefited from a great many conversations and comments about perceptual learning there, and I would like to thank especially Maria Brincker, Rebecca Copenhaver, Keota Fields, Bryce Huebner, Krista Hyde, Zoe Jenkin, Gabbrielle Johnson, Rae Langton, Anna Lee, Jessie Munton, Jake Quilty-Dunn, Eve Rabinoff, Nico Silins, Jason Stanley, Katie Tullman, Jona Vance, and Preston Werner. The final push on the book happened during the 2016–2017 academic year, when Adrienne Prettyman and I were awarded a grant on perceptual learning from the Cambridge New Directions in the Study of the Mind Project run by Tim Crane. The grant funded a postdoctoral fellowship for me at the University of Pennsylvania. It also funded interdisciplinary workshops on the topic of perceptual learning at the University of Pennsylvania, from which I learned a tremendous amount. The workshops followed the Network for Sensory Research model, bringing together philosophers, psychologists, and neuroscientists to share their knowledge of perceptual learning across the disciplines. We received further assistance, financial and otherwise, from the University of Pennsylvania Department of Philosophy, the University of Pennsylvania Visual Studies Program, and the Greater Philadelphia Philosophy Consortium. Thanks very much to all of these institutions and people for providing assistance for these workshops. The grant also funded a reading group on perceptual learning at the University of Pennsylvania. I am grateful to
the participants for their conversation and feedback: Devin Curry, Louise Daoust, Ting Fung Ho, Adrienne Prettyman, Ben White, and especially Gary Hatfield.

During all this time, I had been presenting my research in various venues, and I would like to thank the audiences there for very helpful feedback, including in Michael Weisberg's Philosophy of Science Group at the University of Pennsylvania, the Penn Humanities Forum, the National Endowment for the Humanities Summer Institute on Perception and Presupposition at Cornell University, Robert Goldstone's Percepts and Concepts Lab in the Department of Psychological and Brain Sciences at Indiana University, the Yonsei University Philosophy Summer Conference, the Multisensory Integration Workshop at the University of Toronto, the Workshop on Perceptual Learning and Perceptual Recognition at the University of York, the American Philosophical Association Central Division, the American Philosophical Association Eastern Division, a Brains Blog Mind & Language symposium on sensory substitution, the Swarthmore/Haverford/Bryn Mawr Mind Initiative at Haverford College, Peter Baumann's Swarthmore Epistemology Group, and a Philosophy Department Colloquium at the University of Pennsylvania.

Parts of the book have drawn on my previously published work. I borrow a few paragraphs from Connolly (2017), published in the Stanford Encyclopedia of Philosophy. Chapter 3 is a heavily revised version of Connolly (2014c), published in Erkenntnis. Chapter 4 is a revised version of Connolly (2018), published by Oxford University Press in a volume of the Proceedings of the British Academy, edited by Fiona Macpherson. Chapter 5 is a revised version of Connolly (2014b), published in Frontiers in Psychology, in a research topic edited by Aleksandra Mroczko-Wasowicz. I thank the editors and the publishers for their help and copyright permissions.
I greatly benefited from many comments and conversations along the way on the content of this book. I owe an extra-special debt to Casey O'Callaghan, Susanna Siegel, Mohan Matthen, Gary Hatfield, Rob Goldstone, Adrienne Prettyman, Rebecca Copenhaver, and to the reviewers at OUP. They are responsible for a large number of ideas that I have incorporated into the text, as well as for a great deal of inspiration on the topic. Very special thanks also to Peter Ohlin and David Chalmers at OUP for their help.

I also want to express my gratitude for helpful conversations and comments that benefited parts of the book to Nicholas Altieri, Peter Baumann, Tim Bayne, Christine Briggs, Robert Briscoe, Derek Brown, David Chalmers, Jonathan Cohen, Tim Connolly, Frank Durgin, Kevan Edwards, Paul Franks, Craig French, Ellen Fridland, Matt Fulkerson, Todd Ganson, James Genone, Charles Gilbert, Andree Hahmann, Ben Henke, Steven James, Errol Lord, Fiona Macpherson, Christine Massey, Michael McCauley, Ian McLaren, Neil Mehta, Lisa Miracchi, Maria Olkkonen, Diana Raffman, Madeleine Ransom, Yuka Sasaki, John Schwenkler, Richard Shiffrin, Charles Siewert, Barry C. Smith, Linda Smith, David Suarez, Min Tang, Gerald Vision, Takeo Watanabe, Sebastian Watzl, Janet Werker, and Wayne Wu.

Thanks to the copy editor Ginny Faber, the production editor Anitha Jasmine Stanley, and the editorial assistant Isla Ng. Thanks also to Linda Roppolo and Ayo Seligman for their work on the book cover. I am also grateful to Tom Kremer, Thea Walmsley, and Jungang Zhu for assistance with the proofreading, references, and index.

Thanks to the friends who have helped me during the process of writing this book, many of them mentioned above. Thanks to my family, especially to Adrienne (who probably deserves a whole chapter here) and to Paul Liam. Thanks also to Adrienne's family, and to my father, Katie, Gretchen, and the kids.
Last of all, thanks to my mother and my brothers—who first introduced me to good arguments, and who have patiently put up with the consequences.
REFERENCES
Abrams, J., Barbot, A., & Carrasco, M. (2010). Voluntary attention increases perceived spatial frequency. Attention, Perception, and Psychophysics, 72(6), 1510–1521.
Ahissar, M., & Hochstein, S. (1993). Attentional control of early perceptual learning. Proceedings of the National Academy of Sciences, 90(12), 5718–5722.
Ahissar, M., & Hochstein, S. (1995). How early is early vision? Evidence from perceptual learning. In T. V. Papathomas, C. Chubb, A. Gorea, & E. Kowler (Eds.), Early vision and beyond (pp. 199–206). Cambridge, MA: MIT Press.
Ahissar, M., & Hochstein, S. (1996). Perceptual learning transfer over space and orientation. Investigative Ophthalmology and Visual Science, 37, 3182.
Arstila, V. (2016). Perceptual learning explains two candidates for cognitive penetration. Erkenntnis, 81(6), 1151–1172.
Ashworth, A. R., III, Vuong, Q. C., Rossion, B., & Tarr, M. J. (2008). Recognizing rotated faces and Greebles: What properties drive the face inversion effect? Visual Cognition, 16(6), 754–784.
Bach, K. (1997). Review essay: Engineering the mind. Review of Naturalizing the Mind, by Fred Dretske. Philosophy and Phenomenological Research, 57(2), 459–468.
Bach-y-Rita, P. (1972). Brain mechanisms in sensory substitution. New York, NY: Academic Press.
Bach-y-Rita, P., & Kercel, S. W. (2003). Sensory substitution and the human–machine interface. Trends in Cognitive Sciences, 7(12), 541–546.
Baker, L. J., & Levin, D. T. (2016). The face-race lightness illusion is not driven by low-level stimulus properties: An empirical reply to Firestone and Scholl (2014). Psychonomic Bulletin and Review, 23(6), 1989–1995.
Bao, M., Yang, L., Rios, C., He, B., & Engel, S. A. (2010). Perceptual learning increases the strength of the earliest signals in visual cortex. Journal of Neuroscience, 30(45), 15080–15084.
Bayne, T. (2009). Perception and the reach of phenomenal content. Philosophical Quarterly, 59(236), 385–404.
Behrmann, M., Marotta, J., Gauthier, I., Tarr, M. J., & McKeeff, T. J. (2005). Behavioral change and its neural correlates in visual agnosia after expertise training. Journal of Cognitive Neuroscience, 17(4), 554–568.
Berardi, N., & Fiorentini, A. (1987). Interhemispheric transfer of visual information in humans: Spatial characteristics. Journal of Physiology, 384, 633–647.
Berkeley, G. (1949). Three dialogues between Hylas and Philonous. In T. E. Jessop & A. A. Luce (Eds.), The works of George Berkeley, Bishop of Cloyne (Vol. 2). London: Thomas Nelson. Original work published in 1713.
Bertelson, P., & de Gelder, B. (2004). The psychology of multimodal perception. In C. Spence & J. Driver (Eds.), Crossmodal space and crossmodal attention (pp. 141–177). Oxford, UK: Oxford University Press.
Biederman, I., & Shiffrar, M. M. (1987). Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(4), 640–645.
Bitter, D. (2014). Is low-level visual experience cognitively penetrable? Baltic International Yearbook of Cognition, Logic, and Communication, 9(1), 1–26.
Blaser, E., Sperling, G., & Lu, Z. L. (1999). Measuring the amplification of attention. Proceedings of the National Academy of Sciences, 96(20), 11681–11686.
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18, 227–247.
Block, N. (2003). Tactile sensation via spatial perception. Trends in Cognitive Sciences, 7, 285–286.
Block, N. (2010). Attention and mental paint. Philosophical Issues, 20(1), 23–63.
Bransford, J. D., Franks, J. J., Vye, N. J., & Sherwood, R. D. (1989). New approaches to instruction: Because wisdom can't be told. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 470–479). Cambridge, UK: Cambridge University Press.
Braun, C., Schweizer, R., Elbert, T., Birbaumer, N., & Taub, E. (2000). Differential activation in somatosensory cortex for different discrimination tasks. Journal of Neuroscience, 20(1), 446–450.
Brewer, B. (2013). Attention and direct realism. Analytic Philosophy, 54(4), 421–435.
Briscoe, R. (2018). Bodily action and distal attribution in sensory substitution. In F. Macpherson (Ed.), Sensory substitution and augmentation (Vol. 219, pp. 173–186). Oxford, UK: Oxford University Press.
Brogaard, B. (2013). Do we perceive natural kind properties? Philosophical Studies, 162(1), 35–42.
Brogaard, B. (2018). In defense of hearing meanings. Synthese, 195(7), 2967–2983.
Brogaard, B., & Gatzia, D. E. (2015). Is the auditory system cognitively penetrable? Frontiers in Psychology, 6, 1166.
Brogaard, B., & Gatzia, D. E. (2016). Is color experience cognitively penetrable? Topics in Cognitive Science, 9(1), 193–214.
Brogaard, B., & Gatzia, D. E. (2018). The real epistemic significance of perceptual learning. Inquiry, 61(5–6), 543–558.
Brown, D. (in press). Colour constancy. In D. Brown & F. Macpherson (Eds.), The Routledge handbook on philosophy of colour. London: Routledge.
Bryan, W. L., & Harter, N. (1897). Studies in the physiology and psychology of the telegraphic language. Psychological Review, 4(1), 27–53.
Bryan, W. L., & Harter, N. (1899). Studies on the telegraphic language: The acquisition of a hierarchy of habits. Psychological Review, 6(4), 345–375.
Bushara, K. O., Hanakawa, T., Immisch, I., Toma, K., Kansaku, K., & Hallett, M. (2003). Neural correlates of cross-modal binding. Nature Neuroscience, 6(2), 190–195.
Bushnell, I. W. R., Sai, F., & Mullin, J. T. (1989). Neonatal recognition of the mother's face. British Journal of Developmental Psychology, 7(1), 3–15.
Byrne, A. (2001). Intentionalism defended. Philosophical Review, 110(2), 199–240.
Byrne, A. (2009). Experience and content. Philosophical Quarterly, 59(236), 429–451.
Byrne, A., & Hilbert, D. R. (2003). Color realism and color science. Behavioral and Brain Sciences, 26(1), 3–64.
Campbell, J. (2002). Reference and consciousness. Oxford, UK: Oxford University Press.
Carrasco, M., Ling, S., & Read, S. (2004). Attention alters appearance. Nature Neuroscience, 7(3), 308–313.
Cecchi, A. S. (2014). Cognitive penetration, perceptual learning and neural plasticity. Dialectica, 68(1), 63–95.
Chandler, P., & Sweller, J. (1991). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332.
Cheng, T. (2015). Obstacles to testing Molyneux's question empirically. i-Perception, 6(4), 1–5.
Chirimuuta, M. (2015). Outside colour: Perceptual science and the puzzle of colour in philosophy. Cambridge, MA: MIT Press.
Chopin, A., Levi, D. M., & Bavelier, D. (2017). Dressmakers show enhanced stereoscopic vision. Scientific Reports, 7(1), 3435.
Chudnoff, E. (2018). The epistemic significance of perceptual learning. Inquiry, 61(5–6), 520–542.
Clark, A. (2016). Surfing uncertainty: Prediction, action, and the embodied mind. New York, NY: Oxford University Press.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Clark, J. F., Ellis, J. K., Bench, J., Khoury, J., & Graman, P. (2012). High-performance vision training improves batting statistics for University of Cincinnati baseball players. PLoS ONE, 7(1), e29109.
Clarke, S. (2016). Investigating what felt shapes look like. i-Perception, 7(1), 1–6.
Cohen, J. (2009). The red and the real. Oxford, UK: Oxford University Press.
Cohen, J. (2010). Color relationalism and color phenomenology. In B. Nanay (Ed.), Perceiving the world (pp. 13–32). New York, NY: Oxford University Press.
Cohen, M. A., & Dennett, D. C. (2011). Consciousness cannot be separated from function. Trends in Cognitive Sciences, 15, 358–364.
Colzato, L. S., Raffone, A., & Hommel, B. (2006). What do we learn from binding features? Evidence for multilevel feature integration. Journal of Experimental Psychology: Human Perception and Performance, 32(3), 705–716.
Connolly, K. (2013). How to test Molyneux's question empirically. i-Perception, 4(8), 508–510.
Connolly, K. (2014a). Making sense of multiple senses. In R. Brown (Ed.), Consciousness inside and out: Phenomenology, neuroscience, and the nature of experience (pp. 351–364). Dordrecht, the Netherlands: Springer.
Connolly, K. (2014b). Multisensory perception as an associative learning process. Frontiers in Psychology, 5, 1095.
Connolly, K. (2014c). Perceptual learning and the contents of perception. Erkenntnis, 79(6), 1407–1418.
Connolly, K. (2017). Perceptual learning. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy. Retrieved from https://plato.stanford.edu/archives/sum2017/entries/perceptual-learning.
Connolly, K. (2018). Sensory substitution and perceptual learning. In F. Macpherson (Ed.), Sensory substitution and augmentation: Proceedings of the British Academy (Vol. 219, pp. 235–249). Oxford, UK: Oxford University Press.
Copenhaver, R. (2010). Thomas Reid on acquired perception. Pacific Philosophical Quarterly, 91(3), 285–312.
Copenhaver, R. (2016). Additional perceptive powers: Comments on Van Cleve's problems from Reid. Philosophy and Phenomenological Research, 93(1), 218–224.
De Brigard, F., & Prinz, J. (2010). Attention and consciousness. Advanced Review, 1, 51–59.
de Wit, T. C., Falck-Ytter, T., & von Hofsten, C. (2008). Young children with autism spectrum disorder look differently at positive versus negative emotional faces. Research in Autism Spectrum Disorders, 2(4), 651–659.
Delk, J. L., & Fillenbaum, S. (1965). Differences in perceived color as a function of characteristic color. American Journal of Psychology, 78(2), 290–293.
Dennett, D. C. (1988). Quining qualia. In A. J. Marcel & E. Bisiach (Eds.), Consciousness in contemporary science (pp. 42–77). Oxford, UK: Clarendon Press.
Deroy, O. (2013). Object-sensitivity versus cognitive penetrability of perception. Philosophical Studies, 162(1), 87–107.
Deroy, O., & Auvray, M. (2015). A crossmodal perspective on sensory substitution. In S. Biggs, M. Matthen, & D. Stokes (Eds.), Perception and its modalities (pp. 327–349). Oxford, UK: Oxford University Press.
Deroy, O., Chen, Y. C., & Spence, C. (2014). Multisensory constraints on awareness. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 369(1641), 20130207.
Deroy, O., & Spence, C. (2016). Crossmodal correspondences: Four challenges. Multisensory Research, 29(1–3), 29–48.
Deveau, J., Ozer, D. J., & Seitz, A. R. (2014). Improved vision and on-field performance in baseball through perceptual learning. Current Biology, 24(4), R146–R147.
De Weerd, P., Reithler, J., van de Ven, V., Been, M., Jacobs, C., & Sack, A. T. (2012). Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning. Journal of Neuroscience, 32(6), 1981–1988.
Diamond, R., & Carey, S. (1986). Why faces are and are not special: An effect of expertise. Journal of Experimental Psychology: General, 115(2), 107–117.
Dill, M. (2002). Specificity versus invariance of perceptual learning: The example of position. In M. Fahle & T. Poggio (Eds.), Perceptual learning (pp. 219–231). Cambridge, MA: MIT Press.
Dill, M., & Fahle, M. (1999). Display symmetry affects positional specificity in same-different judgement of pairs of novel visual patterns. Vision Research, 39, 3752–3760.
Donohue, S. E., Green, J. J., & Woldorff, M. G. (2015). The effects of attention on the temporal integration of multisensory stimuli. Frontiers in Integrative Neuroscience, 9, 32.
Dretske, F. (1995). Naturalizing the mind. Cambridge, MA: MIT Press.
Dretske, F. (2004). Change blindness. Philosophical Studies, 120(1), 1–18.
Dretske, F. (2015). Perception versus conception: The Goldilocks test. In J. Zeimbekis & A. Raftopoulos (Eds.), The cognitive penetrability of perception: New philosophical perspectives (pp. 163–173). Oxford, UK: Oxford University Press.
Duncker, K. (1939). The influence of past experience upon perceptual properties. American Journal of Psychology, 52(2), 255–265.
Fahle, M. (2002). Introduction. In M. Fahle & T. Poggio (Eds.), Perceptual learning (pp. ix–xx). Cambridge, MA: MIT Press.
Firestone, C., & Scholl, B. (2015). Can you experience "top-down" effects on perception? The case of race categories and perceived lightness. Psychonomic Bulletin and Review, 22(3), 694–700.
Firestone, C., & Scholl, B. (2016). Cognition does not affect perception: Evaluating the evidence for "top-down" effects. Behavioral and Brain Sciences, 39, 1–77.
Fodor, J. (1984). Observation reconsidered. Philosophy of Science, 51(1), 23–43.
Freschi, E. (2011). The refutation of any extra-sensory perception in Vedānta Deśika: A philosophical appraisal of Seśvaramīmāṃsā ad MS 1.1.4. Unpublished manuscript. Retrieved from http://www.ikga.oeaw.ac.at/mediawiki/images/3/3e/The_refutation_of_any_extra-sensory_perc.pdf.
Fridland, E. (2015). Skills, propositions, and the cognitive penetrability of perception. Journal of General Philosophy of Science, 46(1), 105–120.
Fulkerson, M. (2014). Explaining multisensory experience. In R. Brown (Ed.), Consciousness inside and out: Phenomenology, neuroscience, and the nature of experience (pp. 365–373). Dordrecht, the Netherlands: Springer.
Furmanski, C. S., Schluppeck, D., & Engel, S. A. (2004). Learning strengthens the response of primary visual cortex to simple patterns. Current Biology, 14(7), 573–578.
Gallace, A., Tan, H. Z., & Spence, C. (2006). The failure to detect tactile change: A tactile analogue of visual change blindness. Psychonomic Bulletin and Review, 13(2), 300–303.
Ganson, T., & Bronner, B. (2013). Visual prominence and representationalism. Philosophical Studies, 164(2), 405–418.
Garraghty, P. E., & Kaas, J. H. (1992). Dynamic features of sensory and motor maps. Current Opinion in Neurobiology, 2(4), 522–527.
Gauthier, I., & Tarr, M. J. (1997). Becoming a "Greeble" expert: Exploring mechanisms for face recognition. Vision Research, 37(12), 1673–1682.
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform "face area" increases with expertise in recognizing novel objects. Nature Neuroscience, 2(6), 568–573.
Gauthier, I., Williams, P., Tarr, M. J., & Tanaka, J. (1998). Training "Greeble" experts: A framework for studying expert object recognition processes. Vision Research, 38(15), 2401–2428.
Genone, J., & Van Buskirk, I. (2017). Complex systems. In S. M. Kosslyn & B. Nelson (Eds.), Building the intentional university: Minerva and the future of higher education (pp. 109–120). Cambridge, MA: MIT Press.
Gibson, E. J. (1963). Perceptual learning. Annual Review of Psychology, 14, 29–56.
Gibson, E. J. (1969). Principles of perceptual learning and development. Englewood Cliffs, NJ: Prentice-Hall.
Gibson, E. J., Owsley, C. J., Walker, A., & Megaw-Nyce, J. (1979). Development of the perception of invariants: Substance and shape. Perception, 8(6), 609–619.
Gibson, E. J., & Walk, R. D. (1956). The effect of prolonged exposure to visually presented patterns on learning to discriminate them. Journal of Comparative and Physiological Psychology, 49(3), 239–242.
Gibson, J. J., & Gibson, E. J. (1955). Perceptual learning: Differentiation or enrichment? Psychological Review, 62(1), 32–41.
Gilbert, C. D. (1996). Plasticity in visual perception and physiology. Current Opinion in Neurobiology, 6(2), 269–274.
Gilbert, C. D., & Li, W. (2012). Adult visual cortical plasticity. Neuron, 75(2), 250–264.
Gislén, A., Warrant, E. J., Dacke, M., & Kröger, R. H. (2006). Visual training improves underwater vision in children. Vision Research, 46(20), 3443–3450.
Glezer, L. S., Kim, J., Rule, J., Jiang, X., & Riesenhuber, M. (2015). Adding words to the brain's visual dictionary: Novel word learning selectively sharpens orthographic representations in the VWFA. Journal of Neuroscience, 35(12), 4965–4972.
Gobell, J., & Carrasco, M. (2005). Attention alters the appearance of spatial frequency and gap size. Psychological Science, 16(8), 644–651.
Gold, J. I., & Watanabe, T. (2010). Perceptual learning. Current Biology, 20(2), R46–R48.
Goldstone, R. L. (1994). Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General, 123(2), 178–200.
Goldstone, R. L. (1998). Perceptual learning. Annual Review of Psychology, 49, 585–612.
Goldstone, R. L. (2003). Learning to perceive while perceiving to learn. In R. Kimchi, M. Behrmann, & C. Olson (Eds.), Perceptual organization in vision: Behavioral and neural perspectives (pp. 233–278). Mahwah, NJ: Lawrence Erlbaum.
Goldstone, R. L. (2010). Foreword. In I. Gauthier, M. Tarr, & D. Bub (Eds.), Perceptual expertise: Bridging brain and behavior. New York, NY: Oxford University Press.
Goldstone, R. L., Braithwaite, D. W., & Byrge, L. A. (2012). Perceptual learning. In N. M. Seel (Ed.), Encyclopedia of the sciences of learning (pp. 2580–2583). Heidelberg, Germany: Springer Verlag.
Goldstone, R. L., & Byrge, L. A. (2015). Perceptual learning. In M. Matthen (Ed.), The Oxford handbook of the philosophy of perception (pp. 812–832). Oxford, UK: Oxford University Press.
Goldstone, R. L., Landy, D. H., & Brunel, L. C. (2011). Improving perception to make distant connections closer. Frontiers in Psychology, 2, 385.
Goldstone, R. L., de Leeuw, J. R., & Landy, D. H. (2015). Fitting perception in and to cognition. Cognition, 135, 24–29.
Goldstone, R. L., Weitnauer, E., Ottmar, E. R., Marghetis, T., & Landy, D. H. (2016). Modeling mathematical reasoning as trained perception-action procedures. Design Recommendations for Intelligent Tutoring Systems: Vol. 4, Domain Modeling, 213–223.
Gottlieb, J., Oudeyer, P. Y., Lopes, M., & Baranes, A. (2013). Information-seeking, curiosity, and attention: Computational and neural mechanisms. Trends in Cognitive Sciences, 17(11), 585–593.
Hagemann, N., Schorer, J., Cañal-Bruland, R., Lotz, S., & Strauss, B. (2010). Visual perception in fencing: Do the eye movements of fencers represent their information pickup? Attention, Perception, and Psychophysics, 72(8), 2204–2214.
Hansen, T., Olkkonen, M., Walter, S., & Gegenfurtner, K. R. (2006). Memory modulates color appearance. Nature Neuroscience, 9(11), 1367–1368.
Hardin, C. L. (1993). Color subjectivism. In A. Goldman (Ed.), Readings in philosophy and cognitive science (pp. 493–507). Cambridge, MA: MIT Press.
Hatfield, G. (2007). The reality of qualia. Erkenntnis, 66(1–2), 133–168.
Hatfield, G. (2009). Objectivity and subjectivity revisited: Color as a psychobiological property. In G. Hatfield (Ed.), Perception and cognition: Essays in the philosophy of psychology (pp. 281–296). Oxford, UK: Oxford University Press.
Heidegger, M. (1977). The origin of the work of art. In D. F. Krell (Ed.), Basic writings from Being and Time (1927) to The Task of Thinking (1964) (pp. 139–212). San Francisco, CA: Harper & Row.
Held, R. (1985). Binocular vision: Behavioral and neuronal development. In J. Mehler & R. Fox (Eds.), Neonate cognition: Beyond the blooming buzzing confusion (pp. 37–44). Hillsdale, NJ: Lawrence Erlbaum.
Held, R., Ostrovsky, Y., de Gelder, B., Gandhi, T., Ganesh, S., Mathur, U., & Sinha, P. (2011). The newly sighted fail to match seen with felt. Nature Neuroscience, 14(5), 551–553.
Henke, B. Connolly on perceptual learning. Unpublished manuscript. Department of Philosophy, Washington University in St. Louis.
Hering, E. (1964). Outlines of a theory of the light sense (L. M. Hurvich & D. Jameson, Trans.). Cambridge, MA: Harvard University Press. Original work published in 1920.
Hohwy, J. (2012). Attention and conscious perception in the hypothesis testing brain. Frontiers in Psychology, 3(96), 1–14.
Honey, R. C., & Hall, G. (1989). Acquired equivalence and distinctiveness of cues. Journal of Experimental Psychology: Animal Behavior Processes, 15(4), 338–346.
Hubel, D. H., & Wiesel, T. N. (1977). Ferrier lecture: Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London B: Biological Sciences, 198(1130), 1–59.
Huebner, B. (2016). Implicit bias, reinforcement learning, and scaffolded moral cognition. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy: Vol. 1. Metaphysics and epistemology (pp. 47–79). Oxford, UK: Oxford University Press.
Hume, D. (1951). A treatise of human nature. London: J. M. Dent and Sons. Original work published in 1738.
Hurley, S., & Noë, A. (2003). Neural plasticity and consciousness. Biology and Philosophy, 18(1), 131–168.
James, W. (1890). The principles of psychology. New York, NY: Henry Holt and Company.
Jarodzka, H., van Gog, T., Dorr, M., Scheiter, K., & Gerjets, P. (2013). Learning to see: Guiding students' attention via a model's eye movements fosters learning. Learning and Instruction, 25, 62–70.
Karni, A., & Sagi, D. (1993). The time-course of learning a visual skill. Nature, 365(6443), 250–252.
Karni, A., & Sagi, D. (1995). A memory system in the adult visual cortex. In B. Julesz & I. Kovacs (Eds.), Maturational windows and adult cortical plasticity: SFI studies in the sciences of complexity (Vol. 24, pp. 149–174). Reading, MA: Addison-Wesley.
Kellman, P. J. (2002). Perceptual learning. In H. Pashler & C. R. Gallistel (Eds.), Stevens' handbook of experimental psychology: Learning, motivation, and emotion (Vol. 3, 3rd ed., pp. 259–299). New York, NY: Wiley.
Kellman, P. J., & Garrigan, P. B. (2009). Perceptual learning and human expertise. Physics of Life Reviews, 6(2), 53–84.
Kellman, P. J., & Massey, C. M. (2013). Perceptual learning, cognition, and expertise. In B. H. Ross (Ed.), The psychology of learning and motivation (Vol. 58, pp. 117–165). Amsterdam, the Netherlands: Elsevier.
Kellman, P. J., Massey, C., Roth, Z., Burke, T., Zucker, J., Saw, A., . . . Wise, J. A. (2008). Perceptual learning and the technology of expertise: Studies in fraction learning and algebra. Pragmatics and Cognition, 16(2), 356–405.
Kellman, P. J., & Spelke, E. S. (1983). Perception of partly occluded objects in infancy. Cognitive Psychology, 15(4), 483–524.
Kiverstein, J., Farina, M., & Clark, A. (2015). Substituting the senses. In M. Matthen (Ed.), The Oxford handbook of the philosophy of perception (pp. 659–675). Oxford, UK: Oxford University Press.
Knight, R. T. (1996). Contribution of human hippocampal region to novelty detection. Nature, 383(6597), 256–259.
Kok, P., & de Lange, F. P. (2015). Predictive coding in sensory cortex. In U. B. Forstmann & E. J. Wagenmakers (Eds.), An introduction to model-based cognitive neuroscience (pp. 221–244). New York, NY: Springer.
Kosslyn, S. M. (2007). How the brain gives rise to the mind. In E. E. Smith & S. M. Kosslyn (Eds.), Cognitive psychology: Mind and brain. New York, NY: Pearson.
Kosslyn, S. M. (2017). Practical knowledge. In S. M. Kosslyn & B. Nelson (Eds.), Building the intentional university: Minerva and the future of higher education (pp. 19–43). Cambridge, MA: MIT Press.
Kosslyn, S. M., & Nelson, B. (Eds.). (2017). Building the intentional university: Minerva and the future of higher education. Cambridge, MA: MIT Press.
Krekelberg, B. (2011). Microsaccades. Current Biology, 21(11), R416.
Kruschke, J. K. (2003). Attention in learning. Current Directions in Psychological Science, 12(5), 171–175.
Kubovy, M., & Schutz, M. (2010). Audio-visual objects. Review of Philosophy and Psychology, 1(1), 41–61.
Kupers, R., & Ptito, M. (2004). "Seeing" through the tongue: Cross-modal plasticity in the congenitally blind. International Congress Series, 1270, 79–84.
Laertius, D. (1925). Lives of eminent philosophers (Vol. 2, R. D. Hicks, Trans.). Loeb Classical Library 185. Cambridge, MA: Harvard University Press.
Law, C. T., & Gold, J. I. (2008). Neural correlates of perceptual learning in a sensory-motor, but not a sensory, cortical area. Nature Neuroscience, 11(4), 505–513.
Lee, G. (2014). Temporal experience and the temporal structure of experience. Philosophers' Imprint, 14(3), 1–21.
Leibniz, G. W. (1982). New essays on human understanding (P. Remnant & J. Bennett, Eds. & Trans.). Cambridge, UK: Cambridge University Press. Original work published in 1765.
Levin, D. T., & Banaji, M. R. (2006). Distortions in the perceived lightness of faces: The role of race categories. Journal of Experimental Psychology: General, 135(4), 501–512.
Levy-Tzedek, S., Hanassy, S., Abboud, S., Maidenbaum, S., & Amedi, A. (2012a). Fast, accurate reaching movements with a visual-to-auditory sensory substitution device. Restorative Neurology and Neuroscience, 30, 313–323.
Levy-Tzedek, S., Novick, I., Arbel, R., Abboud, S., Maidenbaum, S., Vaadia, E., & Amedi, A. (2012b). Cross-sensory transfer of sensory-motor information: Visuomotor learning affects performance on an audiomotor task, using sensory-substitution. Scientific Reports, 2, 949.
Lewis, D. (1976). The paradoxes of time travel. American Philosophical Quarterly, 13(2), 145–152.
Locke, J. (1975). An essay concerning human understanding (P. H. Nidditch, Ed.). New York, NY: Oxford University Press. Original work published in 1690.
Logan, J. S., Lively, S. E., & Pisoni, D. B. (1991). Training Japanese listeners to identify English /r/ and /l/: A first report. Journal of the Acoustical Society of America, 89(2), 874–886.
Logue, H. (2013). Visual experience of natural kind properties: Is there any fact of the matter? Philosophical Studies, 162(1), 1–12.
Lotto, R. B., & Purves, D. (2002). The empirical basis of color perception. Consciousness and Cognition, 11(4), 609–629.
Lubow, R. E. (2010). A short history of latent inhibition research. In R. Lubow & I. Weiner (Eds.), Latent inhibition: Cognition, neuroscience and applications to schizophrenia (pp. 1–22). Cambridge, UK: Cambridge University Press.
Lubow, R. E., & Kaplan, O. (1997). Visual search as a function of type of prior experience with target and distractor. Journal of Experimental Psychology: Human Perception and Performance, 23(1), 14–24.
Lubow, R. E., & Moore, A. U. (1959). Latent inhibition: The effect of non-reinforced preexposure to the conditional stimulus. Journal of Comparative and Physiological Psychology, 52(4), 415–419.
Lubow, R. E., & Weiner, I. (Eds.). (2010). Latent inhibition: Cognition, neuroscience and applications to schizophrenia. Cambridge, UK: Cambridge University Press.
Lyons, J. (2005). Clades, Capgras, and perceptual kinds. Philosophical Topics, 33(1), 185–206.
Lyons, J. (2009). Perception and basic beliefs: Zombies, modules, and the problem of the external world. New York, NY: Oxford University Press.
Lyons, J. (2011). Circularity, reliability, and the cognitive penetrability of perception. Philosophical Issues, 21(1), 289–311.
Macpherson, F. (2012). Cognitive penetration of colour experience: Rethinking the issue in light of an indirect mechanism. Philosophy and Phenomenological Research, 84(1), 24–62.
Macpherson, F. (2017). The relationship between cognitive penetration and predictive coding. Consciousness and Cognition, 47, 6–16.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco, CA: W. H. Freeman.
Martin, J. R., & Le Corre, F. (2015). Sensory substitution is substitution. Mind and Language, 30(2), 209–233.
Matthen, M. (2005). Seeing, doing, and knowing: A philosophical theory of sense perception. Oxford, UK: Clarendon.
Matthen, M. (2014). How to be sure: Sensory exploration and empirical certainty. Philosophy and Phenomenological Research, 88(1), 38–69.
Matthen, M. (2015). Play, skill, and the origins of perceptual art. British Journal of Aesthetics, 55(2), 173–197.
Matthen, M. (2017). Is perceptual experience normally multimodal? In B. Nanay (Ed.), Current controversies in philosophy of perception (pp. 121–135). New York, NY: Routledge.
McDowell, J. (2008). Avoiding the myth of the given. In J. Lindgaard (Ed.), John McDowell: Experience, norm, and nature (pp. 1–14). Oxford, UK: Blackwell.
McLaren, I. P. L., Kaye, H., & Mackintosh, N. J. (1989). An associative theory of the representation of stimuli: Applications to perceptual learning and latent inhibition. In R. G. M. Morris (Ed.), Parallel distributed processing: Implications for psychology and neurobiology (pp. 102–130). Oxford, UK: Oxford University Press.
McLaren, I. P. L., & Mackintosh, N. J. (2000). An elemental model of associative learning: I. Latent inhibition and perceptual learning. Animal Learning and Behavior, 38(3), 211–246.
Meltzoff, A. N., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198(4312), 75–78.
Mendelovici, A. (2013). Reliable misrepresentation and tracking theories of mental representation. Philosophical Studies, 165(2), 421–443.
Mole, C. (2015). Attention and cognitive penetration. In J. Zeimbekis & A. Raftopoulos (Eds.), The cognitive penetrability of perception: New philosophical perspectives (pp. 218–237). Oxford, UK: Oxford University Press.
Naber, M., Frassle, S., Rutishauser, U., & Einhauser, W. (2013). Pupil size signals novelty and predicts later retrieval success for declarative memories of natural scenes. Journal of Vision, 13(2), 11.
Nanay, B. (2010). Attention and perceptual content. Analysis, 70(2), 263–270.
Nanay, B. (2011). Do we see apples as edible? Pacific Philosophical Quarterly, 92(3), 305–322.
Nanay, B. (2012). Perceptual phenomenology. Philosophical Perspectives, 26(1), 235–246.
Navarra, J., Yeung, H. H., Werker, J. F., & Soto-Faraco, S. (2012). Multisensory interactions in speech perception. In B. E. Stein (Ed.), The new handbook of multisensory processing (pp. 435–452). Cambridge, MA: MIT Press.
Nazir, T. A., & O’Regan, J. K. (1990). Some results on translation invariance in the human visual system. Spatial Vision, 5(2), 81–100.
Noë, A. (2004). Action in perception. Cambridge, MA: MIT Press.
Oades, R. D., & Sartory, G. (1997). The problems of inattention: Methods and interpretations. Behavioural Brain Research, 88, 3–10.
O’Callaghan, C. (2008). Seeing what you hear: Cross-modal illusions and perception. Philosophical Issues, 18(1), 316–338.
O’Callaghan, C. (2010). Experiencing speech. Philosophical Issues, 20(1), 305–332.
O’Callaghan, C. (2011). Against hearing meanings. Philosophical Quarterly, 61(245), 783–807.
O’Callaghan, C. (2014a). Intermodal binding awareness. In D. Bennett & C. Hill (Eds.), Sensory integration and the unity of consciousness (pp. 73–103). Cambridge, MA: MIT Press.
O’Callaghan, C. (2014b). Not all perceptual experience is modality specific. In D. Stokes, S. Biggs, & M. Matthen (Eds.), Perception and its modalities (pp. 133–165). Oxford, UK: Oxford University Press.
O’Callaghan, C. (2015). Speech perception. In M. Matthen (Ed.), The Oxford handbook of the philosophy of perception (pp. 475–494). Oxford, UK: Oxford University Press.
O’Callaghan, C. (2017a). Beyond vision: Philosophical essays. Oxford, UK: Oxford University Press.
O’Callaghan, C. (2017b). Grades of multisensory awareness. Mind and Language, 32(2), 155–181.
Olkkonen, M., Hansen, T., & Gegenfurtner, K. R. (2008). Color appearance of familiar objects: Effects of object shape, texture, and illumination changes. Journal of Vision, 8(5), 1–16.
Olkkonen, M., Hansen, T., & Gegenfurtner, K. R. (2012). High-level perceptual influences on color appearance. In G. Hatfield & S. Allred (Eds.), Visual experience: Sensation, cognition, and constancy (pp. 179–198). Oxford, UK: Oxford University Press.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
O’Regan, J. K. (2011). Why red doesn’t sound like a bell: Understanding the feel of consciousness. Oxford, UK: Oxford University Press.
Orlandi, N. (2014). The innocent eye: Why vision is not a cognitive process. Oxford, UK: Oxford University Press.
Payne, B. K. (2001). Prejudice and perception: The role of automatic and controlled processes in misperceiving a weapon. Journal of Personality and Social Psychology, 81(2), 181–192.
Peacocke, C. (1992). A study of concepts. Cambridge, MA: MIT Press.
Pearce, J. M. (1987). A model for stimulus generalization in Pavlovian conditioning. Psychological Review, 94(1), 61–73.
Pelphrey, K. A., Sasson, N. J., Reznick, J., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32(4), 249–261.
Perrett, D. I., Smith, P. A., Potter, D. D., Mistlin, A. J., Head, A. S., Milner, A. D., & Jeeves, M. A. (1984). Neurones responsive to faces in the temporal cortex: Studies of functional organization, sensitivity to identity and relation to perception. Human Neurobiology, 3(4), 197–208.
Pettit, D. (2010). On the epistemology and psychology of speech comprehension. Baltic International Yearbook of Cognition, Logic and Communication, 5, 1–43.
Phillips, I. (2013). Perceiving the passing of time. Proceedings of the Aristotelian Society, 113(3), 225–252.
Polat, U., & Sagi, D. (1994). Spatial interactions in human vision: From near to far via experience-dependent cascades of connections. Proceedings of the National Academy of Sciences USA, 91, 1206–1209.
Pons, F., Lewkowicz, D. J., Soto-Faraco, S., & Sebastián-Gallés, N. (2009). Narrowing of intersensory speech perception in infancy. Proceedings of the National Academy of Sciences, 106(26), 10598–10602.
Pourtois, G., de Gelder, B., Vroomen, J., Rossion, B., & Crommelinck, M. (2000). The time-course of intermodal binding between seeing and hearing affective information. Neuroreport, 11(6), 1329–1333.
Prettyman, A. (2017). Perceptual content is indexed to attention. Synthese, 194(10), 4039–4054.
Prettyman, A. (2018). Seeing the forest and the trees: A response to the identity crowding debate. Thought, 7(1), 20–30.
Price, R. (2009). Aspect-switching and visual phenomenal character. Philosophical Quarterly, 59(236), 508–518.
Prinz, J. (2006). Beyond appearances: The content of sensation and perception. In T. S. Gendler & J. Hawthorne (Eds.), Perceptual experience (pp. 434–459). Oxford, UK: Oxford University Press.
Prinz, J. (2012). The conscious brain. New York, NY: Oxford University Press.
Proulx, M. J., Brown, D. J., Pasqualotto, A., & Meijer, P. (2014). Multisensory perceptual learning and sensory substitution. Neuroscience and Biobehavioral Reviews, 41, 16–25.
Ptito, M., Moesgaard, S. M., Gjedde, A., & Kupers, R. (2005). Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain, 128(3), 606–614.
Pylyshyn, Z. (1999). Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences, 22(3), 341–365.
Qu, Z., Hillyard, S. A., & Ding, Y. (2016). Perceptual learning induces persistent attentional capture by nonsalient shapes. Cerebral Cortex, 27(2), 1512–1523.
Raffman, D. (1994). Vagueness without paradox. Philosophical Review, 103(1), 41–74.
Raftopoulos, A. (2001). Is perception informationally encapsulated? The issue of the theory-ladenness of perception. Cognitive Science, 25(3), 423–451.
Raftopoulos, A. (2005). Perceptual systems and a viable form of realism. In A. Raftopoulos (Ed.), Cognitive penetrability of perception: Attention, strategies and bottom-up constraints (pp. 73–106). Hauppauge, NY: Nova Science.
Raftopoulos, A., & Zeimbekis, J. (2015). The cognitive penetrability of perception: An overview. In J. Zeimbekis & A. Raftopoulos (Eds.), The cognitive penetrability of perception: New philosophical perspectives (pp. 218–237). Oxford, UK: Oxford University Press.
Ramachandran, V. S., & Braddick, O. (1973). Orientation-specific learning in stereopsis. Perception, 2(3), 371–376.
Recanzone, G. H., Schreiner, C. E., & Merzenich, M. M. (1993). Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys. Journal of Neuroscience, 13(1), 87–103.
Rehder, B., & Hoffman, A. B. (2005). Eyetracking and selective attention in category learning. Cognitive Psychology, 51(1), 1–41.
Reid, T. (1997). An inquiry into the human mind on the principles of common sense (D. R. Brookes, Ed.). Edinburgh, Scotland: Edinburgh University Press. Original work published in 1764.
Reiland, I. (2015). On experiencing meanings. Southern Journal of Philosophy, 53(4), 481–492.
Romito, B. T., Krasne, S., Kellman, P. J., & Dhillon, A. (2016). The impact of a perceptual and adaptive learning module on transoesophageal echocardiography interpretation by anaesthesiology residents. British Journal of Anaesthesia, 117(4), 477–481.
Sacks, O. (1995). The case of the colorblind painter. In O. Sacks (Ed.), An anthropologist on Mars: Seven paradoxical tales (pp. 3–41). New York, NY: Alfred A. Knopf.
Sagi, D. (2011). Perceptual learning in Vision Research. Vision Research, 51(13), 1552–1566.
Salasoo, A., Shiffrin, R. M., & Feustel, T. C. (1985). Building permanent memory codes: Codification and repetition effects in word identification. Journal of Experimental Psychology: General, 114(1), 50–77.
Sasaki, Y., & Watanabe, T. (2015). Visual perceptual learning and sleep. In K. Kansaku, L. G. Cohen, & N. Birbaumer (Eds.), Clinical systems neuroscience (pp. 343–357). Dordrecht, the Netherlands: Springer.
Savelsbergh, G. J. P., Williams, A. M., Van der Kamp, J., & Ward, P. (2002). Visual search, anticipation and expertise in soccer goalkeepers. Journal of Sports Sciences, 20(3), 279–287.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1.
Schurgin, M. W., Nelson, J., Iida, S., Ohira, H., Chiao, J. Y., & Franconeri, S. L. (2014). Eye movements during emotion recognition in faces. Journal of Vision, 14(14), 1–16.
Schwartz, S., Maquet, P., & Frith, C. (2002). Neural correlates of perceptual learning: A functional MRI study of visual texture discrimination. Proceedings of the National Academy of Sciences, 99(26), 17137–17142.
Schwenkler, J. (2012). On the matching of seen and felt shapes by newly sighted subjects. i-Perception, 3, 186–188.
Schwenkler, J. (2013). Do things look the way they feel? Analysis, 73(1), 86–96.
Shiffrin, R. M. (1988). Attention. In R. C. Atkinson, R. J. Herrnstein, G. Lindzey, & R. D. Luce (Eds.), Stevens’ handbook of experimental psychology, Vol. 2: Learning and cognition (2nd ed., pp. 739–811). New York, NY: Wiley.
Shiffrin, R. M., & Lightfoot, N. (1997). Perceptual learning of alphanumeric-like characters. In R. L. Goldstone, P. G. Schyns, & D. L. Medin (Eds.), Psychology of learning and motivation (Vol. 36, pp. 45–82). San Diego, CA: Academic Press.
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychological Review, 84(2), 127.
Siegel, S. (2006). Which properties are represented in perception? In T. S. Gendler & J. Hawthorne (Eds.), Perceptual experience (pp. 481–503). Oxford, UK: Oxford University Press.
Siegel, S. (2007). How can we discover the contents of experience? Southern Journal of Philosophy, 45(S1), 127–142.
Siegel, S. (2010). The contents of visual experience. Oxford, UK: Oxford University Press.
Siegel, S. (2012). Cognitive penetrability and perceptual justification. Noûs, 46(2), 201–222.
Siegel, S. (2017). The rationality of perception. Oxford, UK: Oxford University Press.
Siegle, J. H., & Warren, W. H. (2010). Distal attribution and distance perception in sensory substitution. Perception, 39(2), 208–223.
Siewert, C. (1998). The significance of consciousness. Princeton, NJ: Princeton University Press.
Silins, N. (2016). Cognitive penetration and the epistemology of perception. Philosophy Compass, 11(1), 24–42.
Sinha, P., & Poggio, T. (2002). High-level learning of early visual tasks. In M. Fahle & T. Poggio (Eds.), Perceptual learning (pp. 273–298). Cambridge, MA: MIT Press.
Slater, A., Mattock, A., & Brown, E. (1990). Size constancy at birth: Newborn infants’ responses to retinal and real size. Journal of Experimental Child Psychology, 49(2), 314–322.
Smith, A. D. (2002). The problem of perception. Cambridge, MA: Harvard University Press.
Smith, B. C. (2009). Speech sounds and the direct meeting of minds. In M. Nudds & C. O’Callaghan (Eds.), Sounds and perception: New philosophical essays (pp. 183–210). Oxford, UK: Oxford University Press.
Smith, B. C. (2013). Taste: Philosophical perspectives. In H. E. Pashler (Ed.), Encyclopedia of the mind (pp. 731–734). Thousand Oaks, CA: SAGE.
Smith, K. S., Mahler, S. V., Pecina, S., & Berridge, K. C. (2010). Hedonic hotspots: Generating sensory pleasure in the brain. In M. L. Kringelbach & K. C. Berridge (Eds.), Pleasures of the brain (pp. 27–49). Oxford, UK: Oxford University Press.
Smith, L. B., & Thelen, E. (1993). A dynamic systems approach to development: Applications. Cambridge, MA: MIT Press.
Soames, S. (2007). What are natural kinds? Philosophical Topics, 35(1/2), 329–342.
Spelke, E., Hirst, W., & Neisser, U. (1976). Skills of divided attention. Cognition, 4(3), 215–230.
Spence, C. (2007). Audiovisual multisensory integration. Acoustical Science and Technology, 28(2), 61–70.
Spence, C., & Bayne, T. (2015). Is consciousness multisensory? In S. Biggs, D. Stokes, & M. Matthen (Eds.), Perception and its modalities (pp. 95–132). Oxford, UK: Oxford University Press.
Spence, C., & Driver, J. (Eds.). (2004). Crossmodal space and crossmodal attention. Oxford, UK: Oxford University Press.
Stanley, J., & Krakauer, J. W. (2013). Motor skill depends on knowledge of facts. Frontiers in Human Neuroscience, 7, 503.
Stanley, J., & Williamson, T. (2017). Skill. Noûs, 51(4), 713–726.
Stazicker, J. (2011). Attention, visual consciousness and indeterminacy. Mind and Language, 26(2), 156–184.
Stein, B. E. (2012). The new handbook of multisensory processing. Cambridge, MA: MIT Press.
Stein, B. E., & Wallace, M. T. (1996). Comparisons of cross-modality integration in midbrain and cortex. Progress in Brain Research, 112, 289–299.
Stevenson, R. A., Segers, M., Ferber, S., Barense, M. D., & Wallace, M. T. (2014). The impact of multisensory integration deficits on speech perception in children with autism spectrum disorders. Frontiers in Psychology, 5, 379.
Stokes, D. (2014). Cognitive penetration and the perception of art. Dialectica, 68(1), 1–34.
Stokes, D. (2015). Towards a consequentialist understanding of cognitive penetration. In J. Zeimbekis & A. Raftopoulos (Eds.), The cognitive penetrability of perception: New philosophical perspectives (pp. 75–100). Oxford, UK: Oxford University Press.
Stokes, D. (2018). Attention and the cognitive penetrability of perception. Australasian Journal of Philosophy, 96(2), 303–318.
Strawson, G. (2010). Mental reality. Cambridge, MA: MIT Press. Original work published in 1994.
Talsma, D., Senkowski, D., Soto-Faraco, S., & Woldorff, M. G. (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14(9), 400–410.
Tanaka, J. W., & Curran, T. (2001). A neural basis for expert object recognition. Psychological Science, 12(1), 43–47.
Thompson, E. (1995). Colour vision. London, UK: Routledge.
Tolman, E. C., & Honzik, H. C. (1930). Introduction and removal of reward, and maze performance of rats. University of California Publications in Psychology, 4, 257–275.
Treisman, A. (1996). The binding problem. Current Opinion in Neurobiology, 6(2), 171–178.
Treisman, A. (1999). Solutions to the binding problem: Progress through controversy and convergence. Neuron, 24(1), 105–125.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Twilley, N. (2017, May 15). Seeing with your tongue. The New Yorker, pp. 38–42.
Tye, M. (2000). Consciousness, color, and content. Cambridge, MA: MIT Press.
Van Cleve, J. (2004). Reid’s theory of perception. In T. Cuneo & R. van Woudenberg (Eds.), The Cambridge companion to Thomas Reid (pp. 101–133). Cambridge, UK: Cambridge University Press.
Van Cleve, J. (2015). Problems from Reid. New York, NY: Oxford University Press.
Van Cleve, J. (2016). Replies to Falkenstein, Copenhaver, and Winkler. Philosophy and Phenomenological Research, 93(1), 232–245.
Vatakis, A., & Spence, C. (2007). Crossmodal binding: Evaluating the unity assumption using audiovisual speech stimuli. Perception and Psychophysics, 69(5), 744–756.
Vitevitch, M. S. (2003). Change deafness: The inability to detect changes between two voices. Journal of Experimental Psychology: Human Perception and Performance, 29(2), 333–342.
Wagner, J. B., Hirsch, S. B., Vogel-Farley, V. K., Redcay, E., & Nelson, C. A. (2013). Eye-tracking, autonomic, and electrophysiological correlates of emotional face processing in adolescents with autism spectrum disorder. Journal of Autism and Developmental Disorders, 43(1), 188–199.
Wang, T., & Mitchell, C. J. (2011). Attention and relative novelty in human perceptual learning. Journal of Experimental Psychology: Animal Behavior Processes, 37(4), 436.
Watanabe, T., & Sasaki, Y. (2015). Perceptual learning: Toward a comprehensive theory. Annual Review of Psychology, 66(3), 197–221.
Watzl, S. (in press). Can intentionalism explain how attention affects appearances? In A. Pautz & D. Stoljar (Eds.), Themes from Block. Cambridge, MA: MIT Press.
Westbrook, R. F., & Bouton, M. E. (2010). Latent inhibition and extinction: Their signature phenomena and the role of prediction error. In R. Lubow & I. Weiner (Eds.), Latent inhibition: Cognition, neuroscience, and applications to schizophrenia (pp. 23–39). Cambridge, UK: Cambridge University Press.
Wiesel, T. N., & Hubel, D. H. (1963). Single-cell responses in striate cortex of kittens deprived of vision in one eye. Journal of Neurophysiology, 26(6), 1003–1017.
Williams, A. M., & Davids, K. (1998). Visual search strategy, selective attention, and expertise in soccer. Research Quarterly for Exercise and Sport, 69(2), 111–128.
Witzel, C., Valkova, H., Hansen, T., & Gegenfurtner, K. (2011). Object knowledge modulates colour appearance. i-Perception, 2, 13–49.
Wu, W. (2014). Attention. New York, NY: Routledge.
Wu, W. (2017). Shaking up the mind’s ground floor: The cognitive penetration of visual attention. Journal of Philosophy, 114(1), 5–32.
Yang, J., McCauley, M., & Masotti, E. (2014). Effectiveness evaluation of search and target acquisition training prototype using performance metrics with eye-tracking data. Military Psychology, 26(2), 101–113.
Zeimbekis, J. (2013). Color and cognitive penetrability. Philosophical Studies, 165(1), 167–175.
Zenger, B., & Sagi, D. (2002). Plasticity of low-level visual networks. In M. Fahle & T. Poggio (Eds.), Perceptual learning (pp. 177–196). Cambridge, MA: MIT Press.
INDEX
Argument from Homophones, 159–64
associative learning, 109, 129–32, 134
attention
  attentional shift, 10, 25, 67–8, 73, 82–5, 88, 90, 105, 107–8, 120, 123, 182
  attentional training, 17, 75, 83, 103–24
  attentional tuning (see attention: attentional weighting)
  attentional weighting, 24–8, 82, 99, 103–24
  bottom-up vs. top-down, 26, 130
  controlled vs. automatic, 26, 114n, 175–6
  covert vs. overt, 82–3, 90
  cueing, 90–2, 96, 120–1
  distal vs. proximal, 106–7
  focal vs. diffuse, 82
  inattention, 24–8, 101, 114–15
  tracking of, 76, 113n
attentional weighting. See under attention
attribute differentiation. See under differentiation
auditory phenomenology. See under phenomenology
Banaji, M. R., 191
banana case, the, 179, 181, 184, 188–90, 192n, 193, 196–201, 203, 205–7, 211–12
Bayne, T., xiii, 5, 47, 133n, 138, 152, 154n
Berkeley, G., 66
binding. See feature binding
Blind Flailing Model, 67, 76–97
Block, N., 5, 21–2, 25, 35, 47, 89–90, 103, 124, 154
brain plasticity, 49–50, 59, 102
Brogaard, B., xii–xiii, 46, 61, 97, 154–6, 158, 160–1, 163, 170–3, 175–7, 182–3, 210
Bryan, W. L., 29–31, 35, 108
Byrne, A., 97, 158, 185
Carrasco, M., 25, 83, 89–90
change blindness, 19n
Chase and Sanborn case, 10–12
chicken sexing, 9–10, 75, 83–4, 167
chunking
  in memory, 139
  in multisensory perception, 139–47
  in speech perception, 158, 172–4
Clark, A., 32, 103, 114n, 124
cognitive impenetrability, 74–5. See also cognitive penetration
cognitive load, 29n, 33, 108–9, 209–10
cognitive penetration, 9–10, 74, 83, 180, 182–4, 202–4, 207
  diachronic vs. synchronic, 203–4
  See also cognitive impenetrability
cognitive resources, xiv, 6, 29–30, 32–6, 68, 99, 123, 175–7, 181, 205–7
color
  constancy, 180–1, 193–6, 201–2
  experience vs. judgment, 189–92
  objectivism, 185
  perception, 7, 36, 60, 153, 178, 196n, 208–9
    under low lighting conditions, 178–9, 181, 193, 197–201, 205–6
  relationalism, 185
  subjectivism, 185
concepts. See under perceptual learning
contents of perception, 36, 60, 70–1, 74–5, 85, 89n, 91–2, 97–8, 126, 127n, 155, 157–9, 177
  rich vs. thin (see high-level kind properties; low-level properties)
Copenhaver, R., 5n, 46n
crossmodal illusions, 147–8
Cyrillic case, 4, 7, 13, 25, 35, 77–8, 91–2, 107–9, 154, 158
Delk, J., 182, 186–7
Dennett, D., 10–12, 82
depression-induced perceptual states. See under perceptual learning
Deśika, V., xii, 4–5, 7, 35, 46, 61
differentiation, 20–2, 35–6, 60, 153, 156, 166–70, 176–8, 196–7, 199, 202, 205
  whole stimulus (object) differentiation vs. attribute differentiation, 167–8, 170, 178, 199
Diogenes. See Laertius, D.
distal attribution, 104–5, 107–8, 125
dress case, the, 194, 196, 211
Dretske, F., 11, 19n, 37, 42–5, 50, 55, 59–60, 122
drug-induced perceptual states. See under perceptual learning
Duncker, K., 180, 185–7, 193
empiricism, 128
eye movements, 82–3, 87n, 119–20
EyeMusic, 117–20. See also sensory substitution: sensory substitution devices
Fahle, M., 15, 48, 50, 58
failed generalization. See under perceptual learning
feature binding, intermodal, 129–38, 141–9, 152
  awareness, 129, 131–8, 143, 146, 152
  process, 133, 136–8, 146, 149, 152
  See also part binding
figure-ground separation, 181, 197, 205
Fillenbaum, S., 182, 186–7
Firestone, C., 191–2, 196n, 203n
flavor perception, 149–50
  See also perceptual learning: and wine tasting
Gauthier, I., 22–3, 204
generalization. See perceptual learning: and failed generalization
Gibson, E. J., xii–xiii, 3, 7, 14, 65n, 67, 73–4, 78, 82, 85n, 174
Gilbert, C., 49–50, 57–8
Gold, J. I., 7, 11n, 39, 82n
Goldstone, R. L., xiii, 15, 17, 20, 23–5, 29, 34, 48, 50, 67, 76–9, 82–3, 113–14, 121–23, 139–40, 142, 166–9, 174, 197
Greebles, 22–4, 33, 45, 140–2, 204–5
Hansen, T., 180, 187–9, 193, 200–2
Harter, N., 29–31, 35, 108
Hatfield, G., 185, 193
hearing meanings, xiii, 107–8, 154–64, 166, 170–5, 177–8, 210
Held, R., 14, 128n
high-level kind properties, xiii, 65–70, 74, 89n, 96–7, 170
  vs. low-level properties, xiv, 61, 65–8 (see also low-level properties)
homophones. See Argument from Homophones
Hubel, D., 49–50
Hume, D., 66
imprinting. See stimulus imprinting
Inferential View of language comprehension. See under language comprehension
intentionalism, 157–8
intermodal binding. See feature binding
intermodal feature binding awareness. See under feature binding
intuitional content, 98
James, W., xii, 20, 166–7, 197, 205
K-properties, 71, 91. See also high-level kind properties
Kellman, P., 14–15, 29, 31, 35
Laertius, D., xii, 4–5, 7, 210
language comprehension, 171, 175–6
  Inferential View vs. Perceptual View, 171
language learning, 5, 21–2, 34n, 35, 47, 154–5, 157–9, 164–70
language learning perceptual learning case, 5, 21–2, 35, 47, 154, 164–5
latent inhibition, 109–19
  vs. latent learning, 110–12
Law, C. T., 11n, 39
Leibniz, G. W., 128
levels of analysis, 6, 40–1, 45–6, 48
Levin, D. T., 191–2
Locke, J., 128
long-term perceptual changes vs. short-term adaptive effects. See under perceptual learning
low-level properties, xiv, 61, 65–8, 73, 78, 89, 97, 108, 146, 170, 192n, 200, 203
Lubow, R. E., 110–14
Lyons, J., 22, 69n, 81, 86, 164, 203
Mackintosh, N., 109–10, 115
Macpherson, F., 9–10, 73, 180–4, 202, 208, 221
Matthen, M., 95n, 127n, 185
McDowell, J., 37, 41–2, 46–7, 98
McLaren, I., 109–10, 115
memory color
  and cognitive penetration, 180, 182–4, 202–4, 207
  and color constancy, 180–1, 193–6, 201–2
  empirical evidence for, 185–92
minimally multimodal
  argument that perceptual awareness is not, 135–6, 152
misperception vs. false belief, 67–8
Molyneux’s question, 127–8, 134, 153
multisensory integration, 130. See also multisensory perception
multisensory perception, 6–7, 20n, 36, 60, 126–38, 142–53, 208–9
Nanay, B., 87, 89n, 97
nativism, 128
natural kind recognition, 65–7, 99, 108. See also high-level kind properties; K-properties
Noë, A., 103, 124
novelty. See under perceptual learning
O’Callaghan, C., xii–xiii, 5, 21–2, 46–7, 61, 128–31, 133–8, 143, 146–7, 150, 152, 154–6, 158–61, 163–70, 176–77, 210
Offloading View of perceptual learning, 6, 28–36, 68, 77, 135, 156, 181
  and attentional weighting, 99–100
  and latent inhibition, 115
  and memory color, 204–7
  and perceptual hacking, 123
  and sensory substitution, 108–9
  and speech perception, 174–8
  and unitization, 151–2
Olkkonen, M., 180, 187, 193, 200–2
part binding, 141–2. See also feature binding
Peacocke, C., xii, 4–5, 7, 25–6, 35, 61, 77, 91–2, 107, 154, 158
perception-based motor skills. See under perceptual learning
perceptual content. See contents of perception
perceptual development, 14–16, 49–50, 55, 73
  vs. perceptual learning (see under perceptual learning)
  vs. perceptual maturation, 14–16, 42
perceptual hacking. See under perceptual learning
perceptual learning
  behavioral evidence for, 6, 44–5, 57–9, 72
  Cognitive view vs. Perceptual view, 40, 44–5, 48, 52, 55, 59
  and concepts, 38–9, 41, 43–4, 91–2, 95n, 98, 183 (see also recognitional dispositions)
perceptual learning (cont.)
  enrichment vs. specificity, 73–4
  and epistemology, xiii, 61, 210–14
  and exposure, 13, 19, 59–60, 74, 78, 80–1, 109, 112–13, 115, 117–18, 123, 151, 165–6, 174, 183, 192n, 200, 203
  and failed generalization, 57–8
  instruction (vs. exposure), 16, 109, 121–3 (see also perceptual learning: supervised vs. unsupervised)
  introspective evidence for, xiii, 6, 41, 45–8
  as long-term perceptual changes vs. short-term adaptive effects, 7–10
  mechanisms of, 18–28
  neuroscientific evidence for, 6, 37, 45–6, 48–60
  as perceptual, 10–11
  and novelty, 67, 80–1, 83, 101, 109–18, 124–6, 173
  and philosophy of science, 214–15
  and race, 191, 192n, 216–7
  as resulting from practice or experience, 11–14
  skepticism about, 36, 38–47
  slow vs. fast, 19–20
  and social philosophy, 215–17
  and speech (see speech perception)
  supervised vs. unsupervised, 13, 78, 174 (see also perceptual learning: instruction (vs. exposure))
  taxonomy, 18–28
  training regimens, 122–3
  vs. depression-induced perceptual states, 17–18
  vs. drug-induced perceptual states, 17
  vs. learning based on perception, 11, 105
  vs. mere biological changes, 12, 14
  vs. perception-based motor skills, 16–17, 39–40
  vs. perceptual development, 14–16
  vs. perceptual hacking, 121–4
  vs. time dilation perceptual states, 17–18
  and wine tasting, xii–xiv, 3, 18–21, 38, 40, 43–45, 59–60, 77, 110, 114n, 115, 139–40, 166–7, 197, 205, 209–10, 217
perceptual maturation. See under perceptual development
perceptual phenomenology, 6, 42–4, 46, 55, 69–72, 83, 85, 89, 91–3, 96, 136, 141, 156–66, 182–4, 202, 207–8
  auditory, 157–66
  sensory (see sensory phenomenology)
  visual, 91, 158–9, 183
perceptual vs. post-perceptual, 38–40, 52, 55, 138, 152, 177
phenomenal character of perception. See perceptual phenomenology
Phenomenal Contrast Argument, 66, 69–72, 74–6, 85–9, 91, 96–7, 136n, 155–64
  attentional reply to the, 72–96
  for Hearing Meanings, 156–64
  for Seeing Meanings, 158–9
  See also phenomenal similarity arguments
phenomenal similarity arguments, 97–9
phonemes, 21, 34n, 122, 150–1, 156, 165, 167–70, 176, 178, 197, 199
pine tree case, xiv, 4, 7, 13, 19n, 46, 65–78, 80–2, 84–6, 88–92, 94–7, 100–1, 123, 156, 182–3
plasticity. See brain plasticity
post-perceptual. See perceptual vs. post-perceptual
primary sensory cortex, 6, 48, 50, 56–8
primary visual cortex (V1), 49, 51–3, 55–6, 58, 184
Prinz, J., 5, 47, 82, 154n
properties
  high-level vs. low-level (see high-level kind properties; low-level properties)
  proximal vs. distal properties, 104–8
pupil dilation, 116, 118, 125
Pylyshyn, Z., 9n, 10, 74–5, 83, 183
Raftopoulos, A., 10, 19, 183, 203
recognitional dispositions, 67n, 70, 73–4, 88, 90–6. See also perceptual learning: and concepts
Reid, T., xii, 5, 7, 35, 46, 61
Scholl, B., 191–2, 196n, 203n
sense modalities, 5, 14, 56–7, 59, 66, 128–9, 135, 143, 151–2, 213–4
  hybrid, 103, 124–5
  substituting vs. substituted, 102–3, 105n, 124–5
senses, the. See sense modalities
sensory phenomenology, 43–4, 46, 55, 69–72, 89, 156
sensory substitution, 7, 36, 60, 100–9, 116–21, 123–6, 208–9
  hybrid sense modality (see under sense modalities)
  sensory substitution devices (SSDs), 101–5, 108, 116–21, 123–5 (see also EyeMusic; tactile-vision sensory substitution; the vOICe)
  substituting vs. substituted (see under sense modalities)
shape-gestalt view, 85–6
Shiffrin, R. M., 17, 27, 29, 140, 174
Siegel, S., xii, xiv, 4–5, 7, 9, 17–18, 21, 25–6, 46–7, 61, 65–77, 80, 84–7, 89n, 91, 97, 100, 108, 154n, 156–8, 181–3, 204, 210, 212–14
Siewert, C., xii, 4–5, 7, 24, 46, 65n, 210
Smith, A. D., 47, 97–8
Smith, B. C., 149, 150n, 164
speech perception, xiiin, 7, 21–2, 36, 60, 133–4, 150, 153–78, 197, 209
  differentiation in, 166–70, 176–8
  unitization in, 155, 168–70, 178
Spence, C., 131, 133n, 138, 143, 146, 152
Stanley, J., 16–17, 95n
stimulus imprinting, 20n
Strawson, G., xii, 5, 22, 47, 154, 164
tactile-vision sensory substitution (TVSS), 118. See also sensory substitution: sensory substitution devices
Tarr, M. J., 22–3, 204
taste perception. See flavor perception
Thatcher effect, 141
time dilation. See under perceptual learning
Tolman, E. C., 110–12
transcranial magnetic stimulation, 53–4
Treisman, A., 141
Tye, M., 5, 42–3, 47, 50, 55, 154n
unitization, 20, 22–4, 36, 60, 126, 129, 135, 137–53, 155, 168–70, 178, 205
  and differentiation, 168–70
V1. See primary visual cortex
ventriloquist effect, 147–8
visual acuity, 21, 60, 122n
visual phenomenology. See under phenomenology
the vOICe, 103–4, 117, 120. See also sensory substitution: sensory substitution devices
Walk, R., 78, 174
Watanabe, T., 7, 50, 54n, 82n
whole stimulus differentiation. See under differentiation
Wiesel, T., 49–50
wine tasting. See under perceptual learning
Witzel, C., 182, 189–90, 199
Wu, W., 10, 26–7
Zeimbekis, J., 10, 181–2, 184n, 189–92, 196n, 203