BEFORE MODERN HUMANS
For Jim and Lew
BEFORE MODERN HUMANS New Perspectives on the African Stone Age
Grant S. McCall
Walnut Creek, California
LEFT COAST PRESS, INC.
1630 North Main Street, #400
Walnut Creek, CA 94596
www.LCoastPress.com

Copyright © 2015 by Left Coast Press, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the publisher.

ISBN 978-1-61132-222-4 hardback
ISBN 978-1-61132-656-7 institutional eBook
ISBN 978-1-61132-224-8 consumer eBook

Library of Congress Cataloging-in-Publication Data:
McCall, Grant S., author.
Before modern humans : new perspectives on the African Stone Age / Grant S. McCall.
pages cm
Includes bibliographical references and index.
ISBN 978-1-61132-222-4 (hardback : alk. paper)—ISBN 978-1-61132-224-8 (institutional eBook)—ISBN 978-1-61132-656-7 (consumer eBook)
1. Hominids—Behavior—Evolution—Africa. 2. Hominids—Evolution—Africa. 3. Paleoanthropology—Africa. 4. Mesolithic period—Africa. 5. Paleolithic period—Africa. 6. Paleontology—Pleistocene. 7. Tools, Prehistoric—Africa. 8. Stone implements—Africa. I. Title.
GN772.4.A1M35 2014
569.9096—dc23
2014019572

Printed in the United States of America

The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials, ANSI/NISO Z39.48–1992.
Contents
List of Illustrations  7
Preface  13
1 Introduction  21
2 Stone Tool Technology and the Organizational Approach  61
3 The Organization of Early Stone Age Lithic Technology  93
4 The Organization of Middle Stone Age (MSA) Lithic Technology  141
5 Fear and Loathing in Paleolithic Faunal Analysis  187
6 Implications of Lower and Middle Pleistocene Faunal Assemblage Composition  213
7 Implications of Lower and Middle Pleistocene Hominin Bone Modification Patterns  243
8 Alternative Perspectives on Hominin Biological Evolution and Ecology  281
9 Conclusion  323
Notes  339
References  343
Index  385
About the Author  391
Illustrations
Figures
1.1 Cranium of the so-called Taung Child, the type specimen of Australopithecus africanus  28
1.2 Mary and Louis Leakey excavate at the FLK 22 Zinjanthropus locality of Olduvai Gorge  32
1.3 Engraved ocher fragment and perforated marine shell beads associated with Still Bay industry artifacts at Blombos Cave  44
2.1 Acheulean handaxe type specimen recovered by Jacques Boucher de Perthes from Abbeville, France, in 1867  63
2.2 François Bordes knaps a blade core using indirect percussion  64
2.3 (a) Example of residential mobility from the Ju/’hoansi (formerly !Kung) foragers of the Kalahari; (b) example of logistical mobility from the Nunamiut  76
2.4 Flowchart diagramming the organizational interconnections between environmental dynamics, mobility and settlement systems, and technological characteristics  79
3.1 Handaxe concentration at the Main Site complex of Olorgesailie  107
3.2 (a) Handaxe collected by Jacques Boucher de Perthes from Abbeville housed at the Muséum d’Histoire Naturelle de Toulouse; (b) replica handaxe made on basalt for use in butchery experimentation  107
3.3 Map of Olorgesailie localities discussed in the text  109
3.4 Graph showing the relationship between the percentage of small flakes from the total assemblage and the percentage of bifaces from the core assemblage at Olorgesailie  115
3.5 Map showing the location of the Gemsbok Acheulean sites near Oranjemund, Namibia  118
3.6 Small handaxe typical of the Gemsbok Acheulean from the Namib Desert, Namibia  121
3.7 Graph showing the relationship between the percentage of cortical flakes and the percentage of bifaces from the core assemblage at the Gemsbok sites  123
3.8 Graph showing the relationship between the percentage of cortical flakes and the percentage of cores from the total assemblage at the Gemsbok sites  124
3.9 Graph showing the relationship between the percentage of retouched flakes and the percentage of bifaces from the core assemblage at the Gemsbok sites  125
4.1 Typical MSA Levallois point from the site of Erb Tanks, Namibia  146
4.2 Typical MSA retouched point from the site of Tsoana, Namibia  146
4.3 Map of the location of the Gademotta and Kulkuletti archaeological sites, Ethiopia  151
4.4 Graph showing the relationship between the percentage of primary flakes and the percentage of cores from the total assemblage at Gademotta and Kulkuletti  153
4.5 Graph showing the relationship between the percentage of retouched flakes and the percentage of cores from the total assemblage at Gademotta and Kulkuletti  154
4.6 Graph showing the relationship between the percentage of primary flakes and the percentage of prepared cores from the total core assemblage at Gademotta and Kulkuletti  155
4.7 Graph showing the relationship between the percentage of primary flakes and the percentage of prepared core flakes from the total flake assemblage at Gademotta and Kulkuletti  156
4.8 Graph showing the relationship between the percentage of primary flakes and the percentage of retouched tools from the total flake assemblage at Gademotta and Kulkuletti  157
4.9 Map showing the location of the Omo Kibish archaeological sites, Ethiopia  161
4.10 Graph showing the PC loadings for the frequencies of various stone tool types at the Omo Kibish sites  164
4.11 Graph showing the regression scores for individual Omo Kibish sites for PC 1 and PC 2  166
4.12 Graph showing raw percentages of different categories of stone tool debris at the Omo Kibish sites  167
5.1 Meat-packing plant workers break down hog carcasses in Chicago in 1905  198
5.2 Carnivore-ravaged eland (Taurotragus oryx) in the Kgalagadi Transfrontier Park, South Africa  199
6.1 (a)–(c) Graphs showing the relationship between maximum bone element density and the percentage MAU of bone elements at FLK 22  217
6.2 (a) Graph plotting the residual values against the adjusted residual values for the regression analysis of the Bunn and Kroll (1986) data; (b) graph plotting the residual values against the adjusted residual values for the regression analysis of the Potts (1988) data; (c) graph plotting the residual values against the adjusted residual values for the regression analysis of the Monahan (1996) data  220
6.3 Hierarchical cluster analysis based on PC regression scores for element frequencies based on the analysis of selected animal bone assemblages  226
6.4 Carcass segments associated with PC regression clusters  226
6.5 Hierarchical cluster analysis based on PC regression scores for various animal bone assemblages  229
7.1 Graph showing the percentages of cut-marked bones for various carcass segments for the Kakinya and Bear sites  251
7.2 Graph showing the percentages of cut-marked bones for various carcass segments for Hadza sites  251
7.3 Graph showing a hierarchical cluster analysis of ethnoarchaeological sites based on cut mark frequencies for bones from various carcass segments  252
7.4 Graph showing a hierarchical cluster analysis of archaeological and ethnoarchaeological sites based on cut mark frequencies for bones from various carcass segments  255
7.5 Graph showing the percentages of cut-marked bones for various carcass segments for the Olduvai archaeological localities  258
7.6 Graph showing the percentages of cut-marked bones for various carcass segments for Gesher Benot Ya’aqov  258
7.7 Graph showing the mean cut mark frequencies for bone elements belonging to various carcass segments  259
7.8 Graph showing the percentages of cut marks on shafts vs. ends for long bones at the Hadza sites  261
7.9 Graph showing a hierarchical cluster analysis of archaeological and ethnoarchaeological sites based on the raw percentages of cut marks on long bone shafts vs. ends for long bone elements  263
7.10 Graph showing a hierarchical cluster analysis of archaeological and ethnoarchaeological sites based on the ratios of cut marks on long bone shafts vs. ends for long bone elements  264
7.11 Repeated/conflated cut marks on bones from FLK 22  271
7.12 Lioness scavenges a gemsbok carcass from a lion kill in the Kgalagadi Transfrontier Park, dragging the torso to the cover of a shade tree  272
8.1 (a) Graph of the relationship between mean annual temperature and mean male body mass for selected modern forager groups; (b) graph showing the relationship between mean annual temperature and mean male stature for selected modern forager groups  286
8.2 (a) Graph showing the relationship between percentage of calories attained from hunting and fishing and mean male body mass for selected modern forager groups; (b) graph showing the relationship between percentage of calories attained from hunting and fishing and mean male stature for selected modern forager groups  287
8.3 (a) Graph showing the relationship between population density and mean male body mass for selected modern forager groups; (b) graph showing the relationship between population density and mean male stature for selected modern forager groups  289
8.4 Graph showing mean estimated body mass for hominins of various ages  292
8.5 Graph showing cranial capacities for Pleistocene hominin fossils of various ages  307
8.6 (a) Graph showing mean cranial capacities for Pleistocene hominin fossils of various ages; (b) graph showing mean natural log of cranial capacities for Pleistocene hominin fossils of various ages; (c) graph showing mean EQ for Pleistocene hominin fossils of various ages; (d) graph showing mean natural log EQ for Pleistocene hominin fossils of various ages  309
8.7 Graph comparing body sizes and brain sizes for selected carnivorous mammals, great apes, and hominin species  312
Tables
3.1 Rotated PC matrix for Olorgesailie lithic assemblage data  112
3.2 Eigenvalues and percentages of variation explained for PCA of Olorgesailie lithic assemblage data  113
3.3 List of variable clusters derived from PCA of Olorgesailie stone tool typological data  114
4.1 Rotated PC matrix for Omo Kibish lithic assemblage data  162
4.2 Eigenvalues and percentages of variation explained for PCA of Omo Kibish lithic assemblage data  163
6.1 List of zooarchaeological cases considered in this comparative analysis  223
6.2 Rotated PC matrix for comparative faunal assemblage data  224
6.3 Eigenvalues and percentages of variation explained for PCA of comparative faunal assemblage data  225
7.1 Ratios of cut mark frequencies on long bone ends vs. shafts for selected zooarchaeological cases  262
8.1 Partial correlation of mean male body mass and stature with percentage of calories attained through hunting/fishing and population density, holding mean annual temperature constant  290
8.2 Partial correlation of mean male body mass and stature with mean annual temperature and population density, holding percentage of calories attained through hunting/fishing constant  290
8.3 Partial correlation of mean male body mass and stature with mean annual temperature and percentage of calories attained through hunting/fishing, holding population density constant  291
8.4 Descriptive statistics for encephalization quotient values for selected mammalian carnivore families, great apes, and hominins  313
8.5 Encephalization quotient and studentized residual values for the most encephalized mammalian carnivore species  314
Preface
This book is about asking better questions, not about giving definitive answers. The field of paleoanthropology stands at the brink of a revolution brought about by the development of new scientific techniques that seem certain to fundamentally overhaul the ways in which our research is conducted. Sophisticated technologies now facilitate complex analytical techniques that increasingly pervade the analysis of all forms of material evidence. Some techniques, such as the analysis of DNA sequences, have already had profound effects on the field, and this situation will no doubt continue as certain techniques become more widely available, quicker, and cheaper. Other techniques, such as the sourcing of lithic raw materials using portable X-ray fluorescence (pXRF) technology or the micromorphological analysis of sediments, are just now beginning to show their full potential as analytical tools. At this point, one thing seems astonishingly clear: in coming decades, our field will be flooded with detailed data concerning aspects of the archaeological and fossil records that could not have been dreamed of even a short time ago. And, as always, new discoveries will continue to force us to reconsider our ideas about our evolutionary past and the methods with which we study it. Given these prospects, I think that now is the right time to begin a process of rethinking the ways in which we go about asking questions concerning various scenarios and processes of human evolution. One aspect of this question is inherently historical in nature: where did research problems within the field of paleoanthropology come from, and how have they influenced the development of the modern discipline? I would argue that our current constellation of research problems derived from what might be called a “founder’s effect”—the result of the orientations of the most influential scholars responsible for early work on issues of human evolution.
Recently, Lewis Binford (2001) used this term in discussing the effects of one founder’s effect on anthropologists
working with hunter-gatherer groups, and I think this term is equally applicable in our case. It is clear that a small number of early researchers working on human evolution were responsible for shaping prevalent research problems and analytical tactics, which have continued with little alteration until today. In terms of human evolutionary prehistory in Africa—the main subject of this book—the roles of Raymond Dart and Louis Leakey have been particularly important. Both men were profoundly influential figures in demonstrating that our ancestors originated in Africa, which thus deserved special attention with regard to the study of human evolution. However, because of a pervasive Eurocentric bias with respect to the nature of the African fossil and archaeological records, the work of both men was intensely scrutinized. As I discuss in detail shortly, the Western intellectual tradition of the first half of the 20th century was simply unwilling to accept Africa as the homeland of the human species. The result of this refusal was an antagonistic intellectual environment that imbued research on human evolution in Africa with a unique rhetorical structure accompanied by a distinctive set of argumentative tactics. Simply put, scholars such as Dart and Leakey needed to make the case that our African ancestors appeared far earlier than what had been traditionally assumed and that they had been more human-like than previously thought. In other words, they attempted to show that our African ancestors predated ancestors known from other regions of the world (and especially Europe), while also possessing the fundamental spark of humanity that distinguishes us from our other close primate relatives. For these reasons and others, a tradition of research emerged that focused on the demonstration of both the antiquity and, perhaps more important, the basic humanity of our ancestors. 
Thus we must recognize that the motivations for these lines of research were not purely scientific but were responses to biases stemming from political or even metaphysical beliefs. Eventually Dart and Leakey were obviously vindicated in their views about the role of Africa in human evolution, and the latent racism apparent within the Western tradition of human evolutionary thought was exposed. By this point, however, major orientations of research on human evolution in Africa had already solidified. Originating from this founder’s effect, mid-20th-century research questions often took the form of the evaluation of proxy evidence concerning the behavioral complexity, intellectual sophistication, and relative humanity of our various ancestors. Furthermore, such research often manifested itself in the examination of the potential similarities between Paleolithic peoples and modern hunter-gatherer groups. This approach was at the heart of the influential evolutionary theory-building
efforts of Sherwood Washburn, who sought to integrate a broad range of fossil, archaeological, ethnographic, and primatological evidence. In fact, Washburn’s approach and models still predominate within the modern field of paleoanthropology. Glynn Isaac’s archaeological research on early hominin behavior shared this orientation, effectively arguing for strong similarities between our early hominin ancestors and modern forager groups based on inferences of big game hunting, technological sophistication, food sharing, and home base site structure. Thus, as academic research on human evolution expanded and diversified under the influence of such researchers as Washburn and Isaac, concern for the relative humanity of our early ancestors was both reproduced and amplified. For these reasons also, when research critical of the views of such scholars as Dart, Leakey, Washburn, and Isaac began to emerge, it largely shared the same set of questions surrounding the relative humanity of our early ancestors. When, for example, Lewis Binford and C. K. Brain began presenting evidence that complicated the early hominin roles in accumulating bones at what are now archaeological sites, these facts were marshaled behind arguments for the relatively apelike behavior of these early hominins and their dissimilarity with modern humans. In demonstrating that australopithecines were not the primary accumulators of bones in the caves in which their bones were found and were actually the hunted prey of other large carnivores, Brain fundamentally undermined the arguments of Dart for both tool use and big game hunting. Likewise, when Binford suggested that our early hominin ancestors were marginal scavengers rather than hunters of big game, the true importance of this argument was its suggestion that early hominins were profoundly different from and less sophisticated than modern humans. 
Although the last 40 years have seen an immense surge in the sophistication of research on site formation processes and other related issues, debate concerning the relative complexity, sophistication, and humanity of early hominins continues among researchers on both sides of the issues. The failing of this research orientation is that the various forms of evidence available to paleoanthropologists do not articulate with the questions being asked. To this point, at least, studies of stone tools and animal bones have dominated archaeological research on early hominin behavior. It should be fairly obvious that Paleolithic archaeologists did not choose to study stone tool and animal bone assemblages because these materials are optimally suited to evaluating the cultural sophistication of our early hominin ancestors or their similarity with modern humans. Put bluntly, we study stone tools and animal bones because they are available, owing to their ubiquity and
durability in the archaeological record. Likewise, paleoanthropologists do not choose which individuals from the past to examine but are at the mercy of what has been found in the fossil record. Although our field has been highly creative in seeking ways in which to articulate the forms of evidence available to us according to the research interests discussed here, few researchers have pretended that our evidence and our interests are a natural fit. Given the nature of our goals and the forms of evidence available to us, I strongly believe that new research questions are needed. Paleoanthropology is distinguished from other fields of historical science by its teleological focus on how we, as a species, originated. This fact is betrayed even in the usage of terminology such as “anatomically modern humans” and “behavioral modernity.” No right-minded paleontologist would consider the evolution of “anatomically modern” trilobites or “behaviorally modern” sauropods. The fact that the application of this terminology to nonhuman animal species is such nonsense bespeaks the uniqueness of paleoanthropology as a historical science. Although there may be metaphysical value in approaching human evolution as a teleological exercise, I do not think this approach represents a viable framework for scientific research. In other words, I believe that people commonly assume that, once we have arrived at the “true story” of human origins, its implications for our collective identity as a species or even “the meaning of life” will be self-evident and will hold some linear narrative structure. But I severely doubt that this assumption is true, and I believe that this pursuit places us on unsure footing as we collect scientific evidence and test our evolutionary models. If it is even possible to wean ourselves from this teleological orientation, where do we go from there? 
This book makes the case for renewed attention to the behavioral ecology of early hominins and its processes of reorganization over time. Although bodies of evidence such as stone tool and animal bone assemblages may not articulate with questions of cultural sophistication, they do provide us with ways of examining the economic lifeways of early hominins. They offer us a firmer basis for considering issues such as how early hominins moved around the landscape, how they organized their technological systems, how they made use of various spaces on the landscape, and how they acquired certain types of food resources. Phrased another way, these types of evidence integrate much more clearly with the examination of the roles of early hominins within their ecosystemic context in terms of both biology and behavior. Such lines of evidence may be used to understand variation and change in a comparative framework within a wide range of spatial and temporal scales. Furthermore, they may be integrated with available
fossil evidence in order to examine diachronic relationships between hominin biology and behavioral ecology as specific evolutionary processes. This approach will allow us to make use of well-understood biological principles discovered through the comparative examination of nonhuman animal species in examining various types of human evolutionary process. Abandoning prevalent views of the uniqueness of humans and the belief in our evolution as a linear narrative will let us reframe paleoanthropological evidence in terms of more appropriate research questions and put our ancestors back into the ecosystems in which they actually originated. To this end, this book offers reanalyses of some key sets of evidence in the examination of the transition from the Early to Middle Stone Age and in the biological evolution of hominins before the appearance of truly anatomically modern humans. One major aspect of this book is the decoupling of certain types of hominin behavior, such as the manufacture of Acheulean handaxes and the predation of large game, from inferences of cultural sophistication or similarities with modern peoples. The other major aspect is the reconsideration of hominin evolutionary processes in light of my inferences concerning their behavioral ecology and relative to basic ecological principles discovered through the comparative examination of other animal species. The results include some new ways of thinking about hominin evolution in terms of both specific historical scenarios and broader principles of evolutionary theory. I do not hold any delusions that my work here will be found ultimately to be absolutely true. Instead, my hope is that it will be a stepping stone in a process of asking new and better research questions of the archaeological and fossil records.
Given the impending flood of paleoanthropological data stemming from both technical and technological innovations, as well as the certainty of future discoveries, I believe that shifting our research orientation is important now. While new forms of technical analysis may reduce our dependence on stone tools and animal bones as forms of archaeological evidence, and new fossil finds will expand our knowledge of biological variation, I strongly doubt that these forms of evidence will inform us about the relative humanity of our early hominin ancestors in any more direct way. Instead, new techniques and technologies offer us the promise of more accurately inferring patterns of hominin behavior and the functional organization of biological features within an evolutionary framework. Although these types of information may be difficult to relate to questions of cultural sophistication, their value for reconstructing hominin ecology and their implications for dynamics of evolution are clear. For this reason, now more than ever, we need innovative research questions and challenging theoretical models as our field begins to shift its analytical practices.
This book’s ecological orientation will not be attractive to magazine cover designers, to documentary film makers, or to science fiction writers. It does not offer a linear teleological narrative about how humans evolved with ever-increasing sophistication to become masters of the natural world. It does, however, offer a scientific framework that works with the larger fields of biology and evolutionary ecology and, therefore, articulates a set of questions with which our available data may be explored. In response to anticipated criticism, I contend that the search for “humanity” in the fossil and archaeological records of our early hominin ancestors has led us to some delusional answers to our evolutionary research questions. Although learning about the details of hominin foraging ecology and its relationship with evolutionary process may not be popular, it produces questions that may be scientifically investigated. And once we have arrived at firmer understandings of hominin ecology and dynamics of evolutionary change over time, we may then reconsider the metaphysical implications of this body of knowledge and the nature of our teleological narratives.
Acknowledgments

For support in writing this book, I owe my deepest thanks to a great many individuals—more than can be reasonably acknowledged here—so let me begin by apologizing to any who have been left out. As the dedication of this book states, I am profoundly thankful to James Enloe and Lewis Binford for instruction and conversations that have strongly shaped my research over the last decade. I am very proud to have been a part of this intellectual tradition, and I hope that this book contributes to it. I am certain that both Jim and Lew would strongly disagree with many of the specific conclusions presented here and perhaps also aspects of its overall approach; I’m sad that I will not have the opportunity to hear Lew’s critiques of the research presented and to engage in debate with him. (In contrast, I may also be sad to hear Jim’s opinion of this work!) More specifically, I am grateful to Jim for his collaboration on much of the research presented in the faunal chapters of this book. Some material presented in Chapter 6 was the result of research that we did together in composing a presentation for the 2008 Society for American Archaeology meetings in Vancouver, Canada, including the use of some unpublished data from Pincevent and Verberie. Other individuals made important contributions to other portions of this book. I am grateful to John Whittaker for all his help over the years— especially for conversations about knapping issues and for permission to reprint his excellent photo of François Bordes. I wish to thank Rachel
Horowitz, whose work on lithic reduction and technological organization has provided focus for some of my opinions on these subjects. I am grateful to Robert Franciscus and Trenton Holliday for their guidance on many issues discussed in Chapter 8; specifically, I must thank Bob for fostering my initial interest in issues of modern forager body size and shape and their implications for our understanding the hominin fossil record. In addition, Lucas Friedl and Greg Tellford did work on this topic that helped shape some of my views. Trent has contributed to this book through innumerable conversations and other invaluable professional support. I am grateful to Hannah Marsh for her collaboration in studying the Orange River Man fossil in Namibia and for other help and support over the years. I wish to thank Andrew Childers, whose highly insightful work provided me with important information concerning patterns of hominin brain size and encephalization quotients. And I am grateful to Theodore Marks and all others who have lifted a trowel and shaken a screen with me in the field in Namibia over the years. Once again, however, I must emphasize that the opinions expressed in this book are purely my own and do not necessarily reflect the views of the individuals named. Finally, I thank my wife, Sarah McCall, and my parents, George McCall and Nancy Shields. In addition to giving me all their love and support over the years, my parents are responsible for my interest in social science and my work in Africa. They also made some important substantive contributions to this book, especially in terms of its quantitative methods. Sarah has endured my preoccupation with this book now for several years, as well as my absences while working in the field, with both kindness and humor. She also contributed a great deal to the editing and indexing of this manuscript, in addition to being my best friend. Grant S. McCall September 2014
Chapter 1
Introduction
After more than a century of dedicated research, there is now a strong consensus that the earliest members of our species, Homo sapiens, emerged in sub-Saharan Africa between 200 and 100 ka. This inference is supported by numerous lines of genetic, paleontological, and archaeological evidence, and it has radically reshaped the way we perceive the role of Africa in the evolution of modern humans. Furthermore, although this statement may be somewhat self-evident, this period in African prehistory marked a crucial turning point for subsequent patterns of human ecology and behavior. By around 100 ka, African modern humans were using unprecedented forms of foraging technology and were manufacturing the first durable symbolic objects. By around 50 ka, modern humans had spread as far as Australia and were at the doorstep of Ice Age Europe. By around 15 ka, modern humans had filled every major habitable land mass, including the high latitudes of the Arctic and both continents of the New World. Modern humans had also become the last remaining species of hominin on Earth. The pace of change within modern human populations during the Upper Pleistocene is astonishing. In contrast, the periods of prehistory associated with earlier species of hominin before 200 ka show much different patterns of stability and change. Our earliest ancestor to share our body plan, Homo erectus (sensu lato), apparently emerged around 2 ma, and subsequent Lower and Middle Pleistocene1 prehistory is characterized by long periods of relative stasis. Although there are some striking technological innovations
during this time range, such as the emergence of the Acheulean handaxe, these are modest and few in comparison with those associated with modern human Paleolithic prehistory. Furthermore, the complex forms of foraging technology and symbolic objects associated with Upper Pleistocene modern humans seem to be utterly absent in Lower and Middle Pleistocene contexts. This archaeological pattern is all the more surprising given the substantial biological changes experienced by early hominins from the origins of Homo erectus to the ultimate emergence of modern humans. Over the course of the Lower and Middle Pleistocene, hominin brain sizes increased substantially. From the late Middle Pleistocene onward, body sizes and levels of robusticity noticeably decreased. Put simply, these were the key anatomical changes that made us human. In contrast, once modern humans emerged early in the Upper Pleistocene, subsequent anatomical changes were slight and linked with regional diversification, while levels of encephalization remained constant. It is surprising that such major anatomical changes underlay the long periods of early hominin cultural stasis typical of the Lower and Middle Pleistocene, while the remarkable behavioral shifts witnessed among early modern humans were not linked with any apparent biological evolution in terms of brain size, at least. This paradox lies at the root of two major theoretical debates within paleoanthropology, both having to do with teleological aspects of hominin evolution and the cognitive sophistication of modern humans relative to earlier hominin ancestors. The first of these debates centered on the explanation of the origin of the modern human body plan and larger brain size with Homo erectus in the Lower Pleistocene. 
Based on comparative studies of modern foragers and closely related primates, Sherwood Washburn (1950, 1959, 1960; Washburn and DeVore 1961; Washburn and Lancaster 1968) argued that cooperative hunting and food sharing (in the form of male hunters provisioning mates) induced increasing brain size and a body plan built for efficient terrestrial locomotion, as well as many of the salient cultural features that characterize modern human populations. Working from this theoretical premise, Glynn Isaac (1968, 1971, 1978a, 1978b; Isaac and Crader 1981) presented archaeological evidence from the Rift Valley of eastern Africa of the hunting of large game and the occupation of residential sites of the sort used by modern hunter-gatherers, both of which were taken to imply food sharing. Thus, in framing the most striking characteristics of modern foragers that differ from our primate relatives, Washburn and Isaac constructed a synthetic evolutionary model that continues to attract followers today.
In the 1970s and 1980s, the Washburn and Isaac model was subjected to substantial criticism based on issues of site formation and the logic of archaeological inference. The most vocal of these critics was Lewis Binford (1977a, 1981, 1982, 1983, 1984, 1985, 1987), the architect of the so-called New Archaeology. Binford’s critique focused on untested and often erroneous assumptions concerning how Plio-Pleistocene archaeological sites formed and the dynamic forces that resulted in the various archaeological patterns used to support inferences of large game hunting and residential site use. Specifically, Binford argued that archaeological concentrations of stone tools and bones, taken by Isaac (and others) to represent large animal butchery and residential sites, were frequently the result of geological processes and/or the activities of nonhominin carnivores. Instead, Binford contended that the early hominin role in bone accumulation and modification was that of marginal scavengers of large animal carcasses and occasional hunters of small game. He also presented a view of early hominins as possessing profoundly inferior cognitive, linguistic, and cultural capabilities compared with modern humans and argued that this situation resulted in fundamental differences in the organization of early hominin archaeological sites compared with those made by modern humans. Binford’s critique resulted in several decades of acrimonious argumentation, which I refer to as the hunting-and-scavenging debate and review in greater detail shortly.

The second related debate concerns the nature, timing, and causes of the emergence of the sophisticated behavioral features that distinguish modern humans from archaic hominin species. This debate initially began from empirical generalizations concerning differences between the European Middle Paleolithic archaeological record deposited by Neanderthals and the Upper Paleolithic produced by early modern humans.
In isolating these distinctions, Paul Mellars (1973, 1989) presented a substantial list of features of the Upper Paleolithic archaeological record that are absent from the Middle Paleolithic, including blade-based lithic technology, ground bone tools, effective complex projectile weaponry, and (especially) the production of striking portable and parietal art. These (and other) archaeological features came to define the phenomenon of “behavioral modernity” and were argued to indicate an “Upper Paleolithic revolution” with the arrival of early modern humans in Europe (see Bar-Yosef 2002 and Shea 2011 for reviews of these concepts). In assessing how and why behavioral modernity emerged in the Upper Pleistocene, it was necessary to confront the problem that modern humans had brain sizes identical to those of earlier and contemporaneous hominin
species, such as Neanderthals, and that they were otherwise extremely similar in terms of their anatomy. In this regard, Richard Klein (2003) argued for an invisible neural mutation occurring around 50 ka within early modern human populations in sub-Saharan Africa, resulting in the possession of behavioral modernity and especially modern linguistic capabilities. Thus, the concept of the “Upper Paleolithic revolution” was expanded into a “modern human revolution,” including African populations and explaining the broader patterns of rapid change associated with early modern humans during the Middle Stone Age (MSA). This scenario drew evidentiary criticism from two directions: first, African archaeologists working at early modern human sites, such as Sally McBrearty and Alison Brooks (2000), observed that features of behavioral modernity had a much longer prehistory than 50 ka and that they appeared in both temporally and geographically mosaic fashion across the African continent. In short, what looked like a revolution in Upper Paleolithic Europe resulted from a much earlier and longer process of evolution in Africa. Second, European archaeologists, such as Joao Zilhão (2007; Zilhão et al. 2010), demonstrated that Neanderthals did occasionally (if very rarely) manifest key features of behavioral modernity, including the production of symbolic objects. Thus, they must have possessed the same basic cognitive capabilities as their early modern human contemporaries. This debate over what I will refer to as the modern human revolution scenario continues in many circles today, and I will again provide a more complete review later in this chapter.

These two debates have stimulated a vast amount of research and voluminous publication since the 1970s, as well as shaping concomitant methodological approaches and analytical tactics. In these respects, they are largely responsible for the character of the modern discipline of paleoanthropology.
At present, however, it is apparent that the hunting-and-scavenging debate has stagnated, while debate over the modern human revolution scenario is nearing the end of its utility. As interest in these problems wanes, new research questions and methodological directions are quickly taking their place. As we transition into the consideration of new sets of research problems, however, I think the evidence we have collected so far still has great potential to speak to the fundamental processes of hominin evolution and modern human origins. In thinking about these debates from an historical perspective, I argue that they are actually more related to one another than has typically been recognized. Furthermore, I feel that the ambiguities and unresolved aspects associated with these research problems actually stem from prehistoric dynamics with tremendous significance for our understanding of the deeper evolutionary processes responsible for the emergence of
humankind. In seeking the fundamental roots of these debates and their evolutionary implications, this book presents an analysis of the Lower and Middle Pleistocene archaeological and paleontological records, paying special attention to sub-Saharan Africa, where the first hominins appeared, where hominin populations were largest throughout the bulk of the Pleistocene, and where major evolutionary changes seem to have happened earliest. In synthesizing this evidence, I draw several key conclusions:

1. Early hominin foragers exploited highly ranked food resources using simple technologies. While this included scavenging of large and dangerous prey, it also involved the hunting of small-to-medium-size game using simple weapons, such as hand-delivered spears.

2. Irrespective of hunting capabilities, early hominin foragers employed social and economic systems with dramatic organizational differences from modern foragers, Upper Pleistocene early modern humans, and their immediate ancestors during the MSA. Specifically, I conclude that early hominin foragers did not occupy “home base” residential sites but instead moved around the landscape in ways more similar to those known from other living primate species, such as chimpanzees (Pan troglodytes).

3. The transition from the Early Stone Age (ESA) to the MSA, occurring shortly after 300 ka in the northern Rift Valley of eastern sub-Saharan Africa, represented a major turning point in the organization of hominin forager lifeways and mapped onto the origins of modern patterns of residential site use.

4. This transition was sporadic and time-transgressive in the sense that it occurred at different times in different locations within Africa (as well as the rest of the Old World), and it took perhaps more than 100 ka to take hold pervasively. In addition, it was subtle in the sense that it was not accompanied by any major new technologies or symbolic systems.
However, this social and economic reorganization was profoundly important for the subsequent origins of modern humans, because it laid the foundations for more complex forms of technology, social interaction, and symbolic communication.

5. This transition was stimulated by dynamics of subsistence intensification resulting from the interplay between larger hominin populations and fluctuating Pleistocene environmental productivity driven by glacial cycling. The modern pattern of residential site use emerged as a tactic for improving foraging efficiency by promoting task-based divisions of labor, increasing specialization in terms of economic activities, and the pooling of food resources collected by different task groups at residential camps.
6. The implied dynamics of hominin population increase and subsistence intensification argued to be the source of this and other cultural changes are also evident in the patterns of Pleistocene hominin biological evolution. Related anatomical shifts include decreasing body size, increasing brain size, and reduced robusticity, each of which occurred during the Upper Pleistocene following the transition from the ESA to the MSA.

In short, I argue that the evidence collected to address the hunting-and-scavenging and modern human revolution debates points to a subtle but profound turning point in hominin social organization that laid the groundwork for the emergence of modern humans in the Upper Pleistocene of Africa. I make the case that related dynamics of population increase, environmental fluctuation, and subsistence intensification brought about this fundamental organizational transition and also led to many of the anatomical characteristics common to modern human populations.

Before presenting the evidence for my arguments, I briefly review the hunting-and-scavenging and modern human revolution debates in order to recognize their roles in fostering these research trajectories. In doing so, I offer some historical background on their origins, I discuss the salient aspects of prehistoric patterning they helped to identify, and I propose some new synthetic perspectives in order to move beyond the ambiguities responsible for their stagnation.
Historical Perspectives on the Hunting-and-Scavenging Debate

With the discovery of Australopithecus africanus in 1924, Raymond Dart (1925) precipitated a serious crisis for the field of paleoanthropology in his day. This find confounded prevailing opinions of the time in several major ways: first, Australopithecus africanus was bipedal—an inference correctly drawn by Dart on the basis of the location of the specimen’s foramen magnum—but it had a brain size comparable with that of modern chimps. Thus, if the conclusion that Australopithecus africanus was ancestral to modern humans was to be believed, it was not increasing brain size that was responsible for the development of the modern pattern of locomotion and postcranial anatomy.2 Second, the thought of South Africa as the point of origin for our earliest ancestors was intolerable for the Eurocentric and often racist intellectual society of this time. While Darwin (1871) had already argued that Africa was the location of early human evolution, based on the presence of chimpanzees, our closest animal relatives, this view rapidly lost favor with the prevalence of Paleolithic archaeological finds and Neanderthal fossils from Western
Europe. Of course, the Piltdown hoax exemplifies both the deep-seated urge for human origins to have taken place in the heartlands of modern Western civilization and the belief that brain-size expansion preceded postcranial evolution (Washburn 1953; Straus 1954; Halstead 1978; Sussman 1993; Bergman 2003). Owing to these a priori biases, Dart faced withering criticism of his argument for australopithecines as early ancestors of modern humans. Evidence of this antagonism may be seen in the acrimonious debate between Dart and Arthur Keith (1925, 1947) in the 1920s–1940s. In responding to these criticisms, Dart felt that he needed to demonstrate the humanity of Australopithecus africanus relative to other modern ape species. What resulted was Dart’s (1957) “killer ape” theory and the interpretation of associated faunal assemblages as osteodontokeratic tools (see Derricourt 2009 for further discussion). To summarize, Dart argued that australopithecines systematically hunted large game, killing and butchering them using tools fashioned from intentionally fractured bones. It is an interesting and perhaps saddening implication of this chain of logic that Dart believed that proving australopithecines to have been bloodthirsty killers would convince the scientific community of their deeper humanity.

One must consider the patterns of thought responsible for Dart’s (1957) “killer ape” and osteodontokeratic argumentation, which laid the foundation for further thinking about early hominins in Africa. Dart’s interest in this phenomenon actually stemmed from his desire to frame australopithecines as more human-like than the other living ape species with which they were initially compared. Dart felt that the manufacture and use of tools and the hunting of large game were two primary behavioral features that distinguished modern humans from the other apes.
For Dart and his contemporaries, hunting and weapon manufacture were primarily advanced as proxies for the putative behavioral sophistication and humanity of australopithecines. In this sense, the “killer ape” theory and arguments for an osteodontokeratic tool culture may be seen as an apology for the apelike aspects of the Taung skull, especially its small brain size (Figure 1.1). Further discoveries made by Robert Broom (1947) would eventually turn the tide of public opinion toward Dart’s view of australopithecines as early bipedal ancestors of modern humans. However, the tactic of arguing for hunting and technological sophistication as evidence for evolutionary advancement had been firmly established. This line of reasoning emerged again in the 1950s and 1960s with Mary and Louis Leakey’s discoveries at Olduvai Gorge (L. S. B. Leakey 1959, 1960; L. S. B. Leakey, Tobias, and Napier 1964; M. D. Leakey 1971). Among other things, the Leakeys’ fieldwork succeeded in finding the first specimen
Figure 1.1 Cranium of the so-called Taung Child, the type specimen of Australopithecus africanus
of Homo habilis, described as a “missing link” between the small-brained australopithecines and the more human-like Homo erectus. They also succeeded in producing the first chronometric chronology of early hominin activities through the radiometric dating of volcanic ash, determining that Oldowan stone tools dated back to around 1.8 ma. These facts were once again shocking to the paleoanthropological community of this time, dramatically lengthening the known antiquity of human ancestry and firmly establishing sub-Saharan Africa as the cradle of human evolution.

Initially, the Leakeys’ findings did surprisingly little to alter the prevalent views of Paleolithic archaeologists that Europe was the center of early hominin cultural development. The Leakeys’ major discoveries and advances in chronometric dating occurred in the midst of an era of Paleolithic archaeology in which Eurocentric diffusionist models still held a great degree of currency. Evidence of this line of thinking may be found even in the later writings of Miles Burkitt (1956), whose mentor, the Abbé Henri Breuil, was the progenitor of European diffusionism and whose pupil was the inventor of the Early/Middle/Later Stone Age classification system still in use for the African Stone Age today. The Leakeys confronted this perspective (and considerably advanced their own careers) by demonstrating that early hominin fossils and
archaeological remains in the Rift Valley were both extremely ancient— much older, in fact, than those known from Europe—and unexpectedly sophisticated. This “ancient-but-sophisticated” rhetorical strategy clearly owes much to Dart, who several decades earlier had struggled with criticisms stemming from the same set of biases. In addition, Louis Leakey’s philosophical beliefs concerning the definitional features of humankind also share much with those manifested by Dart. In arguing for his “man the toolmaker” model, Leakey (1960) focused on the manufacture of the earliest known stone tools at Olduvai Gorge as a primary stimulus for the larger brain sizes seen in H. habilis and H. erectus, and eventually in the origins of modern human cognitive sophistication and cultural complexity. Thus, the manufacture of stone tools began to act as a proxy for early hominin behavioral and cognitive capabilities in the same way that Dart’s osteodontokeratic culture had previously. Likewise, it was not long before the hunting of large game took on this same set of implications in terms of behavioral sophistication.

The other central problem created by the discoveries of australopithecines and early members of the genus Homo was the explanation of brain-size expansion itself. Before Dart’s discoveries, brain size had actually served as an explanation of other evolutionary changes presumed to have occurred among our then-unknown ancestors (for example, Osborn 1915). Once it had been established that other human-like traits preceded brain-size expansion, theory-building efforts rapidly shifted and took on a much more scientific quality. The most significant model to emerge at this time was that of Washburn (1950), who brokered the neo-Darwinian evolutionary synthesis of the Cold Spring Harbor Symposium on Quantitative Biology into the field of physical anthropology, forming the so-called New Physical Anthropology.
As a primate ethologist, Washburn took as his general approach the comparison of the behavioral characteristics of human foragers (as understood at that time) with those of various nonhuman primate species, especially baboons. He then used the list of behavioral features common to modern human forager groups but lacking among nonhuman primates as a starting point for considering the evolution of increased brain size and modern cultural sophistication. Washburn’s (1960; Washburn and DeVore 1961; Washburn and Lancaster 1968) list included a number of related features: (1) reciprocal food sharing between non-kin individuals and the male provisioning of female mates; (2) cooperative hunting of large game; and (3) the occupation of home base residential camps at which resources could be pooled for the purposes of sharing and where children, the elderly, and infirm individuals could remain without having to move daily alongside the rest of the group. In contrast with these patterns, nonhuman primates tend to share food primarily with closely related kin (parents, offspring,
siblings, and occasionally mates); they rarely hunt, focusing instead on small sessile food sources, and they move around the landscape as a group without consistently returning to home base sites. In this model, cooperative hunting, reciprocal food sharing, and mate provisioning stimulated the development of larger brains by increasing the complexity of social relationships. Hunting was also seen as leading to larger brains by increasing the demands of problem solving associated with the pursuit of game and by necessitating the development of hunting technology. Home bases were implicated as archaeologically findable locations where animal butchery, food sharing, and the manufacture of technology all occurred (Isaac 1971, 1978a, 1978b). In my view, Washburn deserves a great deal of credit for the creativity of his approach and for putting forward the first truly scientific model of the origins of modern human brain size and cultural sophistication. While early perspectives on brain-size evolution had assumed an inevitable unilinear trajectory based on latent Enlightenment-era assumptions of improvement and increasing complexity, Washburn provided a model that could be related to external variables in the observable world and that could be investigated using archaeological and paleontological evidence. It is also noteworthy that Washburn (1957) retained a great deal of skepticism toward Dart’s more flamboyant views of australopithecines, seeing them instead as generally apelike in their behavioral orientations. Furthermore, as will become more apparent later in this book, Washburn’s thoughts on the significance of home-base sites are not fundamentally off-base but rather, I argue, simply assigned to the wrong time period. 
In the following chapters, I attempt to show that when hominins did begin making use of residential campsites at the boundary between the Middle and Upper Pleistocene, this use did indeed have profound consequences for hominin behavior and biological evolution. Washburn was limited mainly by the lack of Plio-Pleistocene archaeological and paleontological evidence, as well as by misconceptions about the nature of modern human forager lifeways.

Washburn’s model quickly became the focus of archaeological investigations of Pleistocene archaeological sites in eastern Africa, with a number of prominent archaeologists arguing for evidence of large-game hunting and home-base occupation. For example, Mary Leakey (1971) drew these conclusions from the Oldowan archaeological assemblages at Olduvai Gorge (especially the FLK 22 “Zinjanthropus floor”). I discuss these bodies of evidence in much greater detail in Part 2 of this book. The archaeological application of Washburn’s model, however, is much more widely attributed to Glynn Isaac (1968, 1971, 1977, 1978a, 1978b), who joined Washburn as a faculty member at the University of California–Berkeley in 1966. Isaac (1971, 1978a, 1978b)
argued that Pleistocene archaeological sites, which tended to have dense concentrations of stone tools and (where preservation allowed) animal bones, resulted from large-game hunting, animal butchery, and home base use as proposed by Washburn. James O’Connell and colleagues (2002) have labeled this archaeological explication of the hunting/food sharing/home base use model as the Washburn-Isaac synthesis, and it quickly became the dominant view of early hominin evolution.

As a site warden and field collaborator with the Leakeys in the early 1960s, Isaac was clearly influenced by their approaches. Perhaps because of this relationship, I perceive a good deal of the Leakeys’ argumentation strategy in Isaac’s various writings. For example, Isaac (1976) was a firm adherent to Louis Leakey’s (1960) “man the toolmaker” view, even after it had been called into question by the documentation of tool manufacture and use among chimpanzees by Jane Goodall (1964) and others (for instance, McGrew 1974). More important, Isaac also showed a tendency toward the “ancient-but-sophisticated” rhetoric common to the Leakeys’ work. Although this tendency decreased over time as Isaac became more self-critical in his attention to problems of taphonomy and made better use of actualistic research, it clearly manifested in his early writing. For example, in his early review of the Lower Pleistocene archaeological record, Isaac states:

All that is clear is that at the outset of the archaeological record in Olduvai Bed I, some basic aspects of behaviour were already more human than pongid. The early hominids were apparently dependent on tools, were operating from temporarily fixed home bases, were hunting and were bringing food back to camp to share. . . . A behavioural change of comparable magnitude is not documented again in prehistory until the development of farming. (1971: 21)
This excerpt clearly demonstrates that hominin carnivory, food sharing, and home base use were all directly equated with human-like behavior and cognitive capabilities and that they were all present extremely early in the East African archaeological record. Of these three types of behavior, the hunting of large game emerged as the easiest to test in terms of the Paleolithic archaeological record. As I discuss in greater detail later, issues of taphonomy rendered both food sharing and home-base use difficult to identify in the archaeological record. Furthermore, as Washburn and Lancaster (1968) and Isaac (1968, 1978a, 1978b) argued, the hunting of large game by itself was thought to have necessitated food sharing based on the levels of hunting risk observed among modern hunter-gatherers (see also Lee 1968) and the large package size of hunted food resources. Given the likelihood of
failure and the return of vastly more meat than a single individual (or even kin group) could eat, they argued that it simply did not make sense to hunt large game in the absence of reciprocity systems like those known among modern hunter-gatherer groups—a premise that I will argue is dubious. Thus, research on the Pleistocene archaeological record became increasingly focused on the demonstration of early hominin large-game hunting as a proxy for both the other elements of the Washburn-Isaac synthesis and modern human-like cultural sophistication more broadly. In other words, this shift in research-problem focus represented the origins of the hunting position within the hunting-and-scavenging debate. Several prominent studies of Pleistocene animal-bone assemblages initially supported the inference of early hominin large-game hunting. For example, Leakey’s (1971) analysis of the FLK 22 “living floor” demonstrated the presence of numerous meat- and marrow-rich bone elements from medium- to large-size prey animals in association with Oldowan stone tool technology (Figure 1.2). Leakey concluded from these facts that FLK 22 was a home base with the remains of hunted and butchered large game animals. Subsequently, Isaac and colleagues (Isaac and Harris 1978; Bunn 1981; Isaac and Crader 1981; Shipman 1981; Harris 1983; Bunn and Kroll 1986) argued for early hominin hunting based on animal-part profiles with high frequencies of meaty
Figure 1.2 Mary and Louis Leakey excavate at the FLK 22 Zinjanthropus locality of Olduvai Gorge
elements and the spatial distribution of bones relative to stone tool artifacts and other features. Shortly thereafter, Henry Bunn and Ellen Kroll (1986) presented the first major evidence of early hominin animal bone modification through the documentation of stone tool cut marks. Although such studies became crucial sources of information for the emerging hunting-and-scavenging debate, they broadly followed from Isaac’s perspective that framed large-game hunting, along with its implications of food sharing and home base use, as indications for early hominin cultural sophistication.

The Taphonomic Critique and the Origins of the Scavenging Model

The late 1960s through the 1970s were a time of great methodological and theoretical coming of age for the field of archaeology. In the New World, Binford (1962, 1968, 1978, 1981) had championed the scientific investigation of the processes of culture change and the development of actualistic or “middle-range” knowledge for building inferences on the basis of the static and inherently meaningless archaeological record available to us for study in the present. In the Old World, David Clarke (1968, 1972, 1973) fostered the interpretation of archaeological remains through the employment of actualistic knowledge from other fields of research, including geography, ecology, statistics, and systems theory. While fairly profound philosophical differences separated Binford’s processualism from Clarke’s analytical archaeology, the two had independently come to emphasize the rejection of historical processes such as diffusion as adequate explanations for archaeological variation, favoring instead testable models in which knowable external variables caused observable archaeological patterning. Perhaps more important, both had come to see the crucial importance of learning about the formation of archaeological sites through the observation of various forms of modern dynamics.
Both Binford and Clarke profoundly influenced contemporaneous approaches to Paleolithic archaeology in terms of the inference of early hominin behavior and theoretical explanations of both cultural and biological evolution. While Binford’s impact was strong and direct (as shall become apparent), Clarke’s may have been more subtle. It is interesting to note that Clarke and Isaac were contemporaries at Peterhouse College at Cambridge University, and, as Gowlett (1989) suggests, some of Isaac’s early interest in experimental archaeology and site formation (for instance, Isaac 1967) may be attributable to his connections with Clarke. Indeed, Isaac (1981) later credited Clarke with a wide range of influences, including his multidisciplinary perspectives on early hominin site use patterns and settlement systems.
It is also the case that the deeper interest in site formation processes in the 1960s was broadly based and involved many actors outside of nascent processualist circles. Notably, a key example of this line of interest directly relates to Dart’s “killer ape” and osteodontokeratic culture ideas. In the 1960s, C. K. “Bob” Brain began to investigate the formation of bone assemblages, especially those in cave contexts such as the find spots of the South African australopithecines. First, Brain effectively documented the roles of various natural agents of bone accumulation in caves, such as leopards and porcupines (Brain and Ewer 1958). Later, he studied the ways in which nonhuman carnivores modified bones by examining Topnaar dog yards in the Namib Desert of Namibia (Brain 1967). Through these studies, Brain (1981) was able to demonstrate that (1) the bone assemblages present in association with major australopithecine finds were the result of other natural agents of faunal accumulation and that (2) the patterns of breakage interpreted by Dart as the result of tool manufacture were, in fact, mostly the result of modification by nonhominin carnivores. On the basis of these findings, Brain concluded that australopithecines were not vicious hunters or bone tool makers, but rather that they entered their fossil contexts as the result of being another carnivore’s meal. Thus, scientific opinion on the hunting capabilities and implied behavioral sophistication of australopithecines was substantially reformed based on taphonomic studies of cave faunal assemblages.

Shortly thereafter, Binford (1977a, 1981, 1983, 1984) followed with a more expansive critique of archaeological inferences made from animal bone assemblages primarily based on his ethnoarchaeological research with the Nunamiut foragers in Alaska.
Like Brain, Binford focused his research on the ways in which nonhuman taphonomic forces, especially bone-modifying carnivores, altered the composition of animal bone assemblages and their associated patterns of damage. In documenting the ways in which carnivores accumulate bones while preferentially destroying certain elements and element portions, Binford radically overhauled the approaches for using animal bones as a source of archaeological inference. Among Binford’s (1981) targets were the putative flaked-bone tool industries of North America, which, like Dart’s osteodontokeratic culture, Binford demonstrated to have been the result of natural breakage. For Pleistocene archaeologists, however, his most important critique concerned the inference of hunting from the Oldowan animal-bone assemblages at Olduvai Gorge, especially that from FLK 22. Using element frequency data from Nunamiut dog yards where nonhuman element destruction was a primary variable conditioning assemblage composition, Binford conducted a multivariate reanalysis of Leakey’s
(1971) faunal data from FLK 22 at Olduvai Gorge. Based on these comparisons, Binford came to two central conclusions: (1) nonhuman carnivore activities were primarily responsible for the accumulation of this animal bone assemblage and for its element frequency profile; (2) early hominins did have a noticeable role in modifying the animal bone assemblage characteristics, but this was primarily through scavenging the carcasses left by higher-ranked carnivores. Thus, Binford (1983: 59) concluded that, rather than being modern human-like hunters, the early hominins at Olduvai were the “most marginal of scavengers” and much more like other nonhuman primates in their behavior. Once again, this is a clear case in which new insights concerning the forces of site formation challenged prevailing models of early hominin behavior primarily based on the nature of animal bone assemblages. From a rhetorical perspective, it is also the case that Binford saw scavenging among early hominins as an indication of dramatic differences in behavioral organization, both in terms of foraging behavior and social systems. Interestingly, Binford (1984) was perhaps most vocal on this point in his discussion of the MSA faunal remains from the site of Klasies River in South Africa. Here he argued that the evolution of large brain size and a degree of technological sophistication emerged independently of large-game hunting activities. In contrast, Binford contended that large-game hunting required a dramatic enhancement of planning capabilities and was primarily the property of behaviorally modern humans. This perspective effectively separated the much longer process of brain evolution from the rapid origins of true behavioral modernity, which Binford framed as having occurred very late in Upper Pleistocene prehistory. 
Based on these and other facts, Binford (1984, 1987) concluded that modern humans and nonmodern hominins had economic and social systems that were organized in fundamentally different ways, owing in large part to increases in planning depth.

The Maturation and Stagnation of the Hunting-and-Scavenging Debate

Binford's (1981, 1983, 1984, 1987) work spurred further innovation in distinguishing large-game hunting from scavenging among early hominins, while the cognitive implications of each model were largely taken for granted. One such line of research concerned the relative positions of cut marks and nonhominin carnivore tooth marks. While cut marks and tooth marks were both recognized on animal bones at sites such as FLK 22 from the time of Leakey (1971), it was only with improvements in observational methods that these became viable data sources. Among such studies were those of Patricia Shipman (1986; Potts and Shipman 1981; Shipman and
Rose 1983a, 1983b), who used high-power microscopy to definitively recognize cut marks and tooth marks on bones from Lower and Middle Pleistocene faunal assemblages, such as FLK 22 and Torralba/Ambrona in Spain. Such studies confirmed that hominins did, in fact, use stone tools to butcher animal carcasses, and they helped to establish a standardized methodology for distinguishing between cut marks, tooth marks, and other forms of damage morphology. In addition, Shipman's work showed the recurrent overlapping and superposition of hominin-produced cut marks and tooth marks resulting from prior carnivore activities, generally taken to support the scavenging model. These provocative findings, however, did little to resolve the hunting-and-scavenging debate. Using the same methodology and faunal assemblages from Olduvai, Bunn and Kroll (1986) arrived at almost precisely the opposite conclusion. Based on a larger sample of the bone assemblage from FLK 22, they argued that cut marks occurred in high frequencies on the midshafts of meaty long bones, having resulted from the defleshing of complete (and presumably hunted) carcasses. Bunn and Kroll also found low frequencies of cut marks at joints, which would have presumably resulted from carcass dismemberment. While stimulating widespread disagreement among faunal analysts, such findings were generally taken as support for the Washburn-Isaac synthesis. Based on such contradictory findings, a number of researchers recognized that there was an insufficient body of actualistic knowledge with which to interpret these archaeological findings.
For example, Robert Blumenschine (1988, 1995), Curtis Marean (Blumenschine and Marean 1993), Marie Selvaggio (1994, 1998), Salvatore Capaldo (1997, 1998), and Manuel Domínguez-Rodrigo (1997) conducted experimental studies examining the relative frequencies and positions of cut marks and tooth marks on bones under two conditions: (1) when humans butchered carcasses first and bones were subsequently scavenged by other carnivores; (2) when carnivores had initial access to carcasses and humans removed any remaining flesh. Such studies were also augmented by ethnoarchaeological research among the Hadza hunter-gatherers of eastern Africa conducted by O'Connell, Lupo, and colleagues (Hawkes, Hill, and O'Connell 1982; Hawkes, O'Connell, and Blurton Jones 2001; Lupo 1994; Lupo and O'Connell 2002; see also O'Connell and Hawkes 1988a, 1988b; O'Connell, Hawkes, and Blurton Jones 1990, 1999; O'Connell et al. 2002). These lines of research represented a substantial period of methodological maturation in terms of the development of clearer frames of reference for the analysis of archaeological cut mark and tooth mark patterning. Once again, the results of these lines of actualistic research were mixed. For example, Blumenschine (1995) found significantly higher frequencies
of tooth marks and lower frequencies of cut marks on bones consumed initially by nonhuman carnivores and scavenged later by humans. Based on the high frequencies of tooth marks and the low frequency of cut marks on bones from the FLK 22 assemblage, Blumenschine (1995) argued that hominins were primarily scavengers with late carcass access. In contrast, Domínguez-Rodrigo (1997) found that tooth mark frequencies varied substantially on bones from carcasses initially butchered by humans and subsequently scavenged by nonhuman carnivores. In addition, he found that the FLK 22 bones had frequencies and locations of cut marks comparable with those produced experimentally by human butchers with initial access to carcasses. In reviewing these experiments alongside additional Hadza ethnoarchaeological data, Lupo and O'Connell (2002) found substantial variability across both actualistic and archaeological datasets. They found that part of this ambiguity resulted from difficulties in defining cut marks and tooth marks and controlling for postdepositional damage to bone surfaces. However, they also concluded that there may simply be a weaker relationship between the ordering of carcass access and various patterns of bone damage morphology than had been hoped. Likewise, there has been a similar trajectory of change with respect to the ways in which element frequencies have been used to make inferences from early hominin animal bone assemblages. It has long been noted that early hominin assemblages, such as FLK 22, were dominated by heads and lower limbs, whereas axial elements and upper limbs were comparatively rare (Leakey 1971; Binford 1981; Marean et al. 1992).
Referred to as the "Klasies pattern" after its prevalence in the Klasies River animal bone assemblages (Klein 1976; Binford 1984; Bartram and Marean 1999), this pattern was initially interpreted as the result of the selective transport of bone elements, or the "schlepp effect" (Daly 1969; Binford 1981, 1984; Klein 1989; cf. Lyman 1984, 1994; Marean et al. 1992). Later, the "Klasies pattern" was taken by some as evidence for late hominin access to carcasses and the collection of marrow-rich lower limb bones and crania containing brains (Binford 1981, 1984; Stiner 1991, 1992, 1994; Stiner and Kuhn 1992), each of which represented a store of high-energy fatty tissue that would be unavailable to carnivores without stone tool technology. This view was complicated by increasing recognition of the role of various taphonomic forces on the relative abundance of certain bone elements. Here, bone density was identified as a key conditioning factor in terms of the frequencies of elements within archaeological assemblages. Many natural forces, including weathering, gnawing by nonhuman carnivores, and sediment compaction after burial, result in bone destruction. A bone's ability to resist such forces is primarily
determined by its density, with more-dense bones surviving in higher frequencies than less-dense bones. This phenomenon is often referred to as density-mediated attrition (Behrensmeyer 1975; Brain 1976; Binford and Bertram 1977; Binford 1981; Lyman 1984, 1994; Lam, Chen, and Pearson 1999). The recognition of the role of bone density in determining element frequencies fostered a reinterpretation of various instances of the "Klasies pattern" (Lyman 1984, 1994; Grayson 1989; Marean and Frey 1997; Bartram and Marean 1999; contra Stiner 2002). Rather than resulting from dynamics of hominin scavenging behavior, it was argued, the "Klasies pattern" was simply the result of in situ density-mediated bone attrition, biasing the ways in which elements were identified and counted (Bartram and Marean 1999). During this period, faunal analysts began to come to grips with the concept of equifinality (Rogers 2000; O'Connell et al. 2002; Enloe 2004; Lam and Pearson 2005), or the tendency of unrelated dynamics of bone destruction and modification to result in identical forms of faunal patterning. This tendency is true in terms of cut mark and tooth mark patterns, since various orderings of hominin and nonhominin carnivore access to carcasses may produce indistinguishable patterns of damage morphology (Lupo and O'Connell 2002). This tendency is also true of element frequencies in the sense that density-mediated attrition produces similar patterns irrespective of how bones were initially accumulated at sites (Rogers 2000). Thus, problems of equifinality stemming from both taphonomic dynamics and analytical methodology severely complicate the construction of inferences concerning how early hominins acquired, modified, and consumed animal carcasses. In spite of such ambiguities, there has been a growing consensus that early hominins had relatively early access to animal carcasses (Stanford and Bunn 2001; O'Connell et al.
2002; Plummer 2004; Bunn 2007; Bunn and Pickering 2010; see Speth 2010 for a comprehensive review). Owing to the continued lack of evidence for appropriate hunting technology on the part of early hominins, aggressive scavenging3 has been proposed as an alternative to true hunting (Domínguez-Rodrigo 1997, 1999; Bunn 2001; O’Connell et al. 2002; Watts 2008; Villa and Lenoir 2009). In this scenario, it is thought that hominins cooperatively drove off primary carnivores responsible for making kills, thus achieving early access to carcasses. As an alternative to the kind of passive scavenging argued for by Binford (1981, 1987), the aggressive scavenging scenario has gained popularity over the last several decades, including among scholars generally skeptical toward the hunting viewpoint and its implied theoretical baggage (for example, O’Connell et al. 2002). Thus, while there is growing agreement concerning early hominin access to fleshy carcasses, there is still considerable uncertainty regarding how carcasses
were actually acquired, the manners and places in which they were consumed, and the theoretical implications of resulting inferences. As a key test implication of the Washburn-Isaac evolutionary model, there is also still remarkably little clear evidence for the use of home bases. In this respect, the taphonomic critique has had a substantially greater effect on the home base use inference than it has on the identification of hunting and scavenging. It is now widely recognized that density-mediated attrition of bones (and not selective transport) is responsible for important aspects of the profiles of element frequency present at most early hominin archaeological sites (see Lam 2005 for a complete review). This fact alone calls into serious question the inference of animal part transport to home bases or other landscape focal points (O'Connell et al. 2002). Furthermore, the recognition of other geological and biological processes responsible for dynamics of site formation, such as fluvial transport and geological size-sorting (Binford 1977a; Schick 1987, 1992; Dibble et al. 1997; Benito-Calvo and de la Torre 2011), has undone canonical views of Pleistocene sites with large accumulations of stone tools and animal bones as home bases (Binford 1977a, 1981; Potts 1988, 1994; Schick and Toth 1993; Blumenschine, Whiten, and Hawkes 1991; Blumenschine et al. 2012a; O'Connell et al. 2002). Thus, while there is some agreement concerning early hominin carnivory, geological issues of taphonomy and problems of equifinality in terms of faunal patterning have rendered the identification of home base sites nearly impossible. Finally, while many scholars maintain skepticism toward the strict tenets of the Washburn-Isaac synthesis, alternative theoretical scenarios explaining encephalization within the genus Homo and the rise of cultural behavior have been few and far between.
Based on his inferences of passive scavenging, Binford (1983, 1984, 1987) argued that early hominins moved into a new ecological niche characterized by the scavenging of carcasses during the middle of the day, when nocturnal predators were absent. This model, however, does little to directly address the causes of increasing brain size, and it regards early stone tool technology as rather ancillary with respect to early hominin subsistence behavior (see especially Binford 1983). Perhaps the most cogent theoretical alternative to the Washburn-Isaac synthesis is the so-called grandmothering hypothesis proposed by Kristen Hawkes and colleagues (Hawkes et al. 1998; O'Connell, Hawkes, and Blurton Jones 1999; O'Connell et al. 2002). This model argues that the extended human female postreproductive life span evolved because grandmothers aided in child rearing and engaged in labor-intensive subsistence activities, such as the digging and processing of buried plant storage organs. While novel in its avoidance of hunting or scavenging as a prime mover in hominin evolution, this model is plagued by a lack of relevant archaeological evidence and remains instead primarily supported
by ethnographic observations made among the Hadza and (to a much lesser extent) other forager groups. Finally, other theoretical scenarios, such as Richard Potts's (1998, 2001, 2002) argument for dramatic Pleistocene environmental variability as the catalyst for increasing brain size, can be adapted to fit either hunting or scavenging scenarios and have not been the focus of the debate concerning the implications of the early hominin archaeological record. I return to this idea again in this book's conclusion. In short, while various scholars have recognized a range of both evidentiary and political problems with the Washburn-Isaac synthesis, few theoretical alternatives have succeeded in gaining much traction. Currently, the field of paleoanthropology is in a position where considerable controversy and ambiguity pervade the construction of inferences based on our archaeological observations, while our efforts at the development of more sophisticated theoretical models have largely stagnated. While various solutions to this impasse have been suggested (O'Connell, Hawkes, and Blurton Jones 1999; O'Connell et al. 2002; Binford 2001; Pickering and Domínguez-Rodrigo 2010; Speth 2010), it is clear that both methodological and theoretical novelty are required now more than ever. It is equally obvious that the subjectivity of prevalent research problems in determining the relative humanity or modernity of our hominin ancestors has hindered the development of more sophisticated epistemological approaches. This book has as its core goals the development of new ways of examining the archaeological record of animal bones and stone tools associated with early hominins, as well as the proposal of an alternative theoretical framework for considering the increases in brain size and cultural behavior experienced by hominins over the course of the Lower and Middle Pleistocene.
The Origins of the Behavioral Modernity Concept

Debates about the origins of modern humans initially shared a great deal of their evidentiary and epistemological footing with the hunting-and-scavenging debate. Until recently, both paleontological and archaeological studies of modern human origins focused on the relationship between members of our own species (Homo sapiens) and Neanderthals (Homo neanderthalensis) in Eurasia. Perspectives on this relationship have shifted significantly over the 150 years since the initial discovery of Neanderthals. The first half of the 20th century was dominated by views of Neanderthals as slouching, brutish cavemen that could not possibly have given rise to modern humans—a view stemming originally from Marcellin Boule's (1911, 1923) description of the La Chapelle-aux-Saints fossils. In the second half of the 20th century, views
of Neanderthals as essentially human-like became increasingly prevalent, fostered by archaeological descriptions of lithic technology by François Bordes (1961), putative symbolic behavior in the form of the Shanidar "flower burial" (Solecki 1975), and the multiregional approach to the hominin fossil record (for example, Wolpoff 1989). Others during this time, such as Binford (1973, 1983; Binford and Binford 1966), argued that the archaeological record of Neanderthals was structurally distinct from that associated with modern humans, indicating dramatic differences in behavioral organization and a general lack of cognitive capabilities on the part of Neanderthals. Debate over the relationship between Neanderthals and modern humans increasingly focused on the systematic comparison of the Middle and Upper Paleolithic archaeological records. What emerged was a more detailed understanding of the sorts of structural differences argued for by Binford (1973; Binford and Binford 1966). Perhaps the most influential of these descriptive efforts were those of Mellars (1973, 1989), who identified a list of archaeological traits common to Upper Paleolithic sites deposited by modern humans but absent at Middle Paleolithic sites left by Neanderthals. This list included characteristics such as the manufacture of blade-based lithic technology, the manufacture of ground bone tools, the specialized predation of specific prey species (especially caribou), and the production of symbolic objects. The former elements of this list were thought to reflect cognitive differences in depth of planning with respect to technological design, hunting tactics, social relationships, and storage practices (cf. Binford 1987).
More important, the striking symbolism common to the Upper Paleolithic, such as the manufacture of figurines and cave painting, was thought to indicate much more advanced linguistic capabilities on the part of modern humans (Binford and Stone 1986; Binford 1989; Mellars 1991; Mithen 1996). In fact, it was frequently argued that Neanderthals lacked altogether the ability to produce the kinds of arbitrary and syntactical language common to living human populations (Davis 1986; Binford 1989; Davidson and Noble 1989, 1993). This set of empirical generalizations about the differences between the Middle and Upper Paleolithic archaeological records had deep consequences for views of Neanderthal and modern human behavior and cognitive capabilities. The concepts of the Upper Paleolithic revolution and behavioral modernity were born. In this scenario, Neanderthals were argued to have had social, economic, and mobility systems akin to those of Pleistocene early hominins, specifically the kind assumed by the scavenging viewpoint in the hunting-and-scavenging debate. Supporting this position in more direct terms, studies of Middle Paleolithic animal bone assemblages were also taken by Binford (1985) and Mary Stiner (1991, 1994;
Stiner and Kuhn 1992) to suggest scavenging of large game rather than hunting. Such studies also employed the inference of scavenging rather than hunting as a gauge of the similarity of Neanderthals to modern human foragers, effectively placing them on the same footing as other early hominin species and adding another dimension to the distinction between archaic and modern behavior. Perhaps the most convincing pieces of evidence suggesting both the lack of a direct ancestor-descendent relationship between Neanderthals and modern humans and significant biological differences between the two species came from increasingly sophisticated genetic studies. The first of these was published by Rebecca Cann and colleagues (1987) using mitochondrial DNA from living humans to examine the location and timing of modern human origins. The results of this study suggested that all living humans descended from a small group of ancestors living in sub-Saharan Africa between 200 and 100 ka. This and subsequent studies of living human mitochondrial DNA also demonstrated a surprising lack of genetic diversity within members of our species (Vigilant et al. 1991). Such studies of mitochondrial DNA largely undermined multiregional views of modern humans’ origins, suggesting that Neanderthals (and other potential contemporaneous archaic hominin species outside Africa) did not significantly contribute to our genetic makeup and that modern humans had a relatively recent origin. In addition, early studies of Neanderthal fossil mitochondrial DNA showed quite dramatic differences between Neanderthal and modern human genes (Krings et al. 1997), also inconsistent with the idea of Neanderthals as modern human ancestors. Thus, early research on modern and fossil DNA agreed with the Upper Paleolithic revolution scenario, suggesting major differences in phylogenetic histories and genetic makeup between Neanderthals and modern humans. 
By the end of the 20th century, there was a growing consensus concerning the nature and degree of differences between modern humans and other archaic hominins, especially Neanderthals. Yet, these lines of research had resulted in what could be characterized as a grand set of empirical generalizations about the characteristics of the relevant archaeological, paleontological, and genetic records, without any forthcoming theoretical explanation of why these changes had occurred. To the extent that theoretical scenarios were generated to explain these patterns, such explanations bore a startling resemblance to the facts they were intended to explain. In this respect, Richard Klein’s (2000, 2003; Klein and Edgar 2002) scenario has received the most attention, arguing that early modern humans living in sub-Saharan Africa around 50 ka experienced a dramatic but invisible neural mutation, which was responsible for their increased cognitive and linguistic capabilities. This
neural mutation allowed early modern human populations to move out of Africa to the rest of the Old World (and beyond), replacing all other archaic hominin species by around 30 ka. While this perspective has been the focus of great criticism over the last decade, it is perhaps only a more explicit and extreme version of what most proponents of the Upper Paleolithic revolution scenario believed. In the absence of significant differences in brain size and architecture between modern humans and contemporaneous archaic hominins, an invisible brain mutation was as appealing an explanation of such dramatically different patterns of behavior as any other.

Complicating the Modern Human Revolution Scenario

Empirical problems with the Upper Paleolithic revolution scenario and the broader concept of behavioral modernity began to emerge around the turn of the millennium, based on both a surge in research on the African MSA and new findings from terminal Middle Paleolithic contexts in Europe. These new discoveries called into question the theoretical implications of the archaeological, paleontological, and genetic contrasts between Neanderthals and early modern humans, largely known from Western European contexts. In short, these new lines of research suggested two important kinds of prehistoric patterning: (1) the origins of the traits used to define the Upper Paleolithic revolution in Europe had a long and fluctuating history associated with anatomically modern humans during the MSA in sub-Saharan Africa; (2) while rare, late Neanderthals occasionally produced archaeological signatures of behavioral modernity, including symbolic objects. Archaeologists had long perceived the African MSA as evolutionarily retarded and static relative to the Middle Paleolithic of Europe.
This perception is even reflected in the three-stage African Stone Age chronology introduced by Goodwin and van Riet Lowe (1929), who regarded each stage as similar to its European counterpart, while somewhat less sophisticated and occurring later in prehistory (by virtue of diffusion from a putative European evolutionary core). Despite the discovery of early anatomically modern human fossil remains associated with the African MSA, such as those from Klasies River (Singer and Wymer 1982) and Herto, Ethiopia (Clark et al. 2003), views of the MSA archaeological record as lacking any elements of behavioral modernity persisted late into the 20th century. In addition, while certain elements of the behavioral modernity package, including symbolic objects, bone tools, and blade-based stone tool industries, were found in various MSA contexts, these were largely discounted by virtue of problems with excavation methods and chronology (for
example, see Binford's 1984 discussion of Singer and Wymer's "layer cake" excavation methods at Klasies River). In addition, faunal analyses conducted by Klein and Cruz-Uribe (Klein 1976, 1979, 1983; Klein and Cruz-Uribe 1996) suggested that MSA humans lacked effective projectile hunting weapon technology and were, in general, less capable of hunting large and dangerous prey than were their LSA descendants. Such research framed the view of substantial behavioral and cognitive differences between MSA and LSA populations, making the two roughly equivalent with the Middle and Upper Paleolithic of Europe. New excavations making use of modern techniques and technologies helped to bring fieldwork on the African MSA to its rightful place at the center of research on modern human origins. For example, Christopher Henshilwood and colleagues' (2001a, b) excavations at the site of Blombos Cave, South Africa (Figure 1.3), succeeded in establishing the unequivocal presence of symbolic objects (in the form of engraved ocher pebbles and perforated marine shell beads) and bone tools in MSA layers dating to at least 70 ka. Subsequent fieldwork has demonstrated comparably dated or older symbolic objects throughout Africa (Vanhaeren et al. 2006; Assefa, Lam, and Mienis 2008; d'Errico et al. 2009; Henshilwood, d'Errico, and Watts 2009; Texier et al. 2010).
Figure 1.3 Engraved ocher fragment and perforated marine shell beads associated with Still Bay industry artifacts at Blombos Cave (images by Christopher Henshilwood)
In addition, blade-based stone tool technology is known from various regions of Africa from the Middle Pleistocene onward (Bar-Yosef and Kuhn 1999; Barham 2002), and the Howiesons Poort industry of southern Africa, dating to between 55 and 65 ka, shares many similarities with the European Upper Paleolithic (see McCall and Thomas 2012 for detailed review). Finally, ground bone tools are known to predate 90 ka in Central Africa and are likely significantly older (Brooks et al. 1995; Yellen et al. 1995). The appearance of these and other archaeological traits evident in the African MSA is reviewed in great depth by McBrearty and Brooks (2000). Their article shows that every defining characteristic of the Upper Paleolithic revolution appeared first in the African MSA and that these traits appeared intermittently over a long period of the Upper Pleistocene—not as a sudden saltation or revolution. These findings had profound consequences for the archaeology of the European Upper Paleolithic revolution. The empirical pattern that had for so long looked like a revolution through the Eurocentric lens of 19th- and 20th-century Paleolithic archaeology now seemed to be a much more gradual, dispersed, and mosaic set of behavioral changes. Although certain practices and technologies no doubt arose within early modern human populations living in glacial Europe in response to extreme local environmental conditions, the underlying cognitive, linguistic, and social practices thought to define behavioral modernity had clearly emerged long ago in sub-Saharan Africa. In one sense, this ultimate conclusion is not unexpected, since Africa has long been known to have been the place of origin for modern human populations.
In another sense, however, the archaeological record of the African MSA clearly shows that long periods of behavioral and technological adaptation to local ecological conditions offer better explanations for the various individual features of behavioral modernity than does any potential neural mutation or equivalent radical biological change (see also Shea 2011). Viewed more appropriately from the African perspective, the Upper Paleolithic revolution would seem to represent early modern human populations, pushed out of their African core by demographic surges, adapting to novel and challenging environmental conditions. The other key line of archaeological evidence concerning the Upper Paleolithic revolution scenario resulted from discoveries associated with late Neanderthal populations making terminal Middle Paleolithic industries. The symbolic and linguistic capacities of Neanderthals have a long history of debate, including such discoveries as Ralph Solecki's (1975) "flower burial" at Shanidar. In addition, beads have been known in Chatelperronian archaeological contexts since André Leroi-Gourhan's
discovery of bone and tooth beads at Arcy-sur-Cure (Leroi-Gourhan and Leroi-Gourhan 1964). Following the discovery of Neanderthal skeletal remains in a Chatelperronian level at Saint-Césaire (Lévêque and Vandermeersch 1980), it has been securely known that Neanderthals were the makers of the Chatelperronian industry. Therefore, for some time, we have known that Neanderthals at least sporadically made beads shortly before their disappearance and around the time of the arrival of early modern humans in Western Europe. In addition, the Chatelperronian industry, long considered a part of the early Upper Paleolithic (a.k.a. the "Lower Perigordian"; Peyrony 1948; Bordes 1968a), was characterized by blade-based technology and ground bone tools. It is almost certainly true that the modern view of the Chatelperronian industry as the terminal Middle Paleolithic stems from its association with Neanderthals. Similar beads and other symbolic objects associated with Neanderthals are now known from a moderate number of Chatelperronian and contemporaneous terminal Middle Paleolithic contexts (Zilhão 2007; Zilhão et al. 2010). While it has sometimes been argued that such behavior on the part of Neanderthals in a period where contact with early modern human populations was plausible may represent acculturation (Stringer and Gamble 1993; Mellars 1999), this possibility seems increasingly unlikely as more and earlier examples are discovered. Furthermore, even if contact between Neanderthals and early modern humans does account for this phenomenon, it at least demonstrates that Neanderthals had the appropriate cognitive "hardware" to deal with such symbolic behavior even if the "software" had a modern human origin (Chase 2007; see also the conclusion of this book and Malafouris 2007 for discussion of "hardware" and "software" issues as they relate to cognition).
Currently, it seems very difficult to argue that Neanderthals had significantly different linguistic or symbolic capabilities in terms of brain hardware or that these accounted for the differences between the Middle and Upper Paleolithic archaeological records in any major way. Finally, more recent and sophisticated studies of both ancient and modern DNA have shown that Neanderthals (and perhaps other archaic hominin species) did, in fact, contribute modest but significant amounts of genetic code to modern human populations (for example, Green et al. 2006, 2010; Sankararaman et al. 2012). These studies support some long-held notions about the likelihood of interbreeding between Neanderthals, other archaic populations, and early modern humans (Smith 1991; Smith, Falsetti, and Donnelly 1989; Smith, Janković, and Karavanić 2005; Smith and Ahern 2013; Holliday 2003, 2006). For the purposes of this discussion, this evidence also undermines attitudes of extreme biological and behavioral difference between archaic and modern human
populations, which constituted an important part of the modern human revolution scenario. Many researchers, such as Shea (2011), now argue that the behavioral modernity concept may have reached the end of its utility as an epistemological framework for examining modern human origins and should be abandoned altogether. At a minimum, it is apparent that the trait-list approach inherent to the behavioral modernity concept largely fails as a proxy for cognitive or linguistic capability. The fact that MSA early modern humans in Africa and Neanderthals in Europe occasionally but rarely produced various signatures of behavioral modernity clearly demonstrates that they had the cognitive capacity to do so, though this capacity frequently went unexpressed for long spans of time. The more interesting and productive issue, to my mind at least, is understanding what circumstances triggered these noteworthy varieties of prehistoric behavior.
Developing New Learning Strategies and Approaches to Archaeological Data

With the stagnation of the hunting-and-scavenging debate and the recognition of deeply problematic complications with the behavioral modernity concept, we risk losing sight of some important prehistoric patterns with key implications for modern human origins research. First, regardless of whether early hominins hunted, aggressively scavenged, or passively scavenged animal carcasses, there are substantial structural differences between the archaeological record of the Lower and Middle Pleistocene and that associated with modern humans in the Upper Pleistocene and Holocene. As O’Connell (1995) argues, it is not useful to consider all hominin foragers subsequent to the origins of Homo erectus as possessing basically modern forms of economic and social behavior. It is still a worthwhile set of research problems to examine why certain fundamental changes in hominin behavior occurred during the Upper Pleistocene and why the rate of change is so rapid thereafter. In this respect, Mellars’s (2005) argument for the correlation between the dramatic period of Upper Pleistocene cultural change and the emergence of modern humans as an “impossible coincidence” still warrants consideration, though it is now apparent that this period of rapidly shifting behavior was not limited to modern humans but was a global multispecies phenomenon. Second, while the nature and scope of Upper Pleistocene cultural change are striking, there are clearly no easy answers concerning its causes. It is apparent that African early modern humans underwent longer and more mosaic periods of cultural change than can be
accurately considered a revolution associated with the origins of the species. Furthermore, it is now also clear that late Neanderthals possessed the same capacity for what have traditionally been considered modern varieties of behavior, demonstrating that this period of innovation cannot be linked with a single species. Rather than relying on an invisible neural mutation or some other sort of cognitive revolution associated with the origins of modern humans as an explanation, it seems more productive to tackle these problems in ecological terms and at a range of spatial scales. In this pursuit, we must make more extensive use of existing bodies of knowledge concerning forager behavioral ecology and ethnoarchaeology as frameworks for both constructing better inferences about the nature of prehistoric behavior and developing theory to explain it. Our current understanding of Paleolithic prehistory suggests an interesting paradox: while the pace of hominin cultural change is clearly inflected in the Upper Pleistocene and roughly correlated with the origins of modern humans, it is now reasonably certain that the emergence of modern humans by itself did not cause all these changes. In addition, the pace of cultural change during the Pleistocene was neither static nor linear, and it was not capped by a modern human revolution. Instead, it was apparently characterized by what I describe as “punctuated gradualism,” with episodes of rapid culture change occurring throughout the Pleistocene and steadily increasing in frequency across the Upper Pleistocene and Holocene.

Beyond Hunting and Scavenging: New Approaches to Early Hominin Animal Bone Assemblages

This situation suggests some new methodological tactics and theoretical directions that may help breathe further life into debates about the nature of early hominin lifeways and evolution, as well as the nature and causes of Upper Pleistocene culture change.
With respect to the hunting-and-scavenging debate, we must recognize the necessity of new methodological approaches for making inferences about early hominin economic behavior and mobility patterns. Hunting became a test implication of the Washburn-Isaac model largely because animal bones represent the only durable evidence of early hominin subsistence behavior. In spite of the recognition of daunting problems of taphonomically induced equifinality in the frequencies of various types of animal bones and the patterns of damage on them, there is moderate consensus that early hominins had early access to the fleshy carcasses of medium- and (perhaps) large-sized animals. Yet, such evidence is quite equivocal in terms of its implications for the Washburn-Isaac synthesis and other models of early hominin evolutionary dynamics. To make real progress
in building better theory, we must change the nature of the questions we ask of Paleolithic faunal assemblages. To begin with, hunting, scavenging, and concomitant patterns of meat-eating behavior may not have been as exclusive in terms of early hominin diet as has been assumed by a great many evolutionary models (Washburn and Lancaster 1968; Isaac 1968, 1978a; Bunn and Kroll 1986; Stanford and Bunn 2001; Speth 2010). Feminist scholarship on hunter-gatherers has made this point in an effort to remind (mostly male) Paleolithic archaeologists of the existence of females in our evolutionary past and to fight back against the Washburn-Isaac synthesis’s implication that females were passive participants in a meat-for-sex food sharing scheme (Dahlberg 1981; Zihlman 1981, 1997, 2009; Hrdy 1999; Speth 2010). Furthermore, there are clear expectations based on modern forager economic variability that early hominins living in tropical/subtropical and arid/semiarid regions of the Old World would have had diets largely focused on high-quality plant foods (Lee 1968, 1979; Kelly 1995; O’Connell et al. 1999; Binford 2001). Thus, it seems appropriate to retain skepticism toward hunting (or scavenging) and meat-eating as the soul of early hominin subsistence. Instead, we should question how we may use various forms of faunal assemblage patterning as proxy data sources for broader hominin subsistence patterns, which no doubt included major components of plant food resources that are not preserved in the archaeological record. Forager behavioral ecology and optimal foraging theory offer some ways of using data concerning faunal exploitation patterns for understanding broader subsistence decisions and dynamics (Charnov 1976; Winterhalder and Smith 1981, 2000; O’Connell and Hawkes 1981; Hawkes, Hill, and O’Connell 1982; Hawkes et al. 1991, 1998; Hill et al. 1987; O’Connell and Hawkes 1988a, 1988b; O’Connell, Hawkes, and Blurton Jones 1999; O’Connell et al.
2002; Bird, Bird, and Codding 2009). Rather than viewing the hunting of medium- or large-sized game as an indicator of complex cognitive capability or social behavior, such as food sharing, it may be more productive to consider the implications of faunal patterning for the organization and ranking of subsistence opportunities. In general, the acquisition of large intact animal carcasses (through either hunting or scavenging) with the simple technologies in use during the Lower and Middle Pleistocene implies a relatively low investment of foraging effort and high returns in terms of nutrition. Thus, rather than viewing early hominin meat-eating as a sign of intelligence or modernity, this behavior is perhaps better viewed as an efficient high-return foraging strategy requiring little by way of complex technology.
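The prey-choice (diet breadth) logic invoked here (Charnov 1976) can be made concrete with a short sketch. This is an illustration only: the function name and all prey values below are hypothetical figures chosen to mirror the ranking argument in the text, not data from the faunal analyses presented in this book.

```python
def optimal_diet(prey):
    """Classic prey-choice (diet breadth) model.

    prey: list of dicts with keys 'name', 'energy' (kcal per item),
    'handling' (minutes per item), and 'encounter' (items encountered
    per minute of search). Returns (diet, overall return rate).
    """
    # Rank prey by profitability: energy gained per unit handling time.
    ranked = sorted(prey, key=lambda p: p["energy"] / p["handling"], reverse=True)
    diet, num, den = [], 0.0, 1.0  # den starts at 1 unit of search time
    for p in ranked:
        rate_with = (num + p["encounter"] * p["energy"]) / (
            den + p["encounter"] * p["handling"])
        # Include an item only if doing so raises the overall return rate,
        # i.e., its profitability exceeds the rate from higher-ranked prey alone.
        if rate_with > num / den:
            diet.append(p["name"])
            num += p["encounter"] * p["energy"]
            den += p["encounter"] * p["handling"]
        else:
            break  # lower-ranked items cannot re-enter the diet
    return diet, num / den
```

Under illustrative numbers, large game remains in the optimal diet because its energy-per-handling-time profitability is high even when encounters are rare, while low-return items such as shellfish enter only when the overall return rate falls. That is the logic behind reading small-prey-dominated assemblages as evidence of intensification rather than of incompetent earlier hunters.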
In examining the behavioral ecology of Lower and Middle Pleistocene hominin animal acquisition strategies, this book takes a comparative approach. While much of our knowledge of early hominin carnivorous behavior derives from a single site, FLK 22, there are numerous other faunal assemblages at least partly accumulated by hominins from a variety of ESA/MSA and Lower/Middle Paleolithic contexts. In analyzing a sample of Pleistocene faunal assemblages, this book compares animal-part profiles from prey of various sizes in order to construct inferences concerning the nature of food packages collected by hominins. In addition, this book offers a comparative analysis of bone surface damage patterns in order to understand the nature of the nutritional resources targeted by hominin butchery practices. This analysis makes use of cut marks and other hominin-induced damage morphologies in order to examine practices of defleshing and element dismemberment, which offer key information concerning the nature of food resources acquired through faunal acquisition behavior. The results of these studies suggest a stable pattern of animal exploitation strategies focused on meat- and marrow-rich bone elements from small- and medium-sized prey. Comparisons of animal-part profiles consistently point to density-mediated attrition as a primary factor influencing element frequencies. However, such analyses also show elevated frequencies of certain nutrient-rich elements, such as upper limb bones, suggesting the selective accumulation of certain high-quality animal parts. The comparative analysis of bone modification patterns also suggests consistent patterns of cut marks focused on the removal of large masses of muscle and the extraction of marrow from particularly marrow-rich elements. In contrast, the placements, orientations, frequencies, and overall nature of bone damage patterns are quite variable across Lower and Middle Pleistocene contexts.
Given this variability and disorganization, these damage morphologies do not seem to suggest that dismemberment was always a primary motivation for butchery behavior, and they are not generally consistent with food sharing. Rather, they are quite similar to the patterns of in situ carcass consumption and limited element removal common to other nonhominin carnivores. Tellingly, evidence for systematic dismemberment of the sort typically associated with food sharing remains extremely rare until the close of the Pleistocene. It is interesting to note that the comparative analyses presented in this book suggest that the most significant change associated with later Upper Pleistocene and Holocene hominin carnivory is somewhat unexpected, reversing the initial expectations of early zooarchaeological researchers. Terminal Pleistocene and Holocene faunal assemblages are often characterized by high frequencies of small prey usually requiring
specialized weapons or trapping technology, as well as substantial postcapture processing. Such food sources with relatively small nutrient returns and large handling costs represent increasing subsistence intensification (Binford 1968; Flannery 1973; Stiner, Munro, and Surovell 2000; Henshilwood and Marean 2003; Klein et al. 2004). Thus, rather than representing the incapability of earlier hominin hunters, the exploitation of small, labor-intensive game stems from the increasing scarcity of high-ranked subsistence resources driven by larger hominin populations and environmental over-exploitation. Paradoxically, modern hunters are marked by their tendency to seek out small or otherwise problematic game animals with costly technologies and labor-intensive hunting tactics.

New Perspectives on Hominin Stone Tool Assemblages

For reasons that should be obvious, stone tool technology has not figured into the hunting-and-scavenging debate very much. Stone tools simply cannot be linked with either hunting or scavenging in a very direct way. While the absence of appropriate early hominin hunting technology (and putative stone tool components, such as projectile points) has been noted by advocates of the scavenging position (for instance, Binford 1981), proponents of hunting have countered by pointing to certain rare instances of preserved wooden tools, such as the presumed spears at the sites of Schöningen and Lehringen in Germany (Thieme 2005). It is certainly possible that effective hunting weapons were constructed from wood, which has simply not survived in the vast majority of archaeological contexts. Furthermore, both sides agree that there is ample evidence for the use of stone tools for butchering animal carcasses in the form of cut marks on bones, as well as isolated studies of use-wear patterns (for example, Keeley and Toth 1981).
Thus, there has been a general consensus concerning the link between stone tools and early hominin carnivory (Plummer 2004), though their specific implications for hunting and scavenging have remained ambiguous. Instead, stone tools have generally contributed to the hunting-and-scavenging and modern human revolution debates by serving either as various kinds of culture-historical markers (especially chronological ones) or as proxies for hominin cognitive capabilities, skill, and social structures related to learning (for instance, Stout 2002). In terms of culture history, certain stone tool “type fossils” continue to form the backbone of both the European and African Paleolithic chronological schemes (Bisson 2000). For example, finding handaxes or Mousterian points in a given archaeological context may still provide a basic guess at its chronology and a starting point for constructing dating arguments. Furthermore,
for the century or more of Paleolithic archaeological research conducted before the invention of chronometric dating technologies, studying stone tools in the interest of developing chronological sequences represented a primary activity. Understandably, early periods of Paleolithic archaeology focused on stone tools as chronological and/or cultural markers, and this focus formed the earliest basis for considering stone tools as a data source. By the latter half of the 20th century, problems of chronology were less daunting, and attention turned to the implications of stone tools for early hominin cultural sophistication. For example, Francois Bordes (1961) argued that the diversity of Mousterian lithic facies stemmed from contemporaneous tribes of Neanderthals living in Europe with fluctuating territorial boundaries. Thus, he reasoned that Neanderthals had complex ethnic identities and historical patterns similar to those manifested in more recent European history, making them cognitively and behaviorally like modern humans. While this logic was contested on both functional (Binford and Binford 1966; Binford 1973) and sequence-of-reduction grounds (Rolland and Dibble 1990; Dibble 1995), it demonstrates an early interest in using stone tools as a source of information about social dynamics. This logic is also evident in the use of blade-based lithic technology as a definitional characteristic of the Upper Paleolithic revolution, with blades serving as a marker of technical knapping capabilities, composite technologies, and complex projectile weapon types (Mellars 1989, 1992; Ambrose 2001; Bar-Yosef 2002). Blades were even considered by Binford (1983) to mark the presence of greater planning depth and overall cognitive sophistication. 
While such perspectives have been undermined by an increasing acknowledgment of blade-based technologies in pre-Upper Paleolithic contexts (Bar-Yosef and Kuhn 1999; McBrearty and Brooks 2000; Barham 2002), it is important to recognize their past role as an archaeological signature of behavioral modernity by virtue of their presumed linkage with other cultural and cognitive characteristics of modern humans. Current research on Paleolithic stone tools retains a clear interest in their implications for hominin social structures and cognitive capabilities. French Paleolithic analytical methods after Bordes developed a strong structuralist focus centered on the inference of cognitive processes and social structures from stone tool assemblages. Stemming from the pioneering work of Leroi-Gourhan (1964), lithic analyses conducted within the chaîne opératoire framework became the coin of the realm. Based on the refitting of lithic artifacts and other methods designed to isolate the ordering of knapping gestures, the chaîne opératoire approach was aimed at understanding the specific decisions faced by individual
knappers in the past and, therefore, the technical goals underlying stone tool manufacture activities. In combination with incipient replicative flintknapping experiments (for example, Bordes and Crabtree 1969; Newcomer 1971), chaîne opératoire analyses also underscored the skills required by knappers in the use of certain complex techniques required to produce Levallois points, blades, or any number of similarly complicated knapping products. The chaîne opératoire perspective now constitutes a dominant analytical paradigm within Paleolithic archaeology directly relating to the field’s concern for early hominin cognition. The effect of such studies on views of hominin cognitive capabilities, skill, and social structures of teaching and learning has been especially prominent in the last decade. Owing to their symmetry, redundant shape categories, and other striking formal characteristics, Acheulean handaxes have been a favorite target of this line of argumentation. For example, with various logical underpinnings, many researchers have argued that the production of Acheulean handaxes indicates the presence of modern human language (Calvin 1993, 2002; Wynn 2002; Stout 2002; Stout and Chaminade 2007; Stout et al. 2008). More broadly, various Pleistocene lithic technologies, ranging from the early Oldowan in eastern Africa (de la Torre et al. 2003; Delagnes and Roche 2005; Davidson and McGrew 2005) to MSA/Middle Paleolithic Levallois flaking (Schlanger 1996; Soressi 1999; Pelegrin 2009; Wynn and Coolidge 2010; Sumner 2011; Eren and Lycett 2012) have been argued in one way or another to indicate the cognitive sophistication of associated hominin species. This book departs from a prevailing interest in the inference of cognitive capabilities, instead focusing on the potential of lithic assemblages to provide information concerning site use dynamics, mobility patterns, settlement systems, and the design of important technologies. 
This perspective, which has its roots in ethnoarchaeological studies of modern foragers, has often been termed the organizational approach (Binford 1973, 1977b, 1978, 1979, 1980; Bamforth 1986, 1991; Bleed 1986; Shott 1986, 1996; Torrence 1989; Nelson 1991; Sellet 1993; Andrefsky 1994; Kuhn 1994, 1995; Nash 1996; Odell 1996; Carr and Bradbury 2011; McCall 2012). This approach relies on the fact that the nature, location, and timing of stone tool manufacture activities are determined by both immediate needs and the anticipation of future technical problems. Thus, the designs of tools and the characteristics of lithic assemblages are intimately structured by various aspects of economic behavior (that is, how individuals used their tools), as well as dynamics of site use and mobility. Specifically, this book works from the premise that stone tool assemblages may be important sources of information about whether, for instance, early hominins used sites as home bases or other forms of special
activity areas. Lithic assemblages may be creatively employed in the examination of other implications of major models of hominin evolution, though they may be difficult to link directly with hunting, scavenging, behavioral modernity, or other historically salient theoretical issues.

Rethinking the Implications of Hominin Patterns of Biological Evolution

The field of anthropology has long stressed the coevolution of human cultural capacities and biological characteristics. Beginning with Darwin (1871), generations of anthropologists have professed a complementary relationship between the evolution of human biological characteristics and the capacity for cultural behavior, especially in terms of the manufacture of tools. This belief was an element of the Boasian four-field approach common to early American anthropology (for instance, Boas 1940). At the opposite end of the theoretical spectrum, it also formed the core of neoevolutionary perspectives, such as White’s (1959) concept of culture as an “extrasomatic means of adaptation,” which was to profoundly influence generations to come. It is also at the heart of the “man the toolmaker” definitional view of humans as a unique animal species (Oakley 1956). In fact, the belief that human culture and biology coevolved has been one of the few things that the fractious field of anthropology in the 20th century seems to have agreed on. Within paleoanthropology, this idea was incorporated in a few central ways. Guided by the Neo-Darwinian evolutionary synthesis, Washburn’s (1950) New Physical Anthropology focused its attention squarely on the adaptive role of hominin anatomical features. In terms of subsistence activities, it was widely thought that humans (and our closely related hominin ancestors) were biologically ill equipped to face the rigors of the Pleistocene in comparison with other primate species and the carnivores with whom they may have competed.
Thus, technology and other forms of cultural behavior were viewed as compensating for this lack of biological capability. For example, C. Loring Brace (1962) argued that early hominin tooth size reduced with the development of stone tool technology, since tooth development was energetically expensive and tools were capable of the same functions. Invoking the “expensive tissue hypothesis” also inherent within Brace’s logic, Leslie Aiello and Peter Wheeler (1995) have further argued that developing cultural behavior allowed hominins (1) to survive without specialized tissues, such as teeth or claws, (2) to consume higher-quality diets, including proteins and fats derived from increased carnivory, which allowed gut sizes to reduce, and (3) to use resulting energy budget surpluses to fund increasing brain size. Thus, the development of stone tool technology, cooperative social
behavior, and other forms of cultural behavior were thought to have both influenced and been influenced by patterns of biological evolution. While such coevolutionary views of the relationship between hominin biology and culture comprised the philosophical core of the field of anthropology, they have rarely offered any specific insights concerning current theoretical debates. Theoretical expectations derived from the expensive tissue hypothesis concerning the relationship between brain size increase, gut size decrease, hominin carnivory, and cultural behavior are basically the same within both hunting and scavenging scenarios (though Aiello and Wheeler 1995 clearly favor hunting and other elements of the Washburn-Isaac synthesis). Furthermore, although it seems likely that the basic philosophical linchpins of the expensive tissue hypothesis are valid, they are frequently employed within teleological models focusing on the uniqueness of extremely large human brain size and tautological argumentation concerning the causes of encephalization (see Navarrete, van Schaik, and Isler 2011). While such models are to be commended for considering in detail the metabolic costs of larger brains, the specific benefits in either social or behavioral terms remain regrettably vague, generally adhering to a “bigger is better” logic. In addition, the original variables that initiated encephalization within the hominin lineage remain either unspecified or speculatively correlated with large-scale variables, such as the Pliocene origins of savanna environments in East Africa (for example, Potts 2002). Finally, a great deal of the scholarship attempting to explain processes of encephalization has ignored the fact that hominin brains actually continued to increase in size significantly throughout the Lower and Middle Pleistocene, well after the development of complex stone tool technology, carnivory (of one sort or another), and the reduction of gut sizes.
In addressing the coevolution of brain size and cultural behavior, this book acknowledges that, like those of all other animal species, human brains are unique. However, this book proceeds from the axiom that this fact does not imply that human brains are somehow outside the range of comparability with other animal species. Thus, in attempting to understand potential causes for encephalization and the initial benefits of larger brains, a fundamental first step is to develop actualistically derived referential frameworks for examining variation in brain size across other mammalian taxa. Rather than assuming that large human brains are simply an “exaggeration” of an ancient primitive pattern common to all primates (Gould 1975: 26) or that primate brain sizes were bound to increase progressively over time, it is worth exploring the specific ecological dynamics that fostered increasing hominin brain
size over the course of the Pleistocene. This comparative analysis puts humans and our hominin ancestors in the context of other living mammalian species for which we have detailed ecological knowledge. This serves as a basic body of actualistic knowledge with which to think about processes of hominin evolution. This book also takes a closer look at the timing of brain size increases among early hominins. While it has long been presumed that (1) the most significant changes in brain size occurred with the origins of the genus Homo (especially Homo erectus; Hawks et al. 2000; Relethford 2000) and that (2) subsequent encephalization occurred gradually over the course of the Lower and Middle Pleistocene (Conroy et al. 2000; Lee and Wolpoff 2003), this study demonstrates that both of these conclusions are at least partially erroneous. On the one hand, a closer examination of early hominin brains shows that substantial increases in brain size occurred well after the origins of Homo erectus, a point underscored by the discovery of the small-brained specimens from Dmanisi in the Republic of Georgia. On the other hand, there is actually a moderate inflection point in the rate of brain size increase during the Middle Pleistocene associated with late members of Homo erectus and subsequent Middle Pleistocene hominin species. Although I will argue that this inflection point in brain size increase has important implications for the shifting ecology, demography, and subsistence of Middle Pleistocene hominins, it is perhaps most important in demonstrating that major increases in brain size occurred well after the origins of hominin carnivory, further calling into question meat eating as the sole explanation for encephalization. Using variation in other mammalian carnivore and primate species as a referential framework, this book argues that increasing brain size was, in fact, linked with the diversification of subsistence activities and population increases.
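One conventional way to build such a cross-taxon referential framework is the encephalization quotient (EQ): observed brain mass divided by the brain mass expected for a mammal of the same body mass under an allometric baseline. The sketch below uses Jerison's classic mammalian constants (k = 0.12, exponent 2/3, masses in grams); the species masses are rough illustrative averages, not figures taken from this book's analyses.

```python
def encephalization_quotient(brain_g, body_g, k=0.12, a=2 / 3):
    """EQ = observed brain mass / expected brain mass, where the
    expectation follows the allometric baseline k * body_mass**a
    (Jerison's mammalian constants, masses in grams)."""
    return brain_g / (k * body_g ** a)

# Rough illustrative masses in grams (approximate averages only).
species = {
    "modern human": (1350, 65_000),
    "Homo erectus": (900, 60_000),
    "chimpanzee": (400, 45_000),
}
eq = {name: encephalization_quotient(brain, body)
      for name, (brain, body) in species.items()}
```

The point of such an index is that raw brain mass confounds body-size scaling; an EQ-style baseline allows hominin encephalization to be read against general mammalian patterns, which is the comparative move advocated here.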
While large brain size has served as perhaps the most salient definitional element of human anatomy, large body size and an essentially modern human bauplan are also both characteristics of Homo erectus recognized as divergent from earlier hominin species, especially the australopithecines (McHenry 1992; McHenry and Coffing 2000; Wood and Collard 1999; Richmond, Aiello, and Wood 2002; Ruff 2002). Unsurprisingly, trends in the evolution of early hominin body size and shape have received less attention than patterns of encephalization, but they also have important implications for our understanding of evolution and ecology. Increases in body size and changes in limb proportions have often been linked with locomotion efficiency and hunting effectiveness, including perhaps the evolution of endurance running (Wood 1974; Witte, Preuschoft, and Recknagel 1991; McHenry and Coffing 2000; Bunn 2001; Bramble and Lieberman 2004; Lieberman and Bramble 2007; Lieberman et al. 2007,
2009; Steudel-Numbers 2006; Steudel-Numbers, Weaver, and Wall-Scheffler 2007; Steudel-Numbers and Wall-Scheffler 2009). In a nutshell, large body size has been argued to have allowed early hominins to move across the landscape more efficiently and to confront other large-bodied animal species more effectively using simple weapons. Once again, important changes in body size also occurred after the appearance of Homo erectus, with implications for evolutionary trends in hominin ecology and demography. This time, however, hominin body size reaches a maximum during the Middle Pleistocene and undergoes a substantial reduction from the Upper Pleistocene onward. This trend constitutes a major exception to Cope’s (1887) rule, which observes that animal species generally increase in body size over the course of their evolutionary history and assumes larger body size to hold universal selective advantages. Although this pattern has not received as much attention as it deserves, it has perplexed scholars in certain corners. Christopher Ruff (2002), for example, in returning to a variant of the expensive tissue hypothesis, argues that body size reduced as more effective weaponry was developed, reducing the need for physical size and strength during hand-to-hand combat with wild prey animals (see also Ruff, Trinkaus, and Holliday 1997). It is also certainly the case that certain regional hominin populations, especially Neanderthals in Ice Age Europe, underwent reductions in lower limb length, according to Allen’s (1877) rule, which may have also resulted in shorter statures (Trinkaus 1981; Ruff 1991; Holliday 1997). Although climatic adaptation works well as an explanation for Neanderthal postcranial anatomy, it does little to explain the broader global pattern of body size reduction during the Upper Pleistocene.
This book once again takes several approaches in building referential frameworks for using body-size patterning to make inferences concerning shifts in hominin ecology. First, variation in modern human forager body size and shape offers an important opportunity to test the effects of various variables and to think about prehistoric dynamics. This examination finds that, while climate and nutrition both have the effects one might suppose, population density is an equally strong variable in determining body size. This finding accords with well-known principles of ecogeography, such as the tendency of large-bodied animal species to reduce in size when available food sources are limited (MacArthur and Wilson 1967; Van Valen 1973; Damuth 1981; Brown, Marquet, and Taper 1993; Lomolino 2005). This principle is most vividly demonstrated by the “island rule” governing evolutionary trends exhibited by species such as the pygmy elephant and even the Hobbit-like Homo floresiensis specimens found on the island of Flores in Indonesia (Van Valen 1973).
Next, in applying these principles to modern human foragers and early hominins, I argue that high population densities generally reduced the availability of high-ranked food resources and increased the frequency of starvation periods, both of which substantially favored individuals with small body size and concomitantly low rates of metabolism. Thus, I draw the inference that the large body sizes associated with Lower and Middle Pleistocene hominins related to low population densities, along with easy and regular access to high-ranked food resources. In contrast, the decreasing body sizes of Upper Pleistocene hominins indicate increasing population densities, greater reliance on more marginal food sources, and more frequent periods of starvation. This book argues that patterns of Lower and Middle Pleistocene brain size increase reflected a period of subsistence diversification in which hominins broadened their repertoire in terms of ways of getting foods, making tools, and moving around the landscape. It also makes the case that the Upper Pleistocene witnessed the initial period of substantially increasing population densities, resulting in decreasing body sizes. Finally, I argue that both the achievement of modern brain size and decreasing body size related to a dynamic period of hominin prehistory in which our ancestors’ settlement systems, subsistence strategies, technologies, and social structures all started to change rapidly. This shift laid the foundation for later developments, such as the mosaic appearance of the archaeological features historically used to define behavioral modernity and the ultimate spread of modern human populations out of Africa.
Conclusion

This chapter has presented an historical review of the hunting-and-scavenging and modern human revolution debates, concluding that both have essentially stalled. In the case of the hunting-and-scavenging debate, it proved impossible to test the implications of major theoretical models, such as the Washburn-Isaac synthesis, based on prevalent forms of archaeological evidence. The modern human revolution scenario was called into question thanks to a deeper understanding of the MSA archaeological record in sub-Saharan Africa and the slow realization that Europe’s last Neanderthal populations did, if rarely, produce the archaeological signatures of behavioral modernity. Beyond this, both debates have reached positions where the development of novel theoretical models is extremely limited thanks to the nature of the available archaeological record and the ways in which it has been documented, at least up to this point. However, research conducted in the interest of addressing these debates has produced a vast body of information concerning hominin evolutionary prehistory with which
to approach the enterprise of theory building along other lines, and it has taught us many valuable lessons in terms of the articulation of archaeological method and theory. The point of this book is not to take sides on these issues or to declare winners and losers. Productive scientific research capable of producing novel understandings of the natural world, including our own evolution as a species, cannot be based on dogmatic adherence to any particular model nor can it be satisfied with our current state of knowledge. Instead, this book seeks to revisit important patterns in the archaeological and paleontological records of hominin evolution in the interest of developing new theoretical problems and methods for addressing them. In some respects, this approach amounts to new ways of looking at old evidence, which some may criticize as outmoded. However, I believe that asking new questions of our existing bodies of knowledge concerning hominin evolutionary prehistory is the only way to generate new insights, while developing better ways of documenting the prehistoric record and drawing inferences from it.
Chapter 2
Stone Tool Technology and the Organizational Approach
Stone tools are the most ubiquitous, the oldest, and therefore the most important archaeological data source. At the same time, they are also in many ways the most unfamiliar to us as people living in the modern industrial world and the most ambiguous as potential sources of archaeological inference. This situation stems from the fact that most societies have not produced stone tools for many generations or often millennia, and examples of modern stone tool use are both rare and exceptional (McCall 2012). Unlike ceramics, which continue to be produced using both modern and traditional technologies around the world, and animal bones, which may be identified using comparative collections, stone tools are inherently foreign to the modern experience. Thus, they are beyond our capability of using “common sense” to understand them in archaeological terms or to make accurate inferences about past human activities (Kuhn 1995). For these reasons, archaeologists have found it difficult to construct useful inferences that articulate with significant theoretical questions based on lithic assemblage data. In dealing with the inherent unfamiliarity of stone tools, the field of lithic analysis has undergone a long process of maturation, eventually developing sophisticated bodies of actualistic knowledge through both experimental and ethnoarchaeological research (for reviews, see Whittaker 1994; Andrefsky 2006, 2009; Odell 2001; McCall 2012). This chapter examines the historical development of lithic analysis,
Before Modern Humans: New Perspectives on the African Stone Age by Grant S. McCall, 61–92 © 2015 Left Coast Press, Inc. All rights reserved. 61
especially as it relates to Paleolithic archaeology. This review documents the transition from the primary use of stone tools as chronological and culture-historical markers to their study in the interest of reconstructing past patterns of behavior within mainstream processualist archaeology. It also examines current trends in the analysis of Paleolithic stone tool assemblages, which have become increasingly focused on examining hominin cognitive capability, knapping skill, and social structures of teaching and learning. While I readily acknowledge the value of stone tools as a data source concerning cognition, skill, and sociality, I also believe that there is great and generally underexploited potential to use lithic assemblages as sources of information about the organization of hominin economic activities, mobility patterns, and settlement systems. Using them as such, however, requires a great deal of creativity and inventiveness. In this chapter, I argue that the understandable analytical focus on the most striking formal characteristics of certain stone tool types, such as the symmetry, thinness, and redundant forms of Acheulean handaxes, has overshadowed subtle but equally important aspects of assemblage patterning. Through the comparative study of assemblage composition, we may learn about meaningful variations in knapping strategies over both space and time, as well as the ways in which stone tools were transported across the landscape and discarded differentially at various locations. These types of information provide ways of understanding how hominins adapted their knapping strategies and tool use activities to particular contexts of site use in order to cope with the problems raised by mobility. Thus, by using stone tools as a way of understanding how early hominins used sites and how they moved around the landscape, we may address some fundamental issues regarding the organization of their economic systems.
This section employs the organizational approach to the analysis of African ESA and MSA stone tool assemblages (Binford 1973, 1977b, 1978, 1979, 1980; Shott 1986, 1996; Parry and Kelly 1987; Kelly 1988; Andrefsky 1994, 2009; Nash 1996; Odell 1996, 2001; Carr and Bradbury 2011; McCall 2012). As Binford (2001) has discussed, the organizational approach seeks to understand the relationships between forager technological systems and other interrelated components of economic and social lifeways. This perspective is based on the fact that forager economic systems operate as complex integrated wholes, with all significant elements of these systems autocorrelated. Binford argues that forager mobility patterns, settlement systems, and economic strategies strongly influence the location and timing of technological activities and the designs of tools and weapons. Therefore, lithic assemblages can be
used as sources of information concerning a wide range of hominin behavioral dynamics of key importance to current theoretical problems. This chapter discusses the origins of the organizational approach and the ways in which it may be used to better understand hominin foraging activities.
Stone Tools: From Culture-Historical Markers to Sources of Archaeological Inference

For incipient Paleolithic archaeologists, such as Jacques Boucher de Perthes (1864), stone tools served primarily as chronological markers or even simply as manifestations of a human ancestral presence in deep time. Boucher de Perthes used the presence of Acheulean handaxes in the gravels of the Somme River (in northwestern France) (Figure 2.1), in association with the bones of extinct Pleistocene megafauna, to prove the antediluvian age of our ancestors. This approach to the archaeology of stone tools has its origins in the incipient field of geology, which sought to date geological strata on the basis of the presence of diagnostic faunal “type fossils” (for instance, Lyell 1837). Almost a century after Boucher de Perthes, stone tools served much the same role for Louis Leakey (1951) in his pursuit of our first tool-making ancestors at Olduvai Gorge. Indeed, archaeological training with specialization in the culture-historical periods of specific regions necessitates a strong knowledge of diagnostic artifact types. Modern fieldwork still frequently involves the initial estimation of age based on a limited and important set of
Figure 2.1 Acheulean handaxe type specimen recovered by Jacques Boucher de Perthes from Abbeville, France, in 1867
stone tool type fossils. Most of us who have done fieldwork have made such age estimates based on the presence of type fossil artifacts, be they handaxes, Paleoindian points, or beer cans. Using more sophisticated quantitative methods, François Bordes (1961) shifted the focus from the documentation of the presence of various diagnostic artifact types to analysis of artifact type frequencies within assemblages (Figure 2.2). Bordes’s method for this was the use of cumulative frequency graphs for the visual display of differences in assemblage composition. His analytical goals, however, were still fundamentally culture-historical in their orientations. In assessing French Mousterian assemblage variability, Bordes was able to identify six distinguishable patterns of assemblage composition, which were traditionally described as “facies,” once again borrowing a term from the field of geology (Lyell 1837). The cumulative frequency approach was particularly useful for Bordes in his studies of the Mousterian because, unlike other Paleolithic cultural sequences, the facies of the Mousterian did not show consistent secular chronological or evolutionary patterning. Instead, they tended to alternate unsystematically within archaeological
Figure 2.2 François Bordes knaps a blade core using indirect percussion (photo by John Whittaker)
contexts, with certain facies occurring repeatedly within the same stratigraphic sequences and different sites showing different orderings of facies. Based on these patterns, Bordes (1961) argued that the Mousterian facies were manifestations of Neanderthal tribal identity, similar to the ways in which material culture among modern non-Western societies was presumed to mark ethnic boundaries (Barth 1969). Thus, Bordes saw equivalency between the kinds of social structures and historical patterns he inferred from Mousterian lithic assemblages and those manifested by modern humans (especially those living in early 20th-century Europe), which, he concluded, indicated the modernity of Neanderthal patterns of cultural behavior. In many ways, Bordes’s (1961) cumulative frequency approach represented a high-water mark of the analysis of Paleolithic stone tool assemblages as cultural markers. It systematized practices of stone tool typology and offered methods for distinguishing lithic facies characterized by the presence of the same type fossils but exhibiting substantially different patterns of assemblage composition. For these reasons, the Bordes approach to typology and quantitative assemblage description formed the groundwork for future interpretations of Mousterian lithic variability in both functional and reductive terms (Binford and Binford 1966; Binford 1973; Rolland and Dibble 1990; Dibble 1995). It is also the case that Bordes’s approach was developed at the end of the culture-historical period of archaeological investigation, and ambiguities in the interpretation of resulting data were to serve as a springboard for the development of new research questions and analytical methods. Before the 1970s, there was little interest in using stone tools to understand actual patterns of economic behavior. To the extent that there was interest, analysis was largely done through bald speculation.
For example, stone tool types such as “handaxes” and “choppers” clearly imply some judgment concerning their function, although it seems that the assumptions underlying these loaded terms were never taken terribly seriously (for instance, Kleindienst and Keller 1976). Furthermore, early research questions within Paleolithic archaeology were focused on the establishment of relative chronologies, cultural sequences, and regional stone tool industry characteristics. It was only with the successive accumulation of knowledge concerning prehistoric sequences and the development of chronometric dating techniques that interest increased in the manufacture and use of stone tools. This situation changed substantially with the advent of radiometric dating techniques in the Atomic Age of the 1950s and the subsequent emergence of the New Archaeology, or the processualist paradigm. Largely freed from the bonds of constructing culture-historical sequences on the basis of the presence and/or frequency of certain artifact types,
archaeologists could turn their attention to addressing fundamental theoretical questions, such as the causes and processes of cultural change over time (Binford 1962; Trigger 1989). This shift in focus necessitated the development of methods for understanding what people in the past were actually doing, which was (for the most part) unprecedented up to that point (Binford 1968, 1981, 1983; Fritz and Plog 1970; Plog 1974; Schiffer 1972, 1976). Of course, this has been no easy task, and it is one with which the field of archaeology continues to struggle. In confronting the fact that “common sense” speculation was not a sound basis for the interpretation of archaeological remains, archaeologists began to pursue a wide range of research activities to develop Binfordian (1981) “middle range,” or actualistic, knowledge. For archaeologists interested in lithics, this epistemological coming of age necessitated confronting our inherent cultural unfamiliarity with stone tools. Early attempts at building actualistic knowledge focused on the experimental replication of various stone tool types. In North America, the self-taught flintknapper Don Crabtree (1966, 1972) made enormous contributions to archaeologists’ understanding of the mechanics of stone tool production and the processes responsible for resulting products. Bordes also conducted important replicative experiments dealing with Middle and Upper Paleolithic technologies (for example, Bordes and Crabtree 1969) (Figure 2.2). In addition, while his publications on such subjects were limited, he mentored and influenced a generation of archaeological knappers1 (Whittaker 1994).
As knapping experimentation became more prevalent and sophisticated, archaeologists began to understand a number of key properties of stone tools in terms of their reductive nature, the transformational systematics of retouched tools, and the potential utility of debitage relative to formal tools (Odell 1981; Toth 1985; Rolland and Dibble 1990; Whittaker 1994; Dibble 1995). Such studies were to have profound influences on the ways in which archaeologists viewed stone artifacts, as well as the ways in which assemblages formed. Processualist archaeology also developed an interest in the ways in which stone tools were used and the determination of tool function as a way of inferring past human behavior. The first attempts at this built on previous speculation concerning tool function and used the frequencies of certain artifacts as a source of information about the activities conducted at archaeological sites. The analysis of stone tool use was combined with attempts to reinterpret or renovate older culture-historical typological schemes on the basis of functional interpretations, with the Binfords’ (1966; L. R. Binford 1973) work on the Mousterian serving as the clearest manifestation of this research trajectory. The assumptions underlying tool function were quickly recognized as problematic (Bordes et al. 1969; Cahen and Van Noten 1971; Odell 1981), and the result was the construction of
“functional” typologies, which began to integrate early evidence from use-wear studies (for instance, Hester, Gilbow, and Albee 1973; Tringham et al. 1974; Zier 1978). In this baroque period of processualist archaeology in the late 1970s, functional typologies offered tantalizingly simple and straightforward analytical tactics for linking artifact types with patterns of behavior and for making broad generalizations about common activities conducted at sites. While use-wear studies were to make great contributions to archaeological analysis as they gained greater experimental and methodological sophistication, such attempts at functional typology were doomed from the start. Even during this baroque processualist period, archaeologists were already objecting to both analytical inconsistencies and a lack of rigor in the development of appropriate experimental controls (for example, Keeley 1974). In short, use-wear patterns were falling prey to the same kinds of speculative assumptions concerning their causes. More important, many began to awaken to the fact that stone tool types did not correspond with predictable patterns of tool function. This fact was astutely recognized by one of the field’s use-wear analysis pioneers, George Odell (1981), who argued that (1) stone tools belonging to a single type were frequently used for many different tasks, (2) stone tools of different types were often used for the same task, and (3) the reductive nature of stone tools severely complicated the situation, since not all the functions of a tool were represented by the use-wear on the final form deposited in the archaeological record. Even Binford (1978, 1981), daunted by his own recognition of the ambiguities involved in the inference of tool function during the Mousterian, began a campaign of ethnoarchaeological research among the Nunamiut of Alaska to understand the organization and operation of technological systems within foraging societies.
It is also telling that he chose to turn his attention to animal bones (Binford 1981, 1984), which he regarded as having clearer inferential implications in terms of hunting activities, carcass transport strategies, and other economic decisions that could be related to larger organizational patterns of foraging behavior. While Binford’s (1977, 1978, 1979, 1980) ethnoarchaeological research on the organization of Nunamiut technological systems had great consequences for lithic analysis, he became increasingly circumspect with respect to the importance of identifying stone tool function in the archaeological record.
Sequences of Reduction and Chaînes Opératoires
In updating the Mousterian functional debate between Bordes (1961, 1969) and the Binfords (1966; L. R. Binford 1973), Dibble (1995; Rolland and Dibble 1990) offered a third argument—that retouched tool type frequencies were merely the result of tools being discarded at different stages of reduction. This tendency for the formal characteristics of stone tools to change as they are retouched and reduced was first recognized by William Henry Holmes (1894) more than a century ago, but it was revived in the analysis of lithic artifacts with the waning of culture-historical typological approaches and the increasing realization of stone tool functional flexibility and ambiguity. This reawakening of concern for sequences of reduction occurred thanks to the efforts of the “second wave” of archaeologist flintknappers (Muto 1971; Bradley 1975; Collins 1975; Flenniken and Raymond 1986; Whittaker 1994), who became familiar with stone tools at the gut level in ways that went beyond strict attention to the formal characteristics of artifacts. This analytical trend was paralleled in French Paleolithic circles by the development of the chaîne opératoire approach, which eschewed formal typologies of retouched tools in favor of the reconstruction of operational sequences of knapping gestures involved in stone tool manufacture. This perspective emerged from the work of Leroi-Gourhan (1964), who adhered to structuralist theoretical goals common to French anthropology in the mid-20th century (for example, Lévi-Strauss 1949). Leroi-Gourhan recognized that the dominant analytical methods for studying Paleolithic archaeological remains were oriented according to geological rather than anthropological goals, centering on the excavation of long stratigraphic sequences mainly in caves and the isolation of evolutionary changes in stone tool type frequencies over time.
In contrast, Leroi-Gourhan was interested in discovering the details of the cognitive and social structures of Paleolithic peoples, which required new types of excavation techniques, archaeological sites with finer-grained resolution and better preservation, and alternative analytical approaches to artifacts (for instance, Leroi-Gourhan and Brézillon 1972). As with the sequence of reduction within the American tradition, the chaîne opératoire approach owed much to the knapping progeny of Bordes, who were capable of linking diagnostic features of debitage with the knapping gestures that produced them, as well as with the sequential ordering of core reduction processes (Tixier, Inizan, and Roche 1980; Geneste 1988, 1989; Pelegrin, Karlin, and Bodu 1988; Pelegrin 1990; Texier 1989, 1996; Boëda et al. 1990; Roche and Texier 1991; Inizan et al. 1992; Sellet 1993). The chaîne opératoire approach also made extensive use of refitting as an analytical procedure in order to infer specific knapping gestures within core reduction sequences. Using these methods, lithic analysts could reconstruct the specific decisions faced by
prehistoric knappers during the core reduction process, allowing for the examination of the technical goals of various knapping procedures. This methodological perspective also suited Leroi-Gourhan’s (1964) original analogy between knapping sequences and the syntactical structures of language, in terms of the ordering of gestures to produce a pre-intended set of final products. Thus, the chaîne opératoire approach moved beyond the consideration of formal tool characteristics and began to consider lithic assemblages from the perspective of entire operational sequences, ranging from the collection of raw materials to the discard of lithic objects and their entry into the archaeological record. The explication of the relationship between the sequence of reduction and chaîne opératoire approaches has become a kind of cottage industry within lithic analysis circles, with some seeing them as essentially the same (such as Shott 2003) and others recognizing substantial epistemological differences (for example, Sellet 1993; see also Audouze 1999; Bleed 2001; Odell 2001; Shott 2007; Andrefsky 2009; Bar-Yosef and van Peer 2009; McCall 2012). While it is certainly correct that the overarching analytical practices inherent to both traditions are quite similar (Shott 2003), what most separates the two are their theoretical goals and research problems. Stemming from its processualist roots (for instance, Collins 1975), the sequence of reduction approach has fostered research on economizing behavior, tool transport, and artifact curation. In contrast, the chaîne opératoire approach, with its interest in cognitive structures, has focused more on the assessment of practical knapping skill, technical decision making, and the social contexts of prehistoric technology (Audouze 1999; Bar-Yosef and van Peer 2009). Thus, while sharing analytical underpinnings and experimental actualistic frames of reference, these two perspectives continue to diverge in their theoretical applications.
Using Chaîne Opératoire to Infer Practical Skill and Social Structures: A Critical Perspective

It is certainly the case that the chaîne opératoire approach now dominates Paleolithic archaeological thought on the analysis of stone tools. While the methodological impetus for this may be traced to the recognition of the shortcomings inherent within earlier typological approaches (for example, Bisson 2000), I would argue that theoretical interests concerning skill, cognitive capabilities, and complex social structures also account for its popularity. The chaîne opératoire framework offers an explicit analytical apparatus for making direct inferences concerning the cognitive capabilities and social structures of early hominins and for comparing these with modern humans. However, although certain examples of this line of research have been more successful than others, I contend that most have
been at least somewhat misleading and have distracted the field from more productive lines of research on stone tools. Arguments based on chaîne opératoire analyses concerning early hominin skill in a variety of times and places have become quite widespread, especially within the last decade. Arenas of debate include questions of the sophistication of Oldowan knapping and the so-called pre-Oldowan period, the manufacture of Acheulean handaxes, and the use of the Levallois technique by Neanderthals during the Middle Paleolithic of Eurasia. Although studies conducted within the chaîne opératoire approach often offer rich and detailed data sets concerning lithic assemblage characteristics, my main criticism is that they lack an appropriate actualistic framework for making inferences about cognitive capabilities and social structures. In fact, while we may be capable of assessing practical skill from archaeological assemblages in certain cases, I would go so far as to claim that building frames of reference for directly understanding the relationship between archaeological stone tool assemblages and the cognitive capabilities of early hominins may not be possible. Many studies within the last decade have sought to assess the skill and cognitive capabilities of the earliest stone tool-making hominins in East Africa at sites dating to between 2.6 and 2.0 ma (de la Torre et al. 2003; Delagnes and Roche 2005; Stout and Semaw 2006; Stout et al. 2010; Hovers 2009; Roche, Blumenschine, and Shea 2009). Of these, de la Torre and colleagues (2003) and Delagnes and Roche (2005) make extensive use of chaîne opératoire analytical methods in analyzing the early Oldowan assemblages from Peninj (Tanzania) and Lokalalei 2c (Kenya), respectively.
Both studies are successful in demonstrating that even Oldowan assemblages are characterized by the effective use of direct percussion knapping techniques, as well as the fact that early Oldowan knappers were capable of reading platform angles and applying proper amounts of percussive force. Furthermore, Delagnes and Roche (2005) extensively refitted the Lokalalei 2c assemblage, finding relatively long sequences of core reduction. This evidence effectively vanquished the “pre-Oldowan” hypothesis (Roche 1989; Kibunjia 1994), which viewed lithic technology before 2.0 ma as lacking the knapping sophistication present in later Oldowan lithic assemblages. These assemblage characteristics may be compared with those produced by Kanzi, the world’s foremost knapping bonobo (Pan paniscus; Schick and Toth 1993; Toth et al. 1993; Savage-Rumbaugh, Fields, and Spircu 2004). Kanzi is a highly intelligent bonobo with special proclivities for language, which was why he was selected for the knapping experiments in the first place. As smart as Kanzi is, however, he has never managed to learn the concept of platform angles, preferring instead either to throw cores against a hard surface (such as the concrete floor of his enclosures)
or to rely on haphazard percussion and random fracturing. When Kanzi succeeds at removing flakes through direct percussion, it is largely the result of the chance striking of an appropriate platform or some form of non-Hertzian fracture. Thus, we may say with some certainty that Oldowan knappers possessed a higher level of practical knapping skill than what our best and brightest bonobos are capable of today. From this point, however, I feel that the inference of cognitive sophistication becomes problematic. De la Torre and colleagues (2003) argue that Oldowan hominins at Peninj were effectively employing a Levallois-like knapping strategy and that they employed mental templates in the core reduction process. They conclude: “This assumption makes it necessary to explicitly recognize the presence of mental abstraction and planning templates—and, therefore, a great cognitive potential—in the minds of hominids” (de la Torre et al. 2003: 222). Similarly, Delagnes and Roche summarize the value of their methodological approach: “Lithic technology is a powerful device for bridging the huge temporal and anthropological gap between ourselves and the earliest tool-makers” (2005: 469). Unfortunately, no actualistic frames of reference exist for inferring the cognitive potential of early hominins or for comparing it with that of modern humans on the basis of stone tool technology. With regard to the claims of de la Torre and colleagues (2003), there are two sources of information with which to consider the implications of the Peninj Oldowan stone tools for early hominin cognitive complexity: modern primate tool use behavior and modern human knapping. As Wynn and colleagues (2011) point out, with recent discoveries concerning the ubiquity, diversity, and operational ordering of tool use behavior among other primate species, there is no qualitative difference between modern primate tool use and what is now known of Oldowan knapping.
Interestingly, Wynn and colleagues also make extensive use of the chaîne opératoire concept in their comparisons of modern primate and Oldowan hominin technological behavior. It is also the case that many modern ethnographic cases of stone tool production resemble Oldowan patterns of knapping, at least in terms of many superficial formal features of lithic assemblages (McCall 2012). These cases are mostly characterized by long sequences of flake production without any sort of mental template with respect to core forms or reduction strategies. Instead, long flake sequences serve to provide a large and diverse population of flakes from which to select tools for the resolution of immediate technical problems. In such modern cases of stone tool production, it would be extremely difficult to draw any conclusions about the cognitive sophistication of the knappers involved. Even explicit attempts at building actualistic knowledge concerning the acquisition of knapping skill and various cognitive and social dynamics demonstrate the difficulties and ambiguities inherent within
72 Chapter 2
this enterprise. Perhaps the best organized and most compelling of these attempts is Stout’s (2002) study of Langda adze manufacture in Irian Jaya. Here, Stout offers ethnoarchaeological descriptions of Acheulean-like bifacial knapping at adze workshops directed by skilled and experienced knappers and assisted by apprentices. He provides vivid accounts of the ways in which individuals learn bifacial knapping skills through the mentor-apprentice relationship, which relies on many modern human social and cognitive constructs (not the least of which is language itself). Based on these observations, Stout concludes that the skillfully thinned and symmetrical handaxes endemic to the Middle Pleistocene Old World could not have been made without the mentorship of learned knappers, necessitating complex cognitive capacities, modern human language, and social structures of teaching and learning. My main problem with this approach concerns likely differences in the organization of knapping activities between village-centered Langda workshops and mobile Acheulean foragers. Langda adze knappers acquire, process, and transport lithic raw materials to workshops at great cost (Stout 2002), descending nearly a vertical kilometer from the highlands into adjacent river basins and spending considerable effort reducing boulders into adze blanks. Access to lithic raw materials for unskilled apprentices is limited because of this cost. Thus, when apprentices begin knapping, it is important for the learning curve to be extremely steep in order to minimize the waste of expensive stone.2 It is also the case that the adzes, which are the final products of a series of workshop activities, are primarily social symbols of wealth and are judged in value by their aesthetic qualities. Bifaces are laboriously ground into their final adze forms, and a great deal of the skilled knapping serves to reduce the amount of grinding involved.
Although there is no direct information about where, when, and how Acheulean knappers learned their skills, we may be fairly certain that it occurred under very different organizational circumstances. As I argue in the next chapter, it seems likely that handaxes were usually produced at locations of raw material abundance, where experimentation by novices would be less costly. Furthermore, as functional elements of individual toolkits, it seems unlikely that aesthetic qualities bore the same importance, and it is doubtful that handaxes were rejected based on purely aesthetic flaws. In contrast with the workshop-based apprenticeship model, I do not feel that it takes great imagination to envision a scenario in which Acheulean hominins learned bifacial knapping skills through direct observation of more experienced individuals combined with a great deal of hands-on experimentation at locations with abundant lithic raw materials. This approach would necessitate neither formal social structures nor even language in the modern human sense. To be fair to Stout (2002), I think he is quite right about many important aspects
Stone Tool Technology and the Organizational Approach 73
of Acheulean archaeological patterning, and he has offered a thought-provoking case study in terms of how bifacial knapping skills may be learned. The larger point here is that the ways in which these relationships played out among our early hominin ancestors, and the resulting implications for cognition and social behavior, are inherently ambiguous. In short, although the chaîne opératoire approach has provided many important insights concerning past practical knapping skills, the ways in which these data have been used to address cognitive and social phenomena have suffered from a lack of referential knowledge. No matter how attractive such prospects may be, stone tools cannot provide any shortcuts to understanding the relative modernity or sophistication of early hominin behavior.
The Organization of Technology: An Alternative Approach

In keeping with the philosophical underpinnings of processualist archaeology, new approaches to the archaeology of stone tools began to emerge from ethnoarchaeological research on modern foragers. Foremost in this research trajectory was Binford (1977b, 1978, 1979, 1980, 1986; Binford and O’Connell 1984), who attempted to resolve problems emerging from his work on Mousterian functional variability through ethnoarchaeological research on the Nunamiut of Alaska and later the Alyawara of Australia. What resulted from this research was the organizational approach to forager technology. Binford (1973) had already begun to view forager technologies as organized systems whose components (and archaeological distributions) related to broader foraging dynamics and ecological contexts. However, this early perspective on technological organization (that is, functional variability) was flawed because it saw Mousterian lithic technology as being organized in terms of functional specialization. In other words, it viewed assemblages as accumulating various frequencies of certain tool types by virtue of the conduct of activities for which those types were particularly useful. In his later considerations of Paleolithic stone tool technology, it is clear that even Binford (1983, 1984, 1987) had come to see the problems with his earlier view of functional variability. In his ethnoarchaeological research, Binford (1977b, 1978, 1979, 1980) wisely turned his attention away from tool function and toward the location, timing, and qualitative characteristics of tool manufacture and maintenance episodes. He astutely realized that these issues were directly related to forager mobility patterns, settlement systems, and broader economic strategies and that they were far less problematic than the diagnosis of tool function.
Binford argued that technological systems were organized in terms of the scheduling of manufacture and maintenance activities, the design of tools, and the spatial structuring
of activities related to those tools. The organization of technological systems, in turn, was driven by the need to resolve both immediate technical problems and the anticipation of future needs. For these reasons, forager economic systems, and especially mobility patterns, structured where, when, and what kind of technological activities occurred, as well as the characteristics of resulting archaeological assemblages. Binford contended that archaeological assemblage characteristics could therefore be used to reconstruct the situational dynamics that influenced technological activities, in addition to the longer-term economic strategies stemming from the nature of environmental resources. In characterizing forager technological strategies, Binford (1977b, 1978, 1979, 1980) divided tools into the now-famous categories of expedient (that is, those made from locally available raw materials for the resolution of immediate technical problems) and curated (those manufactured in anticipation of future needs, made from more expensive raw materials, and retained by foragers for long periods of time). In illustrating the concept of expediency, Binford (1978) relates an anecdote in which a Nunamiut hunter lost his good knife in a lake after killing a caribou and dealt with this problem by butchering the animal using flakes quickly taken from quartz pebbles found on the shoreline. In presenting the concept of curation, Binford (1977b) discusses the role of hunting rifles among the Nunamiut, which are expensive weapons retained for generations and meticulously maintained, since they must work properly when they are needed. Binford (1978) estimates that 70% of the Nunamiut annual food income is derived from a single seasonal period of caribou migration. The costs of a malfunctioning rifle during such a migration period would be catastrophic. 
Considered by themselves, there is nothing terribly interesting in the distinction between expedient and curated tools; they are merely descriptive categories for naming different types of technological strategies. What gives these terms analytical value is their tendency to correlate with other aspects of forager economic systems and mobility patterns. Expedient tool use offers evidence concerning the nature and location of certain kinds of technological activities for the resolution of immediate technical problems. For example, Parry and Kelly (1987) demonstrate a clear link between expedient stone tool use in North America and sedentary lifestyles in which lithic raw materials are predictably available. Under such circumstances, they question why individuals would invest in the production of more elaborate and expensive tools when expediently produced flakes are perfectly suitable. Curated tools, by virtue of their production and maintenance in anticipation of future circumstances, reflect in their design the specific economic activities conducted by forager groups over various
time scales. More than this, however, they reflect the anticipation of the broader conditions of tool use beyond simply the tasks for which tools are used. Rather, technological organization reflects much broader concerns for where, when, and under what circumstances a tool will be needed. These conditions are intimately shaped by how forager societies organize their settlement systems and mobility patterns in order to exploit food resources at various spatial and temporal scales. The structure of foraging resources is, in turn, determined by dynamics of geography, environmental productivity, and seasonality (Kelly 1983, 1995; Shott 1986; Binford 2001). Thus, the design of curated tools, as well as the location and timing of episodes of manufacture and maintenance, is directly related to the nexus of forager economic lifeways, settlement systems, mobility patterns, and ecological contexts. Examinations of curated tool designs have also demonstrated clear relationships with other aspects of economic and mobility dynamics. The dominant framework for the consideration of curated tool design is that of Peter Bleed (1986), who distinguished between the concepts of reliability and maintainability. Reliable tools are those designed with various combinations of elements to prevent failure, including redundant features and back-up systems. In a sense, they are “overdesigned” to reduce the probability of failure under circumstances where proper tool function and risk reduction are prioritized. Such features of technological reliability can be seen today in the design of technologies such as passenger jet aircraft, where there are obviously extremely high costs associated with system failure. Bleed’s (1986) primary forager example is once again that of Nunamiut rifles, where the costs of malfunction during crucial periods of caribou migration are extremely high.
In contrast, maintainable tools are those that have higher likelihoods of failure but that can be quickly repaired in the field without the investment of much time, energy, or the use of additional technology. Bleed’s (1986) example of a maintainable technology is that of Kalahari forager hunting kits, which include tools and raw materials for quickly repairing elements of the poison bow-and-arrow systems. Unlike the Nunamiut case, where massive herds of caribou are clumped in both space and time, game in the Kalahari is more randomly dispersed. If a hunting opportunity is missed owing to a malfunctioning weapon, the cost is low and more opportunities are likely to be available. Furthermore, as Lee (1968, 1979) has reported, hunting success rates for Ju/’hoansi (formerly !Kung) hunters are typically quite low, with more than 90% of trips ending unsuccessfully. Finally, Lee reports a significant majority of calories coming from plant food resources. Under such circumstances, weapons may fail without dire consequences, and
the benefits of portable, multifunctional, and easily repairable hunting kits are great in the context of homogeneous Kalahari environments. Differing technological strategies have also been tied to various forager settlement systems and resource targeting patterns (Oswalt 1976; Binford 1980, 2001; Kelly 1983, 1995; Torrence 1983; Bleed 1986; Shott 1986; Nelson 1991). In the last of his series of papers outlining the organizational perspective, Binford (1980) connected certain kinds of curated technologies, or what would later be defined by Bleed (1986) as examples of reliable technology, with the foraging strategy of logistical collecting. Binford saw logistical collecting as stemming from the specialized targeting of food resources that are predictably clumped in space and time. Under such conditions, foraging groups may exploit predictably clumped resources most efficiently by organizing special-purpose task groups that travel to resources, often over long distances, and transport them back to distant residential camps. Such logistical trips involve the targeting of resources known a priori, and therefore they necessitate tools and weapons with specialized designs and features of technological reliability. Logistical collectors also typically make relatively few annual residential moves (or they may be sedentary) and instead, to paraphrase Binford (1980), move resources to the consumers (Figure 2.3b). In contrast, maintainable tools and weapons typically correspond with a true foraging strategy of resource targeting in which foragers move
Figure 2.3a Example of residential mobility from the Ju/’hoansi (formerly !Kung) foragers of the Kalahari (adapted from Binford 1980)
through environments with randomly distributed resources, making what Binford (1980) referred to as “daisy loops” (Figure 2.3a). This strategy relies on random encounter methods, requiring multifunctional tools and weapons capable of dealing with a wide range of unpredictable technical problems. For reasons already discussed, evenly and randomly distributed food resources reduce the costs associated with technical failure, helping to promote the development of maintainable technological systems. It is also the case that this foraging strategy of resource targeting involves frequent residential moves: to paraphrase Binford (1980) again, it moves the consumers to the resources. Through this kind of systematic comparative research, scholars using the organizational approach began
Figure 2.3b Example of logistical mobility from the Nunamiut (adapted from Binford 1980)
to elucidate the connections between the design and production of forager technological systems, the scheduling of technological activities, the characteristics of resulting archaeological assemblages, and the broader economic/ecological dynamics that are our main subjects of interest. The organizational approach articulated well with the sequence of reduction perspective, providing alternative analytical foci to the traditional approaches based on retouched tool type frequencies. Concern for technological organization fostered further research on the relationships between mobility patterns, settlement systems, and strategies for economizing lithic raw materials through various core reduction strategies (Parry and Kelly 1987; Kelly 1988; Kelly and Todd 1988; Andrefsky 1994; Kuhn 1994, 1995). Other research examined issues of efficiency in terms of time and energy budgets relative to constraints of mobility and economic behavior (Torrence 1983, 1989; Jeske 1992; Kuhn 1995). When integrated with studies of tool and weapon design principles (Oswalt 1976; Bleed 1986), these studies began to make progress in understanding how prehistoric foragers adapted their lithic production to cope with problems of raw material availability influenced by mobility patterns and settlement systems, as well as anticipated technical needs stemming from recurring economic activities. Figure 2.4 presents a conceptual flowchart of the interrelationships among the variables responsible for structuring organizational variability within the archaeological record of technological systems. An important example of this type of approach is Steven Kuhn’s (1995) monograph, which presents an organizational perspective on Mousterian stone tool technology. In considering how issues of raw material economy and time budgeting influenced Mousterian knapping activities, Kuhn develops models of Neanderthal foraging activities and mobility patterns, isolating key differences with Upper Paleolithic foraging systems.
This line of research was most innovative in moving beyond strict attention to formal tool types, focusing instead on more subtle aspects of assemblage patterning. Using modern forager variability as a source for his models, Kuhn examines data concerning the characteristics of core reduction sequences, the intensity of tool retouch, and the transport of lithic raw materials in order to make inferences about the ways in which Neanderthals moved around the landscape and structured their resource acquisition strategies. Although this research does not speak directly to the issue of the cognitive or cultural sophistication of Neanderthals, it presents a much clearer picture of Mousterian foraging systems, and it demonstrates meaningful differences from subsequent Upper Paleolithic lifeways. This approach offers a tangible referential framework for constructing inferences about Mousterian foraging behavior based on lithic assemblages. Furthermore, it offers useful information with which
Figure 2.4 Flowchart diagramming the organizational interconnections between environmental dynamics, mobility and settlement systems, and technological characteristics
to build theoretical models for understanding Mousterian foraging patterns in ecological terms, as well as explanations of change over time.
Methods for Studying the Organization of Technology

The organizational approach seeks to make inferences about mobility patterns and resource acquisition strategies based on the designs of tools, the scheduling of episodes of tool manufacture and maintenance, patterns of lithic raw material collection, the transportation of tools, and the discard of lithics into the archaeological record at specific locations. In this respect, several distinct concerns are represented. On the one hand, design theory studies seek to understand the relationships between various tool design strategies and the economic contexts in which they operated. On the other hand, studies of assemblage composition may offer information concerning the spatial segregation of economic tasks, the structure of site use activities and shifts over space and time, and the nature of mobility systems. As somewhat separate concerns, studies of technological design and organization examine both the immediate and long-term patterns of technological problem solving.
Analyzing Tool and Weapon Design

In many respects, the characterization of prehistoric tool and weapon designs turns out to be among the most difficult of analytical tasks. The reason for this is that the relationship between the various elements of stone tool assemblages and the whole tools used by prehistoric peoples is often quite murky. The main problem is that organic remains rarely preserve, biasing considerations of design toward stone tools that may or may not have been the actual extractive interfaces with environmental resources (Kuhn 1995). Some flukes of preservation, such as the wooden tools discovered at Schöningen, Lehringen, Clacton, and Kalambo Falls (Movius 1950; Clark 2001; Thieme 2005), demonstrate that stone tool assemblages do not necessarily give us a good picture of the tools and weapons hominins used to extract food resources from the environment. Instead, stone tools were more commonly used for secondary processing tasks, such as animal butchery and the manufacture of other (mostly wooden) tools. Even when stone tools were the terminal tackle of weapons, such as stone projectile points, inferences of the relationships between weapon design and function are often problematic. For example, Shea’s (1997, 2006) work on Middle Paleolithic points demonstrates the difficulty of answering a seemingly basic question: did Neanderthals use stone-tipped spears? More recently, a spate of South African use-wear, damage morphology, and residue studies have made great strides in assessing issues of MSA weapon design and function patterns (Lombard 2005, 2006, 2007; Mohapi 2007; Pargeter 2007; Lombard and Pargeter 2008; Wadley and Mohapi 2008; Villa and Lenoir 2009, 2010; Wadley 2010). Such studies clearly frame the potential for these lines of research to elucidate strategies of weapon design.
Note, however, that such studies represent huge amounts of disciplined effort and that, in spite of this work, substantial ambiguities about the specifics of weapon design remain (McCall and Thomas 2012). In this section I consider the implications of handaxe design, arguing that these tools were versatile and multifunctional and that they were designed with these qualities in anticipation of unpredictable constellations of technical problems. Likewise, there are also significant implications of the origins of stone-tipped spears; this design shift from sharpened wooden spears represented a trade-off in enhancing certain kinds of functions while eliminating others.

Analyzing the Sequential Positioning of Debitage

The sequence of reduction and chaîne opératoire analytical perspectives are most useful in their capability of identifying the location of tools and debitage within sequences of core and tool reduction. Historically, the
organizational approach developed hand-in-hand with the sequence of reduction perspective on stone tools. In addition, as Sellet (1993) has observed, chaîne opératoire analytical constructs may be quite effective in meeting the methodological needs of the organizational approach. One of the novel and appealing aspects of the organizational approach is its attention to the location, timing, and scheduling of technological activities in order to make inferences concerning mobility and resource acquisition strategies. In this respect, chaîne opératoire methods are particularly useful, because the place of lithic pieces within sequences of core and tool reduction is inherently related to the issues of timing and scheduling of technological activities. Refitting, for example, is one of the mainstays of chaîne opératoire approaches and may offer several different kinds of key information with respect to the goals of the organizational approach. Refitting offers the most direct way of reconstructing actual core reduction sequences and of recognizing the place of individual debitage pieces within those sequences. While refitting has been criticized on occasion because of its use for inferring paleo-decision-making processes within structuralist archaeological traditions (Bar-Yosef and van Peer 2009), it offers many other kinds of information not having to do with what was actually on the minds of ancient knappers. As I have discussed previously (McCall 2010a), the frequency of refits within an assemblage relative to the total size of the assemblage (the refitting rate) varies according to many important cultural and geological dynamics of site formation. When compared over space and time, refitting rates are highest in later prehistoric contexts with longer residential occupations, such as during the Upper Paleolithic of Europe.
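The refitting rate just described is a simple proportion, and its comparative logic can be sketched in a few lines of Python. All counts below are hypothetical, invented purely for illustration:

```python
# Refitting rate: the share of an assemblage's pieces that participate in
# at least one refit set. All counts here are hypothetical.

def refitting_rate(refitted_pieces: int, assemblage_size: int) -> float:
    """Proportion of pieces that refit to at least one other piece."""
    if assemblage_size <= 0:
        raise ValueError("assemblage size must be positive")
    return refitted_pieces / assemblage_size

# A long residential occupation versus a heavily disturbed early site
# (invented counts in the spirit of the comparison in the text):
residential = refitting_rate(refitted_pieces=240, assemblage_size=1200)  # 0.2
disturbed = refitting_rate(refitted_pieces=6, assemblage_size=1500)      # 0.004
```

It is the comparison of such rates across assemblages, rather than any single value in isolation, that carries the interpretive weight.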
In contrast, ESA and Lower Paleolithic sites almost universally have very low refitting rates, a situation that likely points to nonresidential patterns of site use and/or severe geological disturbance of sites. In offering the opportunity to recognize the place of individual debitage pieces within sequences of core reduction, refitting facilitates the identification of what pieces may be missing from those sequences. First, identifying debitage resulting from various stages of core reduction allows for the characterization of the timing of knapping activities and for comparing assemblages in sequential terms. Second, missing pieces often represent stone tools that were transported away from their production site, in many cases as elements of composite technological systems or other elements of curated “personal gear” (Binford 1977b). Understanding the stages of core reduction represented by the debitage within lithic assemblages and what pieces were carried away from sites allows the construction of inferences concerning issues of site use dynamics, the scheduling of technological manufacture/maintenance episodes, the mobility and settlement systems that structured technological activities,
the goals of knapping activities, and the design qualities of tools. Together, these constitute key elements of the organizational approach. Methods other than refitting are also useful in the assessment of the sequential ordering of debitage and are therefore quite important to studies of technological organization. Dorsal flake surface cortex morphologies are one such source of information. Cortex is the rind on the surface of lithic nodules that may form through biochemical, geochemical, and/or weathering processes. What is important is that cortex marks the exterior of nodules and therefore, in general terms at least, occurs in higher frequencies earlier in sequences of core reduction. There are many different methods of classifying or quantifying cortex patterns (Whittaker 1994; Odell 2004; Andrefsky 2006). Among the most common are these: (1) the distinction between “primary flakes,” whose dorsal surface is completely covered by cortex, “secondary flakes,” whose dorsal surface is partially covered by cortex, and “tertiary flakes,” whose surface is completely devoid of cortex (Jeter 1977; Rodgers 1977; Sullivan and Rozen 1985); (2) the so-called Toth type approach, named for its inventor, Nicholas Toth (1982, 1985), which divides cortex patterns into six categories, effectively adding cortical and noncortical platform characteristics to the previous classification system; and (3) the quantification of cortex as a percentage of a flake’s dorsal surface (Sullivan and Rozen 1985; Amick and Mauldin 1989; Roth and Dibble 1998; Braun et al. 2008a; Marwick 2008; Lin et al. 2010). Opinions vary as to the relative effectiveness of these methods in terms of determining the actual sequential position of flakes (for example, Dibble et al. 2005), especially when variations in raw materials and techniques are considered. In addition to the techniques with which cores were reduced, raw material size and shape strongly affect cortex patterns.
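The tripartite scheme in (1) maps directly onto dorsal cortex percentages. The minimal sketch below follows the definitions just given for its 100%/0% cut-offs; the function name and inputs are illustrative:

```python
# A sketch of the primary/secondary/tertiary cortex classification,
# taking dorsal cortex coverage as a percentage of the flake's surface.

def classify_flake(dorsal_cortex_pct: float) -> str:
    """Classify a flake by dorsal cortex coverage (0-100%)."""
    if not 0 <= dorsal_cortex_pct <= 100:
        raise ValueError("percentage must lie in [0, 100]")
    if dorsal_cortex_pct == 100:
        return "primary"    # dorsal surface completely cortical
    if dorsal_cortex_pct > 0:
        return "secondary"  # dorsal surface partially cortical
    return "tertiary"       # no cortex: later in the reduction sequence

stages = [classify_flake(p) for p in (100, 45, 0)]
# stages == ["primary", "secondary", "tertiary"]
```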
In addition, many raw material types (especially those occurring in primary sources) may not have any cortex at all. In spite of these potential difficulties, Paleolithic cortex studies already have a long history of contribution to understanding patterns of technological organization. For example, in his seminal reassessment of Oldowan stone tools at Koobi Fora, Toth (1985) used his cortex classification system in combination with experimental knapping to identify what stages of core reduction were overrepresented or underrepresented at various sites. He found that the earliest stages of core reduction were present in higher-than-expected frequencies relative to those from later stages, suggesting that flakes from later stages were systematically transported away from the sites of their production. This study was instrumental in demonstrating that even Oldowan hominins transported lithics around the landscape and in providing key information concerning patterns of Oldowan technological organization (see also Braun et al. 2008a). This approach depends on the capability
of classifying debitage in terms of its place within sequences of core reduction. Striking platform faceting patterns are another source of information about the sequential place of debitage pieces. Once again, the assumption underlying this approach is that, as cores are reduced, flakes become increasingly likely to exhibit multifaceted platforms (Sullivan and Rozen 1985; Parry and Christenson 1987; Amick and Mauldin 1989; Bisson 1990; de la Torre 2003; Bradbury and Carr 2004; Odell 2004; Andrefsky 2006). Platform faceting morphologies are also dependent on the conditions of raw material collection and the techniques used to reduce cores. In fact, multifaceted platforms are frequently used to diagnose certain complex core reduction strategies, such as the Levallois technique (for instance, Bordes 1961; Boëda 1995; Deacon and Deacon 1999; Tryon, McBrearty, and Texier 2005). Recently, de la Torre and colleagues (2003) have used patterns of platform faceting to make inferences concerning knapping activities for Oldowan assemblages at Peninj, Tanzania. While I disagree with their argument that the Peninj patterns of platform faceting link those assemblages with Levallois-like knapping strategies, this study does underscore the utility of platform facets for making inferences about the sequential ordering of debitage. Many researchers have commented that there is no single “magic bullet” in terms of debitage analysis strategies3 (for example, Odell 2001, 2004; Andrefsky 2006, 2009; Braun et al. 2008a). The limitations of each of these methods for assessing the sequential positioning of debitage should be borne in mind and multimethod approaches utilized. It is also the case that the characteristics of debitage in terms of sequential ordering are not, by themselves, self-explanatory.
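To illustrate what a multimethod approach might look like in practice, the following sketch combines dorsal cortex coverage and platform facet counts into a coarse stage assessment. The two-signal scoring rule and all thresholds here are hypothetical and are not drawn from any of the studies cited above:

```python
# Hypothetical multimethod sketch: tally two independent lines of
# evidence (cortex and platform faceting) into an early/late-stage vote.
# Thresholds are invented for illustration only.

def stage_vote(dorsal_cortex_pct: float, platform_facets: int) -> str:
    early_signals = 0
    if dorsal_cortex_pct > 50:   # heavily cortical -> earlier in sequence
        early_signals += 1
    if platform_facets <= 1:     # plain platform -> earlier in sequence
        early_signals += 1
    if early_signals == 2:
        return "early"
    if early_signals == 0:
        return "late"
    return "indeterminate"      # the two lines of evidence disagree

flakes = [(90, 1), (10, 3), (60, 4), (0, 0)]
tally = [stage_vote(c, f) for c, f in flakes]
# tally == ["early", "late", "indeterminate", "indeterminate"]
```

The point of the "indeterminate" category is precisely the one made in the text: no single line of evidence is self-explanatory, and disagreements between methods are themselves informative.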
As with the inference of technological organization from the design strategies of tools and weapons, ethnoarchaeological observations of site-use dynamics and resulting archaeological patterning form a key referential framework (Binford 1977, 1978, 1979, 1980, 1981, 1986; Yellen 1977; Binford and O’Connell 1984; Simms and Heath 1990; Enloe, David, and Hare 1994; O’Connell 1995; Greaves 1997; Bamforth, Becker, and Hudson 2005; McCall 2007, 2012). However, it is clear that these various debitage analysis methods are capable of going far beyond the traditional attention paid to the formal characteristics of retouched tools and cores, providing crucial information with which to make inferences concerning the organization of technological systems.

Analyzing Patterns of Tool Reduction and Inferring Curation

Sequence of reduction methods have also made great contributions to the study of technological organization in the analysis of retouched or “formal” tools. In the North American tradition, this focus derived
from the prevalence of bifacial tools, especially projectile points, over the course of the continent’s prehistory (Holmes 1894; Kelly 1988; Whittaker 1994; Amick and Mauldin 1989; Carr and Bradbury 2011; see also Close 2006 for a critical perspective). From the initial explication of the organizational approach (for instance, Binford 1973, 1977b), there has been a sense that retouched tools related to the concept of curation in particularly direct and recognizable ways. From the time of Holmes (1894), it has been understood that various tool forms were reduced through processes of retouch as tactics for resharpening dulled edges, repairing breaks, and other forms of tool recycling. With respect to the concept of curation, it has been assumed that reduction occurred as individuals retained retouched tools over time and transported them around the landscape. Thus, retouched tools generally fit the bill in terms of the Binfordian definition of curation, and intensity of retouch offers a characteristic that can be systematically measured in terms of interval-level data. Recently, this perspective has been taken to its extreme through the calculation of mathematical indices of reduction as proxies for curation. In rejecting key elements of the Binfordian definition of curation as virtually impossible to recognize in archaeological terms, Shott (1996, 2003) argues that curation should be redefined as the difference between the initial potential utility of a tool and its final potential utility on being discarded into the archaeological record. This view holds that (1) the extent to which a tool has been retouched and reduced can be measured through various techniques, (2) the original formal characteristics of that tool can be estimated in various ways based on its final form, and (3) these two measures can be used to calculate an index that may act as a proxy for a tool’s extent of curation.
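In the spirit of Shott’s reformulation, such an index can be expressed as the proportion of a tool’s initial potential utility consumed before discard. The sketch below uses mass as a stand-in for utility, which is a deliberate oversimplification: estimating initial mass from a tool’s final form is precisely the hard analytical problem, and the inputs here are hypothetical:

```python
# Sketch of a curation index in Shott's sense: the fraction of initial
# potential utility consumed before discard. Mass is used as a crude,
# hypothetical proxy for utility; real studies estimate original form
# from the discarded tool, which is the difficult step.

def curation_index(estimated_initial_mass_g: float,
                   discard_mass_g: float) -> float:
    if not 0 < discard_mass_g <= estimated_initial_mass_g:
        raise ValueError("discard mass must be positive and <= initial mass")
    return 1.0 - discard_mass_g / estimated_initial_mass_g

heavily_reduced = curation_index(40.0, 8.0)   # 0.8: most utility consumed
lightly_reduced = curation_index(40.0, 36.0)  # about 0.1: discarded nearly fresh
```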
From this vantage point, various researchers have proposed a broad array of methods for calculating reduction indices with an eye to the assessment of dynamics related to tool curation (Kuhn 1990; Clarkson 2002; Hiscock and Clarkson 2005; Shott and Sillitoe 2005; Shott and Weedman 2007; Marwick 2008; Horowitz 2010; Horowitz and McCall 2013). These indices are generally calculated using some combination of tool width, thickness, edge angle, flake scar ridge count, and length/size of retouched surfaces and are aimed at answering the seemingly basic question of how retouched a tool really is. I harbor some skepticism about the usefulness of these indices in the absence of other forms of information, such as the characteristics of associated debitage and the broader characteristics of assemblages as wholes (Horowitz 2010; McCall 2012; Horowitz and McCall 2013). In brief, I feel that the intensity of retouch that a tool exhibits says nothing directly about the nature and timing of that tool’s manufacture and
Stone Tool Technology and the Organizational Approach 85
maintenance episodes or its discard. One tool may be used intensively and retouched frequently in a short space of time and thus discarded with extensive retouch shortly after its manufacture. Another tool might be used and/or retouched infrequently but retained and transported for a long period of time after its manufacture. These two tools might score identically in terms of various retouch indices, but they obviously have very different properties with respect to the concept of curation and technological organization. In my view, patterns of retouch are not, by themselves, sufficient sources of information to assess curation. Instead, the curation of retouched tools is best understood through the relationship between their formal characteristics, the characteristics of assemblages with which they are associated, and the variability of assemblages through space and over time. Nonetheless, I fully agree with the proposition that the morphologies and frequencies of retouched tools and cores hold great potential for assessing patterns of technological organization. One way in which formal tools have been related to assemblage characteristics is the calculation of ratios of various types of cores, tools, and unmodified flakes. For example, many North American archaeologists have used biface/core ratios to assess mobility and settlement systems (Johnson 1986; Parry and Kelly 1987; Parry and Christenson 1987; Bamforth and Becker 2000; Bamforth 2003). Similarly, core/flake and tool/flake ratios have sometimes been used in the analysis of Paleolithic assemblages for similar purposes (Toth 1985; Roth and Dibble 1998; Villa and Soressi 2000; de la Torre et al. 2003; McCall 2009, 2010a, 2010b). The calculation of these types of ratios also holds benefits in terms of normalizing data for the purposes of comparing assemblages between different contexts. 
My experience has suggested that such ratios may be useful tools in assessing knapping dynamics evident within various archaeological contexts as a method of examining technological organization. The ratio of cores to flakes, which I have conventionally calculated as the percentage of cores from the total assemblage to avoid the possibility of dividing by zero (McCall 2006a, 2010a), may act as an indicator of the intensity of knapping activities and the frequency of the removal of end-products. If cores are more extensively reduced, this condition results in a higher frequency of flakes relative to cores. Likewise, if cores were removed from archaeological contexts (that is, transported elsewhere in anticipation of future technological activities), this fact may also result in a high frequency of flakes relative to cores. In contrast, higher frequencies of cores relative to flakes may occur if knapping sequences are shorter. Here, both contexts in which transported cores were discarded and those from which large numbers of flakes were removed would exhibit high
frequencies of cores relative to flakes. Thus, primary knapping contexts, wherein raw material was collected and tested (quarry sites), will tend to have high ratios of flakes relative to cores. Sites where transported cores were ultimately deposited and/or sites from which large quantities of debitage were removed tend to have high ratios of cores relative to flakes. While exact values of core/flake ratios vary according to the dynamics of raw material size and quality, the knapping techniques employed, and myriad aspects of site formation, they offer a productive tool for examining the types of knapping activities conducted in both sequential and organizational terms. The ratio of bifaces to nonbifacial cores also has a long history of use in the examination of mobility strategies and settlement systems, especially with regard to North American prehistory. This history is due to the fact that bifaces are usually understood as curated tools manufactured within the context of technological strategies for coping with challenges raised by high-frequency and/or long-distance mobility (Parry and Kelly 1987; Parry and Christenson 1987; Kelly 1988; Kelly and Todd 1988; Bamforth and Becker 2000; cf. Bamforth 2003). Contexts with high frequencies of bifaces relative to nonbifacial cores are typically viewed as locations where curated tools were discarded and where little knapping activity occurred. Contexts with high frequencies of nonbifacial cores relative to bifaces are usually understood as locations where repeated primary knapping activities occurred and where few curated tools were discarded. For example, in their oft-cited paper, Parry and Kelly (1987) argue that the disappearance of bifaces from the North American archaeological record and the concomitant dominance of expedient core-flake knapping practices resulted from a transition from mobile hunter-gatherer lifestyles to sedentary agricultural societies. 
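The arithmetic behind these assemblage ratios is simple, and spelling it out makes the zero-avoidance convention described above explicit. The sketch below uses hypothetical counts; the function names are my own, and, following the same logic, both measures are expressed as percentages of a combined total so that assemblages lacking one artifact class do not force division by zero.

```python
def core_percentage(n_cores: int, n_flakes: int) -> float:
    """Cores as a percentage of cores plus flakes (avoids division by
    zero when an assemblage contains no cores or no flakes)."""
    total = n_cores + n_flakes
    if total == 0:
        raise ValueError("empty assemblage")
    return 100.0 * n_cores / total

def biface_percentage(n_bifaces: int, n_nonbifacial_cores: int) -> float:
    """Bifaces as a percentage of bifaces plus nonbifacial cores."""
    total = n_bifaces + n_nonbifacial_cores
    if total == 0:
        raise ValueError("no bifaces or cores in assemblage")
    return 100.0 * n_bifaces / total

# Hypothetical assemblages illustrating the contrasts discussed above:
quarry = core_percentage(n_cores=5, n_flakes=495)    # 1.0  -> flakes dominate
dump = core_percentage(n_cores=40, n_flakes=160)     # 20.0 -> cores dominate
camp = biface_percentage(n_bifaces=12, n_nonbifacial_cores=3)  # 80.0
```

The tool/flake ratio discussed next works identically, substituting counts of retouched tools and unmodified flakes.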
As I argue in the next chapter, biface/core ratios also have strong implications for the organization of ESA and Lower Paleolithic technological systems in terms of tool manufacture, transport, and mobility patterns (see also McCall 2006a, 2009, 2010a, 2010b). Once again, however, biface/core ratios are useful only in certain archaeological contexts, excluding those in which bifaces were not routinely manufactured, and they are subject to the same recurrent cautions concerning raw material constraints. The ratio of retouched tools to flakes works according to a logic similar to that used for the ratio of bifaces to cores. Again, high frequencies of retouched tools relative to unmodified flakes are usually perceived as evidence of curated tools, whereas high frequencies of unmodified flakes relative to retouched tools are seen—rightly or wrongly—as an indication of primary knapping activities. Contexts with high tool/flake ratios are generally viewed as locations where curated tools were discarded and where few knapping activities occurred. In contrast, contexts with low
tool/flake ratios are seen as locations where cores were reduced and from which tools (ultimately discarded elsewhere) were transported. The interpretation of tool/flake ratios, therefore, generally mirrors that of biface/core ratios and is useful for archaeological contexts in which bifaces were not manufactured.

Analyzing Patterns of Lithic Raw Material Transport

Perhaps the most direct evidence concerning prehistoric mobility patterns comes from the transport of lithic raw materials itself. Archaeological sites frequently have lithic raw materials from multiple sources, and, since rocks cannot walk on their own, raw materials from distant sources are assumed to have been transported onto sites as elements of human movement. This source of information, however, is not without its share of controversy. The significance of raw material transport for the inference of mobility patterns was first recognized by Binford (1978, 1985; Binford and Stone 1986), who argued that raw material collection was "embedded" within other economic activities requiring movement around the landscape. In overview, the concept of embeddedness recognizes that it is primarily food resource acquisition that drives forager movement around the landscape. In contrast, the direct collection of raw materials through special long-distance trips is impractical due to the opportunity costs that such collection would represent relative to other economic activities. Thus, lithic raw material collection is connected with the organization of other economic activities and movements around the landscape. Binford (1978: 256) exemplifies this fact through an anecdote from his ethnoarchaeological research among the Nunamiut in Alaska:

A fishing party moves in to camp at Natvatrauk Lake. The days are very warm and fishing is slow, so some of the men may leave the others at the lake fishing while they visit a quarry on Nassaurak Mountain, 3.75 miles to the southeast.
They gather some material there and take it up on top of the mountain to reduce it to transportable cores. While making the cores they watch over a vast area of the Anaktuvuk valley for game. If no game is sighted, they return to the fishing camp with the cores. If fishing remains poor, they return to the residential camp from which the party originated, carrying the cores. Regardless of the distance of Nassaurak Mountain from the residential camp, what was the procurement cost of the cores? Essentially nothing, since the party carried home the lithics in lieu of the fish which they did not catch. They had transport potential, so they made the best use of it; the Eskimo say that only a fool comes home empty handed!
Binford also adds a quote from the Nunamiut hunter Jessie Ahgook, who stated: "Catch things when you can, if pass good stone for tools, pick
‘em up, if pass good wood for sled runner, catch ‘em then. Good man never think back and say, ‘If I had just pick ‘em up last summer!’" (1978: 258). These stories vividly illustrate the concept of embeddedness and its potential role in the study of technological organization.4 This perspective was vehemently challenged by Richard Gould (1980a, 1980b, 1985; Gould and Saggers 1985) in an exchange that has come to be known as the "righteous rocks" debate (see also McCall 2012). Based on his archaeological and ethnoarchaeological research in central Australia, Gould argued that lithic raw materials were transported across the landscape for symbolic reasons having to do with connections to place, kinship affiliations, and lines of ancestral descent. In particular, he pointed to an archaeological example in which exotic lithic raw materials appeared in moderate frequency despite the fact that local lithic raw materials were of superior quality. Gould also offers ethnographic accounts of stone tool transport in which individuals were reminded of connections to important places with special social or religious significance. Indeed, the belief that the appearance of exotic lithic raw materials at archaeological sites represents a form of symbolic activity remains remarkably pervasive, especially in South African and European Paleolithic circles (Deacon 1989; Johnson 1989; Wurz 1999, 2008; Coulson, Staurset, and Walker 2011; Schwendler 2012). Beyond such studies, which explicitly attach symbolic meaning to exotic stone types, there are countless others that simply ignore the possibility of embedded raw material collection altogether, instead coming to any number of other conclusions concerning direct procurement, trade, elite control of lithic resources, and so on, all of which are equally unlikely to have occurred among Pleistocene foragers and especially members of earlier hominin species.
At a basic level, this debate is impossible to resolve with any absolute certainty, because (in the absence of Mr. Peabody's "wayback" machine) we cannot make direct observations on the phenomenon of interest—how, where, and why prehistoric peoples collected raw materials in the way that they did. For my part, however, I would argue that embeddedness clearly remains the most parsimonious explanation for raw material acquisition and transport patterns. For one thing, while we may be impressed on a gut level with the beauty and potential value of exotic stones (for example, Coulson, Staurset, and Walker 2011), we have no assurance that prehistoric people valued them in the same way. We may wrongly bring our modern sensibilities about the aesthetic qualities of stone to bear on this issue. Except for modern knappers (like me) operating under circumstances of periodic mobility around the country, the concept of embedded lithic raw material collection is rather foreign and remote. In contrast, direct procurement strategies
dominate our daily lives, and the thought of making a special trip to collect pretty raw materials seems quite familiar and appealing to our current sensibilities. More to the point, there is also a substantial empirical basis in terms of ethnographic and ethnoarchaeological observations demonstrating that embedded raw material procurement has been a predominant practice, especially among foragers and other small-scale societies. In a recent paper (McCall 2012), I list a number of studies in which lithic raw material procurement is embedded within other economic practices. These cases include mobile foragers, such as those living in Australia and Siberia (Binford and O'Connell 1984; Binford 1986; Beyries 1997), who collect raw materials during both residential and logistical foraging trips. They also include sedentary agricultural and horticultural groups, such as those of New Guinea (White 1967; White and Thomas 1972; Strathern 1969; Sillitoe and Hardy 2003), who collect raw materials from fields during planting activities. In contrast, cases of direct lithic raw material procurement are exclusively associated with forms of specialized craft production, whereby full-time craft specialists collect stone either as a raw material for final products or as a source of tools for producing them. Such cases include Ethiopian hide processing (Weedman 2000, 2002, 2006; Arthur 2010) and New Guinea polished adze production (Hampton 1999; Stout 2002). In these cases, direct raw material procurement is driven by the recurrent demands of craft specialists and frequently involves specialized collection at considerable cost. In addition, archaeological studies have also demonstrated substantial differences in raw material transport between past peoples with various mobility patterns, settlement systems, and resource acquisition strategies.
For example, the predominant exploitation of locally available lithic raw material in combination with expedient knapping tactics during the Oldowan industry has been argued to indicate the production and use of stone tools at special activity areas (that is, for butchering faunal resources) and the limited transport of stone tools as curated items (Toth 1982, 1985; Potts 1988, 1991; Schick and Toth 1993; contra Braun et al. 2008b). Likewise, the presence of low frequencies of lithics made on exotic raw materials, especially in the form of (curated) retouched stone tools, has been tied to patterns of residential mobility in which foragers carried tools across the landscape as elements of "personal gear" (Binford and O'Connell 1984; Binford and Stone 1986; Kelly and Todd 1988; Kuhn 1994, 1995; Ambrose 1990, 2002, 2006; Andrefsky 1994; Amick 1996; Hiscock 1996; Odell 1996; Blades 2003; Brantingham 2003; McCall 2007, 2012). In such cases, the presence of exotic lithic raw materials, the nature of tool forms in which they appear at archaeological sites, their frequencies, and their
transport distances all provide important information about the nature of residential mobility patterns. There are also some rare cases in which extreme patterns of logistical mobility apparently resulted in high frequencies of debris made on exotic lithic raw materials, as well as extremely long transport distances. For example, the high frequency of silcrete from sources >40 km distant within the Howiesons Poort levels at the MSA site of Klasies River has been argued to indicate the employment of logistical trips targeting specific food resources over long distances from residential camps (Ambrose and Lorenz 1990; Ambrose 2002, 2006; McCall 2006b, 2007; contra Minichillo 2006). In addition, as Schild (1987) describes, several Magdalenian sites show surprisingly high frequencies of so-called chocolate flint from sources more than 200 km distant (see also Kuhn 1995). This pattern has been argued to indicate that Magdalenian foragers made extremely long logistical trips targeting seasonal caribou migrations, returning with core preforms made on flints available at the end-points of such trips, in much the same way as was suggested by Binford's (1978) anecdote just cited. In such cases, the transport of lithic raw materials may provide valuable clues concerning the ways in which past people moved around the landscape, structured their technological activities, and organized their resource exploitation strategies. In sum, although some researchers continue to doubt the value of exotic raw material transport for inferring mobility patterns, stemming from concerns about the actual embeddedness of lithic raw material collection, I believe that this source of information is crucially important to studies of technological organization (McCall 2006b, 2007, 2012; McCall and Thomas 2012).
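As a simple illustration of how such raw-material patterns might be summarized quantitatively, the sketch below computes the percentage of pieces made on exotic raw material and the maximum transport distance for an assemblage. The function name, the piece-to-source distances, and the 40 km "exotic" cutoff are hypothetical assumptions chosen only to echo the Klasies River example; nothing here reproduces actual site data.

```python
def transport_summary(source_distances_km, exotic_cutoff_km=40.0):
    """Summarize raw-material transport for one assemblage: the percent
    of pieces on exotic (distant) material and the maximum recorded
    piece-to-source distance."""
    if not source_distances_km:
        raise ValueError("empty assemblage")
    n_exotic = sum(1 for d in source_distances_km if d >= exotic_cutoff_km)
    return {
        "pct_exotic": 100.0 * n_exotic / len(source_distances_km),
        "max_distance_km": max(source_distances_km),
    }

# Hypothetical assemblage: mostly local material, a few distant pieces
# (roughly a third exotic, with a 220 km maximum transport distance).
summary = transport_summary([2, 2, 5, 5, 12, 12, 45, 60, 220])
```

Comparing such summaries across sites and levels is one way to operationalize the residential-versus-logistical contrasts discussed above.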
Furthermore, studies of raw material transport are considerably strengthened when tied to other aspects of technological organization, including the sequential approaches to the analysis of debitage and formal tools discussed in this chapter. Although raw material transport studies are often time-consuming and costly in terms of the documentation of the geological contexts of archaeological sites and potential raw material sources, their potential benefits far outweigh any such costs.
Conclusion

In spite of the inherent unfamiliarity of stone tools and their general absence from the cultural systems of the modern world, archaeologists have made great strides in learning about them and using them as important sources of archaeological inference. Today, much of the field has turned its attention to the documentation of various formal tool morphologies in increasing detail using advanced technologies for the
purposes of understanding the cognitive capabilities, the levels of skill of past knappers, or the culture-historical/evolutionary relationships between tool types. Although I do not denigrate these lines of research, I believe that stone tool assemblages hold great and largely unrealized potential to provide information concerning the kinds of economic activities people performed, the common technical problems they faced, the ways people moved around the landscape, and the nature of their broader cultural systems. In this way, lithic assemblages can offer key bodies of information with relevance to theoretical issues such as those inherent to the hunting-and-scavenging and modern human revolution debates. As the next two chapters demonstrate, studies conducted within the organization of technology framework allow us to address issues resulting from our central theoretical debates by moving beyond simplistic arguments about the sophistication or "modernity" of past knappers. As such, the analytical methods used in these chapters are not complicated; they do not rely on advanced technology, and the data they require are routinely collected as elements of most lithic analyses and commonly published in reports. Rather than focusing on the atomistic descriptions of the most eye-catching tool types and the invention of scenarios involving their roles in past societies, this approach requires creativity in considering more subtle aspects of assemblage patterning in ways that utilize our actualistic knowledge concerning knapping techniques and forager economic systems. If nothing else, ethnoarchaeological studies have consistently shown our tendency to focus on aspects of the archaeological record that are striking at the expense of subtler phenomena that often hold the richest information relative to our theoretical goals.
Indeed, given the inherent unfamiliarity of stone tools as a type of material culture in the modern world, it is also likely that we often choose to focus on aspects of stone tools that are most appealing to us for aesthetic reasons rather than examining features that may be most meaningful in terms of the questions we wish to ask of the archaeological record.
Chapter 3
The Organization of Early Stone Age Lithic Technology
The Acheulean industry has been one of the main foci of Paleolithic archaeology since the very inception of the discipline (for example, Boucher de Perthes 1864; Lubbock 1865; de Mortillet 1881). There is little doubt that it was the striking formal characteristics of Acheulean handaxes that caught the attention of 19th-century natural historians and incipient Paleolithic archaeologists, who were otherwise quite unfamiliar with stone tool technology. For researchers such as Boucher de Perthes, Acheulean handaxes were unequivocal markers of the presence of human ancestors in geological deposits dating to ancient epochs, effectively demonstrating the antiquity of human ancestors on Earth. In addition, for early students of the Paleolithic, aspects of the archaeological patterning of handaxes provided a gateway to more sophisticated field methods and analytical perspectives, helping to shape the origins of the modern discipline. For the better part of the last century, archaeological research on the Acheulean industry has generally focused either on issues of culture history/geography or the assessment of cognitive sophistication and knapping skill on the basis of handaxe morphologies (Breuil 1926; Movius 1948; Clark 1959, 1965; Coon 1962; Bordes 1968b; Gowlett 1984; Wynn and McGrew 1989; Kohn and Mithen 1999; Mithen 2003; Stout 2002; Lycett and Gowlett 2008; Lycett and von Cramon-Taubadel 2008; Hodgson 2009). I also have little doubt that these research
Before Modern Humans: New Perspectives on the African Stone Age by Grant S. McCall, 93–140 © 2015 Left Coast Press, Inc. All rights reserved. 93
trajectories resulted from the same aesthetic qualities of handaxes that so effectively caught the attention of the earliest Paleolithic archaeologists. Handaxes are tantalizing in the sense that they seem to imply that their early hominin makers were very much like us in terms of their intelligence, knapping skills, and qualities of social/ethnic identity; perhaps also in the possession of modern linguistic abilities and complex forms of social organization. While such studies are based on inherently provocative aspects of the Acheulean archaeological record, I firmly believe that they are problematic with respect to the construction of referential frameworks and are therefore doomed to retain a speculative character. Although great strides have been made in terms of assessing knapping skill from archaeological assemblages (Stout 2002; de la Torre et al. 2003; Delagnes and Roche 2005; Stout and Semaw 2006; Bamforth and Hicks 2008; Bleed 2008; Stout and Chaminade 2008; Stout et al. 2008; Davidson 2010), the ways in which individuals learned skills and the linkages with various levels of cognitive sophistication remain ambiguous. It is clear that Acheulean hominins possessed high levels of technical skill in producing elaborately thinned, symmetrical, and redundantly shaped handaxes, which continue to fascinate archaeologists today. However, it is equally clear to me that we bring a distinctively presentist bias to our views of skill, largely ignoring the possibility that Acheulean early hominins may have had radically different ways of learning to knap, which may have occurred in social contexts that are outside our modern imagination of human norms. In contrast, there are many aspects of Acheulean archaeological dynamics that also hold great potential in reconstructing the organization of foraging lifeways. 
As I discuss shortly, there have been a number of studies that have approached handaxes in functional terms, often in ways that articulate with the hunting-and-scavenging debate. By using inferred patterns of handaxe use, functional studies have sought to address the economic activities of early hominins in terms of carcass processing and tool manufacture behaviors. Although such studies added some much-needed perspective on the role of handaxes in early hominin prehistory, they frequently fell prey to the well-known pitfalls of functional analysis/typology and the reductive dynamics of stone tools discussed in the previous chapter. In addition, as with studies of handaxe morphology aimed at assessing skill and cognition, functional studies of handaxes largely ignored other aspects of assemblage composition with great potential for examining technological organization. Finally, a surprising number of functional studies of handaxes have come to what may be somewhat euphemistically described as “off-beat” conclusions, some of which have achieved surprising popularity in the Paleolithic literature.
Far fewer investigations have considered the broader patterning of the archaeological record of the Acheulean and contemporaneous stone tool industries in holistic ways and from the perspective of technological organization. Some striking aspects of assemblage patterning, such as the tendency of handaxes to either appear at sites in extremely high densities or to be largely absent, have been recognized from the inception of Paleolithic archaeology, though largely interpreted in culture-historical terms (for instance, Breuil 1926). In contrast, beginning in the 1970s a few studies began to consider patterns of Acheulean tool design, assemblage composition, landscape distribution, and dynamics of site use through the lens of technological organization (Jelinek 1977; Ohel 1979, 1987; Draper 1985; Binford 1987). Such studies made great strides toward recognizing the ways in which early hominin mobility patterns and site use dynamics influenced tool designs (especially handaxes), as well as the ways in which the scheduling of knapping activities and tool transport influenced patterns of archaeological distribution. Yet, they never quite succeeded in going beyond relatively simplistic considerations of tool function and generally failed to incorporate known dynamics of lithic reduction associated with the Acheulean industry. To wit, while the organizational approach has achieved widespread popularity in the analysis of technological systems in other times and places (see Carr and Bradbury 2011 and McCall 2012 for recent discussions), it has largely died out within modern research on the ESA and Lower Paleolithic. This chapter follows from my earlier work (McCall 2006a, 2010b, 2012) in attempting to revive applications of the organizational approach to the Acheulean and contemporaneous stone tool industries. 
Based on the analyses of Middle Pleistocene lithic assemblages in sub-Saharan Africa presented here, I argue that early hominins employed strategies of mobility, resource acquisition, and site use radically different from those known from both subsequent periods of Paleolithic prehistory and modern forager groups. I make the case that early hominins did not routinely occupy residential camps of the sort used by modern humans but rather moved around the landscape as social collectives within “routed foraging” mobility systems (Binford 1984), consuming resources at or near their sites of acquisition and sleeping in the nearest protected location with the onset of nightfall. In modeling this mobility pattern, I present data on primate and large-bodied carnivore foraging strategies, suggesting that the Acheulean early hominins shared important aspects with both. In short, the lithic data presented here show major organizational differences from subsequent periods in terms of the scheduling, transport, and discard of tools. Furthermore, these organizational dynamics can be easily linked with the striking patterns of tool design and reduction associated with Acheulean handaxes.
Previous Research in Acheulean Tool Function

By virtue of their impressive formal characteristics, Acheulean handaxes have invited speculation about their functions since the beginnings of this field of study. Even the term handaxe originated as a basic outcome of this functional speculation. It was only with the field's coming of age in the 1960s and 1970s that issues of function were pursued in systematic fashion. Initially, functional studies of the Acheulean focused on the association of handaxes with other archaeological features, especially the bones of large animals assumed to have been the prey of early hominin hunters, and concluded that handaxes were specialized tools for heavy-duty butchery tasks. The most famous of these were the arguments of Shipman and colleagues (1981) that the handaxes at Olorgesailie were used to butcher the remains of various pachyderms and also an extinct species of giant gelada baboon (Theropithecus oswaldi), whose bones are associated with extremely dense concentrations of handaxes (see also Binford and Todd 1982). In linking handaxes with specialized carcass-processing activities, Isaac (1977) argued that the patterns of the Acheulean archaeological record supported large game hunting as the key element of the Washburn-Isaac theoretical synthesis. Although Binford (1977a, 1981, 1983, 1984, 1985, 1987) stridently disagreed with almost every aspect of Isaac's (1977) interpretation of the archaeological remains at Olorgesailie and their implications for early hominin hunting/sophistication, he largely agreed that handaxes were specialized tools for carcass processing (see especially Binford 1987). Specifically, he saw the heavy-duty function associated with handaxes as facilitating the dismemberment of dried and desiccated carcasses associated with the "marginal scavenging" economic pattern.
Binford argued that early hominin scavengers manufactured handaxes as tools for violently hacking through the dried skin and connective tissues of carcasses abandoned in the heat and dryness of the arid zones of sub-Saharan Africa. Furthermore, Binford couched his critiques of arguments for specialized large game hunting at Olorgesailie in terms of its animal bone assemblages and geological processes of site formation, rather than focusing on the technological dynamics associated with handaxes. Thus, arguments for handaxes as specialized heavy-duty butchery tools gained popularity across a diverse theoretical landscape. The prominence of the view of handaxes as specialized heavy-duty butchery tools may also be seen in the prevalence of one of their alternative names: large cutting tools, or LCTs. As near as I can reconstruct, this term was coined by J. Desmond Clark (1964: 92) and was intended to circumvent the functional connotations inherent with the term handaxe. Unfortunately, it may well be that the LCT terminology merely
replaces one set of functional assumptions with another, based perhaps on only slightly more evidence. Although handaxes are sometimes large, and I have no doubt that they were frequently used for cutting tasks, I also believe that they were used for a wide variety of other activities, which may have involved equally important design considerations. Thus, I see little value in replacing the term handaxe with LCT, since both would seem to hold as-yet unwarranted implications for tool function. Other more generalized views of handaxe function began to emerge in the 1970s (Jelinek 1977; Ohel 1979, 1987; Schick and Toth 1993; Whittaker and McCall 2001; McCall and Whittaker 2007; Nowell and Chang 2009; Tryon and Potts 2011). Based largely on increasing experimentation in terms of both manufacturing and using handaxes (Newcomer 1971; Jones 1980; Keeley 1980; Keeley and Toth 1981; Schick and Toth 1993; Backwell and d'Errico 2001), these perspectives began to recognize the wide range of tasks for which the large bifaces of the Acheulean industry may be used. From this perspective, Kathy Schick and Nicholas Toth (1993: 258) deemed handaxes the Swiss army knives of the Paleolithic. Experimental studies demonstrating the wide range of potential handaxe functions continue to proliferate. Very early in this debate, use-wear analysis pioneer Lawrence Keeley (1980) posed some interesting problems concerning handaxe functions: he observed that sharp flakes of the sort common to the Acheulean industry (and the Oldowan industry, as well as most other periods of the Stone Age) were frequently used for butchery tasks and exhibited patterns of meat and bone polish very similar to those on handaxes, as indicated by his (albeit small) sample of handaxes from Hoxne. Keeley felt that it was probable either that (1) some handaxes had other functions not manifested in his analysis, (2) a set of nonfunctional dynamics was the main influence on handaxe design, or both.
Indeed, Keeley (1980) ultimately argued for a range of functions based on the use-wear patterns of the Hoxne handaxes. This observation is remarkably incisive in recognizing both the generalized utility of handaxes and the probable influence of organizational rather than purely functional dynamics on their design. Subsequent use-wear and micro-residue studies have provided some confirmation of this line of speculation. For example, Keeley and Toth (1981) later found multiple types of use-wear traces on handaxes from Koobi Fora, including butchery and the modification of both hard and soft plant tissues. In addition, Manuel Domínguez-Rodrigo and colleagues (2001) have found phytolith residues presumably accumulated during woodworking activities on the cutting surfaces of handaxes from the early Acheulean site of Peninj. Similarly, Soressi and Hays (2003) document both butchering and woodworking use-wear on
98 Chapter 3
Mousterian of Acheulean Tradition (MAT) handaxes from southwestern France. Thus, although use-wear studies indicate that butchery was a common function of handaxes, there are certain cases in which butchery seems not to have been their exclusive purpose (Keeley 1980; Mitchell 1998), and there is considerable use-wear and micro-residue evidence supporting handaxe multifunctionality. More broadly stated, there are still surprisingly few use-wear and residue analysis studies on which to base our conclusions about handaxe function, and these are plagued by problems of postdepositional damage, raw material unsuitability, and the erasure of use-wear patterns by processes of tool reduction (Schick and Toth 1993; Whittaker and McCall 2001; Nowell and Chang 2009). It would seem unwise to accept handaxes as specialized butchery tools based on this set of evidence. This point brings me to the more off-beat models of handaxe function, some of which have gained surprising popularity. The most famous of these is the proposition that handaxes were used as projectile weapons, thrown into herds of fleeing animals in order to wound individuals and slow their flight. While the idea of handaxes as projectile weapons seems to have been proposed first by Jeffreys (1965), its most systematic support was offered by O'Brien (1981). This model suggested that handaxes were designed with specialized aerodynamic properties to facilitate throwing either overhand or, in cases of very large handaxes, in a manner similar to a modern discus. In her experiment, O'Brien (1981) showed that handaxes thrown in both ways had a tendency to land edge-on in a way that would be damaging to any unfortunate creature that happened to be in their path (see also Samson 2006).
As John Whittaker and I have discussed (Whittaker and McCall 2001; McCall and Whittaker 2007), this model would seem highly unlikely in light of both the variation represented in modern hunter-gatherer projectile weapon technologies and the complex dynamics of Acheulean archaeological site formation. In addition, my experimentation with Whittaker suggests that handaxes thrown like discuses tend not to land edge-on, and it is impossible to throw handaxes this way with any degree of accuracy. Furthermore, the majority of handaxes would probably not do much damage to a large gregarious ungulate of the type suggested by O’Brien, except perhaps to make it angry. This Great Handaxe Hurling Debate might be nothing more than an amusing side note to the Acheulean handaxe literature were it not the case that several prominent cognitive theorists have incorporated handaxe hurling into their models. Among these, William Calvin (1993, 2002) is perhaps the most prominent, using his certitude that handaxes were specialized projectile weapons to argue for a close connection between
the coordination of the complex brain functions involved in throwing and the evolution of the human brain. In this context, the putative throwing of handaxes has become a legitimized component of certain kinds of narratives concerning brain evolution, early hominin cognitive sophistication, and hunting specialization. While Whittaker and I both agree that throwing has much to do with human brain evolution and that this is a productive line of research (McCall and Whittaker 2007), a significant element of Calvin’s evidence has been based on highly dubious conjectures about handaxe function. More generally, this type of argument shows how attractive such speculative models of artifact function can sometimes be to evolutionary theorists lacking a nuanced knowledge of Paleolithic prehistory, as well as the general public. A more widely cited scenario of handaxe function is that of Marek Kohn and Steven Mithen (1999), who argue that handaxes were manufactured by males as sexual signals. They propose that males who were capable of knapping large, effectively thinned, symmetrical, and otherwise aesthetically pleasing handaxes were able to increase their mating success by demonstrating to potential female mates their skills as foragers and tool-makers. As April Nowell and Melanie Chang (2009) have detailed, this scenario is riddled with both logical and factual problems in terms of the actual archaeological record. It is true, of course, that archaeologists tend to focus their attention on the most formally striking handaxes and to preferentially describe and illustrate them relative to less attractive examples (McPherron 2000; Whittaker and McCall 2001). 
Logically, this scenario is also completely speculative about the nature of social structures and cognitive capabilities of early hominin handaxe makers; it also makes dangerous assumptions concerning sex roles relative to stone tool manufacture largely based on modern predispositions.1 Although I suspect that most Paleolithic archaeologists do not firmly believe in the Kohn and Mithen (1999) model, it has gained a foothold in the Paleolithic literature and is often accepted rather uncritically by evolutionary theorists outside the field of archaeology (for example, Bickerton 2009). Although we might take certain evolutionary scenarios involving handaxes more seriously than others, what they have in common is that they combine observations about the outstanding qualities of handaxes with speculation about the behaviors of our Lower and Middle Pleistocene ancestors to produce evolutionary "just so" stories. In this respect, they share a number of fallacious views of the archaeological record of handaxes and lack appropriate referential frameworks with which to build archaeological inferences. Such eccentric scenarios clearly illustrate the fact that handaxes continue to appeal to our imaginations and, while their creativity is welcome, they frequently distract from
the exploration of subtler aspects of the archaeological patterning of handaxes that likely hold more accurate clues concerning early hominin behavior. Debates concerning handaxe function also illustrate the general difficulty of this line of investigation and its complexity relative to issues of manufacture and reduction. Despite a half century of research on the topic, handaxes just seem to resist simple assignment to discrete functional categories. For this reason, it seems that evolutionary scenarios based on putative specialized functions of handaxes are unlikely to be valid. Unfortunately, there are currently very few theoretical accounts of handaxes that are not based on one type of discrete function or another. Instead, there has been a general stagnation in thinking about this problem stemming from the nearly exclusive focus on the formal characteristics of handaxes at the expense of a more holistic understanding of their archaeological patterning. While functional analyses of handaxes have some obvious potential in speaking to the conduct of certain types of activities, theoretical scenarios based on various assumed specialized functions have greatly inhibited our abilities to recognize our misconceptions and to replace our ignorance with productive learning.
Organizational Approaches to Acheulean Handaxes

Certain aspects of the archaeological patterning of handaxes beyond their striking formal characteristics have caught the attention of Paleolithic archaeologists since the origins of the field. Perhaps the most important of these patterns is the tendency of Acheulean-era sites either to contain large numbers and high densities of handaxes or to lack them almost entirely. For example, Burkitt (1925) puzzled over the Clactonian industry in Western Europe, which he recognized as likely overlapping in time with various stages of the Acheulean industry. He also recognized the contemporaneity of such sites as Hoxne, which is rich in handaxes and is only a short distance from the type site of Clacton. Likewise, Breuil (1926) argued for contemporaneous Lower and Middle Paleolithic traditions that were characterized by high frequencies of bifaces or that lacked them altogether. He documented the frequent proximity of Acheulean and nonbifacial sites across Western Europe. Thus, this patterning in terms of Acheulean-era assemblage composition variability was known during even early attempts at culture-historical description. In Africa, sites contemporaneous with the Acheulean industry but largely lacking handaxes were also quickly recognized and frequently assigned to the Developed Oldowan industry (Tobias 1965; M. Leakey
1967; Isaac 1969; Gilead 1970). Here, the work of Mary Leakey at Olduvai Gorge was central to the recognition of stark differences in assemblage patterning between more-or-less contemporaneous sites. Once again, the identification of this industry rested primarily on the recognition of contemporaneous and proximate sites that either had high frequencies of handaxes or were characterized by much “cruder” and more expedient core forms. Interestingly, while the Clactonian nomenclature has now largely died out, the Developed Oldowan industry (now abbreviated as DO) continues to be a common subject of investigation (Braun et al. 2008a; Semaw et al. 2009; de la Torre 2011). Over the following half century, this patterning was generally explained in terms of what Binford (1987) calls “cultural geography.” Scholars such as Burkitt (1925) and Breuil (1926) saw the discrete characteristics of stone tool industries as stemming from racial and/or ethnic differences between contemporaneous Paleolithic hominin groups—a view later adopted by Bordes (1961) in his consideration of the temporal overlap and interdigitation of the Mousterian facies. Discrete contemporaneous stone tool industries were taken to indicate multiple different racial/ethnic groups occupying landscapes at the same time. Furthermore, in the case of Bordes’s work on the Mousterian, the existence of distinct ethnic, cultural, or racial divisions in the past was taken as a sign of the modernity and sophistication of past hominin species. The cultural geography perspective was also adopted and taken to an extreme by C. Garth Sampson (1974) in his consideration of the Acheulean and Developed Oldowan industries in Africa. Sampson went as far as to argue that these two industries were made by different species of hominins, with the Acheulean representing an advanced species of human-like hunters and the Developed Oldowan the product of an evolutionarily retarded australopithecine-like species of scavengers. 
Once again, as Shea (2010) observes, such views of Lower and Middle Pleistocene cultural and/or species geography have not completely fallen out of fashion in spite of substantial contradictory evidence. With the development of the New Archaeology in the late 1960s and 1970s, there was increasing skepticism about explanations of stone tool industries in cultural geographic terms (see Binford 1987 for discussion). In this respect, Arthur Jelinek (1977) was among the first Paleolithic archaeologists to seriously challenge the view of Lower and Middle Pleistocene industries as representing ethnic, racial, or species differences in prehistory. Jelinek's position rested on a number of important realizations about the technological dynamics associated with handaxes. First, he recognized that the various handaxe morphologies did not likely represent intentionally fabricated design types but instead represented different stages and trajectories of reduction. Second, Jelinek understood that,
while handaxes were useful objects in their own right, they were also a core form variant and were the sources of large quantities of sharp unmodified flakes. On the basis of ethnographic evidence, Jelinek argued that such sharp flakes may have been more frequently used than handaxes were and that flake production may have been a primary goal inherent within processes of handaxe manufacture. Finally, in making a functional argument similar to that proposed by Binford (1973) for the Mousterian, Jelinek suggested that the frequencies of different tool forms (including the presence or absence of bifaces) related to the types of activities conducted at various archaeological sites.2 Thus, Jelinek's work served as an important basis for examining the meaning of Lower and Middle Pleistocene assemblage variation. Around the same time, Milla Ohel (1979) offered the first true reinterpretation of contemporaneous biface and nonbiface assemblages in terms of technological organization. In discussing the relationship of the Clactonian and Acheulean industries of England, Ohel proposed that the handaxe-dominated Acheulean assemblages and Clactonian sites lacking handaxes represented alternative elements of a single technological system. Ohel went further in arguing that Clactonian sites were actually "preparatory areas" at which handaxes were roughed out and where other expedient tool forms were produced and used. Acheulean sites, in contrast, resulted from the use, reduction, and discard of handaxes at locations characterized by other forms of site use. In providing evidence for this scenario, Ohel was able to demonstrate that the flake assemblages at Clactonian sites are dominated by debris resulting from early stages of core reduction and that Acheulean sites tend to have less flake debris, deriving mostly from late stages of core reduction.
Thus, Ohel’s work confronted the view of Clactonian and Acheulean assemblages as representing distinct ethnic or cultural traditions and instead offered an account based on the organizational processes of tool manufacture, transport, and reduction. In a later paper, Ohel (1987) also began to consider handaxes more explicitly from the perspective of technological organization and tool design. Borrowing the design theory terminology from Bleed (1986), Ohel argued that handaxes were maintainable tools in the sense that they could be easily repaired in the field if dulled or damaged, they were useful for an extremely wide range of potential functions, and they had very long use-lives relative to other varieties of stone tools. Ohel suggested that hominins roughed out handaxe forms at lithic raw material sources and transported them around the landscape as curated tools, using them for a wide range of tasks and reducing them progressively over the course of their mobility rounds. This perspective made great progress in recognizing the links between handaxe design parameters, reduction
dynamics, mobility patterns, and the anticipation of a wide range of future tasks through curation of maintainable and multifunctional tools. Today, it is clear that Ohel's organizational approach to the Acheulean and contemporaneous nonbiface stone tool industries was quite ahead of its time. This prescience is reflected in the reactionary responses to these papers (see comments on Ohel's 1979 paper in Current Anthropology and also White 2000 for a more recent skeptical discussion), as well as the extremely low rate of citation over the course of the following decades. Part of the reason for the limited impact of Ohel's research may have had to do with its limited scope, focusing almost entirely on the relationship between the Clactonian and Acheulean industries and ignoring similar phenomena in Africa and Asia. It is also the case that Ohel was concerned mostly with examining the reality of the Clactonian industry as a culture-historical construct and really scratched only the surface of the implications of these organizational phenomena for mobility patterns, settlement systems, and economic activities. A much more widely recognized organizational approach to Acheulean handaxes is, of course, that of Binford (1987; see also Binford and O'Connell 1984). Recognizing the pervasiveness of contemporaneous Lower and Middle Pleistocene biface and nonbiface industries and eschewing explanations based on cultural geography, Binford proposed a model of technological organization based on obvious functional differences between handaxes and the "small tools" common to nonbiface industries. Binford argued that handaxe-dominated sites formed at locations where hominins scavenged the desiccated carcasses of large prey animals and that this activity was primarily performed by males.
In contrast, he suggested that small tool-dominated sites formed at locations where other smaller-scale economic activities occurred, such as the processing of various plant food materials. Again, Binford sees this pattern in terms of sex differences, arguing that such sites represented the activities of females. Thus, Binford explained major structural differences in Lower and Middle Pleistocene assemblage composition through sex-based divisions of foraging activities and site use patterns. Binford’s (1987) argument stressed the importance of patterns of assemblage composition relative to the formal characteristics of handaxes, as well as the likelihood that sites frequently occurred at locations that could not be defended as Washburn-Isaac home base sites. It also pointed out the untenable nature of conclusions having to do with past ethnic or cultural differences. Other aspects of this model, however, are somewhat more problematic in their inattention to aspects of lithic technological dynamics that were well known by this time. As discussed in the previous section, Binford’s model unduly rests on the assumption
of the functional specificity of handaxes as heavy-duty butchery tools. It also curiously ignores dynamics of handaxe reduction, which add a great deal of complexity to the characteristics of handaxe assemblages, and it neglects the fundamental importance of unmodified debitage both as an indicator of the nature of lithic manufacture activities and in terms of its use for various cutting tasks. In many ways, Binford's (1987) model fails to adopt many of the concepts of technological organization based on forager ethnoarchaeology that he himself created and defined. Instead, it seems frozen in time within the bounds of the earlier "functional argument" concerning Mousterian facies variability (Binford and Binford 1966; Binford 1973). Binford is quite justified in his criticism of previous research on the culture history and cultural geography of Lower and Middle Pleistocene industries, and he is quite correct in pointing out that early hominin lithic assemblages may have resulted from patterns of behavior that are well outside the range of variation represented by modern foragers. Furthermore, he makes a strong case for the methodological potential of the organizational approach in studying these problems. Yet, Binford's (1987) particular model of handaxe curation and use seems out of step with his other more sophisticated ethnoarchaeological research on forager technological organization. Little additional research on handaxes has been conducted within the organizational framework over the last two decades. For one thing, there was substantial debate from the late 1980s onward concerning the operationalization and application of concepts such as curation in sensible archaeological terms (for instance, Shott 1986, 1996; Nash 1996; Odell 1996).
In addition, the focus of handaxe research dramatically shifted from concern for various technological issues, including design, manufacture, use, reduction, and discard, to more popular questions of cognitive sophistication, skill, and the existence of various forms of cultural identity. This situation is so extreme that some of the most sophisticated modern thinking on handaxes as elements of technological systems primarily takes the form of rebuttals to highly dubious explanations of handaxes as markers of ethnic identity or sexual symbols (McNabb, Binyon, and Hazelwood 2004; Nowell and Chang 2009; Shea 2010). This chapter intends to show the continuing relevance of the organizational approach both to understanding the archaeological record of Acheulean handaxes and to developing better evolutionary theory from it. Functional specialization is an attractive form of explanation for certain aspects of handaxe patterning precisely because it is simple and comfortably supports a linear fabrication model of handaxe manufacture. This uncomplicated view is ideal for those who wish to
make inferences about Acheulean cognitive and cultural dynamics on the basis of the formal characteristics of handaxes. From this standpoint, the dynamics of reduction that constitute the core of Iain Davidson's (2002) "finished artefact fallacy" are merely obstacles to overcome in striving to look into the minds of Acheulean hominins. In contrast, rather than trying to control for problems associated with handaxe reduction dynamics, the organizational approach uses them as a crucial source of information about the operation of Acheulean technological systems. By making use of broader data sets, including variation in Acheulean assemblage composition as well as the intensity and trajectories of handaxe reduction, the organizational approach has tremendous potential to provide information about how Middle Pleistocene hominins moved around the landscape and used various types of archaeological sites.
Examining Acheulean Technological Organization with Archaeological Data

In further outlining the organizational approach to Acheulean lithic technology and moving it forward, I offer two case studies. The first is the well-studied case of Olorgesailie, Kenya (Leakey 1947; Isaac 1977; Potts, Behrensmeyer, and Ditchfield 1999). Here, Isaac's (1977) summary monograph includes a wealth of data resulting from excavations of numerous Acheulean localities, offering an ideal data set for exploring issues of Acheulean assemblage variability at the landscape scale, as well as associated dynamics of handaxe reduction. The second case study is that of the Acheulean archaeological sites along the southern portion of the Namib Desert coast of Namibia, in the isolated Sperrgebiet diamond mining area. Here, archaeological sites were excavated under the direction of Gudrun Corvinus (1983) in the late 1970s and early 1980s, resulting in the collection of a large number of contemporaneous assemblages ideal for the assessment of variability. Both aggregates of sites date to the Middle Pleistocene and therefore may be considered broadly contemporaneous. This fact is also important for the purposes of comparison and for considering the Middle Pleistocene Acheulean as a widespread phenomenon across sub-Saharan Africa.

Case Study 1: Olorgesailie

Olorgesailie is an astonishing set of Middle Pleistocene archaeological localities located in south-central Kenya near the border with Tanzania, most famous for its extremely large and dense accumulations of handaxes in certain places. Olorgesailie was discovered by the British geologist John Walter Gregory in 1919 and was first excavated under the
direction of the Leakeys in the mid-1940s (Leakey 1947). Later, Louis Leakey ceded directorship of excavations at Olorgesailie to Glynn Isaac, who conducted major fieldwork there during the mid-1960s as the basis of his doctoral dissertation research. This research ultimately culminated in the publication of Isaac's (1977) seminal monograph on the site's chronology, geological context, and the formal characteristics of its stone tool assemblages. Olorgesailie was subsequently reinvestigated by Richard Potts and colleagues (Potts 1989, 1994; Potts, Behrensmeyer, and Ditchfield 1999) in the early 1990s in order to help resolve ambiguities of site formation, as well as to establish the environmental contexts of the site's geological members and certain specific localities. By virtue of its striking archaeological record of Middle Pleistocene handaxe technology, its well-understood geological and environmental context, and its long history of archaeological research, Olorgesailie represents an ideal case study in which to examine the organization of Acheulean technology.

The Handaxe/Nonhandaxe Dichotomy at Olorgesailie

From the beginning, it was Olorgesailie's astonishing accumulations of handaxes that attracted the attention of elite scholars such as the Leakeys (1947), Isaac (1977), and Potts (1989, 1994; Potts, Behrensmeyer, and Ditchfield 1999). Understanding the implications of such large and dense accumulations of handaxes has represented a major research problem in terms of hominin evolutionary dynamics. For example, Isaac (1977) linked the Olorgesailie handaxe patterning with large game hunting activities, arguing that handaxes were specialized butchery implements and that their tendency to occur in large concentrations resulted from specialized butchery in domestic activity areas.
Isaac also recognized that many localities in the same (contemporaneous) geological members with dense handaxe sites largely lacked handaxes—the handaxe/nonhandaxe dichotomy discussed earlier in this chapter. One concentration of handaxes stands out as particularly large and dense: the striking DE/89 locality in the Main Site complex (see Figures 3.1 and 3.2). At DE/89, 260 m² have been excavated, exposing more than 600 bifaces. In Horizon B, 30% of all lithic artifacts are bifaces. Interestingly, this massive handaxe concentration is also associated with large numbers of bones from at least 43 individuals of the extinct giant gelada baboon (Theropithecus oswaldi; Shipman 1981; Binford and Todd 1982; Potts, Behrensmeyer, and Ditchfield 1999). Initially, Patricia Shipman and colleagues (1981) argued that this archaeological pattern resulted from the specialized hunting and butchery of giant gelada baboons using handaxes, although this interpretation has been
Figure 3.1 Handaxe concentration at the Main Site complex of Olorgesailie
Figure 3.2 (a) Handaxe collected by Jacques Boucher de Perthes from Abbeville housed at the Muséum d’Histoire Naturelle de Toulouse; (b) replica handaxe made on basalt by the author for use in butchery experimentation
challenged on the grounds of taphonomy, the absence of cut marks on the giant gelada bones, and the general unlikelihood of hunting such large and dangerous animals using simple technologies (Binford and Todd 1982; Potts 1989; Potts, Behrensmeyer, and Ditchfield 1999). However, later interpretations of the DE/89 patterning were
largely taken to corroborate Isaac's (1977) initial argument for handaxes as specialized butchery tools and for large handaxe concentrations as the result of butchering and food-sharing activities in domestic contexts. There are other large concentrations of handaxes at Olorgesailie. For example, the assemblages from the Mid and Meng localities in the Main Site complex are composed of 30% and 50% bifaces, respectively. Likewise, the assemblages from the H6 locality and the AM horizon of the H9 locality are composed of 20% and 14% bifaces, respectively. In contrast, the remainder of the assemblages reported by Isaac (1977) have less than 10% bifaces, which is all the more striking given that the Leakey and Isaac excavations specifically targeted dense concentrations of handaxes and largely ignored nonbiface sites. In addition, the excavations directed by Potts, which searched out occurrences of fine-grained sediments in the Olorgesailie formation rather than concentrations of handaxes (Potts, Behrensmeyer, and Ditchfield 1999), demonstrate a significant nonbiface component in the region. These excavations resulted in the collection of many thousands of lithic artifacts (the exact number is somewhat unclear in the resulting publications—for example, Potts, Behrensmeyer, and Ditchfield 1999) and only 20 handaxes. For instance, the Potts excavations at Site 15 resulted in the collection of 2,322 lithic artifacts and only 2 bifaces. Interestingly, this large and mostly nonbiface lithic assemblage is associated with a skeleton belonging to the extinct elephant species Elephas recki, which shows clear signs of having been butchered by hominins. Among other things, the more recent Potts excavations would seem to show that handaxes are much rarer in the Olorgesailie region than has traditionally been thought, excepting the large concentrations that have received so much attention.
Thus, as with many other Acheulean localities, the divide between sites characterized by large concentrations of handaxes and those largely lacking them is evident.

Site Formation Processes and Taphonomy at Olorgesailie

The taphonomy of the Olorgesailie Acheulean sites has been an important issue beginning with Isaac's (1977) monograph. In his review of this monograph, Binford (1977a) presented the first of his serious critiques of the hunting and home base use models on the grounds of site formation. Although Binford presented wide-ranging criticisms of Isaac's ambiguity in describing geological and taphonomic contexts for major sites, he specifically argued that the DE/89 locality (located in a braided fluvial paleo-channel of the Ol Keju Nyiro River; Figure 3.3) was the result of geological size sorting and differential winnowing of nonhandaxe artifacts, a point made by Isaac himself in his original monograph (see
Figure 3.3 Map of Olorgesailie localities discussed in the text
also Potts, Behrensmeyer, and Ditchfield 1999). This argument effectively challenged Isaac's inferences of hunting and home base site use based on the associations of stone tools and animal bones, with Binford attributing the former to alluvial transport and the latter to natural processes of bone accumulation. Beyond this early volley in the hunting-and-scavenging debate, Binford's critique was sufficient to call into question the hominin role in accumulating the dense concentrations of handaxes for which Olorgesailie became famous. This taphonomic critique and those that followed it served as catalysts for the later research of Potts and colleagues (Potts 1989, 1994; Potts, Behrensmeyer, and Ditchfield 1999), who set out to establish the geological processes of site formation and paleo-landscape conditions of various significant localities at Olorgesailie. This renewed fieldwork succeeded in resolving important aspects of this debate. First, it was able to demonstrate that fluvial transport and size sorting were not, in fact, major dynamics of assemblage formation at the Olorgesailie localities, including even the DE/89 mega-concentration, contradicting the opinions of both Isaac (1977) and Binford (1977a). This research confirmed that the handaxe/nonhandaxe dichotomy resulted from hominin activities rather than from alluvial geological processes. Second, Potts and colleagues were able to document quite different geological processes and paleo-landscape conditions associated with the various members of the Olorgesailie formation, a finding suggestive of correlations between the dynamics of stone tool accumulation and
110 Chapter 3
certain vegetation communities as manifested by paleosols, root casts, and isotopic evidence (see also Sikes, Potts, and Behrensmeyer 1999). Potts and colleagues (1999) offer a much more detailed account of the geological processes and landscape conditions associated with Members 1, 6, and 7 of the Olorgesailie formation, which contain the bulk of the ESA archaeological materials. This line of research suggests that upper Member 1, lower Member 6/upper Member 7, and lower Member 7 accumulated over relatively short time scales (likely within hundreds of years), challenging views of Olorgesailie artifact accumulations as long-term palimpsests. This research also suggested that the Member 1 archaeological localities were associated with a homogeneous grassland environment, while Members 6 and 7 were associated with alluvial and lacustrine features associated with the recession of Lake Magadi and the eventual stabilization of landscape features. Thus, the landscape-wide artifact scatter within Member 1 may have been associated with a relatively homogeneous environmental regime, whereas the highly clustered artifact concentrations are associated with patchier alluvial features connected with ephemeral fluvial systems and marshy riparian vegetation.

Reanalysis of the Olorgesailie Lithic Data

Although aspects of the Olorgesailie lithic data have been studied and discussed in great detail (such as in Isaac 1977; Noll 2000), one wonders whether aspects relevant to Acheulean technological organization have been overlooked up to this point. For example, Isaac's (1977) seminal publication focused mainly on the handaxe mega-concentrations, variability in handaxe morphology, and the relationship of the Acheulean lithic technology at Olorgesailie with other ESA sites in sub-Saharan Africa and beyond. All these kinds of data served as proxy sources concerning the complexity and cultural relationships of early hominins in the Lower-to-Middle Pleistocene.
In a different vein, Noll (2000) presents a reanalysis of the shape characteristics of the Olorgesailie handaxes, introducing important perspectives on the role of initial raw material characteristics (size, shape, geological source, and so on) in determining the ultimate shape of handaxes. In contrast with Isaac, Noll concludes that many of the strikingly redundant formal qualities of the Olorgesailie handaxes resulted from the commonality of initial raw material forms in combination with subsequent patterns of tool reduction, undermining views of handaxe shapes as related to complex cultural phenomena. Isaac's (1977) data also hold interesting information concerning dynamics of assemblage composition that are extremely useful for
thinking about issues of technological organization. Recently I conducted a multivariate analysis of these data using a variety of factor analysis called principal components analysis (PCA) as a method for examining the co-occurrence of various tool forms within assemblages (McCall 2012). As I have discussed in greater detail elsewhere (McCall 2006b, 2007, 2010c, 2012), PCA and other varieties of factor analysis are useful for reducing large data sets and examining technological dynamics at the assemblage level in terms of both relative artifact type frequencies and the various properties of artifacts (size, shape, composition, various linear dimensions, and so forth). The case for the utility of factor analysis in studying dynamics of technological organization was made early on by Binford (1977b) in his foundational description of Nunamiut hunting technological systems (see also Binford and Binford 1966; Binford 1978b, 1979, 1980). In arguing that technological systems are organized to address both immediate technical problems and anticipated future contingencies, Binford makes the case for studying assemblage composition on the following successively related grounds: (1) systems of foraging behavior structure the places and times at which tools are produced, used, repaired, retooled, and ultimately discarded; (2) these structural differences in the location and timing of various kinds of technological activities result in assemblages of debris with substantially divergent properties; (3) for these reasons, variations in the composition of contemporaneous assemblages may be important sources of evidence concerning various aspects of prehistoric economic systems, especially patterns of mobility, settlement, and site use; (4) this information may, in turn, be used to examine the ways in which foragers interacted with their environmental contexts, including issues of resource structure and seasonality.
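Binford's logic can be operationalized quite directly. The following sketch shows the textbook route to principal components for a site-by-artifact-type frequency matrix in plain numpy; the matrix values are invented for illustration and do not reproduce Isaac's (1977) counts.

```python
import numpy as np

# Hypothetical site-by-artifact-type matrix: each row is a site, each column
# the relative frequency of an artifact class. Values are invented for
# illustration and do not reproduce Isaac's (1977) counts.
X = np.array([
    [0.40, 0.05, 0.30, 0.25],
    [0.10, 0.50, 0.15, 0.25],
    [0.35, 0.10, 0.35, 0.20],
    [0.05, 0.55, 0.10, 0.30],
    [0.45, 0.05, 0.25, 0.25],
])

# Textbook PCA: eigendecomposition of the correlation matrix of the
# type frequencies (corrcoef standardizes the columns internally).
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]           # components by descending variance
eigvals = np.clip(eigvals[order], 0, None)  # guard against tiny negative roots
eigvecs = eigvecs[:, order]
loadings = eigvecs * np.sqrt(eigvals)       # loadings of each type on each PC
explained = eigvals / eigvals.sum()         # proportion of variance per PC
```

Types that load strongly on the same component tend to co-occur across assemblages, which is exactly the kind of structure Binford's argument predicts discard behavior should generate.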
Indeed, the PCA of Isaac’s (1977) Olorgesailie data shows some suggestive patterns in terms of assemblage composition (Tables 3.1, 3.2, and 3.3). As one might expect, the first principal component (PC 1) is dominated by various handaxe forms, excluding what Isaac refers to as “classic” handaxes. In addition, most other core forms and large retouched tool forms also load strongly on PC 1. Interestingly, most of the same core forms also load strongly on PC 2; in contrast with PC 1, however, it is mainly small retouched tool forms that load strongly on PC 2. Last, small and very small flakes (by which terms Isaac refers to normal-sized debitage to distinguish them from large flakes, which he mainly views as biface blanks) load on PC 3, along with “classic” handaxes.3 These results make a great deal of sense when considered from the perspectives of stone tool reduction and technological organization. From the standpoint of handaxe reduction, it is interesting to note that “classic” handaxes and other biface forms do not load on the same PCs.
Table 3.1 Rotated PC matrix for Olorgesailie lithic assemblage data

Rotated Component Matrix(a)

                                       Component
                                   1      2      3      4      5      6
handaxes                        .212   .356  -.200   .032   .179   .771
picks                           .963   .071   .196  -.019   .092   .080
chisels                        -.037  -.107  -.257  -.159   .880   .132
cleavers                        .991  -.022   .093  -.013  -.026  -.011
knives                          .971   .093   .140  -.049   .024   .049
broken handaxes                 .439   .286   .432   .177   .674  -.076
trihedral picks                 .878   .285   .153  -.032  -.084   .276
choppers                        .885   .279   .238  -.058  -.073   .170
core scrapers                   .320   .237   .649  -.108  -.065  -.253
large flake scrapers            .959   .209   .111   .022   .092  -.077
core bifaces                   -.039   .841  -.087  -.181   .062   .201
other large tools               .838   .363  -.018  -.021   .111  -.317
small scrapers                  .033   .928   .146   .083   .065  -.051
nosed scrapers                  .092   .748   .496   .187  -.025  -.083
other small tools               .460  -.281   .629  -.136  -.131   .007
spheroids                       .955   .102   .083   .026  -.056  -.028
large bifacial trimmed flakes   .896   .059   .164  -.109   .300   .059
large unifacial trimmed flakes -.052  -.055  -.033   .936  -.098   .030
small bifacial trimmed flakes  -.107   .695  -.192   .536   .023   .139
small unifacial trimmed flakes  .775   .266   .342   .119  -.165   .136
large broken trimmed flakes     .903   .085   .059   .055   .048  -.172
small broken trimmed flakes     .321   .833  -.202  -.062  -.237   .155
large flakes                    .768   .093   .230   .338   .372  -.107
small flakes                    .356   .795   .277   .086   .028  -.196
very small flakes               .251   .875   .143   .003  -.024   .157
flake fragments                 .356   .718  -.072   .055   .177  -.527
regular cores                   .359   .622   .646   .003   .038   .045
irregular cores                 .777   .429   .422   .021  -.094   .027
casual cores                    .527   .697   .422  -.018   .040   .099
core fragments                  .624   .349  -.063   .553   .120  -.310

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
(a) Rotation converged in 9 iterations.
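The rotation method reported in Table 3.1 is varimax with Kaiser normalization. As a point of reference, the core of the varimax algorithm is compact enough to sketch in plain numpy; this version omits the Kaiser (row) normalization step that packages such as SPSS apply by default, and the small loading matrix is invented for illustration.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix toward the varimax criterion.

    A plain-numpy sketch of Kaiser's classic algorithm, without the row
    (Kaiser) normalization applied by default in statistical packages.
    """
    p, k = loadings.shape
    R = np.eye(k)          # accumulated orthogonal rotation
    crit = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        if s.sum() < crit * (1 + tol):   # converged: criterion stopped growing
            break
        crit = s.sum()
    return loadings @ R

# Invented two-component loadings for four artifact classes.
L0 = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.3, 0.8]])
L_rot = varimax(L0)
```

Because the rotation is orthogonal, each variable's communality (its row sum of squared loadings) is unchanged; only the distribution of loading across components shifts toward a simpler, more interpretable structure.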
Reconsidering Isaac’s (1977) typology in terms of reduction, it is clear that “classic” handaxes represent initial forms and roughouts, whereas the other handaxe forms that load on PC 1 represent the outcome of reduction processes (see Isaac 1977: 123, Figure 40 A). In addition, it is evident that handaxes have a relationship with both other core forms and retouched tool forms (especially the large ones). This finding also links nonclassic handaxes with later stages of tool reduction, as well as
Table 3.2 Eigenvalues and percentages of variation explained for PCA of Olorgesailie lithic assemblage data

Total Variance Explained

           Initial Eigenvalues           Extraction Sums of          Rotation Sums of
                                         Squared Loadings            Squared Loadings
Component  Total   % of Var.  Cum. %     Total   % of Var.  Cum. %   Total   % of Var.  Cum. %
1          15.402  51.341     51.341     15.402  51.341     51.341   12.145  40.483     40.483
2           5.078  16.926     68.268      5.078  16.926     68.268    7.204  24.015     64.497
3           2.027   6.758     75.026      2.027   6.758     75.026    2.708   9.028     73.525
4           1.841   6.138     81.164      1.841   6.138     81.164    1.800   6.001     79.527
5           1.571   5.235     86.399      1.571   5.235     86.399    1.722   5.739     85.266
6           1.170   3.901     90.300      1.170   3.901     90.300    1.510   5.034     90.300

Extraction Method: Principal Component Analysis.
Table 3.3 List of variable clusters derived from principal components analysis of Olorgesailie stone tool typological data

Principal Component 1: Pick handaxes; Picks; Cleavers; Knives; Choppers; Regular cores; Irregular cores; Casual cores; Core fragments; Large flake scrapers; Large trimmed flakes; Broken large trimmed flakes; Spheroids; Other large tools

Principal Component 2: Regular cores; Casual cores; Core bifaces; Small simple scrapers; Small nosed and pointed tools; Small trimmed flakes; Broken small trimmed flakes

Principal Component 3: “Classic” handaxes; Small flakes; Very small flakes; Flake fragments

Principal Component 4: Core scrapers; Other small tools

Principal Component 5: Chisel handaxes; Broken handaxes

Principal Component 6: Large flakes
with the transport of cores around the landscape. Furthermore, there is an apparent distinction among sites in their frequencies of retouched tools and cores (or “flaked pieces,” to use Isaac’s language), with some sites having high frequencies of handaxes and others virtually lacking them. The other potential implication of this patterning is that the co-occurrence of “classic” handaxes and debitage resulted from the quarrying of lithic raw material, the roughing out of biface preforms, and the flaking of other core forms. In general, high frequencies of debitage relative to cores and retouched tools may be taken as a signal of raw material abundance (for instance, Parry and Kelly 1987), such as at a quarry site (see also Isaac 1977 and Potts, Behrensmeyer, and Ditchfield 1999 for specific discussions pertaining to Olorgesailie). In this scenario, I would argue that “classic” handaxes were initial handaxe forms that were abandoned or perhaps rejected at early stages of production/reduction, while other handaxes were transported away from raw material sources and ultimately discarded in their further-reduced forms. This patterning may be further elucidated with more specific regression analyses. Figure 3.4 shows the relationship between the frequencies of handaxes and small flakes (that is, debitage). This strongly negative
Figure 3.4 Graph showing the relationship between the percentage of small flakes from the total assemblage and the percentage of bifaces from the core assemblage at Olorgesailie
correlation also demonstrates that handaxes were transported away from the location of their initial production and are found in increasing frequencies at sites with low frequencies of flaking debris. Furthermore, this negative relationship with the frequency of debitage is exclusively associated with later reduction stage handaxes and not with other core forms. Isaac’s (1977) Olorgesailie data demonstrate that handaxes were discarded in reduced forms in separate locations from the places of their manufacture. This conclusion may not be earth-shattering; the role of transport in the tendency of handaxes to accumulate in large concentrations has long been recognized (Jelinek 1977; Ohel 1979, 1987; Binford 1987; Schick and Toth 1993; Potts, Behrensmeyer, and Ditchfield 1999; McPherron 2000; McCall 2010b, 2012). Yet it lends credence to the earlier models of the organization of handaxe technology put forward by Ohel (1979, 1987) and others. It also underscores the importance of the argument advanced by Noll (2000), who has demonstrated that the redundancy of handaxe forms at the large Olorgesailie concentrations resulted from common characteristics
of available lithic raw materials, in addition to similar trajectories of tool reduction. Finally, there is likely a major bias in Isaac’s (1977) Olorgesailie data: the Leakey and Isaac excavations specifically targeted the large concentrations of handaxes at the expense of other types of sites with lower frequencies of handaxes. As the later excavations of Potts and colleagues have demonstrated, the bulk of the Olorgesailie landscape comprises low-density lithic scatters that largely lack handaxes. If the results of the Potts excavations, which targeted fine-grained sediments rather than handaxe concentrations, were to be included in this analysis, it is very likely that they would further strengthen and clarify our knowledge with respect to the handaxe/nonhandaxe dichotomy, as well as the broader organizational dynamics associated with Middle Pleistocene technological behavior.

Reconstructing Site Types, Dynamics of Technological Organization, and Mobility Systems

Beyond the recognition of some key dynamics of handaxe transport, important questions remain about early hominin economic behavior and processes of site formation. To begin with, I have always been bothered by one question concerning the formation of large handaxe concentrations like those evident at Olorgesailie: archaeological handaxes, even those that have undergone substantial reduction, are very large, valuable pieces of lithic raw material with high levels of remaining utility; why were they discarded where and when they were and not exploited further? While the dynamics of transport have been known for a long time, the question of discard has remained problematic.
I also strongly believe that this is a question of great consequence since, as Binford’s (1977a, 1978, 1979) early explications of the organizational approach point out, the location and timing of tool discard are fundamentally structured by the dynamics of mobility and settlement systems that are the main subjects of this research. In many ways, patterns of handaxe discard defy many of our intuitive explanations. For example, one might expect handaxes to have been discarded at quarry sites where raw material suitable for manufacturing replacements was available. But as the previous analysis and information from many other major Acheulean site complexes make clear, this is simply not the case. Likewise, one might alternatively expect handaxes to have been abandoned at the locations of their use, especially if one favors specialized models of handaxe function such as heavy-duty butchery. In general, this also seems not to be the case. Large handaxe concentrations, such as DE/89, are not characterized by any direct evidence for use
behavior, though the taphonomy of such sites is frequently a serious problem. Furthermore, association with use locations would still not answer the most problematic aspect of this question of why individuals would abandon tools rich in remaining utility. In addressing these and related issues, it is useful to integrate information from Potts’s excavations, which hold finer-grained information concerning the dynamics of site formation. Reported in greatest detail in Potts and associates (1999), these excavations suggest a number of distinct types of sites, as well as the dynamics of technological organization driving assemblage characteristics.

1. Quarry Sites. Potts’s excavations succeeded in identifying several locations of raw material abundance with large accumulations of flaking debris, as well as evidence for handaxe manufacture. Perhaps the most significant of these is Site AD1-1, which Potts and colleagues (1999: 769) characterize as “the richest artefact concentration in Member 1.” Potts and colleagues also report collecting 1,700 numbered artifacts along with thousands of unnumbered debitage pieces, flake fragments, and angular fragments. They do not report the frequency of handaxes associated with AD1-1, which was presumably not very high, and it is unfortunate that the lithics from this site have not been published in any detail. In terms of its geological context, AD1-1 is located adjacent to the volcanic boulders of the so-called Lava Hump (Figure 3.5), which is a major source of lithic raw materials. There were also no associated faunal remains. Based on Isaac’s (1977) original descriptions and the additional geoarchaeological accounts of Potts and colleagues (1999), it also seems likely that the I3 site was a significant raw material source. Site I3 also has a very large accumulation of flaking debris relative to its frequency of handaxes, which make up only 3% of the total assemblage.
It is also interesting that I3 has the highest frequencies of both “classic” handaxes and core bifaces, which may be considered preforms or early-stage handaxe roughouts. I3 is also associated with virtually no faunal remains and is located on the southern edge of the Lava Tongue volcanic geological formation, which would also have served as a significant source of lithic raw materials. Finally, Potts and colleagues (1999) speculate that the foothills and uplands of Mt. Olorgesailie would have also been important sources of volcanic lithic raw materials. They point out the common occurrence of various volcanic stone tools on the rocky surfaces of the Mt. Olorgesailie uplands, which were not buried by the sediments of the Olorgesailie formation. While this geological circumstance makes it difficult to assess issues of chronology, Potts and associates argue that this upland area was
Figure 3.5 Map showing the location of the Gemsbok Acheulean sites near Oranjemund, Namibia
commonly exploited as a lithic raw material source by early hominins in the course of other economic activities.

2. Activity Areas. Potts and colleagues (1999) also report several interesting lithic artifact accumulations in association with animal bone assemblages showing evidence for modification by early hominins. Perhaps the most interesting of these is Site 15, which contains the remains of an individual of the extinct elephant species Elephas recki, as well as 2,322 lithic artifacts, including 2 bifaces. Potts and colleagues report clear instances of cut marks and percussion marks on the extinct elephant bones and note that a high frequency of the stone tools showed edge damage consistent with their use in this context. In addition, the lithic artifacts are tightly spatially clustered around the extinct elephant skeleton. While not reported in great detail, Potts and colleagues (1999) note that many of the flakes used in the butchery activities were bifacial thinning flakes, likely removed from the handaxes and other
discoidal core forms present at that site. In addition, they report the presence of artifacts originating from 17 distinct lithic raw material sources, mostly ranging from 300 m to 2.5 km distant. Based on this evidence, Potts and associates arrive at the reasonable conclusion that hominins were attracted to this site in a fluvial channel braid by the presence of the elephant carcass. They used the lithic technology that they carried with them from diverse sources to butcher the elephant carcass, including by removing additional flakes from bifaces and other cores. When early hominins left the site, they likely took the majority of handaxes and other productive cores with them, leaving the evident patterns of high frequencies of utilized debitage and reduced flakes, moderate frequencies of nonbiface cores, and low frequencies of handaxes. Once again, this site suggests that activity areas tended not to accumulate high frequencies of handaxes relative to other varieties of stone tools.

3. Large Handaxe Concentrations. The last major site type discussed here is the most important in terms of technological organization: the large accumulations of handaxes that have made Olorgesailie famous. While the most prominent of these is the DE/89 locality, there are several others (also primarily within the Main Site complex), such as the Mid, Meng, H/9, and Trial Trench M10 sites (not pictured in Figure 3.3). With the knowledge that these sites were the result of mostly hominin rather than geological processes, the major question remains of why they formed in the locations that they did. While many have argued that such large handaxe concentrations resulted from the repeated conduct of specialized activities, especially heavy-duty butchery, this explanation fails to hold much currency in the light of recent evidence.
For example, although Shipman (1981) linked the DE/89 handaxes with the butchery of the unusual giant gelada baboon faunal assemblage, there is no direct evidence of any relationship between the two phenomena. As Potts and colleagues (1999) observe, there is no evidence of hominin modification of the Theropithecus oswaldi bones in terms of cut marks or percussion marks, nor is there any meaningful spatial association between the lithic artifacts and bones of the sort evident at Site 15. In contrast, Potts and colleagues suggest that the two phenomena may have occurred in the same locality by virtue of some ecological condition (perhaps associated with a riparian vegetation corridor) that favored both giant gelada baboons (or, more important, their deaths) and the activities of early hominins. This creative explanation is attractive in terms of what is known of modern baboon ecology and what we may believe in terms of early hominin economic behavior.
While this explanation likely strikes at important environmental phenomena related to handaxe abandonment, it does little to explain why dead giant geladas accumulated in the same location as handaxes, since the giant geladas may not have had much say in where their bones were ultimately deposited. One possibility is that nonhominin carnivores targeting the giant geladas were attracted to the same environmental condition that caused hominins to deposit their handaxes. This account is also problematic, however, since there is little unequivocal evidence of large carnivore activity associated with the patterns of bone breakage at DE/89. Finally, it is also the case that the bones of other animal species occur at this locality in reasonably high frequencies, including terrestrial and aquatic fauna of all sizes. After decades of debate on the topic, it remains difficult to disprove the conclusion that the association of the Theropithecus oswaldi remains and handaxes at DE/89 was simply a taphonomic coincidence, and it seems possible that this line of investigation represents a red herring in attempting to understand large handaxe concentrations at this site and others. More important, the recent work of Potts and colleagues (1999) offers a much-improved viewpoint on the landscape characteristics associated with large handaxe concentrations such as DE/89. Their research demonstrates that such localities generally occur on the fringes of geological zones of raw material availability (for example, the Lava Hump, the Lava Tongue, and the foothills of Mt. Olorgesailie) and are often located at ecotones (for instance, the riparian vegetation corridor at DE/89). These facts are important for two major reasons: (1) As Potts and colleagues argue, they suggest that early hominins may have abandoned handaxes on the borders between regions with available lithic raw material resources and those lacking them.
Thus, when early hominins anticipated movement into zones of raw material availability, they dropped their handaxes at strategic locations on the landscape, effectively caching them for the purposes of future use. This measure would have saved individuals longer trips to raw material sources by creating artificial lithic accumulations in the form of handaxe concentrations. (2) The location of handaxe concentrations at ecotones would have maximized their utility relative to locations of various economic activities, essentially splitting the difference between various ecological zones. In addition, ecotones are themselves hotspots of food resource availability for a wide range of animal species and are often targeted by modern foragers by virtue of this fact. In short, these findings suggest that large handaxe accumulations were locations where early hominins dropped their handaxes, perhaps
anticipating movement into regions of more widely available lithic raw materials. They may or may not have been associated with substantial economic activity, and they were apparently not specialized activity areas or home bases. Finally, this information helps resolve the troubling question posed at the beginning of this section of why early hominins would have abandoned handaxes—valuable tools with large amounts of remaining lithic raw material and potential utility—at such enigmatic locations.

Case Study 2: The Namibian “Gemsbok” Acheulean

The patterns of technological organization evident with the Acheulean assemblages at Olorgesailie may also be found with other broadly contemporaneous assemblages in sub-Saharan Africa. Another (far less famous) instance of this type of patterning is the so-called Gemsbok Acheulean in the southwest corner of Namibia (Figure 3.6). This area in Namibia’s Sperrgebiet—the diamond mining “forbidden zone” controlled by the Consolidated Diamond Mining corporation—was intensively surveyed for archaeological and paleontological materials during the 1970s,4 and a number of major Acheulean sites were mitigated through excavation under the direction of Gudrun Corvinus (1983). These sites are likely somewhat younger than those at Olorgesailie, argued to date between 700 and 400 ka. The area was also the location of the discovery of an important early hominin fossil: the calotte of the
Figure 3.6 Small handaxe typical of the Gemsbok Acheulean from the Namib Desert, Namibia
so-called Orange River Man. This specimen likely belonged to a female individual of the species Homo heidelbergensis, perhaps dating to the later end of the time range just mentioned (Hannah Marsh, personal communication 2007). Although these assemblages exhibit many characteristics of later Acheulean assemblages in sub-Saharan Africa (such as smaller/thinner handaxes and a mix of centripetal core reduction techniques), the handaxe/nonhandaxe dichotomy is still strongly evident. Furthermore, Corvinus (1983) characterizes the lithic debitage associated with the excavated archaeological sites in greater detail than did Isaac (1977) at Olorgesailie. These data are directly germane to the assessment of issues of reduction within a sequential framework and provide even more information about the place of handaxes, especially those occurring in dense concentrations, within sequences of reduction.

Data Analysis

Corvinus’s (1983) excavations resulted in the collection of 7 putatively Acheulean-aged assemblages with sample sizes large enough to be considered here. This sample of sites is not large enough to warrant the use of multivariate statistics of the sort applied to the Olorgesailie samples. In addition, it even makes simple correlations between lithic variables problematic, at least in terms of the relative frequencies of artifact types between sites. Interestingly, however, many of these sorts of bivariate relationships relevant to the understanding of technological organization are strong enough to achieve statistical significance in spite of an effective sample size of 7. One of the main types of information recorded by Corvinus (1983) is the dorsal cortex morphology of the debitage. Figure 3.7 shows the relationship between the frequency of cortical flakes from the total flake assemblage and the frequency of handaxes from the total core assemblage (F = 9.6; p = 0.03).
At this point, it is worth asking whether this relationship might hold for all cores and cortical flakes (that is, not just for handaxes). Figure 3.8 shows the relationship between the frequency of cortical flakes from the total flake assemblage and the frequency of nonhandaxe cores, showing no relationship at all (F = 0.7; p = 0.64). Combined, these analyses show a fairly clear negative relationship between the frequencies of handaxes and cortical flakes associated with early stages of core reduction—as we might well expect based on the analysis of the Olorgesailie data presented earlier. Again, this pattern likely demonstrates that handaxes were transported away from the locations where they were initially produced and where large concentrations of early-stage core reduction debris accumulated; these may also be interpreted as locations near lithic raw material sources or
Figure 3.7 Graph showing the relationship between the percentage of cortical flakes and the percentage of bifaces from the core assemblage at the Gemsbok sites
quarry sites. Thus, Corvinus’s flake cortex data add an important detail in terms of examining patterns of Acheulean technological organization. In addition, the same relationships between handaxe concentrations and tool reduction may also be found through bivariate analysis of the Corvinus (1983) data. Figure 3.9 shows the relationship between the frequency of retouched flakes from the total flake assemblage and the frequency of handaxes from the total core assemblage (F = 6.3; p = 0.05). Although not quite achieving statistical significance owing to issues of sample size, this relationship confirms the association between handaxes and various retouched flake tools found with the multivariate analysis of the Olorgesailie data. In addition, a repeat of the same bivariate analysis with nonhandaxe cores again demonstrates no relationship with retouched tools (F = 0.1; p = 0.94). Once more, this outcome supports the finding that handaxe concentrations are associated with retouched flake tools that have undergone substantial reduction and were presumably transported around the landscape, away from their original location of manufacture.
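For readers who want to reproduce this style of test, the F and p values quoted here come from ordinary least-squares regression evaluated with 1 and n − 2 degrees of freedom. A minimal sketch with invented proportions for seven sites (not Corvinus's actual figures):

```python
import numpy as np
from scipy import stats

# Invented per-site proportions for n = 7 sites (illustrative only, not
# Corvinus's 1983 data).
cortical = np.array([0.45, 0.52, 0.58, 0.61, 0.66, 0.71, 0.76])
bifaces = np.array([0.55, 0.42, 0.33, 0.28, 0.20, 0.14, 0.05])

n = len(cortical)
slope, intercept = np.polyfit(cortical, bifaces, 1)   # OLS fit
pred = slope * cortical + intercept
ss_res = np.sum((bifaces - pred) ** 2)                # residual sum of squares
ss_tot = np.sum((bifaces - bifaces.mean()) ** 2)      # total sum of squares
F = (ss_tot - ss_res) / (ss_res / (n - 2))            # F with 1 and n-2 df
p = stats.f.sf(F, 1, n - 2)                           # upper-tail probability
```

With only seven cases the test has just 1 and 5 degrees of freedom, which is why even quite strong bivariate relationships in the Gemsbok data hover near the conventional 0.05 threshold.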
Figure 3.8 Graph showing the relationship between the percentage of cortical flakes and the percentage of cores from the total assemblage at the Gemsbok sites
In short, this cursory analysis of Gemsbok Acheulean lithic assemblage composition shows the same salient characteristics as the Olorgesailie assemblages, likely resulting from common patterns of technological organization. It also adds an important detail concerning the relationship between handaxes and sequences of core reduction as evidenced by flake cortex patterns.
Implications for Acheulean Technological Organization

Note that the Gemsbok Acheulean sites seem to be missing one important site type recognized by the recent fieldwork activities of Potts and colleagues (1999) at Olorgesailie: activity area sites such as Site 15. First, it is possible or even likely that such locations have been overlooked by virtue of the variable but generally poor preservation of faunal remains along the southern Namib coast. Second, as I argue shortly, it is possible that such sites occurred outside the geological zones selected for excavation (in other words, those bearing diamonds). Furthermore,
The Organization of Early Stone Age Lithic Technology 125
Figure 3.9 Graph showing the relationship between the percentage of retouched flakes from the total flake assemblage (x-axis) and the percentage of bifaces from the core assemblage (y-axis) at the Gemsbok sites
it is likely that such activity areas were located on stable or deflating geological surfaces, resulting in massive palimpsests of archaeological remains spanning perhaps hundreds of thousands of years. After all, much of the interpretation of Site 15 relies on its context within fine-grained sediments of the Olorgesailie Formation. No equivalent geological context exists for most of this region of coastal southwestern Namibia. With that said, there are some clear commonalities in the organization of Acheulean handaxe technology that may be observed between Olorgesailie and the Gemsbok Acheulean sites. The following list provides a preliminary model of Acheulean lithic production, transport, use, reduction, and discard. 1a. It is clear that handaxes were roughed out in locations of raw material availability and transported into zones of relative raw material scarcity. At Olorgesailie, this distribution is apparent in
the location of quarry sites replete with various kinds of knapping debris, including that likely resulting from handaxe manufacture, located adjacent to major lava outcrops and in the volcanic uplands around Mt. Olorgesailie. Among the Gemsbok Acheulean sites, the nonhandaxe sites are characterized by high frequencies of early-stage debitage and are located adjacent to raw material sources in the form of beach cobble deposits along the middle and lower fossilized beach zones. 1b. Although this chapter has focused on handaxes, it is also clear that expedient knapping and flake use took place at raw material occurrences. For example, the Lava Hump Site at Olorgesailie contains massive amounts of flake debris, including large quantities of angular fragments, likely resulting from expedient knapping. Likewise, based on the results of principal components analysis presented here, it seems likely that the combination of flakes and early-stage handaxes at PC 3 resulted from a combination of handaxe thinning and expedient flaking. In addition, the nonhandaxe Gemsbok sites also show reasonably high frequencies of cores and flakes, providing evidence of expedient knapping through the use of hard-hammer direct percussion. In this sense, such sites mirror the core reduction patterns evident at Oldowan sites, a fact that perhaps explains their frequent attribution to the Developed Oldowan industry in situations where they are demonstrably contemporaneous with neighboring Acheulean handaxe sites. 2a. It is also clear that handaxes were deposited in great quantities at the boundaries between geological zones of raw material availability and those largely lacking knappable raw material. At Olorgesailie, these locations tend to be associated with the fringes of volcanic outcrop surfaces, with DE/89 serving as the most famous example.
At the Gemsbok Acheulean sites, large handaxe concentrations are located mainly along the uppermost fossilized beach crests and the boundary between beaches with abundant knappable cobbles and the stable interior surfaces where raw material was much less abundant. As Potts and colleagues (1999) suggest, this behavior may be seen as a strategy of dropping or perhaps even caching handaxes, effectively extending the availability of raw material deposits into regions of raw material scarcity or unpredictability. It may also be viewed as a continuation of the Oldowan stone caching model proposed by Potts (1988) for Olduvai and other contemporaneous sites. 2b. Other nonhandaxe lithic artifacts were also transported and deposited at these large handaxe concentrations and likely also at sites with low frequencies of handaxes. This can be seen in the composition
of PC 1 from the analysis of the Olorgesailie data, which includes a wide range of retouched tools showing extensive reduction and various nonhandaxe core forms. In contrast, PC 2 includes numerous retouched/reduced tool forms and cores but largely lacks handaxes. This pattern is again consistent with certain descriptions of the Developed Oldowan industry (for example, Sampson 1974), which have focused on elevated frequencies of “crudely” retouched tools (in implicit contrast with “finely” retouched handaxes). It is also what Binford (1987) refers to as the “small tool” tradition, which he linked to female economic activities. Although I am skeptical of Binford’s sex-based division of labor hypothesis, the exact dynamics of technological organization responsible for this pattern remain unclear. Such “small tool” assemblages are harder to identify on the basis of the Gemsbok Acheulean data, but it is easy to see that the frequency of handaxes is directly correlated with other retouched tools, other core forms, and later-stage core reduction debris. This fact confirms the patterning indicated by PC 1 from the Olorgesailie data analysis and makes it clear that not just handaxes were being transported from raw material sources around the landscape. 2c. Based on both the Olorgesailie and Gemsbok Acheulean data, it seems quite possible that the locations of large handaxe concentrations corresponded with major ecotones as well as boundaries between geological zones. This correlation is clearest at Olorgesailie, where good information about paleoenvironmental conditions has been derived from fine-grained geoarchaeological studies of archaeological sites (Potts, Behrensmeyer, and Ditchfield 1999). For example, DE/89 was located in a corridor of riparian vegetation surrounded by grassland plains adjacent to the Lava Tongue volcanic outcrop.
While there is far less direct evidence concerning paleoenvironmental conditions from the Gemsbok Acheulean region, it seems likely that the handaxe concentrations were located at the nexus of active beaches, estuarine marshes, and the more arid interior. Such ecotonal locations would have been strategic in terms of handaxe caching, maximizing the availability of handaxes for a wide range of potential economic purposes in dense zones of resource availability. 2d. I also speculate that there was a generative feedback quality to the formation of large handaxe concentrations. It seems probable to me that as handaxe accumulations grew larger over time, they would have attracted further handaxe deposition in the future. Dropping handaxes at preexisting locations of handaxe concentration would have served a number of purposes: it would have allowed individuals to deposit handaxes at known locations,
which would have been easy to find and/or remember; in addition, larger accumulations of stone would have been more versatile and useful than smaller ones, effectively mimicking natural raw material sources on the landscape in terms of the capability of being used to produce a wide range of tool forms (including expedient flakes). Thus, as individuals encountered existing large accumulations of handaxes, they might have been drawn to drop their own handaxes in those locations, which would go a long way toward explaining why handaxes tended to be concentrated in such large and impressive assemblages. 3a. There is substantial evidence that handaxes were extensively used and reduced through secondary flaking. This study has shown that reduced handaxe forms tend to occur together in lithic assemblages and that handaxes were used at activity areas, such as Site 15 at Olorgesailie, as both tools and as sources of sharp flakes. In addition, the work of McPherron (2000), Noll (2000), and others has clearly shown that the bulk of variation in the formal characteristics of handaxes was derived from dynamics of reduction in combination with constraints of initial raw material characteristics. This set of facts, in combination with the occasional finding of handaxes quite far from their raw material sources, suggests that handaxes had extensive use-lives and were curated for significant periods of time. Thus, I argue that handaxes were elements of Binford’s (1978) “personal gear” and, as such, were carried around the landscape extensively and ultimately deposited at strategic locations when individuals were confident in their ability to find replacement raw materials. 3b. Other nonhandaxe lithic forms were also transported, used, and reduced before being deposited into the archaeological record. This fact is evident in the dynamics of flake transport known from many ESA contexts (for instance, Toth 1985), as well as the reduction of flakes into various retouched tool forms.
Among other things, it shows that handaxes were not a universal technology carried by every individual at all times. This pattern may suggest differential “tooling up” behavior based on the expectation of various future conditions. In other words, there were apparently circumstances in which flakes were transported instead of handaxes, perhaps in anticipation of conditions that would have favored flake use. This model is, of course, preliminary and far from comprehensive. However, it has implications in terms of both our understanding of early hominin behavioral variability and our methodological approaches for making inferences about it.
Implications for Early Hominin Settlement Systems, Mobility Patterns, and Economic Activities

In general, bifacial technologies from a wide range of times and places have been linked with patterns of long-distance and/or frequent mobility by virtue of (1) their qualities in terms of potential reduction, (2) their multifunctionality, (3) their versatility in terms of their capability of being reduced into a range of final forms, and (4) the large quantity of useful debitage produced in their manufacture (Parry and Kelly 1987; Kelly 1988; Kelly and Todd 1988; Nelson 1991; Jeske 1992; Hiscock 1994, 2009; Whittaker 1994; Odell 1996; Montet-White 2002; McCall 2007, 2012; Shott 2007). Obviously, the range of bifacial technologies belonging to the human career is incredibly diverse, and to consider them as identical phenomena would clearly be fallacious. As Bamforth (2003) has cogently observed, Paleoindian projectile points are not the same thing as handaxes, and we should not expect all their organizational systematics to operate in exactly the same manner. With that said, I continue to believe that there is much to be gained from a comparative view of bifacial technologies through the lens of technological organization, and I also remain convinced of a link between dynamics of mobility and bifacial technologies. Furthermore, I think this link is perhaps clearest in the organization of Acheulean handaxe technology. This chapter has documented evidence for the curation and transport of handaxes, as well as their deposition into certain strategic contexts. It is now time to begin the more difficult task of linking these dynamics with prehistoric mobility patterns and settlement systems within the organizational framework. To start with, it may be useful to consider some of the technological properties of handaxes as forms of bifacial technology. As discussed, handaxes are inherently multifunctional objects capable of a wide range of technical tasks.
There is ample evidence in terms of both experimentation and use-wear studies that handaxes really were the Swiss army knives of the ESA (Schick and Toth 1993). It is also fairly evident that handaxes did indeed act as cores in the field in terms of serving as sources of sharp flakes, adding significant dimensions to the existing multifunctionality of handaxes themselves. Thus, handaxes and the sharp flakes they produced were capable of addressing a broad constellation of technical problems that we may imagine faced early hominins. This set of functional dynamics is a clear example of what has been referred to previously as a “one-size-fits-all” technological strategy (Ambrose and Lorenz 1990; McCall 2007) with clear elements of Bleed’s (1986) concept of maintainability (see also Ohel 1987). This type of technological strategy has been commonly linked with conditions of unpredictability in terms of foraging tasks stemming from
environments with randomly and evenly distributed resources (Binford 1979, 2001; Bleed 1986; Kelly 1988; Odell 1996; Greaves 1997). In providing a vivid ethnoarchaeological example of this phenomenon, Greaves (1997) describes the use of bows and arrows among the Pumé of Venezuela. In this case, the Pumé take very few tools into the field during daily foraging trips—mainly their bows and a few arrows—in order to limit the bulk of tools carried, which would be cumbersome in the context of mobility. These few tools are then used for a remarkably wide range of tasks; arrows are used for cutting and probing activities, and bows are even used as clubs and digging sticks. Furthermore, Greaves shows that the number of tasks for which bows and arrows were used varies directly with the distance traveled during foraging trips. In contrast, more tools were not carried on longer trips even when wider ranges of activities were anticipated. This strategy relates to the environments in which the Pumé live, which are forests characterized by randomly and evenly distributed resources. In this way, the Pumé carry a small range of highly multifunctional tools because they cannot predict the tasks they may be performing in the field. Likewise, I think it is reasonable to infer that handaxes, as a multifunctional technology, were associated with early hominin foragers moving through environments in which economic tasks were unpredictable. The paleoenvironmental evidence from Olorgesailie provides some context and evidence for this viewpoint. Potts and colleagues (1999) suggest that the major archaeological sites at Olorgesailie, even those located in marshy lowlands and ephemeral fluvial channels, were flanked by arid grasslands or savannas. 
These environments are known to be characterized by randomly and evenly distributed resources; African foragers living in these environments, such as the Hadza and Ju/’hoansi, have served as classic ethnoarchaeological cases of multifunctional and/or maintainable technologies (for example, Bleed 1986). They seem like good candidates for the kinds of environments that may have fostered the production of Acheulean handaxe technology. Multifunctionality and maintainability are not, however, the only technological strategies associated with handaxes. Bifacial reduction has long been recognized as a technique for economizing lithic raw materials, and handaxes would seem to operate in this manner as well. In general, bifacial thinning works through the striking of platforms very close to the margins of cores. Combined with other bifacial thinning techniques, such as the manipulation of platform angles, the grinding of platforms, and the use of soft hammers, this approach results in the removal of thin, curving, and spreading flakes with small striking platforms that minimize the amount of loss to marginal edges (Whittaker 1994). Even in early African Acheulean contexts where hard-hammer percussion was
used exclusively (such as Koobi Fora), marginal percussion combined with discoidal core reduction represents an effective strategy for maintaining the effective cutting edge of handaxes while removing useful flakes. Furthermore, since handaxes (especially large specimens such as those typical of the Olorgesailie sites) represent large units of lithic raw material that have been essentially pretested for quality and internal flaws, the techniques used for handaxe reduction imply that these tools could have had very long use-lives. In short, handaxes could have lasted for long periods of time in contexts in which replacement raw materials were unavailable or unpredictable—a quality that handaxes would seem to share with bifacial technologies from other times and places. The raw material economizing character of handaxes is further enhanced by their capability of producing sharp flakes efficiently. From the time of Jelinek (1977), it has been recognized that processes of handaxe manufacture result in large quantities of usable debitage, which is especially true when the techniques of bifacial thinning are employed. The thin, spreading flakes with curved profiles typical of bifacial thinning are highly efficient in maximizing the cutting edge of a flake while minimizing its volume (Prasciunas 2007). Thus, bifacial thinning is an efficient debitage strategy in many distinct respects, in addition to offering an effective technique for maintaining the use-lives of bifaces themselves. Therefore, it seems likely that handaxes were a multifaceted technological strategy involving the transport of cores around the landscape that could efficiently produce debitage while also serving as highly effective tools in their own right.
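The edge-economy argument can be made concrete with a toy calculation of my own (it is not drawn from Prasciunas 2007): idealizing a flake as a disc, edge length per unit volume reduces algebraically to 4/(diameter × thickness), so a flake one-third as thick yields three times the usable edge from the same mass of stone.

```python
import math

def edge_per_volume(diameter_mm: float, thickness_mm: float) -> float:
    """Usable edge (circumference) per unit volume for a disc-shaped flake.

    A deliberately crude idealization: real flakes are not discs, but the
    ratio shows why thin, spreading flakes economize raw material.
    Algebraically this reduces to 4 / (diameter * thickness).
    """
    radius = diameter_mm / 2.0
    edge = math.pi * diameter_mm                    # circumference
    volume = math.pi * radius ** 2 * thickness_mm   # disc volume
    return edge / volume                            # mm of edge per mm^3

thin_flake = edge_per_volume(40, 4)     # thin bifacial thinning flake
thick_flake = edge_per_volume(40, 12)   # thicker hard-hammer flake
print(thin_flake / thick_flake)         # 3x the edge per unit of stone
```

The dimensions are invented for illustration; only the ratio matters, and it depends solely on relative thickness for flakes of equal diameter.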
Implications for Mobility Patterns and Settlement Systems

As discussed in this book’s introduction, the nature of early hominin mobility patterns, site use patterns, and settlement systems has been the subject of great debate within the field of paleoanthropology. Once upon a time, most models of early hominin site use focused on the use of home bases, the employment of modern forager-like mobility patterns, and the implications of these for the evolution of increasingly human anatomy and cultural behavior (Washburn and Lancaster 1968; Isaac 1978a; Marshall and Rose 1996). More recently, thanks in part to more sophisticated excavation techniques, new analytical methods, and more extensive actualistic research of various sorts, arguments for home base use by ESA early hominins have waned (Binford 1981, 1983, 1984, 1987; Potts 1988, 1991; Wynn and McGrew 1989; Wynn et al. 2011; Blumenschine, Whiten, and Hawkes 1991, 2012a, 2012b; Sept 1992; Potts, Behrensmeyer, and Ditchfield 1999; O’Connell, Hawkes, and Blurton Jones 1999; O’Connell et al. 2002; McCall 2006a, 2009). In
terms of the case studies presented here, none of the Olorgesailie sites has any indications of a home base site use pattern, especially in light of the information provided by the finer-grained fieldwork of Potts and colleagues (1999). Even the famous FLK 22 site, which has long been taken as the most secure example of a home base site and living floor, has been reconsidered. Recently, Blumenschine and colleagues (2012a) have shown that this site was the result of repeated use by hominins as a feeding location and that the high incidence of carnivore tooth marks alongside the remains of dead hominins indicates that this was an extremely dangerous location unfit for home base use. After more than a half century of research on the topic, most evidence seems to suggest that early hominins used sites and moved around the landscape in dramatically different ways than do modern human foragers. For these reasons, I believe that it may be worth discarding our expectations based on early Man the Hunter-era conceptions of human forager mobility. But if modern human patterns of site use and mobility fail to offer good models of early hominin activities, what alternatives are there? The first obvious answer is other nonhuman primate species—an approach that was pioneered by Washburn (for instance, Washburn and DeVore 1961). This approach makes sense because we humans are primates and our ancestors presumably had mobility and site use patterns similar to those known from other modern primates. In addition, given the close similarity between modern humans and the great apes, it may be an effective strategy to think of both early hominin and modern human mobility patterns as specialized variants of a more general ape baseline. One of the key characteristics of nonhuman primate mobility that Washburn (Washburn and Avis 1958; Washburn and DeVore 1961; Washburn and Lancaster 1968) identifies as different from that of modern foragers is the lack of home base site use patterns.
Across the tremendous variability in modern forager mobility and settlement systems, it is universal that groups maintain locations at which the majority of their members sleep, where various resources may be pooled, and where young, old, and otherwise less mobile individuals may stay during the day. Such home base sites may be used for dramatically varying periods of time but, among mobile hunter-gatherer groups, these time periods typically range between the scales of weeks and months (Kelly 1995; Binford 2001). Thus, among modern hunter-gatherers, foraging trips typically involve leaving a home base site, moving to some set of resources in the nearby environment, collecting these resources, and returning with them to the home base. This type of site use pattern is basically unknown among nonhuman primate species. For example, chimpanzees (which have perhaps the most complex and variable mobility patterns among nonhuman primates) typically
occupy nesting sites at night to reduce risk of predation (Goodall 1986; Boesch and Boesch-Achermann 2000; Reynolds 2005). During the day, they separate and move through the environment, either as individuals or small groups, while engaging in foraging activities. At the end of the day, they ultimately build new nesting sites together. A similar mobility pattern is practiced by various species of baboon (see also Altmann and Altmann 1970), which typically occupy some protected locale at night (such as sleeping cliffs), move through the environment while foraging during the day, and once more occupy a protected sleeping location at the end of the day. In these cases, groups tend not to occupy the same location for multiday periods; when it does occur, it is generally a coincidence. This type of mobility pattern has been referred to as “routed foraging” (Binford 1984: 259; see also Blumenschine, Whiten, and Hawkes 1991; Mithen 1991; Potts 1991, 1994; McCall 2006a; Langbroek 2012). As Washburn observed long ago, modern human forager home base site use represents a fundamental structural modification of primate mobility and settlement systems. One might well ask, if home base use is so effective for modern humans, why don’t other primate species use them as well? This is a problem tackled in greater detail later. For now, it is important to point out that the types of foraging patterns documented among chimpanzees and baboons have been shown to be highly efficient in dealing with certain types of spatial/temporal resource structures (Boesch and Boesch-Achermann 2000; Segal 2007; Bates and Byrne 2009; Schreier and Grove 2010). In general, the routed foraging mobility system optimizes the ability of primates to move quickly through foraging environments, and, by moving consumers to the locations of foraging resources, it eliminates the costs inherent in the transportation of food resources to home base sites.
Thus, the more appropriate question may be why our ancestors, at some point in the past, adopted the home base site use pattern, which holds certain significant costs and structural inefficiencies. At this point, we should consider Acheulean handaxe technological organization through the framework of nonhuman primate routed foraging mobility systems. Handaxes make a great deal of sense as a technology employed by individuals who do not have the option of retooling at home base sites, which is the norm for modern foragers. Individuals may have solved the problems of tool use-life, utility, and raw material economy raised by the routed foraging mobility system by carrying a tool capable of resolving a wide range of technical problems and also acting as a large tested core capable of producing useful flakes. In this model, I argue that hominins would have roughed out handaxes at raw material sources, reduced them while transporting them for extended periods during routed foraging-type movements, and then
strategically deposited them at caches when a return to areas of raw material availability was imminent. In short, handaxes may have been such a pervasive and long-lasting technology because they related to basic elements of hominin mobility and site use. Routed foraging mobility patterns, however, are not sufficient by themselves to explain the manufacture of handaxes. For one thing, it seems likely that if routed foraging mobility was common to Acheulean hominins, it would also have been in place during the Oldowan, before the development of handaxe technology. In addition, many regions of the Lower and Middle Pleistocene world with rich archaeological records, such as Southeast Asia, essentially lack handaxes. Other conditions must have been involved in stimulating the development of handaxe technology during the Acheulean in Africa and western Eurasia. Another important consideration is the length of foraging trips and the size of early hominin territories. Most nonhuman primate territories tend to be relatively small, especially for primates living in tropical regions with dense foraging resource concentrations. For example, chimpanzees and baboons tend to have territories of 25 km2 or less, though some extreme outliers are known for both species (Altmann and Altmann 1970; Goodall 1986). If we imagine Oldowan hominins (and their ancestors) living in chimpanzee-sized territories, it seems likely that problems of raw material availability would have been much less of an issue, since raw material sources would never have been very far from the locations of tool use. In fact, Potts (1988, 1991) has argued that Oldowan sites were stone caches in which hominins deposited stone from raw material sources located nearby (within several kilometers) at locations of commonly recurring economic activities.
Thus, the putative small territory sizes of certain early hominins did not necessitate handaxes as tools for dealing with longer-distance mobility within routed foraging systems. For a number of reasons, it seems quite likely that hominins progressively developed larger territory sizes over the course of the Pleistocene. To begin with, we know that all modern foragers, including those living in environments with dense resource concentrations, live in much larger territories than do other primate species (Kelly 1995; Binford 2001). For example, the Ju/’hoansi and Hadza, who live in arid grassland/savanna environments typical of the regions of sub-Saharan Africa in which major archaeological sites are located, have group territories ranging between around 100 and 500 km2, with some distinctively large outliers. Furthermore, hunter-gatherers with lower population densities living in extremely arid or Arctic conditions, such as the Australian Aranda and the Nunamiut of Alaska, may have group territory sizes ranging from 700 to 4,000 km2 or more. Indeed, very low population densities and
large territory sizes were likely pervasive among Pleistocene hominin groups. Why do modern human foragers have such large territories? There is, of course, a substantial literature on this topic, mostly stemming from the human behavioral ecology theoretical framework (see Binford 2001 for a lengthy review). Foragers have large territories for many reasons: since humans tend to eat higher-quality diets than other primate species and since such high-quality food resources tend to be spatially dispersed, large territories are necessary to maintain a secure subsistence base; foragers use large territories to buffer against the risk of resource failure, since larger territories hold more “back-up plan” options. Nonhuman primates generally focus on the consumption of low-quality foods that are densely and ubiquitously packed into small territories, while human foragers focus on higher-quality resources that are sparsely distributed and often clumped within large territories, while also being more subject to temporal variability and potential failure. Carnivory also clearly plays a significant role in determining territory size. One specific use of large modern human forager territories is the hunting and/or scavenging of prey animals. In comparison with plant food resources, prey animals are rarer and more sparsely distributed, in addition to moving at various spatial scales according to seasonality and other related factors. By way of comparison, other large-bodied carnivores have group territory sizes that are fairly comparable with those of human foragers in equivalent environments. For example, African lions (Panthera leo) have group territory sizes of around 300 km2 (though smaller in regions of high ungulate biomass, such as the Serengeti, and larger in regions of low ungulate biomass, such as the Kalahari; see papers in Skinner and Chimimba 2005).
Similarly, spotted hyena (Crocuta crocuta) territory sizes may range between 40 km2 in the game-rich Ngorongoro Crater and 1,000 km2 in the Kalahari (Mills and Hofer 1998). Alaskan grey wolves (Canis lupus), living in environments with extremely low ungulate biomass, may have group territory sizes exceeding 6,000 km2 (Mech and Boitani 2003). Thus, it seems highly probable that the transition to significant carnivory among early hominins would have stimulated the dramatic expansion of territory sizes, as well as the development of more extreme mobility patterns. In certain respects, large-bodied carnivores may actually provide better points of comparison for early hominin mobility patterns and territorial dynamics than other nonhuman primates. Human foragers and, at some point in the past, our hominin ancestors began making longer foraging trips than other primate species, resembling more closely in certain ways those of other large-bodied carnivores. For this reason, I contend that the origins of new subsistence strategies based
on the exploitation of higher-quality and more sparsely distributed resources induced the modification of existing mobility systems of the sort practiced by nonhuman primates by substantially lengthening daily foraging trips and expanding group territory sizes. In this way, we should perhaps consider viewing early hominin mobility systems as hybrids of those practiced by nonhuman primate species and those of large-bodied carnivores, not simply a more basic version of those practiced by modern human foragers. In this model, I argue that handaxes functioned as an element of a technological strategy designed to cope with problems raised by the lengthened and elaborated early hominin mobility patterns by providing tools with long use-lives capable of both being used for a wide range of tasks and operating as effective cores. In this sense, they may be viewed as a modification of existing Oldowan stone tool technologies designed to solve such problems raised by increased mobility and larger group territory sizes. Although this model clearly requires further analysis and testing, I offer it as a waypoint in examining the organization of early hominin technological systems.
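As a back-of-the-envelope illustration of what these territory sizes imply for daily movement (my own arithmetic, using the approximate areas cited above and assuming idealized circular territories), the minimum radius of a territory, a rough proxy for potential one-way foraging distance, scales with the square root of its area:

```python
import math

def circular_radius_km(area_km2: float) -> float:
    """Radius of a circle with the given area: r = sqrt(A / pi)."""
    return math.sqrt(area_km2 / math.pi)

# Approximate territory areas drawn from the comparisons in the text.
territories_km2 = {
    "chimpanzee/baboon group": 25,
    "Ju/'hoansi or Hadza group": 300,      # midpoint of the 100-500 km2 range
    "spotted hyena clan (Kalahari)": 1000,
    "Alaskan wolf pack": 6000,
}
for label, area in territories_km2.items():
    print(f"{label}: ~{circular_radius_km(area):.1f} km minimum radius")
```

Even under this crude idealization, moving from a 25 km2 primate territory (radius under 3 km) to carnivore-scale territories (radii of roughly 18 to 44 km) implies a qualitative change in daily mobility, which is the core of the argument above.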
Implications for Handaxes in Other Spatial and Temporal Contexts

As noted, a central feature of the archaeological patterning of Acheulean handaxes is their broad geographic and temporal distribution. For this reason, any model of Acheulean technological organization must recognize and incorporate the substantial variability implied by the ranges of times and environments in which handaxes may be found, as well as contemporaneous contexts in which they are not found. Although it is far beyond the scope of this book to provide a comprehensive comparative analysis of ESA and Lower Paleolithic industries across the entire tropical and temperate Old World, there are some archaeological phenomena that clearly relate to this model of handaxe technology and are worth briefly considering here. First, Lower and Middle Pleistocene handaxe technologies found outside core regions of sub-Saharan Africa represent the diverse ecological settings in which the Acheulean industry may be found. Some regions, such as the arid zones of southwestern Asia, may not have been tremendously different from the arid grasslands and savannas of sub-Saharan Africa. Acheulean faunal assemblages from important sites in the region have similar animal species, and it is fairly clear that environmental conditions were at least comparable. In fact, it has been argued that early hominins initially moved out of Africa as a part of the expansion of this African ecosystem (for example, Antón, Leonard,
and Robertson 2002). Other regions, however, may have represented substantially different ecological contexts. This condition is most true of the temperate zones of Ice Age Europe, which were characterized by much cooler, wetter, and more seasonal conditions. It is worth considering how such differing environments may have conditioned issues such as mobility, settlement, and technological organization. I think it is certainly worth considering that handaxe technology may have worked differently in the temperate environments of Europe than it did in the arid zones of sub-Saharan Africa. On the one hand, certain aspects of these higher-latitude environments may have further stimulated the necessity of handaxe technology. For example, among modern human hunter-gatherers (and virtually every other terrestrial animal species), there are strong direct relationships between mean annual temperature, above-ground plant productivity, and group territory size (Binford 2001). It seems likely that Acheulean hominins living in temperate Europe would have had even larger territories and moved in more extreme ways than those in the arid tropical regions of Africa and southwestern Asia. One might imagine a scenario in which hominins moved within an extreme system of routed foraging during winters when plant food resources were absent, making use of handaxe technology for reasons similar to the ones I have proposed in this chapter. On the other hand, greater seasonality and more rainfall would have had profound effects on foraging strategies and mobility systems, which must remain the subject of future discussions elsewhere—but it is necessary to point out the evident possibility that handaxe technology worked in dramatically different ways in these distinct ecological zones. Similarly, it is worth thinking about the regions of southeastern Asia that (for the most part) lack Acheulean handaxe technology—in other words, the issue of the so-called Movius line (Movius 1948).
This perplexing phenomenon has been explained in numerous ways over the last half century. Some researchers have focused on cultural geography or taxonomic differences between hominins with differing technical capabilities—in short, arguing that different species of hominin occupied southeastern Asia and made different types of technology. Others, taking a specialized view of handaxe functionality, have argued that either the hominins living in southeastern Asia did not engage in the kinds of activities requiring handaxes or they used alternative materials to manufacture cutting tools, such as bamboo (Schick and Toth 1993; Brumm 2010). Last, Movius (1944) himself even argued that southeastern Asia mostly lacks geological deposits with high-quality lithic raw materials necessary to produce handaxes. To date, none of these explanations has managed to gain overwhelming support, and the debate continues (see Lycett and Bae 2010 for a review).
According to the model presented in this chapter, I offer the possibility that some subtle differences in ecology between the two sides of the Movius line may account for this interesting archaeological pattern. In short, I wonder if certain environmental dynamics in southeastern Asia might have induced patterns of mobility and settlement that did not necessitate handaxe technologies. Specifically, it could be the case that the copious rainfall, high level of plant productivity, and low seasonality brought about by the monsoon weather systems endemic to southeastern Asia may have allowed hominins to live in smaller territories and travel shorter distances within routed foraging mobility systems. Thus, southeastern Asian hominins may have had virtually identical subsistence habits to those of Africa and Europe, while simply moving around the landscape according to slightly different resource acquisition strategies. Finally, my model has interesting implications for the changes in handaxe technology seen in the terminal Acheulean industries of sub-Saharan Africa at the close of the Middle Pleistocene, such as the Fauresmith industry of southern Africa. To start with, the characteristics of terminal Acheulean handaxes are rarely considered when researchers speculate about issues such as the specialization of handaxe function as, for example, heavy-duty butchery tools or projectile weapons (McCall and Whittaker 2007). However, when terminal Acheulean small handaxes are considered within the context of the ESA-to-MSA transition (see Underhill 2011 for a recent review), they have been thought to mark shifting functions toward more precise or delicate cutting tasks (for instance, Leakey 1950). Once again, this is an unsatisfying conclusion that offers few useful analytical implications.
When handaxes are viewed through the lens of lithic reduction and technological organization, the small handaxes of the terminal Acheulean may represent some important types of patterning. I propose the possibility that small handaxes represent the outcome of longer periods of curation and more extensive tool reduction, resulting from more extreme patterns of mobility. In other words, I suggest that terminal Acheulean handaxes may have been carried for longer periods and greater distances from raw material sources; during these periods of transport, the axes were used and reduced more, resulting in smaller final handaxe forms deposited into the archaeological record. Here, more sophisticated approaches to bifacial thinning involving platform preparation and the use of soft hammers—techniques common to the production of the finely thinned handaxes of the “classic” Middle Pleistocene Acheulean—would have facilitated the further reduction of handaxes into ever smaller forms. This scenario would help explain some problematic aspects of the archaeological record of the terminal Acheulean, such as variability in
the pervasiveness of the small handaxe type fossil over time and space (Underhill 2011). It also provides an interesting alternative vantage point on the interdigitation of handaxes, the use of the Levallois technique, the production of points, and other elements of technological mosaicism in the ESA-to-MSA transition (Tryon and McBrearty 2002; van Peer et al. 2003; Tryon, McBrearty, and Texier 2005; McBrearty and Tryon 2006; Tryon 2006). As I discuss in the next chapter, it may be the case that the trend toward this technological extreme in the terminal Acheulean foreshadowed some of the major technological reorganizations represented within the ESA-to-MSA transition. My greater point is that a monolithic view of Acheulean technology that does not address its substantial spatial and temporal variability does us little good at this point. From my vantage point, incorporating spatial and temporal variability holds the key to scientific theory building in terms of understanding Acheulean technology and the forms of behavior in which it operated. Furthermore, theoretical scenarios based on beliefs that the Acheulean industry worked the same way over such large spans of space and time amount to deeply flawed empirical generalizations. The same sentiment applies to attempts to deal with Lower and Middle Pleistocene technological variability by naming a multitude of contemporaneous cultures or industries, such as the Clactonian, the Developed Oldowan, the East Asian Chopping Tool complex, or whatever other terms we wish to apply to ESA/Lower Paleolithic sites that lack bifaces. Culture historical generalizations of this sort have served only as impediments to understanding the broad range of early hominin behavior and the ways in which such behavior varied according to environmental and demographic conditions. A brighter future lies in facing such spatial and temporal variability head on and using it as the basis for better theory building.
Concluding Thoughts on the Archaeology of Acheulean Handaxes

This chapter has reviewed historical approaches to the archaeology of Acheulean handaxes and examined them through the perspective of the organization of technology. Because of their appealing formal qualities in terms of symmetry, shape redundancy, and elaborate bifacial thinning, handaxes have for two centuries inspired the imaginations of archaeologists thinking about our early hominin ancestors. Archaeologists have viewed handaxes in many ways: as highly sophisticated and specialized tools; as the result of knapping techniques requiring complex social/linguistic structures of teaching and learning; as markers of cultural, ethnic, or species affiliation; as potential stores
of symbolic information and/or aesthetic content; and even as devices for male sexual signaling to potential female mates. Although diverse, these views share the general implication of framing early hominins as culturally and cognitively sophisticated and generally human-like in their social and economic behavior. Thus, such scenarios have been articulated in major theoretical debates on hominin evolution, including those underlying the hunting-and-scavenging debate. However, after two centuries of study, there remains a basic dissatisfaction with views based on the putative cognitive or cultural constructs of early hominins. Here, Binford (for example, 1983) might point out that we fundamentally cannot know what was on the minds of past peoples, and this assertion is likely even more apt for Acheulean hominins than for hominins of more recent prehistory. Even if we eschew such confrontational rhetorical tactics, substantive conclusions concerning the sophistication of early hominins based on handaxes are not at all self-evident, and referential frameworks for making inferences about such issues remain (almost) totally lacking. On the one hand, it is common to hear archaeologists wonder how the striking archaeological patterning of Acheulean handaxes could not imply human-like cultural or cognitive sophistication. On the other hand, such views are based more on presentist assumptions concerning the relationships between symbolic systems and modern human material culture than on any real archaeological evidence. For me, at least, these views of Acheulean handaxe technology remain ambiguous at best. One of the central themes of this book is the attempt to free ourselves from the stagnant debate over the question of early hominin cultural/cognitive sophistication and to shift our focus toward understanding early hominin behavior on its own terms and its implications for associated economic practices, social systems, and evolutionary dynamics.
In terms of Acheulean handaxes, there are clearly aspects of their archaeological patterning that hold a great deal of useful information about foraging strategies and mobility systems, issues that have historically played second fiddle to flashier questions of cognition and culture. This chapter has demonstrated that Acheulean archaeological assemblages hold great unrealized potential for speaking to issues of foraging behavioral ecology. However, to better understand the lifeways of our early hominin ancestors, we must jettison the baggage of attempting to either prove or disprove their humanity or modernity.
Chapter 4
The Organization of Middle Stone Age (MSA) Lithic Technology
The African MSA has taken on a great deal of significance and has received dramatically increased attention over the last several decades. This situation is the result of several developments in our understanding of modern human origins: first, it is now known from multiple lines of evidence that anatomically modern humans emerged as a species early during the MSA; second, there have been many striking discoveries of symbolic objects and other archaeological phenomena once taken as markers of “behavioral modernity” during the later MSA; third, there has been an increasing recognition that early modern human populations throughout the MSA had diverse and complex economic practices, which were likely accompanied by equally diverse social systems. It is certainly the case that the pendulum of Paleolithic archaeological research has swung strongly toward the MSA of sub-Saharan Africa. This new attitude contrasts sharply with the traditional views of MSA technology, economic systems, and social behavior. Beginning with the initial definition of the MSA by Goodwin and van Riet Lowe (1929; see also Burkitt 1933), there has been a persistent sense that MSA forms of foraging behavior and technology were static over time and underdeveloped relative to the Eurasian Middle Paleolithic (see also Clark 1975). From this perspective, it became commonplace to consider the MSA as a manifestation of “archaic” or nonmodern behavior of the
Before Modern Humans: New Perspectives on the African Stone Age by Grant S. McCall, 141–186 © 2015 Left Coast Press, Inc. All rights reserved. 141
sorts typical of earlier hominin species and the Middle Paleolithic of Europe (for example, Binford 1984; Klein 1989; Klein and Edgar 2002; Mellars 1989; Bar-Yosef 2002; see McBrearty and Brooks 2000 for a longer discussion). Thus, studies of MSA lifeways languished and, when presented in print, generally served as a contrast for later manifestations of behavioral modernity in the Upper Paleolithic. Since the late 1990s, these views have largely melted away as archaeologists have considered the implications of the early symbolic objects and complex technologies of the later MSA. However, while certain aspects of our collective thinking about the MSA have improved, our conceptions of two phenomena remain underdeveloped. First, our understanding of the organization of MSA foraging behavior and its variability remains unfortunately dim. The MSA archaeological literature is currently populated by what I would characterize as atomistic studies of stone tool manufacture techniques, faunal assemblage characteristics, bone tools, symbolic objects, and so forth, while there is much less thinking about MSA foraging behavior in a holistic way. This characterization is especially true in terms of organizational issues such as the nature and variability of MSA settlement systems and mobility patterns, although this situation has begun to change. Second, much less significance has been ascribed to the transition from the Acheulean to the MSA in sub-Saharan Africa than to later MSA industries (cf. Tryon and McBrearty 2002; Tryon, McBrearty, and Texier 2005; Tryon 2006; van Peer et al. 2003; McBrearty and Tryon 2006; Herries 2011). Because of the proximity of this transition to the origins of anatomically modern humans in sub-Saharan Africa, the origins of the MSA would seem to be fertile ground for examining shifting economic strategies. 
This chapter departs from traditional perspectives and approaches in examining MSA lithic technology from the organizational perspective using case studies from the sites of Gademotta, Kulkuletti, and the Omo Kibish formation in Ethiopia. Using these case studies, I argue that early MSA lithic assemblage characteristics stemmed from settlement systems based on the occupation of residential sites. I also make the case that MSA hominins employed a residential foraging mobility strategy, making frequent residential moves and occupying residential sites for short periods of time relative to modern human foragers. In turn, I propose that these mobility and settlement patterns resulted from MSA hominins living in relatively low population densities and focusing their subsistence strategies on highly ranked food resources. In closing, I explore the implications of these patterns in terms of the significance of the transition from the Acheulean to the MSA, making two main arguments: (1) while hominins during both the Acheulean and early MSA focused on similar subsistence resources, MSA hominins foraged
more efficiently by employing a residential mobility strategy; (2) this shift in mobility patterns and settlement systems represented a major instance of early subsistence reorganization related to larger populations, the occupation of more marginal environments, and increasing foraging risk.
Historical Perspectives on MSA Stone Tool Assemblages

It is interesting to begin by contrasting the levels of interest in the Middle Paleolithic stone tool industries of Eurasia with those in the MSA of sub-Saharan Africa. By the close of the 1960s, there were voluminous studies of Middle Paleolithic assemblages, and analytical approaches had become sophisticated to the point that some fundamental issues of lithic technology, such as those debated by Bordes (1961) and the Binfords (1966; L. Binford 1973), could be addressed for the first time. In stark contrast, by this time there were only a handful of studies of MSA lithic assemblages, and they were generally framed as points of comparison for either earlier or later stone tool industries with more distinctive characteristics (for instance, Mason 1957, 1962). In addition, these studies remained strongly typological in their orientation and lacked much of the sophistication that characterized the contemporaneous debate over Eurasian Middle Paleolithic variability. It is also curious that studies of Middle Paleolithic and MSA stone tools were used to argue for diametrically opposed conclusions concerning the cognitive capabilities of Neanderthals in Eurasia and of African hominins. For example, Bordes (1961) argued that the variability manifested in the facies of the Mousterian was evidence of ethnic or cultural differences between Neanderthal groups, implying forms of social organization similar to those of modern humans. In contrast, the absence of any major industrial change over time during the MSA was taken as evidence for the archaic behavioral characteristics of hominins prior to any putative Upper Paleolithic or modern human revolution (for example, Klein 1989; contra Clark 1970). These contrasts seem even stranger when one considers the strong similarity between Middle Paleolithic and MSA stone tool industries. There are several likely reasons for this historical difference.
First, both the Middle Paleolithic and MSA are characterized by very low frequencies of retouched tools suitable for typological study, usually only a few percentage points of any given total assemblage. In a way, this made them a Rorschach test of sorts wherein European researchers, such as Bordes (1961), could express pride in their presumed Mousterian ancestors and wherein various scholars of the MSA in Africa could frame this period as a point of contrast with the later emergence
of behavioral modernity. Second, in reviewing early scholarship on the MSA, one notices that conceptions of its chronology were very poorly developed until the 1990s, owing to the limitations of available dating technologies. Into the 1970s, there was a general consensus that the MSA was roughly contemporaneous with the Upper Paleolithic of Eurasia (for example, Clark 1970; Klein 1970). This view had the effect of making the MSA seem underdeveloped for its age relative to the Paleolithic industries of Eurasia. Third, the number of MSA sites described at this time paled in comparison with those known from Eurasia, offering a limited basis for examining issues of both culture history and foraging behavior. Interest in studying MSA stone tool technology was revitalized by the more widespread recognition of the exceptional nature of certain later MSA contexts, as well as by the realization of the deep antiquity of the MSA relative to the Paleolithic sequences of Eurasia. Because of these shifts in perception, more sites were excavated, more data were collected, and new research problems emerged. Researchers began trying to provide better cultural contexts for these discoveries in terms of the technologies with which they were associated. Indeed, such studies have made great strides in reducing our ignorance concerning the nature of MSA forager lifeways. Given this background, such studies have often had a disproportionate focus on the nature of tools and weapons in the Still Bay and Howiesons Poort industries, which date roughly between 80 and 55 ka. Interest in this time period has been especially strong because of the exceptional nature of the stone tool industries associated with it. The Howiesons Poort industry was characterized by the production and backing of blades in manners similar to those of the Upper Paleolithic of Eurasia.
This fact caught the attention of scholars beginning with the excavations at Klasies River (Singer and Wymer 1982), which effectively demonstrated that the Howiesons Poort industry was part of the later MSA and not transitional with the LSA (contra Binford 1984; Parkington 1990). Discussions of the Still Bay industry had largely ceased before the excavations at Blombos Cave (Henshilwood et al. 2001a), which showed that this industry was also an element of the later MSA, slightly predating the Howiesons Poort. The Still Bay industry was characterized by the production of elaborately thinned lanceolate bifacial points, which have reminded various researchers of both the Solutrean points of Western Europe and the Paleoindian points of North America (see McCall and Thomas 2012 for a review). These exceptional characteristics offered fertile ground for using stone tool technology as a source of information in thinking about the broader foraging strategies of the later MSA human groups associated with the
manufacture of early symbolic objects. The most striking of the resulting studies have focused on the inference of patterns of weapon design based on aspects of tool use-wear, residues, and damage morphology patterns (Lombard 2005, 2006, 2007; Pargeter 2007; Mohapi 2007; Wadley and Mohapi 2008; Wadley 2010; Villa et al. 2009, 2010). On the whole, these studies suggest the sophistication of hunting weaponry during the later MSA and, therefore, the practice of relatively specialized hunting tactics of one sort or another. Some researchers argue that Still Bay points were designed according to specialized parameters and functioned as the termini of hunting spears, perhaps even delivered using spear-thrower technology (Brooks et al. 2006). Likewise, Howiesons Poort backed blades are argued to have been replaceable components of composite projectile tips, again often thought to have been delivered using spear-thrower or even bow-and-arrow technology (Pargeter 2007; Mohapi 2007; Wadley and Mohapi 2008; Lombard and Phillipson 2010). Although I remain a bit skeptical of the specifics concerning projectile delivery technology (McCall 2011; McCall and Thomas 2012), these studies have done much to put a more tangible face on the foraging technologies used during the Still Bay and Howiesons Poort periods. Furthermore, they have fostered more sophisticated research on the design of tools and weapons in other times and places during the MSA. In addition, others have focused on the nature of later MSA core reduction sequences. For example, Wurz (1999, 2002) has documented the ways in which various later MSA core reduction strategies operated at Klasies River. Likewise, Villa and colleagues (2005, 2009, 2010) have provided useful descriptions of the core reduction techniques used in the Still Bay, Howiesons Poort, and other terminal MSA industries.
Wurz has largely focused on the implications of the sophistication of later MSA core reduction for the cognitive capabilities of early modern humans; Villa and colleagues were more broadly concerned with implications for tool design and hunting behavior (see also Högberg and Larsson 2011). Finally, there have also been many studies of Middle Paleolithic core reduction techniques, which bear some strong similarities to those used during the MSA, conducted within the chaîne opératoire analytical framework (for example, van Peer 1992; Boëda 1994, 1995; Delagnes 1995; Pelegrin 1995). These have also offered an important basis for approaching various MSA core reduction techniques, especially the Levallois technique. Despite the great progress of research on the MSA in recent decades, a few things have been overlooked within these lines of research. In focusing on the design of MSA hunting weapons, less attention has been paid to issues of technological organization and their implications for understanding mobility patterns, settlement systems, and resource
acquisition strategies. As the previous chapter has demonstrated, stone tool assemblages offer unique chances to examine the articulation between the characteristics of lithic assemblages, patterns of stone tool production, the location and timing of knapping episodes, and the ways in which past forager groups used their landscapes. Although we have made great progress in understanding the tools with which later MSA foragers actually pierced their prey animals, our understanding of broader patterns of technological organization remains relatively underdeveloped. This chapter offers a preliminary exploration of this variety of patterning, especially as it pertains to the early MSA and the transition from the Acheulean. My findings suggest that the Acheulean-MSA transition may have been a major shift in mobility and settlement systems, establishing a strategic framework from which subsequent modern human foragers would vary according to the environmental and demographic conditions they were to encounter.
Previous Research on MSA Technological Organization

The trajectory of research on MSA stone tool technology has, in my view at least, passed through three distinct stages: (1) an early period focusing on the identification of the type fossils germane to the reconstruction of regional culture histories (see Figures 4.1 and 4.2);
Figure 4.1 Typical MSA Levallois point from the site of Erb Tanks, Namibia
Figure 4.2 Typical MSA retouched point from the site of Tsoana, Namibia
(2) a later phase aimed at understanding the nature of core reduction strategies and resulting debitage characteristics; (3) the most recent phase (which predominates currently) dedicated to learning about the design of MSA hunting weapons and their implications for subsistence practices. Curiously, there was never really a debate concerning MSA lithic assemblage variability equivalent to that concerning the Mousterian in Europe (for instance, Bordes 1961; Binford and Binford 1966; Binford 1973; Rolland and Dibble 1990; Dibble 1995). Likewise, and perhaps consequently, there has never been a sustained consideration of the nature of MSA lithic assemblages from the perspective of technological organization. In the beginning, this situation almost certainly resulted from a general lack of chronological information for the vast majority of MSA sites, which was not resolved until fairly recently. In addition, the nature of MSA assemblages themselves, in having few chronologically distinctive retouched tool types or core forms, resisted even relative chronological ordering in the absence of deeply stratified cave sites like those known from Europe. Furthermore, there was much less available information with which to consider issues of lithic technology. In Europe, many sites were known and described well enough to identify and debate the meaning of lithic assemblage variability, which (after all) involves only subtle variations in the frequencies of certain retouched tool types. An equivalent level of knowledge was out of reach at the vast majority of African MSA contexts and, to a certain extent, it still is. These issues of evidence no doubt influenced the priorities of research on the MSA and limited the nature of debates about technology; however, this situation has changed dramatically in recent decades. 
For these reasons and others, the organizational approach had a muted effect on MSA stone tool research in the later part of the 20th century, with some important exceptions. A key example is that of Stanley Ambrose and Karl Lorenz (1990), who offered an innovative view of later MSA technology in South Africa in organizational terms. Ambrose and Lorenz (1990) made use of a range of lithic data sources, focusing especially on the frequency of exotic lithic raw materials and their implications for mobility patterns and settlement systems. In synthesizing their results, they employed the frameworks constructed by Dyson-Hudson and Smith (1978) concerning hunter-gatherer systems of territoriality, based on a cross-cultural survey of modern forager groups. In this respect, Ambrose and Lorenz were unique in offering a thorough grounding for their archaeological inferences in terms of modern hunter-gatherer behavioral variability. To summarize, Ambrose and Lorenz (1990) argued that the pre-Howiesons Poort MSA periods at sites primarily along the Indian Ocean
coast of South Africa were characterized by relatively stable residential settlements, low residential mobility, moderate group sizes, and the occupation of small, defended foraging territories (see Dyson-Hudson and Smith 1978 for further discussion). Ambrose and Lorenz, in turn, related these behavioral patterns to an ecological context of dense and predictable resource availability. In contrast, they suggested that the Howiesons Poort industry was characterized by more ephemeral uses of smaller residential sites, smaller group sizes, and larger and undefended foraging territories. Ambrose and Lorenz attributed these patterns to an ecological context of predictable but sparsely distributed resources. They supported these views in terms of the characteristics of various MSA stone tool assemblages and exotic raw material transport patterns. What is most innovative about this approach is its linkage of the characteristics of lithic assemblages from various industries with broad and holistic patterns of foraging lifeways. By way of contrast, there is a large and growing literature suggesting that the Howiesons Poort industry emerged as the result of a single new weapon type—the bow and arrow (see McCall and Thomas 2012 for a review). While this may be partially true (though I have yet to be completely convinced), it is still a highly problematic explanation in being both reductive and mechanistic. If we accept this explanation as satisfactory, we limit the other kinds of information that we may learn from Howiesons Poort lithic assemblages. The organizational perspective offered by Ambrose and Lorenz suggests that shifts in stone tool technology may not map onto the emergence of new types of weapons or tools in simplistic ways. Instead, forager mobility patterns and resource acquisition strategies structured both the location and the timing of knapping activities and tool use, as well as the nature of the technical problems for which tools were used.
Thus, large-scale shifts in technology such as the transition from the pre-Howiesons Poort MSA to the Howiesons Poort industry may offer a wealth of subtle information concerning changing foraging lifeways in the Upper Pleistocene of southern Africa. In my own research (for example, McCall 2006b, 2007), I have sought to integrate new evidence from the later MSA of South Africa, especially that having to do with the Still Bay industry, while making use of the organizational framework presented by Ambrose and Lorenz (1990). In these papers, I have argued that the pre-Still Bay, Still Bay, Howiesons Poort, and post-Howiesons Poort technological systems were each characterized by significant strategic shifts mapping onto changing economic and mobility patterns. Specifically, I have made the case that the striking features of the Still Bay and Howiesons Poort industries—thinned bifacial points and backed blades, respectively—each resulted from the adoption of more extreme mobility patterns, as well as shifts in settlement and territorial systems of the sort suggested by
The Organization of Middle Stone Age (MSA) Lithic Technology 149
Ambrose and Lorenz. In short, their organizational perspective, couched in comparative perspectives on modern foragers, has provided me with productive avenues for learning about Upper Pleistocene foraging behavior in southern Africa. A second example is that of Kuhn (1995), who presented an innovative analysis of the Italian Middle Paleolithic making use of the organizational framework. In escaping from the bonds of retouched tool typology, Kuhn focused instead on the dynamics of flake blank production strategies and the collection of lithic raw material. Kuhn was not satisfied, however, with simply explicating the techniques with which Italian Neanderthals reduced their cores or with viewing this as somehow self-evident in terms of behavioral interpretation. Instead, he offered a compelling set of inferences concerning the organization of Neanderthal foraging systems and their relationships with the broader ecological structure of the region. Kuhn's strategic approach is innately based on a synthetic knowledge of modern forager subsistence, mobility patterns, and technology, as well as the ways in which various environments may act in structuring these. In the end, Kuhn (1995) came to the conclusion that Mousterian technological systems were flexible and capable of being adapted to contingencies anticipated by tool makers at various time scales. The corollary of his finding is that planning (that is, the anticipation of future technical problems, the central concept of the organizational approach) was a strong determinant of lithic assemblage structure in terms of the production of curated tools. Finally, Kuhn was able to detect some important diachronic trends at the end of the Mousterian, finding evidence for increasing subsistence intensification, more stable residential site use dynamics, and shifting mobility patterns associated with the increasing prevalence of ambush hunting tactics.
Kuhn's (1995) study also has some important methodological implications. For one thing, it demonstrates that assemblages with similar inventories of lithic type fossils may be associated with dramatically differing systems of foraging behavior. All the changes documented by Kuhn occur within the Mousterian itself and, without the appearance or disappearance of any major tool types, indicate more subtle forms of changing foraging ecology. In addition, major subsistence changes—in this case, the shift from predominantly scavenging to hunting—do not always necessitate the invention of new lithic tool types or concomitant new ways of reducing cores. Finally, Kuhn showed that certain aspects of lithic production, especially the collection of raw material and the transport of tools around the landscape as elements of personal gear, are strongly integrated with broader patterns of foraging behavior. Thus, Kuhn's study offers a well-lit path for putting the organizational approach into action in studying Middle Paleolithic and MSA stone tool technologies in the interest of learning about prehistoric patterns of foraging ecology.
150 Chapter 4
Current Goals

With so much yet to understand in terms of the organization of MSA technology in Africa, my goals for this chapter are relatively modest. First and foremost, I intend to show the ways in which the early MSA was distinct from the Acheulean in terms of its patterns of technological organization. Furthermore, in concluding this chapter, I argue that these organizational differences were substantial and related to the adoption of a new overarching settlement system: the occupation of true residential camps, in the sense commonly expressed by archaeologists in the search for home bases and of the sort consistently used by modern forager groups. Although I suspect that there may be few who would argue that MSA foragers on the cusp of the Upper Pleistocene did not use home base sites, I think it is important to show how exactly the Acheulean and early MSA differed in terms of technological organization. The second, more specific goal recognizes that the MSA (1) marks the beginning of a general mode of technology that persisted for hundreds of thousands of years, (2) was present across the bulk of the Old World, and (3) represents the technological context from which modern humans emerged. Such technological systems, as Kuhn (1995) describes, were flexible and varied in significant ways over various spatial and temporal scales, stemming from particular prehistoric economic and ecological dynamics. Furthermore, this variability clearly takes on greater frequency and amplitude later in the Upper Pleistocene, likely with direct relationships to profoundly important evolutionary phenomena. These include the development of increasingly complex social and symbolic systems, intensified foraging economies and technologies, and the emergence of modern humans from Africa. Understanding the origins of MSA technological systems and the kinds of variability inherent within their early phases represents an obvious priority for research on modern human origins.
It is my belief that the organizational approach is uniquely poised to provide cogent insights into these issues.

Case Study 1: Gademotta and Kulkuletti

Gademotta and Kulkuletti are a set of MSA archaeological localities located in the central Rift Valley of Ethiopia (Figure 4.3). Based on excavations conducted in the early 1970s, Fred Wendorf and Romuald Schild (1974) report detailed lithic assemblage data from six major localities: ETH-71-1, ETH-72-5, ETH-72-7B, ETH-72-6, ETH-72-8B, and ETH-72-9.2 These localities are located on the western peripheries of Lake Ziway—one of four lakes in the Rift Valley that formed a single great lake during higher lake level stands in the Pleistocene. These localities caught the attention of archaeologists involved with the
Figure 4.3 Map showing the location of the Gademotta and Kulkuletti archaeological sites, Ethiopia
Ethiopian archaeological survey by virtue of the exploitation of high-quality obsidian and the resulting beauty of their MSA artifacts, which are frequently described as advanced or precocious (Wendorf and Schild 1974; Clark 1988; McBrearty and Brooks 2000; Morgan and Renne 2008; Sahle et al. 2013). These sites offer evidence for the early use of the Levallois technique for the production of both large flakes and points, as well as the production of retouched and bifacial points. Although there has been some controversy over the exact dates of the Gademotta and Kulkuletti assemblages, it is now increasingly clear that these sites represent some of the earliest known MSA assemblages. Wendorf and colleagues (1975) initially put forward a date of 180 ka based on K/Ar, but later revised this to 235 ka (Wendorf et al. 1994). Recently, Morgan and Renne (2008) report K/Ar dates ranging from 276 ka to 183 ka (see also Sahle et al. 2013). These new dates make Gademotta and Kulkuletti generally comparable with the MSA of the Kapthurin Formation of Kenya, which is currently thought to be among the earliest manifestations of the MSA (Tryon and McBrearty 2002; Tryon 2006). In addition to demonstrating the relative antiquity of these sites compared with sites elsewhere in Africa, these new dates also show that there was little typological change in lithic sequences over long periods of time. Another feature that caught the attention of early researchers involved the spatial patterning associated with Gademotta and, to a lesser extent,
Kulkuletti. At the Gademotta ETH-72-8B locality, Wendorf and Schild (1974) described artifacts associated with a depression feature, which they interpreted in architectural terms as the floor of a residential camp or even a large ephemeral structure. Furthermore, they described the distribution of lithic artifacts as relating to this feature, with suggestive differences in patterning between its interior and exterior. In contrast, Wendorf and Schild argued that Kulkuletti was a quarry site adjacent to an obsidian flow raw material source. On the one hand, the spatial patterning associated with the pit feature at Gademotta remains ambiguously documented. On the other hand, if it were upheld as relating to domestic activities in a residential context, it would be among the earliest examples of this type of spatial patterning in the archaeological record. In any case, the ETH-72-8B locality at Gademotta stands out as having unique and suggestive features, perhaps indicating a relatively discrete period of site use with limited time averaging (Wendorf et al. 1994; Sahle et al. 2013).

Data Analysis: Comparisons with Acheulean Patterns

In their report, Wendorf and Schild (1974) provide a good deal of detailed data concerning the Gademotta and Kulkuletti lithic assemblages. These data include the frequencies of various types of cores and tools, as well as patterns of cortex on dorsal flake surfaces (categorized as primary, secondary, and tertiary flakes). These data offer fertile ground for exploring patterns in early MSA lithic assemblage structure and for comparing them with those associated with the Acheulean industry. Based on the analyses of Acheulean lithic assemblages presented in the previous chapter, one might ask several obvious questions of the Gademotta and Kulkuletti assemblages: (1) What are the similarities and differences between Acheulean and early MSA assemblage structures?
Put another way, how does the variability within early MSA assemblages compare with that known for the Acheulean? (2) Are there tool forms that are associated with patterning like that associated with Acheulean handaxes? In other words, were handaxes somehow replaced by some homologous tool form? (3) Are there stone tool forms that demonstrate patterning that is significantly different from that found in the Acheulean? Or, expressed more broadly, how do early MSA technology and its organization differ from those of the Acheulean? These research questions offer a preliminary basis for identifying significant differences between Acheulean and early MSA foraging behavior. The Gademotta and Kulkuletti data show variability along several axes. The frequencies of cores from the total assemblage range around 0–3%; the frequencies of retouched tools range around 2–7.5%; the frequencies of prepared cores from the total core assemblage range about
16–41%; and the frequencies of prepared core flakes from the total flake assemblage range around 2.5–6%. In terms of flake cortex patterns, I believe that the frequencies of primary flakes act as the best indicator of the sequential position of flakes, which range about 12–21%. These frequencies are typical of the early MSA in that retouched tools and diagnostic flakes from prepared cores are both rare, prepared cores occur in moderate frequencies alongside other core reduction strategies, and cores constitute a very small portion of total assemblages relative to debitage. Also, these ranges of variation are small relative to those known from various Acheulean contexts, especially those from Olorgesailie, with dramatic variations in the compositional characteristics of lithic assemblages. One important set of characteristics is the frequencies of cores relative to other markers of operational sequence and/or tool curation. Figure 4.4 shows the relationship between the frequency of cores from the total assemblage and the frequency of primary flakes from the total assemblage. This analysis suggests that there is little relationship
between the two (r² = 0.322; p = 0.18). Similarly, Figure 4.5 compares the frequencies of cores from the total assemblage and retouched tools from the total flake assemblage, finding no statistical relationship. Viewed from one perspective, this analysis demonstrates that cores from MSA contexts at Gademotta and Kulkuletti do not have the same organizational properties as handaxes during the Acheulean. It may not be wise, however, to read too much into these findings. For one thing, nonbiface cores at the Acheulean sites examined in the previous chapter also seem to have quite distinctive patterning compared with handaxes. For another thing, it is generally unwise to overinterpret what amounts to a negative result.

Figure 4.4 Graph showing the relationship between the percentage of primary flakes and the percentage of cores from the total assemblage at the Gademotta and Kulkuletti localities

Figure 4.5 Graph showing the relationship between the percentage of retouched flakes and the percentage of cores from the total assemblage at the Gademotta and Kulkuletti localities

When prepared cores are examined, much different patterning becomes apparent. Figure 4.6 compares the frequency of prepared cores from the total core assemblage with the frequency of primary flakes. This relationship also fails to reach statistical significance (r² = 0.322; p = 0.18), in part because of the ETH-72-7B level 2 data point, which has no cores at all. Figure 4.7 shows the relationship between the frequency of flakes removed from prepared cores and the frequency of primary flakes. This relationship is clear and strongly direct (r² = 0.697; p = 0.019), supporting the weaker pattern implied by the previous analysis. First, this finding implies that prepared core reduction occurred more frequently in the context of early-stage core reduction. This pattern may also have resulted from the reduction of prepared cores in locations of raw material availability. This viewpoint is supported by the fact that the Kulkuletti putative quarry site (ETH-71) has the highest frequency of prepared core flakes and very nearly the highest frequency of prepared cores, while the Gademotta putative residential camp site (ETH-72-8B) has the lowest frequency of both. Note that these findings run counter to the expectation that prepared core reduction would result in the production of larger quantities of debitage and involve longer sequences of reduction, thereby reducing the frequency of primary flakes relative to other debitage when prepared cores are common.

Figure 4.6 Graph showing the relationship between the percentage of primary flakes and the percentage of prepared cores from the total core assemblage at Gademotta and Kulkuletti
Figure 4.7 Graph showing the relationship between the percentage of primary flakes and the percentage of prepared core flakes from the total flake assemblage at the Gademotta and Kulkuletti localities
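The bivariate comparisons behind these figures amount to computing Pearson's r (and from it, r²) between pairs of assemblage-composition frequencies across localities. The following minimal sketch shows the calculation in Python; the per-locality frequencies used here are hypothetical illustrations chosen to fall within the ranges described above, not Wendorf and Schild's published data.

```python
# Sketch of the bivariate comparisons used in this analysis: Pearson's r
# between two assemblage-composition frequencies measured across localities.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-locality frequencies (proportions of the flake assemblage)
primary_flakes       = [0.12, 0.14, 0.17, 0.19, 0.21]
prepared_core_flakes = [0.025, 0.030, 0.045, 0.050, 0.060]

r = pearson_r(primary_flakes, prepared_core_flakes)
r2 = r * r
# A direct (positive) relationship, as in Figure 4.7, yields an r-squared near 1
print(f"r^2 = {r2:.3f}")
```

A significance test of r against zero (the p values reported above) additionally requires the t distribution with n − 2 degrees of freedom, which statistical packages provide.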
At this point, it is also important to observe that both prepared cores and prepared core flakes show exactly the opposite organizational patterning from Acheulean handaxes in terms of their relationships with both primary flakes and retouched tools. While handaxes occur in much lower frequencies in contexts with large amounts of early-stage core reduction debris, prepared core reduction seems to have been much more prevalent under such circumstances. This pattern suggests that the reduction of prepared cores was more common at locations of raw material abundance, such as at the Kulkuletti obsidian flow. More broadly, this finding suggests variability in the prevalence of various core reduction strategies during the early MSA that may be referable to organizational dynamics of key interest for inferring mobility and settlement systems. The follow-up questions are these: (1) what was being produced using these prepared core strategies; (2) where did these products get deposited into the archaeological record; and (3) what were their
ultimate characteristics when they were discarded?

Figure 4.8 Graph showing the relationship between the percentage of primary flakes and the percentage of retouched tools from the total flake assemblage at the Gademotta and Kulkuletti localities

Variation in the frequencies of retouched tools may offer one source of evidence in addressing these questions. Figure 4.8 shows the relationship between the frequencies of retouched tools and primary flakes at the Gademotta and Kulkuletti localities. In contrast with the patterning associated with prepared cores, the frequency of retouched tools has a strong inverse relationship with the frequency of primary flakes (r² = 0.635; p = 0.032). Retouched tools are relatively rare at the Kulkuletti quarry site and occur in a significantly higher frequency at the putative Gademotta residential camp site. One possibility is that certain selected products of prepared core reduction were transported from the location of their manufacture as elements of personal gear, leaving behind large accumulations of debris from prepared core reduction, including both the cores themselves and diagnostic flakes. Once incorporated as curated elements of personal gear technology, these flakes were then retouched in higher frequencies as they were transported, used, resharpened, and recycled. This pattern
of technological organization would also account for higher frequencies of retouched tools at sites with lower frequencies of both early-stage core reduction and prepared core reduction debris.

Implications for Residential Site Use Patterns

Clearly, there are important differences in patterns of lithic assemblage composition and variability between the Acheulean sites discussed in the previous chapter and those of the early MSA at Gademotta and Kulkuletti. To begin with, the same strategic approaches to core reduction and tool manufacture are evident across all the Gademotta and Kulkuletti localities. There is far less variability in the characteristics of assemblage composition at the early MSA sites than is evident at Acheulean-age sites, which have radically variable relative frequencies of biface and nonbiface cores. This pattern suggests that early MSA knappers had similar strategic goals across a range of contexts in terms of site use dynamics and raw material provisioning. Why might this be the case, and what does it mean? I would argue that this pattern stems from one of the fundamental properties of residential camps as locations of technological manufacture and maintenance. During the Acheulean, there were apparently quite different lithic production strategies employed at sites with divergent functional and organizational properties. In short, Acheulean hominins employed different strategic approaches to knapping and tool production in locations of raw material availability/acquisition and at various activity area sites, such as those associated with animal butchery or other tasks involving stone tools. This distinction also includes tool discard behavior, which apparently involved the caching of handaxes (and other artifacts) at strategic landscape locations, effectively creating artificial lithic raw material sources at certain prioritized places.
In contrast, the use of residential camps offers the opportunity to accumulate supplies of lithic raw material at a central place and to use these supplies to produce the elements of personal gear technology necessary for daily foraging activities. Individuals may acquire lithic raw materials during the course of day-to-day economic activities (likely involving embedded procurement), return them to residential camp sites, use them in the production of the lithic components of their technological systems, and then use the resulting debitage expediently to resolve immediate technical problems occurring in the camp. In this sense, residential camp sites may take on some of the properties of lithic raw material sources as stone is accumulated within camps. As a result, there is little variation in the overarching technical goals of early MSA knapping activities from one context to the next, though
there is variation in their realization, based on a constellation of local organizational conditions. Variability in the composition of lithic assemblages is driven by factors including the local abundance and quality of lithic raw materials, the duration and intensity of site occupation, site-specific dynamics in terms of the types of foraging activities that are conducted, and the anticipation of the conditions likely to be present at future residential camps. In contrast, the conditions under which technological systems are used in the field during foraging activities may be much less variable or unpredictable. Some added specificity may help make this point more effectively: for example, sites with locally abundant and/or high-quality lithic raw materials might induce more intensive lithic production activities relative to those with scarce and/or low-quality lithic raw material. Occupation duration may influence patterns of assemblage composition in subtle but diagnostic ways. When individuals move from one residential camp to another, it is likely that they will discard worn-out elements of their personal gear and "retool" using lithic raw material collected in the vicinity of their next residential camp. Discarded elements of personal gear are likely to have high frequencies of retouch and other related features resulting from their curation. Thus, during brief residential camp occupations, little new knapping debris would be generated relative to the discarded (and more frequently/intensively retouched) elements of personal gear, resulting in higher relative frequencies of retouched tools. In contrast, longer and more intensive occupations of residential sites would involve more core reduction activities, lower frequencies of retouched tools relative to the total debitage, and also higher frequencies of early-stage core reduction debris (in this case, documented by high frequencies of primary flakes).
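The occupation-duration argument can be made concrete with a toy model: suppose each residential stay discards a roughly fixed number of worn, retouched personal-gear items, while on-site knapping debris accumulates with length of stay. All quantities here are illustrative assumptions, not measured values.

```python
# Toy model of the occupation-duration argument: the retouched fraction of a
# flake assemblage falls as occupation length (and thus fresh debitage) grows.
# The discard and production rates are hypothetical illustrations.

def retouched_fraction(days, gear_discards=5, flakes_per_day=40):
    """Fraction of the flake assemblage made up of discarded retouched gear."""
    debitage = days * flakes_per_day
    return gear_discards / (gear_discards + debitage)

short_stay = retouched_fraction(days=1)    # brief camp: retouch-rich assemblage
long_stay = retouched_fraction(days=30)    # long camp: debitage swamps the gear
print(f"1-day camp: {short_stay:.3f}; 30-day camp: {long_stay:.3f}")
```

However the rates are set, the model reproduces the qualitative expectation: brief occupations yield relatively retouch-rich assemblages, and longer occupations dilute discarded gear with knapping debris.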
From this vantage point, we may suppose that the Kulkuletti site, located adjacent to a high-quality raw material source, has high frequencies of early-stage core reduction debris and low frequencies of retouched tools by virtue of its raw material abundance, which induced more knapping activity relative to other landscape locations. It also seems possible that this lithic raw material abundance stimulated knappers to engage in more core preparation, perhaps in the production of such objects as Levallois flakes and points to be incorporated as predesigned elements of personal gear. Perhaps having high-quality lithic raw material alone is a precondition of such complex patterns of prepared core knapping. In contrast, the artifacts at the Gademotta ETH-72-8B locality may have accumulated as the result of repeated short-term domestic occupations in combination with scarce and/or lower-quality local lithic raw material. In any case, these dynamics of organizational variability relate to more specific details of early MSA mobility patterns, settlement systems, and
resource acquisition strategies in ways that are clearly worth exploring in the future. More broadly, I argued in the last chapter that Acheulean handaxes were a form of technological hedge against uncertainty at large spatial and temporal scales; they both ensured the presence of large pieces of lithic raw material and provided multifunctional tools capable of a wide range of technical tasks in contexts in which hominins were moving around the landscape in constant and extreme ways. Based on the evidence presented in this chapter, I now argue that handaxes began to fall out of the archaeological record when the occupation of residential camps began to radically reduce this uncertainty about future conditions. Under such circumstances, individuals could carry their personal gear toolkit, perhaps composed of more specially designed tools and weapons, on daily foraging trips with the knowledge that they would return to their residential camp at day's end. Thus, most technological activities involving stone tools (woodworking, butchery, and so on) could be counted on to occur at residential camps, where lithic raw materials could be accumulated predictably. Furthermore, foraging trips could be conducted with a limited range of tools linked with more specifically anticipated tasks. Instead of carrying handaxes, early MSA foragers could carry a range of tools, including those composed of smaller and more effective sharp flakes. These included special types of debitage, such as Levallois flakes and points, which may have been incorporated within toolkits as elements of spears and/or knives. For such reasons, I argue that shifting settlement dynamics may have accounted for the technological changes apparent in the transition from the Acheulean to the early MSA without necessarily implying any major novel subsistence practices.
Case Study 2: The Omo Kibish Early MSA

The Kibish Formation of the Lower Omo River of southwestern Ethiopia, located approximately 400 km from Gademotta and Kulkuletti, has been a major location of both paleontological and archaeological finds (Figure 4.9). Both early anatomically modern human fossils (R. Leakey 1969) and early MSA archaeological remains (Shea 2008) have been found in this region. The Kibish Formation comprises three distinct members: Member 1 (in which the early modern human fossils were discovered) dates to around 195 ka, and the boundary between Members 2 and 3 has been dated to around 104 ka by K/Ar. Recently, John Shea and Matthew Sisk (Shea 2008; Sisk and Shea 2008) have presented important new data derived from the analysis of the lithic assemblages at archaeological localities in the Kibish Formation. Their research has focused on three such localities with substantial accumulations of stone tools: KHS, which belongs to Member 1; AHS, which also belongs to
Figure 4.9 Map showing the location of the Omo Kibish archaeological sites, Ethiopia
Member 1; and BNS, located at the boundary between Members 2 and 3. Thus, the KHS and AHS sites are comparable in age with the Gademotta and Kulkuletti sites, whereas BNS dates to a somewhat later period. The datasets provided by Shea (2008) and Sisk (Sisk and Shea 2008) are valuable for a number of reasons. First, they result from the application of modern lithic analysis techniques, which go beyond a simple focus on formal tool types. In addition, Sisk's research (Sisk and Shea 2008) includes the extensive refitting of lithics from the KHS and BNS localities, which offers fine-grained information in terms of both the nature of operational knapping sequences and site formation processes (see also McCall 2010a for a discussion of refitting). Furthermore, Shea and Sisk conducted this research bearing in mind issues of taphonomy and spatial organization. They also made use of modern mapping tools and GIS software, resolving many of the ambiguities inherent within the data from Gademotta and Kulkuletti presented by Wendorf and Schild (1974). In short, these new Omo Kibish lithic datasets are ideal in terms of exploring issues of site use patterns and technological organization.

Data Analysis

The Omo Kibish lithic assemblages offer useful perspectives on both synchronic and diachronic variability within the early MSA in East Africa. Given the detailed nature of the lithic data presented by Shea
(2008), I once again used principal components analysis (PCA) as a technique for reducing these data and looking for patterned covariation in the frequencies of different types of lithic debris. The use of multivariate statistics in this case may seem like overkill, given that there are only three cases considered in this analysis (the KHS, AHS, and BNS sites). However, this technique allows me to examine all of the stone tool type frequency variables at once, rather than either examining a large number of bivariate analyses separately or combining variables, which would lose some of the resolution of the data generated from the original analysis. The PCA identified two PCs: PC 1 explains approximately 58.7% of the observed variation, and PC 2 explains approximately 41.3%. Table 4.1 presents the PC loadings for the individual variables considered in this analysis; Table 4.2 presents the eigenvalues for each PC; and Figure 4.10 shows a two-dimensional plot of these PC loadings.

Table 4.1 Rotated PC matrix for Omo Kibish lithic assemblage data. Variables: Levallois cores, other core types, expedient cores, core fragments, cobble fragments, initial cortical flakes, residual cortical flakes, Levallois points, Levallois flakes, pseudo-Levallois points, noncortical flakes, core-trimming elements, proximal flake fragments, other flake fragments, blocky fragments, retouched points, scrapers, backed knives, denticulates, notches, other retouched flakes, and foliate point fragments. Extraction method: Principal Component Analysis; rotation method: Varimax with Kaiser normalization (rotation converged in 3 iterations).

Table 4.2 Eigenvalues and percentages of variation explained for PCA of Omo Kibish lithic assemblage data

            Initial Eigenvalues                    Rotation Sums of Squared Loadings
Component   Total    % of Variance  Cumulative %   Total    % of Variance  Cumulative %
1           12.335   58.737          58.737        12.011   57.193          57.193
2            8.665   41.263         100.000         8.989   42.807         100.000

Extraction method: Principal Component Analysis (PCA). Extraction sums of squared loadings are identical to the initial eigenvalues.

Figure 4.10 Graph showing the PC loadings for the frequencies of various stone tool types at the Omo Kibish sites

Interestingly, the stone tool debris types that load on PC 1 are those associated with early stages of core reduction, including cortical flakes, core trimming elements, core fragments, and cobble fragments. PC 1 also includes asymmetrical discoids, which may be considered expedient relative to more complex core forms, such as Levallois cores; it also includes Levallois flakes and backed knives. In contrast, the variables that load on PC 2 are mainly those relating to later phases of core reduction and tool retouch, including noncortical flakes, scrapers, denticulates, and other retouched flakes. PC 2 also includes Levallois points and pseudo-Levallois points, which are not retouched tools but were likely elements of personal gear and/or components of composite technologies. It is noteworthy that two stone tool types, foliate points and blocky fragments, load in strongly positive fashion on both PCs. Although the significance of blocky fragments may be ambiguous, foliate points warrant additional attention. To begin with, there was only one complete foliate point discovered during these three excavations (found at the BNS site), in addition to 16 foliate point fragments. It is likely that the category of broken points combines those that were broken
during the process of their manufacture and those that were broken during use—which would be especially likely if, as Shea and Sisk (2010) have suggested, such foliate points were elements of projectile weapon systems, resulting in the breakage of points through impact. Thus, broken foliate points may occur in elevated frequencies in quite distinct organizational contexts for different reasons. Furthermore, bifacial points represent complex forms of technological organization by virtue of their reductive properties and their tendency to shift in terms of both formal and functional properties over the course of their use-lives. For these reasons, the early occurrence of foliate points in the MSA of East Africa deserves the increased attention it has received in recent years. Finally, it is also interesting to note that the Levallois core variable loads strongly negatively on PC 1, demonstrating that Levallois cores are not associated with high frequencies of retouched tools or later-stage core reduction debris. These analytical results are quite consistent with those from the previous case study at Gademotta and Kulkuletti. Once again, they show the tendency of Levallois flakes (1) to be associated with early-stage core reduction debris, (2) not to be associated with later-stage core reduction debris, and (3) not to be associated with retouched tools or other tools that were curated elements of personal gear (for instance, Levallois and pseudo-Levallois points). Once again, Levallois knapping seems to have occurred in contexts in which lithic raw materials were abundant. In contrast, various forms of retouched tools and points are associated with low frequencies of cores, in general. They are also associated with higher frequencies of later-stage core reduction debris in the form of noncortical flakes, perhaps suggesting a relationship either with cores that are more intensively reduced or with cores that were transported from the initial location of reduction.
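The type-frequency PCA underlying Table 4.2, Figure 4.10, and the site-level regression scores discussed later in this chapter can be sketched in a few lines. All counts below are invented for illustration; only the procedure (conversion to within-assemblage percentages, standardization, eigendecomposition of the correlation matrix, and projection of assemblages onto the components) is meant to mirror the kind of analysis described here.

```python
import numpy as np

# Hypothetical counts of stone tool types (columns) for four
# assemblages (rows); all values are invented for illustration.
types = ["cortical flake", "core trimming element",
         "noncortical flake", "scraper"]
counts = np.array([
    [120, 40, 310, 20],
    [ 35, 10, 420, 65],
    [ 60, 22, 150, 18],
    [ 15,  4, 260, 40],
], dtype=float)

# Within-assemblage percentages, so raw assemblage size drops out
pct = 100 * counts / counts.sum(axis=1, keepdims=True)

# Standardize each type, then eigendecompose the correlation matrix
z = (pct - pct.mean(axis=0)) / pct.std(axis=0)
corr = (z.T @ z) / len(z)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]              # largest PC first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pct_var = 100 * eigvals / eigvals.sum()        # cf. Table 4.2
loadings = eigvecs * np.sqrt(np.clip(eigvals, 0, None))  # cf. Figure 4.10
scores = z @ eigvecs                           # per-assemblage PC scores
```

Types that load strongly on the same component co-vary in frequency across assemblages, and an assemblage's score on a component measures how strongly its composition expresses that bundle of types—this is the logic behind reading early-stage versus late-stage debris groupings off the loadings plot.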
I return to this issue shortly.

Intersite Variability and Implications for MSA Technological Organization

Additional light can be shed on these findings by considering the specific archaeological contexts associated with each of these three Omo Kibish sites. Although there are only three sites considered, they hold important implications in terms of synchronic organizational variability, as well as perhaps holding clues about patterns of change over time. In terms of scale, the KHS assemblage is small, composed of only 343 lithic pieces, 148 of which are nondiagnostic angular debris. In contrast, AHS and BNS include 7,737 and 1,924 total lithic pieces, respectively. In addition to their size and total counts of lithic artifacts, these sites also show substantial variability in terms of their composition.
Figure 4.11 Graph showing the regression scores for the individual Omo Kibish sites (KHS, AHS, BNS) for PC 1 and PC 2
Some of this variability can be observed through the calculation of regression scores for each lithic assemblage based on the PCA described earlier. Figure 4.11 shows a bivariate plot of the PC regression scores for the three sites considered here. The two large sites, AHS and BNS, both have large values in terms of their regression scores for opposing PCs. AHS is associated with PC 1, and BNS is associated with PC 2. KHS is not positively associated with either PC but loads negatively against both. Taken at face value, these regression scores point to the following conclusions: (1) AHS has higher frequencies of later-stage core reduction debris, retouched tools, and points; (2) BNS has higher frequencies of early-stage core reduction debris, expedient cores, and Levallois flakes; (3) KHS has a higher frequency of Levallois cores relative to its total size, explaining its negative PC regression scores, but is otherwise ambiguous (likely stemming from its small sample size). Some caution is warranted in thinking about this variability, however. In the absence of some distinctive outgroup, such an analysis may have the effect of amplifying what may be minor differences, and consideration of assemblage composition in grosser terms helps to underscore this point. Figure 4.12 shows variation in the percentages of cores, debitage, debris, and retouched tools at these three sites. There is clearly
Figure 4.12 Graph showing raw percentages of different categories of stone tool debris at the Omo Kibish sites
a good deal of variation in the relative amount of nondiagnostic debris across these three sites, with AHS having by far the largest amount. Aside from this aspect of variability, however, the relative frequencies of cores, debitage, and retouched tools represent a limited continuum: KHS has somewhat elevated frequencies of cores and retouched tools relative to debitage, AHS has low frequencies of cores and retouched tools relative to debitage, and BNS is intermediate between the two. Furthermore, the general forms of core reduction strategies, debitage characteristics, and retouched tool types are present at all three sites. As with the Gademotta and Kulkuletti sites, I suspect that this variability resulted from differences in terms of local dynamics of lithic raw material availability, the amount of knapping activity that was conducted, and the intensity with which residential sites were occupied. In this regard, KHS has the least amount of lithic debris resulting from knapping activities. It seems plausible that this was a residential camp site at which relatively limited knapping took place and where little knapping debris was produced, resulting in low frequencies of debitage relative to retouched tools (which were likely discarded elements of curated personal gear). BNS is generally similar to KHS in these respects. In contrast, AHS shows evidence for more intensive knapping activities in terms of its assemblage composition patterning. As Shea puts it: “The abundant debris, cortical flakes, and non-Levallois debitage in the AHS
assemblages suggest that a considerable amount of stone tool production occurred at this locality, possibly as the result of repeated, sustained occupations” (2008: 460). In short, AHS may have differed in terms of how intensively it was occupied, the amount of stone production that went on at the site, or both. Finally, it is also possible that some of the variation in assemblage composition between these three sites was diachronic rather than synchronic in nature, since BNS dates to around 90 ka later than both KHS and AHS. However, I believe that, in the absence of other information to the contrary, this explanation should be accepted only as a last resort. To begin with, there are no new tool types or core reduction strategies that emerged between the deposition of the sites in Member 1 and Member 3. In addition, patterns of lithic raw material exploitation remain basically static. Finally, in terms of its particular assemblage composition patterning, BNS is actually mostly intermediate between KHS and AHS, suggesting that synchronic organizational variability was more salient than any putative technological change over time. Shea (2008: 480) seems to agree with this conclusion, arguing that the MSA lithic assemblages from all three geological members at Omo Kibish belong to a single stone tool industry or industrial complex common across the northern portions of the Rift Valley. This pattern is also reminiscent of recent observations at Gademotta and Kulkuletti, which seem to support a view of stable technological organization across a substantial portion of the Upper Pleistocene (see also Morgan and Renne 2008 and Sahle et al. 2013).

Refitting, Spatial Patterning, and Residential Site Use

One of the most important contributions of the Omo Kibish excavations and analyses is the application of modern mapping techniques and lithic refitting as methods for resolving some of the spatial ambiguities resulting from early fieldwork in the region.
As discussed in the previous section, the Gademotta site is characterized by the presence of an intriguing depression feature, which was taken by Wendorf and Schild (1974) as a manifestation of residential camp site use and even the presence of ephemeral physical structures of some kind. While I have argued, on the basis of other aspects of its archaeological patterning, that Gademotta was indeed a residential site, the true significance of this feature remains difficult to sort out in the absence of more detailed spatial data—which is a shame, since this feature would be (by far) the oldest known example of this form of site structure if it were confirmed, and it would have profound implications for our views of the early MSA. Perhaps the most important contribution of the recent fieldwork at Omo Kibish is the lithic refitting analysis performed by Sisk and Shea
(2008). It has often been observed that a key problem with studies of the MSA (and most other Paleolithic contexts) is the tendency of archaeological sites to form as time-averaged palimpsests spanning thousands of years and innumerable episodes of site use. This time averaging has problematic consequences, both obscuring spatial patterning related to the use of discrete activity areas and blending assemblages of artifacts that may not have had functional relationships with one another (Binford 1978, 1981, 1983, 1984, 1987; O’Connell et al. 2002; Shea 2008, 2011). Refitting is a key tool for assessing both cultural and natural dynamics of site formation, including the recognition of archaeological palimpsests and time averaging (see McCall 2010a for further discussion). Among other things, the refitting of lithic assemblages has tended to demonstrate the complexity of site formation processes in rock shelter contexts, which constitute the bulk of sites excavated by early archaeologists—on which our general views of the MSA and Middle Paleolithic are based. As I have argued (McCall 2010a), refitting rate may often serve as a gross index of both spatial and temporal discreteness in terms of site formation. For example, one of the most extensively refitted sites is the Magdalenian camp site of Pincevent in the Paris Basin of France. Through the systematic, dedicated, and painstaking efforts of researchers working at this site, more than 90% of the lithic assemblage has been refitted (Bodu 1996). This achievement is not, however, purely the outcome of the labor of individuals working at Pincevent; it also has much to do with the nature of the assemblage. Pincevent is one of the clearest examples of an Upper Paleolithic residential camp site (David and Enloe 1993).
In addition, it was apparently occupied for a short period of time and sealed rapidly by low-energy alluvial processes as a result of an ice dam during the terminal Pleistocene, effectively preventing either time averaging or the formation of palimpsests. Clearly, this situation is not the norm for Paleolithic archaeological sites, and Pincevent has offered radically different ways of examining Magdalenian foraging lifeways. The spatial patterning of refitting lithic pieces at Pincevent also offers important information concerning site structure, corresponding with discrete activity areas and what were likely tent structures. The spatial patterning of refits is so striking that it has been applied to analyzing the relationships between individual knappers associated with specific structures and social dynamics involved with the instruction of knapping skills (Bodu 1996). In short, Pincevent is among the clearest cases in which the spatial patterning of artifacts and refitting demonstrate specialized dynamics of residential site use.3 On the one hand, this case is extreme in terms of both its geological taphonomy and its organization
as a long-term residential camp associated with a highly seasonal and logistically oriented foraging strategy. On the other hand, it has many clear manifestations of archaeological phenomena that mark both home base site use dynamics and fine-grained patterns of site formation. Although not nearly as extreme, the Omo Kibish sites show many of these same patterns in terms of their lithic refitting. Most significantly, Sisk and Shea (2008) report a refitting rate of 41% for the KHS assemblage. This is the highest refitting rate to be found in the global archaeological record prior to the Upper Paleolithic and is, in fact, at the high end of the spectrum of currently known Upper Paleolithic refitting rates (McCall 2010a). Furthermore, the spatial distribution of refits at KHS is patterned in a manner similar to that observed at Pincevent in terms of its concentration of refits within the densest accumulations of knapping debris. In essence, this refitting analysis demonstrates that a large number of cores (at least 27) were reduced in a discrete area without significant disruption by either geological or cultural aspects of site formation. This evidence does not, per se, prove anything about the timing of potential site usages. As at Pincevent, however, the most likely explanation of this pattern is a relatively discrete and short-term usage of the site followed by sealing through sedimentation. Indeed, Shea (2008) confirms that most of the KHS artifacts occur in a distinct stratum (Level 3) between 2 and 6 cm in depth, sealed by a period of rapid volcanic sedimentation and including several complete and/or articulated sets of faunal remains. Clearly, KHS is a rare example of an MSA site with a low degree of time averaging, and it also has certain aspects of spatial patterning that are consistent with residential site use. This pattern is also true, if to a lesser degree, of the BNS site.
Here, Sisk and Shea (2008) report a refitting rate of approximately 7% (slightly greater when restricted to Level 3), which is still quite high relative to other Pleistocene open-air sites. BNS also has aspects of its spatial patterning in common with KHS, although perhaps slightly stretched along one distributional axis as a result of small-scale colluvial artifact movement. It is also interesting to think about the refitting dynamics at KHS and BNS through a broader comparison of their lithic assemblages. KHS is the smallest of these three sites and has only 170 potentially refittable pieces. In addition, it has high frequencies of cortical flakes, which served as a guide for the refitting analysts in this case (Sisk and Shea 2008: 488). It also has high frequencies of cores relative to debitage and low frequencies of retouched tools, which are both key factors influencing refitting rates (McCall 2010a). Finally, KHS has a very high frequency of Levallois cores, which arguably produce more distinctively shaped
debitage useful for refitting analysis. Taken together, these factors would seem to indicate that KHS represents a single occupation or a small number of occupations in which individuals (1) engaged in Levallois core reduction for the purposes of lithic retooling, (2) discarded several “used up” retouched tools, and (3) removed the highest quality results of their core reduction activities as elements of personal gear. In this respect, it may be equally useful to consider what is missing from the KHS assemblage as what is present. At KHS, there are fewer Levallois flakes and points than there are Levallois cores, which suggests to me that these were transported from the site, while the remaining early-stage core reduction debris refits at very high rates. BNS is significantly larger than KHS and has a number of features in common with it. It also has high frequencies of cortical flakes and low frequencies of retouched tools. In contrast, however, BNS has a very high frequency of angular debris, a much lower frequency of cores relative to debitage, and very few Levallois cores. On the one hand, all these features make it more difficult to refit lithic assemblages from a practical perspective. On the other, they likely imply significant organizational differences between BNS and KHS. I would argue that these features imply a longer-term or more intensive occupation of BNS. Longer occupations of residential sites often result in the use of more expedient core reduction strategies to produce debitage for the resolution of immediate domestic tasks (Binford and O’Connell 1984; Parry and Kelly 1987; McCall 2012). Thought of another way, this pattern could be taken as an indication of longer or more intensive domestic economic use relative to retooling activities, which often necessitate more complex core reduction strategies.
Thus, I argue that BNS represents a longer-term and more stable residential camp site that fostered a lithic assemblage with features that were more complex and difficult to refit (including simply being larger). The differences between KHS and BNS may represent variability in the dynamics of residential site use among early MSA foraging populations. In addition, since KHS substantially predates BNS, this longer-term or more intensive pattern of site use may represent a chronological trend, although this scenario is not my preferred way of thinking about this patterning in the absence of further evidence (such as a refitting analysis of AHS, which is a very large and early assemblage). At a minimum, these analyses demonstrate that the Omo Kibish sites are unique in their dynamics of formation. In combination with their broader patterns of technological organization and lithic assemblage composition, I would also argue that these patterns of refitting and spatial distribution are good indications of early residential camp site use on the cusp of the Middle and Upper Pleistocene.
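The refitting rates compared above are simple proportions of analyzed pieces joined into refit sets, computed over whatever population of pieces the analyst defines. A minimal sketch: the 170 potentially refittable KHS pieces and the reported rates of 41% (KHS) and roughly 7% (BNS) come from Sisk and Shea (2008), while the refit counts themselves are invented here to reproduce that scale.

```python
def refitting_rate(refitted_pieces: int, analyzed_pieces: int) -> float:
    """Proportion of analyzed lithic pieces joined into refit sets."""
    if analyzed_pieces <= 0:
        raise ValueError("no pieces analyzed")
    return refitted_pieces / analyzed_pieces

# Refit counts are invented, chosen to reproduce the published
# rates of 41% (KHS) and ~7% (BNS); totals are the reported piece counts.
assemblages = {
    "KHS": (70, 170),
    "BNS": (135, 1924),
}
for site, (refit, total) in assemblages.items():
    print(f"{site}: {refitting_rate(refit, total):.0%}")
```

As a gross index, the value is only comparable across sites when the denominator is defined the same way in each analysis, which is one reason assemblage composition (cores versus debris, cortical versus noncortical flakes) matters so much for interpreting it.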
An Organizational Perspective on the Acheulean-to-MSA Transition

Since the 1990s, the field has witnessed a radical rethinking of the nature, timing, and causes of the transition from the Acheulean to the MSA, especially in sub-Saharan Africa, where this phenomenon occurred earliest (McBrearty and Brooks 2000; McBrearty 2001; McBrearty and Tryon 2006; Tryon and McBrearty 2002; Tryon 2006; Tryon, McBrearty, and Texier 2005; van Peer et al. 2003). On the one hand, the surge in interest in this topic has resulted in the exploration and dating of more transitional sites, which have expanded our base of knowledge with which to approach this problem. On the other hand, shifting perspectives on Paleolithic lithic analysis have generated more technological studies in lieu of traditional typological approaches. These new data and changing viewpoints offer an important opportunity to rethink the Acheulean-to-MSA transition in terms of technological organization, mobility and settlement systems, and broader patterns of economic behavior. This book has presented analyses of both Acheulean and early MSA lithic technological organization at a series of key sites, and this information is germane to understanding this transition in greater detail. Sally McBrearty and Christian Tryon (McBrearty 2001; McBrearty and Tryon 2006; Tryon and McBrearty 2002; Tryon 2006; Tryon, McBrearty, and Texier 2005) have been among the most prominent of those thinking about the Acheulean-to-MSA transition in recent years. Working primarily at late Middle Pleistocene and early Upper Pleistocene sites in the Kapthurin Formation of Kenya, they have demonstrated that the technological shifts associated with the Acheulean-to-MSA transition occurred at around 285 ka, though the specific dynamics involved in this transition were complex.
Specifically, McBrearty and Tryon have shown that the Levallois technique, which has traditionally served as a core reduction signature of the MSA, actually originated within late Acheulean contexts. In addition, they have found that there is interstratification of Acheulean and early MSA occurrences within the Kapthurin Formation—a finding also corroborated by van Peer and colleagues (2003) at Sai Island in Sudan. McBrearty and Tryon have also documented the early production of blades, pigment, and grindstone technology associated with this transitional period, further underscoring its complexity. From a technological perspective, McBrearty and Tryon have focused mostly on the significance of the shift from handaxe production, which they characterize as a “handheld” technology (McBrearty and Tryon 2006: 258), to the production of triangular points, which they
view as hafted elements of weapons in the form of stone-tipped spears (see also Shea 2006). In contrast, this perspective attaches relatively little significance to the origins of the Levallois technique for the transition to the MSA, since the technique is present in both late Acheulean and early MSA contexts. In this same vein, Tryon and colleagues (2005) have documented the origins of the Levallois technique in the late Acheulean associated with the production of large flake blanks for the manufacture of handaxes, cleavers, and other large tools. In addition, they point out that the conceptual basis of the Levallois technique—the preparation and manipulation of striking platforms and core convexities for the purposes of predetermining resulting flake size and shape—has much in common with knapping procedures associated with the bifacial thinning of handaxes. In terms of the transition to the MSA, Tryon and colleagues (2005) have argued that the Levallois technique was coopted for different goals in terms of flake production, and it diversified in its manifestations significantly. These provocative findings deserve some further unpacking in light of the findings presented in this book and from the standpoint of technological organization. To begin with, I generally agree with the characterization of the origins of the MSA as being significantly linked with the production of stone points and that these were often elements of hand-delivered spears and/or knives. I am not certain, however, that this technological shift relates to any major form of increasing technological sophistication in a straightforward way. For one thing, we know that there were sharpened wooden spears in the Acheulean archaeological record (for instance, Thieme 1997, 2005) and that modern ethnographic equivalents made from wood alone are frequently quite effective for thrusting or throwing at close range (Churchill 1993).
In addition, as I discuss in greater detail in the next section, there also seems to be little discernible change in patterns of hominin carnivory across the Middle Pleistocene and into the early Upper Pleistocene, as might be implied by either more effective or more sophisticated projectile weaponry. Also, as Shea (1997) has observed, the tipping of spears with stone points involves a complex set of trade-offs in terms of functional costs and benefits: (1) stone-tipped spears are perhaps slightly more capable of producing large wounds and more internal hemorrhaging than other weapons; (2) stone points are also quite functional as knives or other hand tools, as was believed by Bordes (1961), and use-wear analyses have shown that they were often used in this capacity (for example, Hardy and Kay 1999); (3) stone-tipped spears are less functional for certain tasks, such as digging and probing, because brittle stone points often break under such conditions of use. For these reasons, we may
envision a scenario in which Acheulean hominins carried a combination of handaxes (and/or other forms of large stone tools) and carved wooden spears (and/or other carved wooden tools) as their primary elements of personal gear technology, whereas MSA hominins carried stone-tipped spears and perhaps various other flake-based tool forms. I argued in the last chapter that handaxes were carried as multifunctional tools along the lines of Swiss army knives while at the same time acting as a strategy for transporting and economizing lithic raw materials around the landscape in the context of constant and unpredictable mobility patterns. This is true in terms of the reducibility of handaxes and their capability of producing further useful debitage as cores. It is also true in terms of the apparent strategic caching of handaxes at various landscape “hot spots.” Perhaps the most important divergence represented by the transition to the MSA is the abandonment of the elements of the handaxe technological strategy associated with lithic raw material economy and transport. Various lines of evidence suggest that stone points were capable of most of the same tasks as handaxes were (see also Greaves 1997 for ethnoarchaeological documentation of highly multifunctional projectile points). I would argue that residential site use patterns allowed foragers to accumulate lithic raw materials at camps and to use them to provision individuals with personal gear tools for their daily subsistence activities, in essence removing the constraints of raw material economy at the heart of handaxe design. In addition, this residential site use pattern opened the possibility of resolving domestic economic problems with the debris produced during episodic knapping activities at residential camps. This pattern is precisely what Binford and O’Connell (1984; cf. Binford 1986) observed among the modern Alyawara of the Australian Central Desert (see also McCall 2012).
In this case, Binford and O’Connell documented an instance in which Alyawara men collected a block of lithic raw material from a quarry site—a trip that was integrated into other subsistence activities—that was then transported back to their residential camp site. Once there, the core was reduced in the interest of producing certain elements of personal gear, such as leiliras (or “men’s knives”) and women’s scrapers (or “spoons”). In addition to these formal tools, the resulting debitage was used for various domestic activities on an ad hoc basis based on the idiosyncratic features of certain flakes and the nature of the tasks at hand. Repeated use of the same residential camp site allows this form of lithic raw material provisioning and concomitant knapping strategies. In contrast, it is perhaps more difficult for us to envision, at least in ethnographic terms, the alternative—in which groups seldom occupied the same sleeping place sites on consecutive days. However, we can
imagine that under circumstances in which individuals could not rely on returning to a site already provisioned with lithic raw material, the constant transport of cores around the landscape would be necessary. In addition, this site use pattern would also put a premium on the transport of core forms that were optimally designed to be used for a wide range of tasks themselves. Thus, handaxe designs emerged as a way of dealing with both of these problems resulting from a routed foraging mobility system in efficient ways. Returning to the issue of the Levallois technique and other prepared core technologies, I argue that these strategies articulated with the new forms of personal gear tools associated with the residential site use mobility system. In contrast, residential site use patterns seem to have facilitated new varieties of tool and weapon design. Within the residential site use mobility system, an individual forager could likely have anticipated a much narrower range of tool usage during any given daily foraging trip than would have been the case within longer-term routed foraging movements in which individuals might have been isolated from lithic raw materials for days at a time. Under such circumstances, individuals could more specifically have anticipated the range of activities likely to be conducted on a foraging trip and tooled up accordingly. Points and their implied spear technology would seem to represent one such form of technological shift, whereby individuals were relieved from the constraints of carrying a supply of lithic raw material and could instead have carried stone-tipped spears, which represented a distinctive shift in terms of functional trade-offs.
Other prepared core flake production strategies often seem to have been aimed at producing the largest possible flakes from a given volume of raw material (Sandgathe 2004), which would have been highly useful objects in their own right and effective blanks for the production of other types of tools. The case studies presented in this chapter offer an alternative organizational viewpoint on MSA prepared core technology. The Gademotta and Kulkuletti assemblages suggest that higher frequencies of prepared cores and prepared core flake debris are associated with other debris produced at early stages of core reduction. This pattern is generally confirmed by studies of the Omo Kibish sites, which also suggest that the relative frequency of prepared core debris was related to the length and/or intensity of site occupation. Longer and/or more intense site occupations resulted in more expedient knapping and the further reduction of cores, which had the effects of lowering the ratios both of prepared core debris to expedient debris and of early-stage core reduction debris to late-stage core reduction debris. I would argue that the substantial variability present within MSA assemblages in terms of core reduction strategies may be understood in terms of dynamics of
mobility and settlement. Furthermore, I think these sorts of organizational variables have a great deal of potential in dealing with MSA assemblages in which certain standard types of core reduction, such as the Levallois technique, are absent and others, such as blade production, are present. Finally, McBrearty and Tryon (2006) make the case that certain archaeological features generally associated with later MSA sites, such as the production of pigment and the use of grindstones, actually appear with the transition from the Acheulean to the MSA. In fact, in certain respects, these archaeological features seem to serve as a better marker of this transition than do any particular forms of core reduction or specific sets of stone tool types. McBrearty and Tryon argue that these features arose in concert with the origins of modern humans on the boundary between the Middle and Upper Pleistocene, with the biological origins of our species playing a key role in these dynamics. Indeed, the contemporaneity of these phenomena is suggestive, as is their occurrence in the same regions of the Rift Valley. This idea is certainly a possibility worth exploring as early modern human fossil remains are discovered. I strongly agree that the emergence of these archaeological features in the transition from the Acheulean to the MSA is highly significant in terms of early hominin behavior and evolution. I am not convinced, however, that any particular aspect of biological evolution was primarily responsible for these cultural changes or any other features of the Acheulean-to-MSA transition more broadly. In my view, the organizational changes for which I have argued were responsible for the observed shifts in lithic technology and are also quite capable of explaining the emergence of pigment processing and grinding technology.
Specifically, I think the home base pattern of site use was a precondition and stimulus for the origins of these economic dynamics; and grinding technology is typically immobile and operates in organizational terms as a classic example of “site furniture” (Binford 1977b). It can be produced and used only under conditions where individuals repeatedly occupy the same location in predictable fashion while also consistently carrying out the same sorts of economic activities over extended periods of time. The origin of pigment production is a more complex and controversial subject. Certain kinds of pigment residue, such as ocher fragments and dust, have been known to be present at even relatively early MSA archaeological sites for several decades (Deacon and Deacon 1999; McBrearty and Brooks 2000). It was only when the production of symbolic objects became such an important element of various definitions of behavioral modernity that these instances of pigment production took on their current importance. Much of the controversy revolves around the dating/provenience of pigments at early MSA sites
and the potential economic (and therefore nonsymbolic) uses of these pigments (for example, Lombard 2007). It is certainly beyond the scope of this chapter to take too strong a position on these debates; for the current purpose, I simply assume that pigments were produced with at least some symbolic purpose. And, as I argue shortly, there is little doubt that the home base pattern of site use had significant social consequences in terms of both interpersonal and intergroup relationships. Pigment production may have originated to help individuals better negotiate the social worlds within their own groups and to help groups relate to one another given new constellations of landscape use, territoriality, and group identity. I might go further and propose that a similar case could be made for other archaeological phenomena traditionally used as markers of behavioral modernity, including the spatial structuring of sites, more complex forms of mobiliary and parietal art, the production of bone tools, and the manufacture of more complex forms of hunting weaponry. Although these phenomena are variable in their appearance and generally absent from the majority of early MSA contexts, I believe that the basal pattern of MSA residential site use laid the foundation for these developments in somewhat later periods of Paleolithic prehistory. Specifically, I argue that these signatures of behavioral modernity now known to be elements of the later MSA archaeological record emerged in the context of environmental and/or demographic conditions that necessitated more intensified forms of behavior. For example, I have previously made the case that the Still Bay and Howiesons Poort industries of the southern African later MSA, which have many striking manifestations of behavioral modernity, were also periods in which foraging populations exhibited unusual and extreme varieties of organizational patterning (McCall 2007, 2011; McCall and Thomas 2012).
Specifically, I have suggested that the biface-dominated Still Bay industry emerged in concert with a pattern of longer and more frequent residential moves. Similarly, I have argued that the Howiesons Poort industry related to the emergence of an unprecedented system of logistical mobility involving longer-term seasonal occupation of residential camps and the staging of logistical trips to target distant economic resources in specialized ways. I believe that, in addition to bringing about highly distinctive patterns of lithic technology, these forms of economic organization also stimulated the development of novel symbolic systems, which were instrumental in underpinning more complex social structures of reciprocity, information sharing, and group identity. In short, none of these features existed in a vacuum and each is entwined in a broader interrelated web of environmental, economic, and social dynamics.
More important, it seems productive to think of the Still Bay, Howiesons Poort, and other regionally distinctive later MSA industries as variations on a basal MSA theme that emerged at the boundary of the Middle and Upper Pleistocene. The most important point of this chapter is that this basal pattern emerged in the transition from the Acheulean to the MSA in relation to the origins of a pattern of home base residential site use. Although this transition may (mostly) lack the striking archaeological signatures of behavioral modernity found in later periods of the MSA, its consequences may actually have been more profound in setting the stage for the origins of the subsequent patterns of modern human forager cultural behavior.
Why Did Home Base Residential Site Use Emerge?

This chapter has focused on the analysis of the lithic technological changes associated with the Acheulean-to-MSA transition and the argument that these changes resulted from a shift to a pattern of home base residential site use. It is now worth briefly considering why this transition occurred and how it related to broader patterns of Middle and Upper Pleistocene hominin evolution. Routed foraging represents a primitive mobility system common to a wide variety of nonhuman primates and, in somewhat alternative forms, other large-bodied social mammalian carnivores. In the last chapter, I discussed the strategic underpinnings of the routed foraging mobility system and its advantages, which included (1) the ability to move rapidly through environments with randomly and evenly distributed foraging resources, (2) the movement of food consumers to the locations of resource availability, maximizing foraging efficiency, and (3) the elimination of transport costs involved in moving food resources to consumers at some distant location. The question now is this: what were the trade-offs involved in the adoption of the modern pattern of home base residential site use? And, more specifically, what did hominins gain from this shift in the transition from the Acheulean to the MSA? Many of these same issues were at the heart of Washburn’s theoretical model of the evolution of the genus Homo involving the adoption of home base site use (Washburn and Avis 1958; Washburn and Lancaster 1968). In many ways, Washburn argued that the social consequences of home base use were themselves the benefits. Specifically, it offered hominins a common physical and social environment in which to share food, build cooperative alliances, engage in mating relationships, rear young in relatively safe locations, and care for immobile elderly, sick, or injured individuals.
In turn, resulting social strategies of risk reduction facilitated riskier but more productive forms of subsistence behavior,
especially the hunting of large-bodied prey species. More nutritious diets allowed for the reduction in hominin gut size and the expansion of brains (Aiello and Wheeler 1995), further reinforcing emerging patterns of social and technological complexity. While I feel that many important aspects of this argument may hold some truth, I also believe that this model includes some logical shortcomings. First and foremost, the Washburn argument assumes the inherent superiority of home base site use and seems to imply that it was simply a matter of time until early hominins were able to gain the cognitive and cultural capability of crossing this threshold. This viewpoint stems from long-held unilinear anthropological beliefs in the ever-increasing complexity of hominin cognitive and cultural sophistication—a flawed logic that is still remarkably pervasive in both the technical and popular literature. For me, there are several important trade-offs in terms of foraging efficiency inherent in this shift, and I argue that home base site use was adopted in response to a specific set of ecological and demographic conditions. In other words, home base site use was only a more efficient mobility system under the conditions faced by early MSA hominins, as well as subsequent modern human foragers. Exploring the nature of the conditions that fostered the origins of home base site use among early MSA hominins represents a program of scientific research based on increasingly available archaeological evidence. This approach departs from traditional teleological assumptions of the inevitability of home base use as an aspect of evolutionary progress. In addition, although I do not doubt that the social consequences of home base site use ultimately had the kinds of profound outcomes argued for by Washburn, I wonder about the reality of these cause-and-effect relationships.
Specifically, it seems difficult to argue that early hominins could have envisioned all the social benefits of home base site use or that the anticipation of these benefits motivated the adoption of novel forms of mobility and settlement systems. In many ways, this approach seems like putting the theoretical cart before the horse. In contrast, I would argue that the structural social changes brought about by home base use were more like emergent properties or “unintended consequences” brought about by these new forms of residential site use. As I discuss in greater detail in the conclusion, by the close of the Middle Pleistocene, hominins possessed very large brains, sophisticated cognitive capabilities as suggested by complex lithic technologies, and the complicated forms of social interaction common to our earlier primate ancestry. The adoption of home base site use combined these ingredients in fundamentally new ways, laying the groundwork for the complex patterns of social behavior to be found throughout the Upper Pleistocene and common to all modern humans living today.
If not these social consequences, what then were the benefits that early MSA hominins gained from home base residential site use, and what exactly were the conditions that brought them about? In terms of foraging behavior, I would argue that the use of home base residential sites offered significant opportunities in terms of subsistence diversification and labor division. If individuals understand that resources will be pooled and shared at residential sites following foraging trips, this assurance would allow a number of different specialized foraging groups to target specific resources simultaneously. For example, many observations of hunter-gatherers in the Kalahari have focused on the collection of mangetti nuts (Lee 1968, 1979; Yellen 1977), which generally involves the organization of task groups for day trips to known mangetti groves. Meanwhile, other subsistence task groups may engage in any number of other economic activities, such as hunting, checking trap lines, or collecting other plant food resources. This type of foraging system allows for the conduct of several diverse subsistence activities at the same time while ensuring that the most capable and skilled individuals are engaged in the foraging activities that suit them best. Thus, home base residential site use and concomitant patterns of foraging behavior hedge against risk by ensuring the conduct of a broad base of simultaneous subsistence activities while also maximizing return rates by ensuring that specialized task groups are in the right place at the right time. This system also facilitates riskier varieties of foraging behavior that might not be viable otherwise. Returning to the Kalahari example, one can observe that poison arrow hunting, which is typified by low success rates but very large potential subsistence windfalls, might not be a feasible foraging option if it were not funded by a diverse base of more predictable resources.
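The risk-pooling logic of the poison arrow hunting example can be made concrete with a toy simulation. Everything here is a hypothetical illustration, not ethnographic data: the group size, caloric values, and daily requirement are invented for the sketch, and only the 10% hunting success rate loosely follows Lee (1968). The point is simply that pooling one hunter's rare windfalls with several gatherers' reliable returns converts a strategy that fails on most days into one that almost never falls short.

```python
import random

def shortfall_rate(n_days, share, seed=1):
    """Toy model of risk pooling at a home base. One hunter has a 10%
    chance per day of a large windfall; three gatherers bring home
    reliable daily returns. All kcal values are hypothetical."""
    rng = random.Random(seed)
    HUNT_P, HUNT_KCAL = 0.10, 40_000   # rare but very large hunting windfall
    GATHER_KCAL = 3_000                # predictable daily gathering return
    NEED = 2_000                       # per-capita daily requirement
    shortfalls = 0
    for _ in range(n_days):
        hunt = HUNT_KCAL if rng.random() < HUNT_P else 0
        if share:
            # pooled returns divided among all four group members
            per_capita = (hunt + 3 * GATHER_KCAL) / 4
        else:
            # without pooling, the hunter eats only what the hunt yields
            per_capita = hunt
        if per_capita < NEED:
            shortfalls += 1
    return shortfalls / n_days
```

With these (invented) numbers, sharing drives the hunter's shortfall risk from roughly 90% of days to zero, which is the sense in which a diverse base of predictable resources "funds" a low-probability, high-payoff hunting strategy.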
In other words, it is much easier to stage a hunt, which may have only a 10% chance of success (for instance, Lee 1968), when one may also count on the collection of other food resources by other group members. In such ways home base residential site use allows for the more efficient exploitation of riskier and/or lower-ranked subsistence resources through task group specialization and food sharing. Finally, home base residential site use may also substantially reduce mobility costs for certain segments of a foraging population. As Washburn suggested initially, home bases allow children, elderly, and sick or injured individuals to remain stationary while other individuals with lower mobility costs do most of the daily moving (Washburn and Avis 1958; Washburn and Lancaster 1968). For example, for parents of infants or very small children, the costs of transporting them around the landscape may be quite high. Among nonhuman primates that employ a routed foraging mobility pattern, the cost for females of transporting
offspring has been widely documented, as well as substantial variability in mobility patterns in reaction to these costs (Wrangham 1980; Burton and Fukuda 1981; Markham et al. 2013). Furthermore, by virtue of their grasping ability, primates are unique in their patterns of infant transport. Other social large-bodied carnivores (lions, hyenas, wolves, African hunting dogs, and so on) generally resolve these problems through a denning strategy until offspring are large enough to move efficiently on their own. The tethering of carnivore groups to dens during periods of offspring rearing has also been shown to have high costs in terms of foraging efficiency (Lamprecht 1978; White, Lewis, and Murray 1996; Holekamp et al. 1996; Creel and Creel 1998; Frame et al. 2004; Vorster 2012). From this perspective, the increases in foraging efficiency accrued through home base residential site use may also serve to compensate for declines in foraging efficiency caused by tethering to a particular landscape location. Clearly, there is a great deal of complexity in terms of the cost-benefit trade-offs brought about by the origins of home base residential site use. It is now worth thinking about the ecological conditions that might have shifted the balance of mobility decisions from routed foraging to residential site use. In general, I believe that home base use favored riskier and/or less efficient types of foraging behavior while also reducing costs associated with offspring transport as more extreme forms of movement around the landscape were necessitated by shifting subsistence patterns. In other words, home base site use may have served as an organizational basis for some very basic forms of subsistence intensification.
One potential reason for this intensification would be increasing overexploitation of foraging environments and reductions in the density of high-ranked subsistence resources, which would have lowered overall return rates associated with the routed foraging mobility system. In this model, hominins changed their mobility systems as environmental resource structures were altered and overall productivity declined. There are two prime movers that serve as the “usual suspects” in terms of these patterns of environmental overexploitation and subsistence intensification: environmental degradation and population increase. It is certainly possible that fluctuating climatic conditions inherent to the Pleistocene may have created circumstances of rapidly deteriorating environments at regional and local scales (for instance, Potts 1998). However, because there were perhaps dozens of such glacial cycle transitions across the Lower and Middle Pleistocene, it is hard to see any particular one as implicated in either the ESA/MSA transition or the origin of home base residential site use. A more likely but more problematic dynamic is population packing. The profound effects of population increase are abundantly evident in
aspects of both “post-Pleistocene adaptation” and historical accounts of modern hunter-gatherers (Binford 1968, 2001; Flannery 1973; Kelly 1995; Stiner, Munro, and Surovell 2000). Large population densities systematically depress the highest-ranked subsistence resources first. These resources typically have very slow turnover rates and take long periods of time to regenerate. Thus, under such circumstances, forager groups are forced to begin targeting lower-ranked resources with faster turnover rates but lower associated return rates. Larger population densities may have affected mobility systems in other ways. As Binford (2001) has shown in his comprehensive review, there is a strong negative relationship between forager population densities and territory sizes (see also Kelly 1995). Since there are practical limitations on the sizes of hunter-gatherer groups (Birdsell 1968), as overall population densities increase, territory sizes become smaller. Decreasing territory size has an obvious and direct effect on mobility and settlement systems. In this respect, routed foraging is particularly demanding in terms of the amount of land it requires to be effective, since it relies on the random encounter of highly ranked resources, which are usually sparsely distributed across any given landscape. Shrinking territory sizes may have rendered routed foraging impractical by limiting the amount of available land for foraging. In addition, smaller territory sizes are also a common stimulus for other forms of subsistence intensification, since foraging groups are less able to solve problems of environmental overexploitation through mobility. With unlimited territories, foraging groups can simply move into new pristine patches once foraging return rates begin to decline. Under circumstances of territorial circumscription, foraging groups must make do with local resources by expanding their subsistence practices to include increasingly marginal items.
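The logic of this paragraph — depressed high-ranked resources force lower-ranked items into the diet and lower overall return rates — is the classic diet-breadth (prey-choice) model of behavioral ecology (MacArthur and Pianka 1966; see Kelly 1995). A minimal sketch follows; all resource names, encounter rates, and caloric values are hypothetical illustrations, not data from the text.

```python
def optimal_diet(resources):
    """Classic diet-breadth model. Each resource is a tuple of
    (name, encounters_per_hour, kcal_per_item, handling_hours).
    Items are added in rank order (kcal per handling hour) for as
    long as doing so raises the overall return rate, search included."""
    ranked = sorted(resources, key=lambda r: r[2] / r[3], reverse=True)
    diet, best_rate = [], 0.0
    for item in ranked:
        trial = diet + [item]
        energy = sum(lam * e for _, lam, e, _ in trial)
        handling = sum(lam * h for _, lam, _, h in trial)
        rate = energy / (1.0 + handling)  # kcal per hour of foraging
        if rate > best_rate:
            diet, best_rate = trial, rate
        else:
            break  # every lower-ranked item would also lower the rate
    return [name for name, _, _, _ in diet], best_rate

# In a rich environment, only the two highest-ranked items enter the diet:
rich = [("large game", 0.10, 15_000, 4.0),
        ("nuts",       0.50,  2_000, 1.0),
        ("tubers",     1.00,    800, 1.0)]
# After game encounter rates are depressed by population packing, the
# diet broadens to include tubers and the attainable return rate falls:
depressed = [("large game", 0.01, 15_000, 4.0),
             ("nuts",       0.50,  2_000, 1.0),
             ("tubers",     1.00,    800, 1.0)]
```

Running `optimal_diet` on these two lists shows the diet expanding from two items to three while the overall return rate drops, which is the dynamic of subsistence intensification the text attributes to population increase.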
For these reasons and others, one might easily imagine that even moderate increases in hominin population densities may have had organizational consequences of the scale discussed here. The central problem with this demographic explanation is the notorious difficulty of assessing prehistoric population sizes and other aspects of demographic patterning (Naroll 1962; Birdsell 1968; Wiessner 1974; Hassan 1981; Binford 2001). Although there is a general consensus that hominin populations increased in both size and density over the course of the Pleistocene, the exact amplitude and nature of this increase are much more slippery. One potential sign of this shift may be the distribution of MSA archaeological sites, which both increase in number and move into marginal environments (Clark 1970; Deacon and Deacon 1999; McBrearty and Brooks 2000; McCall et al. 2011). This pattern may imply that increasing population densities in the core environments of sub-Saharan Africa raised the total number of
sites on the landscape and pushed populations into the arid peripheries of the region. Later in this book, I present further evidence for Middle and Upper Pleistocene population increase in the form of shifting patterns of hominin body size, and I discuss the implications of this increase for subsistence behavior. For now, I simply rest my case that these dynamics of population increase were a primary cause of increasingly overexploited environments, leading to the adoption of home base residential site use.
A Speculative Narrative of the Origins of Home Base Residential Site Use

These models based on forager behavioral ecology may be difficult to envision in terms of actual patterns of behavior. For this reason, it may be useful to consider the origins of home base residential site use in more narrative terms. During the Middle Pleistocene in sub-Saharan Africa, I have argued that hominins moved around the landscape in a routed foraging mobility system, moving rapidly through environments and opportunistically exploiting high-ranked food sources. One might imagine certain of these environments resembling those of the modern Serengeti, which has dense concentrations of game animals and other high-quality plant food sources—especially in national parks, where human foraging activities are restricted (Hawkes et al. 1991; Bartram 1997; O’Connell and Hawkes 1988a, 1988b; O’Connell et al. 2002). In moving rapidly through such an environment, hominins hunted available small-to-medium-sized prey with simple weapons, including sharpened wooden spears, clubs, and throwing sticks. In addition, they likely scavenged the carcasses of other (often larger and more dangerous) prey animals. Even with matching high predator densities, the high ungulate biomass associated with rich environments ensures that there would be ample scavenging opportunities to be had, and these often included large quantities of meat, marrow, brains, and other fats. As I discuss in greater detail in the next section, animal bone assemblage patterning suggests relatively early access to carcasses through either hunting or scavenging. Rapid and frequent movements through rich environments may have helped hominins achieve early carcass access through both tactics. Likewise, hominins likely exploited high-quality annual plant foods or perennial plants with low turnover rates.
For example, Murray and colleagues (2001) document a range of fruits, nuts, seeds, and tubers— most of which are high in caloric content—consumed by modern Hadza hunter-gatherers in Tanzania. These resources constitute a vital aspect of the overall Hadza foraging economy, in spite of the fact that the Hadza
engage in a great deal more hunting and scavenging behavior in comparison with foragers living at similar latitudes elsewhere (Binford 2001). Such seasonally abundant, immobile, and predictable resources may have served as pivot points for routed foraging mobility systems, as they do for modern baboons and other savanna-dwelling primate species (Whiten et al. 1991). It is also easy to imagine that hominins exploited plant foods within certain patches, such as baobab (Adansonia digitata) groves. As the availability of resources within these patches declined and return rates began to drop, hominins simply moved and incorporated new patches into their mobility routes. Thus, routed foraging facilitated movement between a large number of patches of annual plant foods with high return rates, ensuring that none was overexploited and that foraging return rates remained very high. As population densities increased, several related changes happened. Territory sizes began to reduce, which had several major consequences for hominin foraging systems. Hominin hunters became packed in higher concentrations, which had the effect of depressing prey densities. Lower prey densities increased the risk associated with hunting activities and lowered overall hunting return rates. Smaller territory sizes also effectively reduced the number of per capita scavenging opportunities, since scavenging is highly demanding in terms of land availability. For example, Blumenschine (1988) offers a vivid actualistic account of the kinds of scavenging opportunities that would have been available for early hominins, emphasizing the fact that carcasses are often sparsely and unpredictably distributed over both space and time. Thus, the keys to maintaining high return rates during scavenging activities are the capability of monitoring large tracts of land and the ability to move over very large distances quickly.
Smaller territory sizes inhibit both of these activities and make scavenging increasingly difficult. In general, decreasing territory sizes would have made hunting with simple weapons and scavenging more difficult, risky, and inefficient. Similarly, smaller territory sizes and higher population densities would have limited the number of available plant food patches for early hominin foraging groups. This situation would have also inhibited the ability of hominin foragers to address problems associated with declining resource availability and foraging return rates by moving frequently between patches. In contrast, hominin groups would have had to spend longer periods of time within plant food patches in order to collect the same amounts of food, lowering the overall return rates associated with these foraging activities. Furthermore, smaller territory sizes would have also made foraging groups more susceptible to periodic resource failures, such as those caused by droughts. Once again, territorial circumscription reduced hominin ability to cope with
such resource failures by moving out of affected territories. Likewise, this circumscription would have lowered the return rates and increased the risk associated with the collection of even high-quality plant food resources. To address the results of this combination of increasingly packed populations, smaller territory sizes, lower foraging return rates, and increased foraging risk, hominins began to engage in the residential or home base pattern of site use and mobility. This shift helped to resolve these emergent problems in several strategic ways. First, in facilitating the simultaneous operation of multiple different task-specific activity groups, it helped to increase foraging return rates and mitigate the risk associated with certain subsistence resources (especially hunting and/or scavenging). Second, as was suggested by the Washburn/Isaac synthesis long ago, it supported social systems of reciprocity that were instrumental in mediating the risk associated with certain subsistence activities. Thus, the residential site use system allowed hominin populations to begin living at higher population densities while also laying the groundwork for an important series of cultural changes in terms of the emergence of the novel social systems common to later modern human foraging groups.
Implications and Conclusions

Since the 1990s, it has become apparent that many important cultural changes first occurred during the MSA of sub-Saharan Africa. There is also now a broad consensus that the earliest anatomically modern humans originated during the early MSA in eastern sub-Saharan Africa. My argument in this chapter has been that the shifts in patterns of stone tool technology associated with the transition from the Acheulean to the MSA resulted from the adoption of fundamentally new forms of mobility and site use. These changes were not revolutionary in the sense of having occurred suddenly and irreversibly, as is implied by Childe’s (1953) discussion of the Neolithic revolution and Mellars’s (1989) description of the Upper Paleolithic revolution. Instead, they appear to have happened in mosaic fashion over the course of many millennia across a broad swath of sub-Saharan Africa and possibly beyond. More important, I believe that the relationships between the beginning of the MSA, the significant cultural changes that occurred during the Upper Pleistocene, and the origins of anatomically modern humans are not coincidental. These cultural changes fundamentally depended on the economic and social results of the adoption of the residential site use pattern, in essence representing the structural basis of modern organizational patterning. Furthermore, after perhaps more than 100 ka
of evolution and development within sub-Saharan Africa, such cultural changes ultimately allowed what had been a small and regionally isolated population of early modern humans to expand to the rest of the world, apparently genetically swamping other contemporaneous hominin populations. Thus, the transition from the Acheulean to the MSA was a subtle phenomenon in terms of its archaeological manifestations. Generally speaking, it does not seem to have been associated with the rise of any astonishingly new forms of weapon technology or symbolic systems. Similar patterns of cognitive sophistication are implied by both later Acheulean and early MSA knapping strategies. However, if I am correct about the implications of this transition for the nature of early hominin mobility and settlement systems, then it represented a crucial organizational shift with regard to subsequent dynamics of hominin evolution. It would also imply that Washburn was partly right about the significance of home base site use for hominin economic and social dynamics, while simply misunderstanding the timing of this set of cultural changes relative to the origins of the large brain size associated with Homo erectus at the onset of the Lower Pleistocene.
Chapter 5
Fear and Loathing in Paleolithic Faunal Analysis
Before the 1960s, animal bones found at archaeological sites were largely an afterthought, at best relegated to checklists in appendices (Reitz and Wing 1999). Animal bones simply did not have much to offer the culture-historical orientation of archaeological research before this time. In contrast, faunal analysis had a great deal to contribute to subsequent goals of reconstructing patterns of prehistoric economic lifeways and thus became a mainstay of processualist archaeology. Binford (1981) epitomizes this sentiment in his seminal monograph on the archaeology of animal bones. In fact, after two decades of confronting the ambiguities of lithic artifacts within the functional debate with François Bordes (1961), Binford (1981) explicitly made the case for the superiority of faunal analysis in reconstructing prehistoric foraging activities. Simply put, animal bones are referable to real categories in terms of species, sex, age, element, side, and so on. Thus, the identification of animal bones in archaeological assemblages should ultimately result in the deduction of objective facts or what we might consider “right answers.” In addition, the aspects of human behavior involved in the accumulation of animal bone assemblages represent relatively discrete and short-term phenomena in terms of the acquisition, butchery, consumption, and discard of animal parts. For this reason, animal bone assemblages may offer us much more fine-grained windows on the activities of prehistoric peoples, and it is easy to see why this methodological specialty appealed to the “New Archaeology” and its various intellectual descendants.
Before Modern Humans: New Perspectives on the African Stone Age by Grant S. McCall, 187–212 © 2015 Left Coast Press, Inc. All rights reserved. 187
These kinds of information also obviously articulated with major debates within Paleolithic archaeology, especially in terms of hunting and scavenging. For example, Mary Leakey (1971) was among the first to directly make the case for early hominin hunting at Olduvai Gorge based on specific characteristics of the FLK 22 faunal assemblage. In contrast, Binford (1981) used the same data to argue for early hominins as marginal scavengers. Later, a related debate emerged about the significance of various forms of bone surface damage morphology (cut marks, tooth marks, percussion marks, and so forth) for the hunting and scavenging debate (Bunn 1981; Bunn and Kroll 1986; Shipman 1986; Blumenschine 1988, 1995; Blumenschine and Selvaggio 1988; Lupo 1994; Monahan 1996; Capaldo 1997, 1998; Domínguez-Rodrigo 1997, 1999; Lupo and O’Connell 2002; Domínguez-Rodrigo and Pickering 2003). Ultimately, however, it is my contention that argumentation based on both patterns of assemblage composition and damage morphologies proved ambiguous for the hunting-and-scavenging debate for a number of related reasons. But it is also clear that these sources of evidence articulate in much more direct ways with the inference of patterns of foraging ecology. As discussed in the introduction, patterns of assemblage composition are complicated by various taphonomic processes in terms of site formation and selective bone element destruction, especially through density-mediated attrition. Likewise, patterns of bone modification, such as the superposition of cut marks, tooth marks, and percussion marks, have proven to be surprisingly ambiguous relative to the distinction between hunting and scavenging activities. Many idiosyncratic variables influence the frequency and position of bone damage morphologies by both hominin butchers and nonhominin carnivores.
Therefore, these sorts of patterns of bone modification seem not to be easily understandable in terms of the ordering of carcass access and therefore the hunting-and-scavenging debate. The specter of equifinality thus continues to trouble zooarchaeological research on early hominin evolution. Because of this history of research and debate, we are left in a position in which it is clear that zooarchaeological research has enormous contributions to make in terms of our understanding of hominin evolution but also in which resulting evidence has not articulated well with the major research questions being asked. One way forward would seem to be asking different questions of the archaeological record of animal bones. This chapter proposes some alternative directions for zooarchaeological research on hominin evolution and modern humans that move away from the strict bounds of the hunting-and-scavenging debate and consider the implications of existing faunal evidence for prehistoric foraging ecology and subsistence organization. From a theoretical perspective, I argue that human behavioral ecology offers
a framework for considering the implications of faunal assemblage patterning for foraging behavior and organization. More broadly, this section examines the nature of change in faunal exploitation patterns over time and especially across the transition from the Acheulean to the early MSA. Chapters in this section find that there are only minor and subtle changes in faunal exploitation evident across the Lower and Middle Pleistocene. The faunal assemblages associated with the ESA and the early MSA of Africa, as well as their Lower and Middle Paleolithic counterparts in Eurasia, are generally indicative of foraging strategies focused on high-ranked resources with little apparent change over time. Using this evidence, I argue that later Middle and early Upper Pleistocene hominins shifted aspects of their organization in terms of mobility and settlement systems while maintaining a focus on the same general types of foraging resources with high return rates and low rates of turnover. I also argue that faunal acquisition tactics and hunting technologies were relatively simple and changed little over the course of the Lower and Middle Pleistocene.
Fear and Loathing in Archaeological Faunal Analysis

Zooarchaeological research on Pleistocene hominin foraging lifeways has been riddled with controversy over the last four decades. This situation stems from (1) the nature of the available evidence itself, in terms of problems of equifinality, and (2) severe inconsistency in the research methods used by different individuals and schools of thought. While it is clear that there is no "cookbook" for conducting faunal analysis, nor should there be, it is also apparent that the different analytical structures applied by various researchers raise significant difficulties. Before proceeding, I review some of the sources of this controversy and ways in which the resulting ambiguities may be circumvented.

Quantifying Assemblages

One major issue is simply the ways in which various analysts count animal bones, quantify assemblages, and calculate standardized indices. Individual faunal specimens may be identified relative to a range of nested analytical categories, including species, sex, age, element, element portion or landmark, and side. It is standard practice for zooarchaeologists to report counts, including the total number of identified specimens (NISP), the minimum number of individuals (MNI), and the minimum number of elements (MNE). From these, other more complex indices, such as
the minimum number of animal units (MAU), may be determined and standardized for the construction of comparisons between assemblages. However, such terms are rarely defined or applied in a consistent way. For example, NISP is considered the most basic level of faunal assemblage quantification. But when is a specimen considered "identified"? Different analyses approach this issue at substantially different scales in terms of the determination of element portion, element, side, size class, species, and sex. Similarly, the determination of MNI and MNE logically involves the identification of specimens that are redundant in terms of individual animals, which in some cases may be quite simple. Two whole right femurs belonging to animals of the same species obviously indicate at least two individuals. With fragmentary specimens, however, the situation may be more complicated. For example, one proximal fragment and one distal fragment of a right femur belonging to the same animal species do not necessarily indicate more than one animal or even more than one element. Here, overlap is the key determining factor in diagnosing multiple individuals and/or elements. This approach is problematic because the overlap of faunal specimens is difficult to quantify and to examine within assemblages that may contain vast numbers of total specimens (although see Abe et al. 2002 for a GIS-based solution to this problem).

Some examples help illustrate these issues of counting and quantification. At one end of the spectrum, Sabine Gaudzinski (2000; see also Gaudzinski and Roebroeks 2000) reports an analysis of caribou bones from the site of Salzgitter Lebenstedt in Germany. These data indicate that the NISP values reported by element are very close to the sums of the sided MNE values (referred to as MNI by Gaudzinski).
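The logic of these nested counts can be made concrete with a short sketch. It assumes a deliberately simplified record format in which each specimen is coded by taxon, element, side, and a diagnostic landmark; real analyses must assess overlap specimen by specimen rather than by landmark alone, and all records below are hypothetical:

```python
from collections import Counter

# Hypothetical specimen records: (taxon, element, side, landmark).
# Fragments preserving the same landmark are assumed to overlap (a
# simplification of the overlap assessment described in the text).
specimens = [
    ("caribou", "femur", "right", "proximal"),
    ("caribou", "femur", "right", "proximal"),
    ("caribou", "femur", "right", "distal"),
    ("caribou", "femur", "left", "proximal"),
]

# NISP: every identified specimen counts once.
nisp = len(specimens)

# Sided MNE: the most redundant landmark sets the minimum element count.
def mne(side):
    landmarks = Counter(lm for (_, _, s, lm) in specimens if s == side)
    return max(landmarks.values(), default=0)

mne_right, mne_left = mne("right"), mne("left")

# MNI: the better-represented side sets the minimum number of individuals.
mni = max(mne_right, mne_left)

print(nisp, mne_right, mne_left, mni)  # 4 2 1 2
```

Note that the two right proximal fragments drive both MNE and MNI to two, while the right distal fragment adds to NISP without adding elements or individuals; this is exactly the redundancy logic at issue in the definitions above.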
For example, Gaudzinski (2000: 399) reports 16 left ulnae and 20 right ulnae; she also reports the total number of identified ulnae specimens as 36—the sum of the right and left MNE values. The situation is very similar for the other elements. If these values are correct, they imply that whole bones constituted a good portion of the assemblage considered in this analysis. This pattern would be difficult to fathom under normal circumstances and would be highly noteworthy under any. It is possible that there was some bias inherent in the collection itself, which was, after all, accumulated in 1952, before animal bones commonly received systematic analysis. Another possibility is that I simply do not properly understand the analytical practices through which these values were determined.

Another example at the opposite extreme helps to further elucidate this set of problems. Curtis Marean and colleagues (2000) present important data concerning the composition of the MSA faunal assemblage at the
site of Die Kelders, South Africa. Their report presents much higher NISP values for bone elements relative to their MNE values. For example, for the femurs of all size class I–IV ungulates in Layer 10, they report an NISP value of 96 and an MNE value of 19 (Marean et al. 2000: 217, 220). This is consistent with an assemblage with what I would consider normal (if perhaps somewhat high) levels of fragmentation. However, Marean and colleagues also report an NISP for ribs of 1,201 and an MNE of only 38. Once again, these values are difficult to understand in the absence of further detail about the processes of counting and quantification. It could be the case that this MNE value is standardized by the fact that these animals have 24 or 26 ribs, although in that case it would really be an MAU value and not an MNE value; I doubt this, however. What seems more likely is that large numbers of specimens were generally identifiable as "ribs" but were not identifiable at finer-grained scales and so could not be included in the MNE count. If true, this approach would be reasonable, although it acts as a good example of the lack of clarity in counting and quantification that pervades zooarchaeology at present.

My purpose here is not to single out or embarrass any of these respected zooarchaeological scholars. It is simply to point out that there is a great deal of murkiness in how archaeological animal bones are counted, how the various indices are calculated, and how they are reported. In practice, there are often multiple levels of identification operating within the same analysis, and they are rarely explicitly stated. The main problem that this situation poses involves the comparison of faunal assemblages from different sites or contexts.
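The distinction between MNE and MAU raised by the rib counts is purely arithmetic: MAU divides MNE by the number of times an element occurs in one complete skeleton, and %MAU rescales the result against the best-represented element. A minimal sketch using the femur and rib values quoted above, and assuming 26 ribs per skeleton:

```python
# MAU = MNE / (count of that element in one complete skeleton);
# %MAU rescales so the best-represented element equals 100.
mne = {"femur": 19, "rib": 38}          # values quoted in the text
per_skeleton = {"femur": 2, "rib": 26}  # assumed anatomical counts

mau = {el: mne[el] / per_skeleton[el] for el in mne}
max_mau = max(mau.values())
pct_mau = {el: 100 * v / max_mau for el, v in mau.items()}

print(round(mau["femur"], 2), round(mau["rib"], 2))  # 9.5 1.46
print(round(pct_mau["rib"], 1))                      # 15.4
```

On these assumptions, a rib MNE of 38 standardized by skeleton would yield an MAU near 1.5, far below the femur value, which is one reason the reported figure of 38 is unlikely to be a standardized count.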
This situation is most unfortunate because, as I show in the next chapter, broad comparisons of faunal assemblage composition between different sites, regions, and time periods hold a great deal of potential information with which to evaluate shifting foraging practices.

There has also been related controversy concerning the refitting of long bone shaft fragments in the interest of improving rates of specimen identification. This controversy also originated with the work of Marean (Marean and Kim 1998; Bartram and Marean 1999; Marean et al. 2000, 2001; Marean, Domínguez-Rodrigo, and Pickering 2004) in framing the so-called shaft critique. This critique argues that most zooarchaeological analyses have been biased toward the selective identification of certain elements relative to others. Specifically, cranial and lower limb specimens were recognized in higher frequencies than upper limb specimens, because they are more distinctively identifiable and less prone to density-mediated destruction of various sorts. Likewise, for long bones, dense cortical shaft fragments are preserved at significantly higher frequencies owing to their greater density relative to the articular ends. To counteract this problem, unidentified
shaft fragments may be refitted in order to reconstruct specimens large enough to permit identification. According to Marean and colleagues, this approach is capable of counteracting a significant methodological bias that, in combination with problems of density-mediated attrition, has led to the widespread pattern of head-and-lower-limb-dominated faunal assemblages—a pattern that has been commonly taken as evidence of early hominin scavenging behavior (Binford 1981, 1984; Stiner 1991, 1994, 2002). If refitting does significantly improve element identification, then analyses conducted before this procedure was widely used are biased and problematic in comparison with assemblages that have been refitted or, worse, with collections from which unidentifiable shaft fragments may have been discarded.

Zooarchaeologists, however, are not unanimous concerning the incomparability of assemblages in which shafts either have or have not been refitted. Stiner (2002), for example, refers to this phenomenon as "shaft anxiety" and argues that long bone ends are not normally deleted in significantly higher frequencies than shaft fragments are. Indeed, many complex variables have roles in determining the survivorship of shaft fragments relative to ends, including the taphonomy of an assemblage in terms of its extent of density-mediated bone destruction and its degree of fragmentation. In assemblages with high degrees of density-mediated attrition and/or high levels of bone fragmentation (such as the famous case of Kobeh Cave first presented by Marean and Kim 1998), it is more likely that long bone counts will be underestimated by the frequencies of ends. As Stiner (2002) points out, this is not the case for many important Paleolithic faunal assemblages described previously, and it does not follow that all earlier analyses or analyses using other methods should be thrown out.
The problems that emerge from the controversies revolving around element identification, counting, and assemblage quantification raise doubts about our abilities to think synthetically beyond single sites and assemblages. Obviously, each zooarchaeologist wants to identify every bone fragment in as much detail as possible. It is also clear, however, that the current situation within the zooarchaeological community is less than optimal with regard to building bodies of comparable data with which to examine trends in faunal exploitation over space and time. As Marean and colleagues (2001, 2002; Abe et al. 2002; Abe and Marean 2003) suggest, image analysis and GIS-based computing technology may offer ways of improving the standardization of faunal analysis procedures, as well as making them more explicit. Such trends would most certainly be welcome! At the same time, there are good reasons to resist the imposition of faunal analysis “cookbooks”—they would
certainly raise problems in terms of dealing with the variable features of specific assemblages and the research questions asked of them. Finally, it is also important to consider the ramifications of the universal adoption of such technologically complex forms of faunal analysis for the use of older datasets (for example, FLK 22), which are often one of a kind.

How do we go about asking questions about change over time and between regions given these methodological problems—or can this even be done? The next chapter is dedicated to evaluating these related problems. Here I begin with the premise that the best way of proceeding in comparative and synthetic research is to think in the broadest possible terms about assemblage patterning. First, it is obviously crucial to deal with issues of density-mediated attrition and assemblage equifinality before proceeding to the more complex issue of hominin faunal acquisition strategies. To this end, I make use of Lam and colleagues' (1999) published bone density values to evaluate several key Pleistocene faunal assemblages. The results of this analysis show that, while density-mediated attrition has strongly influenced element frequencies within most of these assemblages, there are aspects of this patterning that are not purely a product of this form of equifinality. The results also show common patterns of element frequencies across the Lower and Middle Pleistocene, which I argue have resulted from similar strategies of faunal acquisition, processing, and consumption. Next, I use multivariate statistics to look for systematic patterning across a broader range of faunal assemblages. In this approach, I also include some control cases, including assemblages associated with ethnoarchaeological observations of hunting and butchering activities and predictions of element frequencies based on density alone.
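The screening step for density-mediated attrition is, at bottom, a rank correlation between element survivorship and bone mineral density: a strong positive correlation suggests that denser elements survived preferentially. A minimal sketch of that logic follows, with illustrative values that are not drawn from Lam and colleagues (1999) or from any particular assemblage:

```python
# Rank-correlate element survivorship (%MAU) against bone mineral
# density. Assumes no tied values; a full implementation would use
# midranks for ties (e.g., scipy.stats.spearmanr).
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

density = [0.36, 0.49, 0.57, 0.62, 0.68]   # illustrative BMD values
pct_mau = [20.0, 35.0, 30.0, 80.0, 100.0]  # illustrative survivorship

print(round(spearman(density, pct_mau), 2))  # 0.9
```

A coefficient this high would flag an assemblage as strongly density-structured; the interesting archaeological signal lies in whatever residual patterning density cannot explain.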
Once again, the results of this analysis corroborate the fact that density-mediated attrition has profoundly affected many of these crucial Pleistocene animal bone assemblages. However, there are also aspects of this patterning that are not attributable to density-mediated attrition and that would seem to hold valuable implications for early hominin foraging strategies, technologies, and carcass transport dynamics. These, too, suggest relatively little change over the course of the Lower and Middle Pleistocene, while also showing some noteworthy differences from observations on assemblages produced by modern forager groups. These results also suggest that the problems associated with the "shaft critique" may not be as daunting as has been suggested but that early access to carcasses may also have been common. Perhaps more important, this analysis has much to say about future directions of research on faunal assemblage composition and comparative approaches to element frequencies.
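The role of the control cases can be illustrated in simplified form: treat each assemblage as a vector of element frequencies and measure how far it sits from a density-only prediction. This is a stand-in sketch for the multivariate methods described above, and every number in it is illustrative rather than taken from any real assemblage:

```python
import math

# Each profile is a vector of %MAU values over the same ordered list of
# elements. "density_only" is a control: the profile expected if element
# survivorship tracked bone density alone. All values are illustrative.
profiles = {
    "density_only": [100, 80, 60, 40, 20],
    "site_A":       [95, 85, 55, 45, 15],
    "site_B":       [40, 60, 100, 90, 70],
}

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

control = profiles["density_only"]
for name in ("site_A", "site_B"):
    # Small distances mean the profile is largely explicable by density.
    print(name, round(euclidean(profiles[name], control), 1))
```

In this toy comparison, site_A sits close to the density control while site_B does not; the latter is the kind of residual structure that invites behavioral interpretation in terms of acquisition, transport, and processing.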
Cut Marks, Percussion Marks, and Tooth Marks

Bone damage morphologies resulting from hominin butchery (often considered in relation to damage morphologies produced by nonhominin carnivores) are the other central source of information concerning Pleistocene faunal acquisition and consumption patterns. Stemming from early ethnoarchaeological accounts of butchery activities (for instance, Yellen 1977; Binford 1978, 1981), many zooarchaeological researchers have observed that foragers make important decisions about the ways in which they segment animal carcasses and remove consumable portions. As Binford (1981) points out, hominins equipped with cutting tools are uniquely capable of imposing their will on the process of carcass segmentation, whereas nonhominin carnivores, when they dismember carcasses, are at the mercy of the natural strengths of bone articulations and connective tissues. The choices made by foragers, in turn, reflect a wide range of important economic interests and constraints, while also offering key information concerning faunal acquisition strategies.

This category of information has formed a central component of the hunting-and-scavenging debate. As zooarchaeologists began to increasingly question the taphonomy of faunal assemblages at Pleistocene sites, cut marks and other varieties of hominin bone modification were important confirmations that hominins interacted significantly with the bones of medium- to large-sized animal carcasses and played important roles in faunal assemblage accumulation (Shipman 1981, 1986; Potts and Shipman 1981; Shipman and Rose 1983a, 1983b; Bunn 1981; Bunn and Kroll 1986). In addition, the relationship between hominin-produced bone modification morphologies and nonhominin carnivore tooth marks became a linchpin of the hunting-and-scavenging debate.
This debate centered on the idea that the ordering of hominin and nonhominin carnivore access to animal carcasses could be addressed by the locations and superposition of cut marks, tooth marks, and other damage morphologies. Cut mark placement formed one aspect of this line of investigation. For example, Shipman (1986) argues that if hominins were dismembering whole and intact carcasses, the highest frequencies of cut marks would be located near the articular surfaces of joints. In contrast, if hominins were scavenging carcasses, cut marks would tend to be concentrated on the fleshy regions of elements, mirroring the distribution of tooth marks left by nonhominin carnivores (that is, on the midshaft regions of elements). In addition, if hominins were accessing carcasses after abandonment by initial predators, then the meatiest portions would have already been consumed, and therefore cut marks would be concentrated on elements with less available flesh. Finally, if hominins had initial access to carcasses through hunting, their cut marks would be deposited first, and other carnivore tooth marks would be
superimposed on top of these cut marks. If hominins were scavenging carcasses abandoned by initial predators, this pattern would be reversed. Based on these hypothetical test implications, Shipman finds patterns within the Olduvai faunal assemblages consistent with the test implications for early hominin scavenging.

As a matter of interest, in their analysis of the FLK 22 cut marks, Bunn and Kroll (1986) found the same sorts of patterning as Shipman (1986) did but argued for an opposite set of test implications (see also Bunn and Kroll 1987; Shipman 1987). These authors proposed that the patterns of cut marks they observed resulted from the removal of large masses of flesh from meaty bone elements, indicating initial access to whole carcasses and therefore hunting behavior. Without delving further into the details of this case, we can note that the fact that these researchers could disagree so profoundly about the interpretation of essentially the same patterns of cut mark and tooth mark location underscores both a dearth of actualistic research with which to approach this archaeological evidence and the general ambiguity associated with this line of research.

To address these problems, researchers over the course of the next several decades conducted a great deal of experimental and ethnoarchaeological research aimed at understanding the implications of bone damage morphologies (Bunn, Bartram, and Kroll 1988; Blumenschine 1988, 1995; O'Connell and Hawkes 1988a, 1988b; O'Connell et al. 1992, 2002; Blumenschine and Marean 1993; Selvaggio 1994, 1998; Lupo 1994, 2001; Capaldo 1997, 1998; Marean et al. 2000; Lupo and O'Connell 2002; Domínguez-Rodrigo 1997, 1999, 2008). One particular focus of this research was the differentiation of bone damage patterns resulting from hominin-first and carnivore-first access to animal carcasses.
In part driven by strongly differing positions on the hunting-and-scavenging debate, these actualistic studies arrived at remarkably different results. Several experiments, such as that of Blumenschine (1995), found that high frequencies of cut marks and low frequencies of tooth marks result from initial access to carcasses by hominins and subsequent chewing by carnivores; initial access by carnivores resulted in the opposite pattern of cut mark and tooth mark frequencies. Viewed through this actualistic lens, the FLK 22 evidence would clearly imply hominin scavenging of carcasses dispatched by nonhominin predators. In contrast, Domínguez-Rodrigo (1997, 1999) arrived at contradictory conclusions in demonstrating that carcasses initially butchered by human experimenters and later scavenged by nonhominin carnivores may also accumulate high frequencies of tooth marks relative to cut marks. Domínguez-Rodrigo also found that the weight of meat remaining on carcasses was a key
determinant of cut mark frequencies, with meatier carcasses (at earlier stages of hominin access) resulting in higher cut mark frequencies. Thus, Domínguez-Rodrigo concluded that the FLK 22 faunal assemblage cut mark patterns indicate early access to carcasses by hominins through hunting and/or aggressive scavenging.

In addressing these contradictory results, Lupo and O'Connell (2002) offered a comparative review of this body of actualistic research. In assessing and explaining the substantial variability in experimental results, they blamed differences in the conditions under which experiments were conducted and inconsistencies in the methodological approaches through which cut marks and tooth marks were defined. Lupo and O'Connell concluded that an order-of-magnitude increase in the number of such actualistic studies, in concert with more strictly controlled experimental conditions, would be necessary to resolve these apparent ambiguities. Likewise, while Domínguez-Rodrigo (2003) was highly critical of this line of work by Lupo and O'Connell (2002), he also agreed that there is a high degree of variability in the deposition of cut marks and tooth marks on bones (Domínguez-Rodrigo and Yravedra 2009). In spite of an almost paradigmatic split between these two camps of researchers, both have generally agreed that inferences based on cut mark and tooth mark patterning have increasingly lost credibility over the last decade.

Why is there such substantial variability in cut mark and tooth mark patterning? Some of it once again has to do with issues of taphonomy. Both Lupo and O'Connell (2002) and Domínguez-Rodrigo and Yravedra (2009) identified the degree of bone fragmentation and carnivore ravaging as key determining factors in the frequencies and locations of cut marks.
Thought of more broadly, density-mediated attrition and the systematic deletion of less dense elements and element portions obviously influence observations of cut mark placement and frequency on bones. Likewise, the "shaft critique" has a role to play in this discussion as well, given the tendency of cut marks and tooth marks to be located on long bone midshafts, which preserve in higher frequencies but which tend to be identified with less consistency. Other related factors, such as the size of prey animals and species-specific anatomical characteristics, also play important roles. In other words, cut marks appear differently on rats, rabbits, deer, wildebeest, cape buffalo, and elephants, and comparisons between these species may be problematic.

Other aspects of cut mark and tooth mark variability are less tangible and harder to address through experimentation. One such variable that has received substantial attention is the degree of experience and skill of a butcher. It has been commonly observed that skilled butchers, such as the hunter-gatherer subjects of various ethnoarchaeological investigations, leave significantly fewer cut marks than inexperienced butchers, such as
various archaeologist experimenters (Binford 1978, 1981; Burke 2000; Lupo and O'Connell 2002; Dewbury and Russell 2007; Domínguez-Rodrigo and Yravedra 2009). Another is the nature of the cutting tool being used by the butcher. Within the experimental literature, this factor has manifested itself mostly in the differences between cut marks produced by stone tools and those produced by metal knives (Greenfield 1999, 2006; Lupo and O'Connell 2002; Domínguez-Rodrigo and Yravedra 2009). Prehistorically, this may also be the case for differences in cut marks produced by different varieties of stone tools, such as unmodified flakes of various sizes, handaxes, denticulates, and so on, and/or by differences in raw material quality (Greenfield 2006; Dewbury and Russell 2007; Bello, Parfitt, and Stringer 2009). While these variables of skill and tool type could potentially be controlled within actualistic experimentation, this has not yet been done extensively, and more work is certainly needed to resolve this issue.

Another similar problem is the set of circumstances under which carcasses are acquired and the contextual variables that influence the interests of butchers. For example, in discussing difficulties in presenting an idealized butchery method for the Nunamiut, Binford (1978) described in detail the ways in which situational context influences the butchery decisions of hunters in the field, including the immediate issues of transport technology, terrain, weather, time of day, and condition of the carcass. Furthermore, he described hunters as also thinking into the future about how meat will be cooked and shared, as well as the nutritional condition of consumers at residential camps. Here, Binford (1978) provides a particularly vivid case of a hunter modifying his butchery procedures to meet the needs of his pregnant wife at home.
Kent (1993) also describes similar sets of considerations on the part of Ju/'hoansi hunters in the Kalahari, who, when dismembering carcasses, keep in mind future issues of sharing and cooking. These variables are difficult to control for during actualistic research on animal butchery, and butchery experimentation may often fail because it exists outside the organizational contexts that influenced the prehistoric decision making responsible for observed archaeological patterning.

Likewise, under circumstances where there is a great deal of variability in the nature of carcass acquisition, there will be concomitant variability in patterns of butchery and resulting damage morphology. At one end of the spectrum, one might imagine a butcher working in a meat-packing plant where carcasses emerge on conveyor belts and butchers engage in exactly the same activities using the same gestures repeatedly (for example, splitting the vertebral bones of hogs; Figure 5.1). In less extreme cases, most agricultural societies kill and butcher livestock under redundant and controlled circumstances, which is likely a reason why agricultural
Figure 5.1 Meat-packing plant workers break down hog carcasses in Chicago in 1905
societies (including our own) have strong cultural preferences concerning the butchery of different animal species. Even many modern forager groups, especially those with specialized hunting economies, such as the Nunamiut, acquire carcasses with a relatively limited range of characteristics in terms of species, sex, and age; they also tend to do so under a relatively limited range of circumstances in terms of landscape location and seasonality. In contrast, Pleistocene hominins with potentially mixed hunting and scavenging practices likely acquired carcasses of immensely varied characteristics under an equally varied range of butchery conditions. This diversity of context could potentially lead to variability in patterns of hominin bone modification that is outside our current range of understanding based on experimentation and ethnoarchaeological observation.

There is also similar variability in the deposition of tooth marks by nonhominin carnivores under different conditions. Carnivore patterns of carcass consumption and bone modification are also influenced by such issues as the anatomical characteristics of specific prey animals, landscape location, the proximity of water sources, time of day,
seasonality, and (most important) the proximity of potentially competing carnivores. Likewise, this context also encompasses a range of less tangible conditions, including how hungry ravaging carnivores are when consuming carcasses. This variability has recently been recognized by Gidna and colleagues (2013) in terms of the use of captive versus wild carnivores in experiments assessing the effects of the ordering of hominin and carnivore access to carcasses.

As a more anecdotal example: I have done extensive archaeological fieldwork in the Kaudom National Park in northeastern Namibia, and I also often visit the Kgalagadi Transfrontier Park in northern South Africa in my travels to Namibia. The latter is characterized by large populations of medium- to large-sized ungulates concentrated within a few riparian corridors. The former is characterized by very low game densities (at least when I am there during the winter dry season). Under the conditions of relative plenty in the Kgalagadi Transfrontier Park, one may encounter intact and still-articulated carcasses with little by way of tooth mark damage and often with substantial fleshy masses of tissue remaining (Figure 5.2). In contrast, carcasses in the Kaudom National Park are extremely heavily chewed. Here, the carcasses of smaller animals are often almost completely destroyed, while the carcasses of larger animals (especially giraffes and elephants) are disproportionately represented and tooth marked in lower frequencies. In addition, this phenomenon is likely further exaggerated in regions such as the Serengeti, where ungulate biomass is extremely high; or, even more important, in Pleistocene environments where ungulate biomass was
Figure 5.2 Carnivore-ravaged eland (Taurotragus oryx) in the Kgalagadi Transfrontier Park, South Africa
likely much greater still. It is easy to see how these differences in relative predator and prey abundances could result in misleading inferences if not properly recognized; I return to these ecological topics in this book's conclusion.

Once again, one must consider what these aspects of bone damage variability mean for the applicability of various research methods. To begin with, it seems evident to me at least that the use of cut mark and tooth mark data to address the ordering of access to carcasses is not the optimal way of proceeding. From an analytical standpoint, our bone modification datasets seem most ambiguous when applied to these sorts of issues. Lupo and O'Connell (2002) may be correct in asserting that a combination of a significantly enhanced program of actualistic research and stricter experimental controls is necessary to address this set of research problems adequately. Given current trajectories of faunal analysis research, however, it seems unlikely that this will occur anytime in the near future, and we certainly lack an adequate set of referential frameworks at present. Furthermore, as I discussed in the introduction, questions concerning the ordering of access to carcasses have been put forward primarily in addressing the hunting-and-scavenging debate and, by that proxy, the cultural sophistication of early hominins relative to various evolutionary scenarios. To repeat my earlier position, I am not convinced that these lines of evidence are germane to questions of cognitive sophistication and behavioral similarity with modern human groups. Thus, I propose that we take the bone modification data as they are and consider their implications for other kinds of research problems. Cut mark and tooth mark patterning obviously relates to other sorts of questions concerning hominin foraging behavior, activities associated with site use, taphonomy in terms of nonhominin carnivore activities, and other cultural aspects of site formation.
Shifting the questions we ask of these datasets could lead to new insights concerning early hominin behavior and evolution. This change does not free the study of cut mark and tooth mark patterns from all its epistemological shortcomings, but it at least allows fresh perspectives to be applied to this substantial body of evidence. Indeed, there have been a number of recent attempts at innovation in terms of the analytical methods applied to animal bones and the research problems to which they are applied. For example, in Chapter 7, I discuss the work of Stiner and colleagues (2009) in examining the placement and orientation of cut marks on fallow deer (Dama cf. mesopotamica) bones at the Lower Paleolithic site of Qesem Cave, Israel. They find that the cut marks on bones from the Lower Paleolithic assemblage are more haphazard and randomly oriented than those from subsequent Middle and Upper Paleolithic assemblages and infer that the Lower Paleolithic
Fear and Loathing in Paleolithic Faunal Analysis 201
hominins engaged in different patterns of butchery behavior than did later hominins. Stiner and colleagues argue that many individual hominins removed masses of flesh for themselves rather than having this activity organized by a single individual or task group, as is universally done among modern humans. This finding would imply substantially different patterns of social organization and interaction between the Lower and Middle Pleistocene hominins and modern foragers. Other emerging directions in cut mark research have examined the depth, shape, and other morphological characteristics of cut marks themselves. Various modern scanning and imaging technologies, in combination with GIS computer applications, make it increasingly possible to characterize and compare the morphologies of individual cut marks. For example, Bello and Soligo (2008) present a set of techniques for analyzing the micromorphological characteristics of cut marks in order to assess issues such as the angle, force, and direction of cutting gestures (see also de Juana, Galán, and Domínguez-Rodrigo 2010; Boschin and Crezzini 2012; Merritt 2012). While again framed in terms of butchery skill, such studies also have clear implications for the characteristics of carcasses in terms of size and condition, the kinds of butchery tools being used, and the types of butchery activities being conducted in terms of skinning, defleshing, and dismemberment. Such approaches are already being applied by a growing number of researchers and clearly hold great potential in assessing various conditions influencing butchery behavior.
Turning the Theoretical Tables on Hunting and Scavenging

Up to this point, I have focused on the difficulties and ambiguities involved in assessing the specifics of how Pleistocene hominins acquired carcasses, focusing on the opposed poles of hunting and scavenging. Furthermore, the interest in this set of research questions involves the use of various tactics of faunal acquisition as proxies for the sophistication or modernity of our hominin ancestors. I have argued that this exercise seems increasingly fruitless on both methodological and theoretical grounds, and I have proposed some methodological alternatives. We must also rethink the theoretical frameworks within which this body of evidence is considered and the sorts of questions that are asked of it. Specifically, I propose that the available Pleistocene faunal data have much to say about the nature of foraging systems and their broader patterns of organization in terms of the issues of mobility, settlement, social structure, and demography discussed previously. As I mentioned in the introduction, I think behavioral ecology has much to offer this situation in providing alternative viewpoints on faunal evidence from the Pleistocene in ways that are still in the process of
emerging. One aspect of the behavioral ecological theoretical approach involves the ranking of foraging resources in terms of their return rates (Charnov 1976; Krebs 1978; Winterhalder and Smith 1981; Smith and Winterhalder 1992; O’Connell and Hawkes 1981; Hawkes, Hill, and O’Connell 1982; Kaplan and Hill 1985; Kelly 1995; Binford 2001; Ugan 2005; Ugan and Simms 2012; Bettinger, Winterhalder, and McElreath 2006; Bettinger 2009; McCall 2007; McCall and Thomas 2012; Bird, Bird, and Codding 2009; Codding, Bird, and Bliege Bird 2010). This theoretical framework operates from the premise that foragers will seek to optimize their foraging activities according to some set of nutritional currencies, usually thought of in terms of caloric content. Innumerable studies, including those cited here, have confirmed that humans and other animal species roughly adhere to such principles of optimization in their preferences for certain resources and their use of foraging patches. Furthermore, information derived from these lines of behavioral ecological research has offered a wide range of insights concerning various aspects of human forager organization. The evidence I present in the next two chapters suggests that Lower and Middle Pleistocene hominins had foraging strategies tightly focused on high-ranked resources and were characterized by extremely productive patterns of faunal exploitation. Over the last few decades, there has been a growing consensus that Lower and Middle Pleistocene hominins had regular access to the carcasses of medium- to large-sized prey animals. In addition, in light of patterns of bone modification, we may be increasingly confident that these carcasses had variable but often large amounts of meaty tissue remaining. Furthermore, hominins were quite consistent in removing brains and bone marrow, which are high-energy fatty tissues that are usually inaccessible for consumption by nonhominin carnivores.
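The resource-ranking premise at the heart of this framework can be captured in a minimal sketch of the classic diet-breadth (prey choice) model. Everything in the example below is hypothetical: the resource names, encounter rates, caloric yields, and handling times are invented purely to show how post-encounter return rates determine which resources enter an optimal diet.

```python
# A minimal sketch of the classic diet-breadth (prey choice) model.
# All resource names and numbers here are hypothetical illustrations.

def optimal_diet(resources, search_time=1.0):
    """Add resources in rank order of post-encounter return rate
    (kcal per hour of handling) while doing so raises the overall
    foraging return rate; lower-ranked items never enter the diet."""
    ranked = sorted(resources, key=lambda r: r["kcal"] / r["handling_h"],
                    reverse=True)
    diet, best_rate = [], 0.0
    for r in ranked:
        trial = diet + [r]
        energy = sum(x["encounters"] * x["kcal"] for x in trial)
        time = search_time + sum(x["encounters"] * x["handling_h"] for x in trial)
        rate = energy / time
        if rate > best_rate:      # broaden the diet only if it pays
            diet, best_rate = trial, rate
        else:
            break
    return [r["name"] for r in diet], best_rate

# Hypothetical resources: encounters per hour of search, energy yield,
# and handling time per encounter.
resources = [
    {"name": "large ungulate", "encounters": 0.05, "kcal": 120000, "handling_h": 6.0},
    {"name": "small mammal", "encounters": 2.0, "kcal": 1500, "handling_h": 0.5},
    {"name": "tubers", "encounters": 5.0, "kcal": 300, "handling_h": 0.3},
]
diet, rate = optimal_diet(resources)
```

Under these invented parameters the model predicts a diet narrowly focused on the highest-ranked resource; as encounter rates with high-ranked prey fall, the model broadens the diet, which is the logic behind the subsistence-intensification arguments developed later in this chapter.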
This consensus has emerged slowly on both sides of the hunting-and-scavenging debate and has been buttressed by ancillary argumentation over tactics such as aggressive scavenging. One way or another, Pleistocene hominins were gaining access to the rich packages of nutritional resources implied by these patterns of faunal acquisition. Because of the technologies known from Pleistocene contexts, it also seems probable that the return rates associated with these faunal resources in terms of calories, fats, and proteins were extremely high relative to the foraging work involved in their acquisition. And according to currently available evidence, it seems likely that Lower and Middle Pleistocene hominins possessed simple forms of hunting weaponry, such as sharpened wooden spears, throwing sticks, and clubs. Even MSA and Middle Paleolithic hominins seem to have used simple hand-delivered spears
tipped with various forms of stone points. The research of Churchill (1993) and others on the hunting tactics of modern human forager groups associated with these kinds of simple weaponry suggests that Pleistocene hominins possessing similar types of weapons used hunting tactics with small energy investments (Hitchcock and Bleed 1997; Shea 1997, 2006; Sisk and Shea 2009). More specifically, Churchill argues that hand-delivered and/or thrusting spears are generally associated with disadvantaging and ambush hunting tactics, both of which imply little cost in terms of the pursuit of prey. An ethnoarchaeological example may help clarify how and why this is the case. Hitchcock and Bleed (1997) present the compelling case study of the Tyua, a lesser-studied foraging and small-scale farming society on the periphery of the Kalahari in Botswana. The Tyua make use of simple hand-delivered spears with lanceolate iron tips. Tyua hunting trips usually involve the ambush of prey animals from blinds adjacent to waterholes, and they are generally conducted at night. This arrangement allows hunters to conceal themselves and to launch their spears at prey animals from suitably close ranges. In general, Hitchcock and Bleed report high rates of hunting success using these tactics, which is especially true once prey animals are present and a shot is taken. In addition, within this dataset, all animals struck with spears were killed successfully on the spot and did not require any further pursuit. Thus, the Tyua invest remarkably little effort in the manufacture of hunting technologies, the pursuit of prey animals, and the transport of resulting carcasses.
Likewise, their hunting activities often result in great subsistence windfalls, which is especially apparent when compared with the poison-arrow hunting of neighboring Kalahari San groups, who invest great amounts of labor in the manufacture of weapons, the collection and processing of poison, the tracking down of game, the pursuit of poisoned game animals, and the transport of carcass segments back to residential camps from wherever they happen to succumb to the poison. At this point, one may ask why foragers resort to more labor-intensive hunting tactics and complex weapons when there are such striking examples of high-return hunting with simple weapons. In the case of the Tyua, this situation may be in part due to the fact that modern game densities in northeastern Botswana are much less depressed than they are in western Botswana and eastern Namibia, where poison-arrow hunting is prevalent. Therefore, there is less risk involved in Tyua hunting in the eastern Kalahari, making more elaborate technologies and tactics associated with risk reduction unnecessary. This approach to hunting may also have to do with the fact that the Tyua have mixed economies and alternative strategies for dealing with foraging risk. The Tyua engage in small-scale agriculture and cattle pastoralism, and they participate in
wage-labor enterprises (Hitchcock 1995; Hitchcock and Bleed 1997). When certain subsistence resources fail, such as when hunting activities are unsuccessful, there are many fallback economic pathways that may be exploited. Or perhaps it is more appropriate to say that hunting is merely one aspect of the broad range of economic activities conducted by the Tyua. One can imagine the other disadvantaging tactics discussed by Churchill (1993) as having similarly high return rates. These tactics rely on encountering prey animals in association with terrain features that limit the capabilities of either flight or self-defense, such as rivers or other wetlands. Disadvantaging tactics also eliminate the effort associated with game pursuit, since hunting activities are focused on specific landscape locations. This set of tactics would potentially be even more applicable if routed foraging mobility systems were routinely employed, allowing for systematic movement between discrete disadvantaging terrain features. In short, disadvantaging tactics also require little by way of hunting technology and are efficient in terms of energy investment. Finally, scavenging may also be thought of as an extremely energy-efficient carcass acquisition strategy, which was demonstrated vividly some time ago by Blumenschine’s (1988) study of scavenging opportunities on the plains of Serengeti National Park and Ngorongoro Crater, Tanzania. This study showed that the nutritional content of even substantially carnivore-ravaged carcasses was still quite high in comparison with the search costs involved. If we imagine that Plio-Pleistocene environments had even higher ungulate biomasses, scavenging opportunities involving fleshier carcasses might have been even more common.
Under such circumstances, hominins may have gained access to fleshy carcasses through a combination of careful monitoring of the landscape (for example, observation of vultures) and rapid movement to subsistence opportunities (Bunn 2001; Stanford and Bunn 2001; Bunn and Pickering 2010; Ruxton and Wilkinson 2011). The efficiency of this practice would again be enhanced by a routed foraging mobility system, which would increase the frequency of carcass encounters and eliminate transport costs. From the vantage point of behavioral ecology and optimal foraging, one can consider these various carcass acquisition strategies and their associated return rates as a continuum rather than as a set of discrete or mutually exclusive practices. After all, there is a fine line between the use of ambush and disadvantaging hunting tactics. Likewise, disadvantaging tactics and active scavenging have much in common, with the main difference being the degree to which a prey animal is put at a disadvantage (so to speak). For example, an animal hopelessly stuck
in a bog or tar pit is as good as dead. But scavenging a recently dead and intact animal resembles disadvantaging in the sense that being dead is the ultimate disadvantage—and the available nutritional resources are the same. Certain types of scavenging situations represent one extreme end of this spectrum. If a large, fat, and prime-aged prey animal were to walk into a forager camp and drop dead, this would be among the highest foraging return rates imaginable. It would result in a huge subsistence windfall with no investment of energy in weapon technology, finding and/or pursuit of game, or carcass transport. A less extreme (but more imaginable) situation would be the encounter of an intact and recently dead large prey animal in the field. This circumstance would still require no weapon technology or pursuit, although it may require transport or the movement of consumers to the carcass. Other variations of this scenario would involve either smaller prey animals or more ravaged carcasses, which would effectively decrease the size of the food packages acquired but not the effort involved in acquiring them. Hunting with simple weapons represents a modest increase in the amount of effort involved in carcass acquisition but offers a tradeoff in ensuring that carcasses are acquired whole. When the variably fleshy carcasses of large animals are not commonly or reliably available, which is the norm in the modern environment, hunting offers an approach for resolving this problem. It requires investment in weapon technology, more effort in terms of finding and pursuing game, and, in the case of modern residential foraging systems, the transportation of carcass portions. As discussed, these costs may be quite minimal in comparison with those associated with the complex weapon systems of modern forager groups, such as poison-arrow hunting in the Kalahari.
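These tradeoffs can be made concrete by treating each acquisition strategy as a point on a single net-return-rate continuum. The figures below are entirely invented and serve only to illustrate the relative structure of costs just described, not to estimate real Pleistocene values.

```python
# Hypothetical return-rate comparison across carcass-acquisition strategies.
# All caloric yields and hourly costs are invented for illustration.

def return_rate(kcal, search_h=0.0, pursuit_h=0.0, processing_h=0.0,
                transport_h=0.0, tech_h=0.0):
    """Net return rate (kcal per hour) over all acquisition costs,
    including amortized technology (weapon manufacture) time."""
    return kcal / (search_h + pursuit_h + processing_h + transport_h + tech_h)

strategies = {
    # Intact carcass encountered in the field: no weapons, no pursuit.
    "passive scavenging": return_rate(80000, search_h=4, processing_h=2,
                                      transport_h=2),
    # Ambush with a hand-delivered spear: modest technology and pursuit costs.
    "simple-weapon ambush": return_rate(120000, search_h=3, pursuit_h=0.5,
                                        processing_h=3, transport_h=3, tech_h=1),
    # Poison-arrow hunting: heavy investment in weapons, tracking, and pursuit.
    "complex-weapon hunting": return_rate(120000, search_h=6, pursuit_h=8,
                                          processing_h=3, transport_h=4, tech_h=10),
}
ranked = sorted(strategies, key=strategies.get, reverse=True)
```

On these invented numbers, scavenging and simple-weapon ambush cluster together at the high-return end of the continuum while complex-weapon hunting trails far behind, mirroring the argument that complex technologies pay off only when high-ranked opportunities become scarce or unreliable.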
For this reason, it is apparently only when game and other high-ranked foraging resources become scarce or risky that more complex technologies emerge to reduce the risk involved in hunting activities (Bleed 1986; Bousman 1993, 2005; Hiscock 1994; Bamforth and Bleed 1997; Elston and Brantingham 2002; McCall 2007; McCall and Thomas 2009, 2012). Such technological trends are often attached to shifting economic strategies associated with foraging intensification. From this perspective, there is nothing about the development of either simple hunting weapons/tactics or the evident patterns of faunal exploitation in the Lower and Middle Pleistocene that relates directly to cultural sophistication or modernity. This phenomenon fits within a spectrum of faunal exploitation behavior consistent with high-ranked resource exploitation. In other words, we may think of scavenging and simple hunting behavior as related components of a broader strategy in which hominins exploited the “low-hanging fruit” in their foraging
environments (in both literal and metaphorical senses). Furthermore, even the development of what Shea (2006) refers to as complex hunting weapons during the Upper Pleistocene is likely related to increasingly labor-intensive hunting strategies in the context of foraging resource depression and subsistence intensification. Although increased cognitive sophistication and social complexity may have been preconditions for the origins of complex hunting weapon technologies, one cannot use these evolutionary changes as sufficient explanations of technological change without resorting to tautology. Instead, such new technologies are better thought of within the context of subsistence intensification brought about by large populations of hominins living in environments in which high-ranked resources with slow turnover rates were depressed by increasingly intense foraging activities. This perspective offers alternative ways of thinking about the Paleolithic archaeological record of faunal exploitation and hunting technological change. Early hominins scavenged not because they were cognitively incapable of more complex forms of faunal exploitation but rather because this subsistence tactic represented the most efficient available foraging strategy. Likewise, early hominins made decisions about hunting behavior, developing necessary weapons and tactics, not because they were cognitively sophisticated but because simple hunting offered the most efficient method of foraging under certain environmental regimes of ungulate and predator density. Finally, even complex hunting weapons originated not because Upper Pleistocene hominins were “smarter” but rather because increasing hominin populations and declining availability of high-ranked resources necessitated subsistence intensification. Thus, complex hunting weapons fit into contexts of increasingly labor-intensive hunting tactics and a broad range of social and economic strategies aimed at reducing foraging risk. 
This approach also turns the tables on our thinking about the implications of hunting large versus small prey. Big game hunting has often been used as a marker of cognitive sophistication and cultural modernity. Setting aside various species of very large and/or dangerous fauna (Klein 1976; Klein and Cruz-Uribe 1996, 2000), however, the predation of medium- to large-sized game represents an optimal behavior in terms of foraging efficiency. Early hominins were capable of such predation with simple technology and tactics, and available faunal evidence suggests that it was fairly pervasive and stable over the bulk of the Pleistocene. In contrast, what seems to mark the first large-scale shift in terms of behavioral complexity is the intensive predation of small prey, which yield much lower return rates but turn over much more rapidly. The exploitation of small animals maps onto other aspects of subsistence intensification, which
are usually associated with both social and technological strategies for reducing foraging risk. Archaeological manifestations of risk-reduction strategies, including complex weapon technologies and the production of symbolic objects, are often those that have been used historically to define the Upper Paleolithic and modern human revolution scenarios. The intensive exploitation of small prey becomes a widespread global phenomenon only at the end of the Pleistocene and into the Holocene, as an element of “post-Pleistocene adaptation” (Binford 1968; Flannery 1973; Klein and Cruz-Uribe 2000; Klein et al. 2004; Stiner, Munro, and Surovell 2000). Although no one would seriously argue that there were any biologically based cognitive changes between these post-Pleistocene adapted foragers and their immediate ancestors, this transition nonetheless represents a major shift in foraging ecology that occurred at a global scale and laid the groundwork for even more fundamental cultural changes witnessed during the Holocene. I have also argued previously that there is substantial evidence for periods of subsistence intensification that occurred during various periods of the later MSA in southern Africa, especially during the Howiesons Poort period (McCall and Thomas 2012), which has been the subject of intense archaeological attention by virtue of the precocious appearance of various Upper Paleolithic-like features. If I am correct in my supposition of foraging intensification during the Howiesons Poort period, such an increase implies that we should view the emergence of these various striking archaeological phenomena, especially in terms of symbolic objects, within the framework of social strategies for risk reduction rather than simply as the result of increasing cognitive or cultural sophistication.
Furthermore, the occasional appearances of symbolic objects within earlier MSA and Middle Paleolithic contexts may also relate to these sorts of dynamics in terms of foraging ecology, in the absence of any evidence for biological shifts related to increased cognitive capabilities. More broadly, I argue that such periods of technological and social innovation may be systematically related to organizational shifts having to do with foraging ecology and subsistence intensification, and they should be viewed against the backdrop of basal Upper Pleistocene patterns of foraging behavior and technology evident across large swaths of the MSA and Middle Paleolithic world. Historically, much of Paleolithic faunal analysis has focused on addressing paradigmatic questions concerning the relative sophistication, modernity, and humanity of hominins in various times and places. Inherent within this approach was the assessment of faunal acquisition strategies as proxies for issues of cognition and cultural complexity. This strategy led to the asking of questions in terms of faunal analysis that were, at best, extremely difficult to address and, at worst, inappropriately
framed. This is not to say that tremendously productive insights were not achieved through this research trajectory. We have made immense progress in understanding issues of taphonomy, bone modification, foraging ecology, and the like. What I am proposing is the abandonment of the theoretical baggage concerned with assessing the relative modernity of Paleolithic hominins in favor of analyzing faunal exploitation strategies on their own terms within the framework of resource ranking, optimal foraging theory, and behavioral ecology.
Faunal Acquisition Patterns and the Organization of Foraging Systems

In the absence of other forms of information, understanding what prehistoric populations ate is not sufficient to make any specific inferences concerning the organization of foraging systems. In combination with other types of data, however, it may be an extremely powerful source of information concerning a wide range of phenomena, including environmental conditions, demographics, technology, landscape use, mobility, settlement systems, and patterns of social organization in terms of food sharing. Studying technological organization speaks to certain kinds of economic activities in which prehistoric peoples engaged, the circumstances under which individuals made and maintained tools, and the mobility patterns that structured these technological activities. Faunal analysis may add fundamental varieties of information concerning the nature of subsistence opportunities and their rankings, which will help flesh out the organizational skeleton provided by studies of technology. In terms of foraging ecology, faunal assemblages provide key information concerning both the structure of available resources in the environment and concomitant strategies of economic behavior. Some of this information is environmental in nature, including the ecology of prey and predator communities, the landscapes that hosted them, and the dynamics of seasonality that structured them. In addition, we can learn a great deal about the operation of foraging economies in holistic terms from the characteristics of the prey animals, as well as the ways in which they were acquired, processed, consumed, and shared. The organizational approach is fundamentally based on the premise that all aspects of human cultural systems are interrelated. 
Understanding faunal exploitation is one crucial aspect of this approach, and behavioral ecology offers one way of integrating this set of information into our broader considerations of the organization of foraging systems. From its earliest days, ethnoarchaeological research has potently illustrated the ways in which faunal acquisition and resulting animal bone assemblages relate to the broader organization of foraging
behavior. In his work with the Ju/’hoansi, Yellen (1977) demonstrated relationships between the hunting technologies and tactics employed, the varieties of animals targeted in hunting activities, the butchering and sharing of carcasses, and the formation of faunal assemblages with certain forms of compositional and spatial patterning (see also Kent 1993). In his study of the Nunamiut, Binford (1978) illuminated the intimate links between the seasonality of caribou resources as a highly migratory prey species, the structuring of logistical hunting trips, the field butchery of prey, the transport of carcass segments, and the differential formation of faunal assemblages at hunting camps, residential bases, and meat caches. The comparison of these two cases demonstrates profound structural differences in the composition of faunal assemblages, including their various axes of variability, stemming from the organizational characteristics of foraging economies and social systems (see Binford 1981 for further discussion). In this way, such studies clearly show that the characteristics of faunal assemblages in archaeological contexts are key sources of information about the holistic organization of foraging behavior. The importance of faunal assemblages in reconstructing organizational issues of foraging ecology is not unique to the studies of hominins. It stands to reason that nonhuman carnivore prey selection, hunting tactics, and resulting bone assemblages have much to say about ecological conditions in terms of surrounding animal communities, predator and prey densities, seasonality, and the like. Carnivore ecologists have made very productive use of prey characteristics (and bone assemblages) as a source of information about variability in ecological and demographic context (Sunquist and Sunquist 1989; Kunkel et al. 1999; Biswas and Sankar 2002; Bowen et al. 2002; Bagchi et al. 2003; Hayward and Kerley 2005, 2008; Hayward 2006; Hayward et al. 2006). 
This approach has also been utilized in paleontological studies of predator ecology and predator-prey relationships. For example, assemblage characteristics and patterns of bone modification have been used to reconstruct the ecology of predatory dinosaurs (Kowalewski 2002; Barrett and Rayfield 2006). In this way, paleontologists share an interest in the reconstruction of the ecology of the hunting behavior of carnivores on the basis of their bone accumulations, and they also share the same general optimal foraging approach. No dinosaur paleontologist or lion ecologist would evaluate the nature of predator bone accumulations in terms of cognitive sophistication or similarity with the hunting practices of modern foragers. Likewise, in our approach to Paleolithic faunal assemblages, we should embrace the full range of ecological information that they may offer, as well as their complementary roles in reconstructing organizational patterning in
combination with other forms of archaeological data. While big game hunting by early hominins is an old rhetorical strain with a great deal of gut-level popular appeal, deeper ecological thinking and consideration of the nature of the organization of foraging systems have much more to offer our understanding of Pleistocene hominin evolution.
Conclusion

Since the 1960s, the archaeology of animal bones has played important roles both in the development of the processualist approach or “New Archaeology” and in various debates concerning the nature of early hominin evolution. The maturation of this line of investigation has involved a massive amount of actualistic research in terms of both ethnoarchaeology and experimentation, leading to significant insights concerning issues of taphonomy, the role of bone density in faunal accumulation and preservation, and other problems associated with equifinality. In concert with these developments, we have had to confront a number of major methodological issues, ranging from the counting of bones and the quantification of assemblages to the interpretation of both hominin and nonhominin patterns of bone modification. This increasing maturity has resulted in a series of strong disagreements about our theoretical goals and methodological procedures. While some of this debate has been quite healthy and productive, much of it has failed to advance our understanding of hominin faunal exploitation patterns, foraging strategies, and ecology. To a great extent, fights over this spectrum of both substantive and methodological issues have tended to play out along the party lines of the hunting-and-scavenging debate. This conflict has occurred in spite of the fact that these arguments often have little directly to do with hunting or scavenging behavior. In this chapter, I have proposed two general ways of proceeding in light of this situation. I have argued that we should, in a sense, blur our eyes and examine existing evidence in general comparative terms. On the one hand, this way forward involves letting go of the approach of taking individual assemblages as proxies for the behavior of hominins of some sort of putative evolutionary stage. 
For example, the FLK 22 assemblage has played a seminal role in the hunting-and-scavenging debate based on the assumption that it somehow epitomizes early hominin behavior always and everywhere. This broad comparative approach allows for the recognition of various scales of spatial and temporal variability. On the other hand, assessment of faunal assemblage variability is fundamentally necessary to the diagnosis of chronological trends, itself the lifeblood of building better evolutionary theory. In addition, understanding faunal assemblage variability is at the heart of making inferences about
hominin ecology and the organization of foraging systems. It does so in facilitating the examination of issues of environmental context, seasonality, predator and prey community relationships, and the ranking of hominin foraging opportunities. At a theoretical level, I have also advocated the abandonment of the paradigmatic questions inherent within the hunting-and-scavenging debate in favor of scientific research problems framed by the field of human behavioral ecology. It seems increasingly clear to me that many of the questions asked within the hunting-and-scavenging debate framework are not answerable on the basis of available evidence, and their investigation results in the collection of ambiguous evidence. By refocusing our attention on the return rates and resource structures associated with hominin faunal exploitation, we may think more broadly about the nature of hominin ecology, its synchronic variability, and its patterns of change. In combination with other forms of archaeological data, the examination of faunal data from the perspective of behavioral ecology and optimal foraging may lead to crucial insights about patterns of both economic and social organization. Such insights, in turn, can serve as a basis for developing new directions in terms of building evolutionary theory that goes beyond the implications of big game hunting, home base use, and food sharing. What emerges from the evidence presented in the next two chapters is a new view of Lower and Middle Pleistocene hominin foraging strategies focused on high-ranked faunal resources, irrespective of their acquisition through scavenging or hunting with simple weapons. These analyses also suggest that there were no major shifts in faunal exploitation patterns in the transition from the ESA to the early MSA, although there may have been differences in patterns of transport and butchery associated with home base use and increasingly complex patterns of food sharing.
Chapter 6
Implications of Lower and Middle Pleistocene Faunal Assemblage Composition
The composition of Paleolithic faunal assemblages has been a hotly debated topic in terms of understanding early hominin foraging behavior and building evolutionary models. In the previous chapter, I reviewed the foundations of this debate, stemming from the confrontation of the taphonomic complexity of major sites and the specter of equifinality. I also proposed some ways forward with respect to the development of new methodological approaches and theoretical perspectives. This chapter presents a reanalysis of some major Lower and Middle Pleistocene assemblages in a comparative perspective (mostly) separate from the baggage of the hunting-and-scavenging debate. Density-mediated attrition is one of the main agents of equifinality acting on animal bone assemblages, potentially obscuring many important aspects of compositional patterning and underlying many methodological problems inherent in the field of zooarchaeology. This chapter begins by examining problems associated with density-mediated attrition focused on the famous FLK 22 locality at Olduvai Gorge, which has served as the basis for so much debate (Leakey 1971; Binford 1981; Bunn and Kroll 1986; Potts 1988; Monahan 1996; Domínguez-Rodrigo 2003; Faith, Marean, and Behrensmeyer 2007). My analysis confirms the long-held suspicion that density-mediated attrition has strongly conditioned patterns of assemblage composition at FLK 22 and most other Olduvai localities. It also shows that certain elements are present at significantly higher-than-expected frequencies according to bone density alone.
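A standard way of operationalizing such a test (sketched here with invented numbers, not my actual data) is to correlate skeletal element survivorship against bone mineral density and then flag elements that survive well above the density-based expectation:

```python
import numpy as np

# Hypothetical element survivorship (%MAU) and bone mineral density values;
# real analyses use published density scans and observed assemblage counts.
elements = ["humerus", "femur", "tibia", "radius", "rib", "vertebra"]
density = np.array([0.63, 0.57, 0.74, 0.68, 0.40, 0.29])   # g/cm^3
pct_mau = np.array([85.0, 90.0, 95.0, 70.0, 20.0, 10.0])   # survivorship

# A strong positive correlation is the signature of density-mediated attrition.
rho = np.corrcoef(density, pct_mau)[0, 1]

# Residuals from a least-squares fit single out elements over-represented
# relative to what density alone predicts (e.g., marrow-rich long bones).
slope, intercept = np.polyfit(density, pct_mau, 1)
residuals = pct_mau - (slope * density + intercept)
over_represented = [e for e, r in zip(elements, residuals) if r > 10.0]
```

Zooarchaeologists more often use Spearman's rank correlation for this test; the Pearson version here simply keeps the sketch dependent on NumPy alone.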
Furthermore, these elements with frequencies higher than expectations based on density tend to be rich in terms of both meat and marrow. I argue that this pattern is consistent with faunal exploitation strategies focused on high-ranked subsistence opportunities, as well as variable patterns of element transport from initial sites of acquisition. Next I present a comparative analysis of animal-part representation patterns for medium-sized bovids for a range of Lower and Middle Paleolithic assemblages. In doing so I use principal components analysis (PCA) to compare sites according to the frequencies of various bone elements. This analysis also comprises a number of control samples, including those produced by modern foragers at both kill sites and home base sites, as well as those occurring naturally at game parks in eastern sub-Saharan Africa. I also include expectations of bone element frequencies based on bone density values. This comparative analysis suggests that there are a few salient patterns in terms of element frequencies likely relating to patterns of carcass transport, density-mediated attrition, and various other taphonomic dynamics. In terms of chronological trends, it is clear that similar processes of site formation were at work across the Pleistocene and that even early animal bone assemblages are consistent with the patterns of high-ranked resource exploitation and variable element transport argued for the FLK 22 assemblage. In other words, there are no major apparent changes in terms of foraging ecology across the Lower and Middle Pleistocene or in association with the transition from the Early to Middle Stone Age. In concluding this chapter, I review other evidence in terms of the frequencies of elements from very large fauna, the hunting weaponry available to early hominins, and potential implications for the nature of various faunal acquisition tactics.
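The element-frequency comparisons throughout this chapter rely on % MAU standardization. A minimal sketch of that standard zooarchaeological normalization (minimum number of elements divided by the element's count in one complete skeleton, then scaled to the best-represented element) is given below; the counts are hypothetical, not drawn from any assemblage discussed here:

```python
# Hypothetical MNE counts for a single assemblage (illustrative values only).
mne = {"humerus": 18, "femur": 9, "tibia": 14, "rib": 26, "cranium": 7}

# How many times each element occurs in one complete skeleton.
per_skeleton = {"humerus": 2, "femur": 2, "tibia": 2, "rib": 26, "cranium": 1}

# MAU normalizes MNE by anatomical abundance; %MAU rescales to the maximum.
mau = {e: mne[e] / per_skeleton[e] for e in mne}
max_mau = max(mau.values())
pct_mau = {e: 100 * mau[e] / max_mau for e in mau}
# humerus: 18/2 = 9.0 MAU -> 100 %MAU; rib: 26/26 = 1.0 MAU -> ~11.1 %MAU
```

Expressing frequencies this way makes elements with very different anatomical abundances (two humeri versus twenty-six ribs per carcass) directly comparable across assemblages.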
Here, I synthesize an organizational model of foraging behavior and mobility for ESA hominins, arguing that frequent movement around the landscape within a routed foraging system allowed hominins to gain access to animal carcasses and other high-ranked food resources. I also make the case that endurance running and persistence hunting tactics may have articulated with this routed foraging mobility system in terms of the hunting of small and medium-sized game, as well as the scavenging of very large and/or dangerous game that was not attainable using simple weapons. Finally, I consider the implications of the similarities between ESA and MSA faunal assemblages from the perspective of organizational patterning. On the one hand, the overall similarities between the two suggest broad continuity in terms of the technologies and tactics used to acquire faunal resources. On the other, certain subtle changes may point to new patterns of transport stemming from innovations in terms of mobility and settlement systems with the origins of the MSA.
Density-Mediated Attrition at Olduvai Gorge: A Case Study

Archaeological research at Olduvai, especially at the FLK 22 locality, has contributed much of our current knowledge concerning the characteristics of ESA faunal assemblages. In fact, FLK 22 has often been taken as a modal representation of early hominin faunal accumulations and, by implication, of the nature of faunal acquisition habits. Zooarchaeological research at Olduvai started with Mary Leakey (1971) and has been the subject of an enormous amount of additional work and debate since then (Binford 1981; Bunn 1981; Potts and Shipman 1981; Potts 1988; Bunn and Kroll 1986; Marean et al. 1992; Blumenschine 1995; Monahan 1996; Capaldo 1997, 1998; Domínguez-Rodrigo 1997, 1999; Faith and Behrensmeyer 2006; Faith, Marean, and Behrensmeyer 2007). Within this body of research, examination of element frequencies has been an important source of evidence concerning patterns of hominin carcass transport. Among the most important of these studies was that of Bunn and Kroll (1986), who argued that the high frequency of upper limbs relative to axial elements reflects the transport of meaty pieces away from initial kill sites to hominin home bases. This inference has been the subject of considerable debate since its initial publication, and numerous researchers have invoked density-mediated attrition as an alternative explanation for this pattern (Potts 1988; Blumenschine and Marean 1993; Monahan 1996; Lupo and O’Connell 2002; O’Connell et al. 2002). In short, it was unclear whether the patterns of assemblage composition at FLK 22 were the result of early hominin carnivory and carcass transport behavior or a product of taphonomically derived equifinality. Although the hominin role in the accumulation of the FLK 22 faunal assemblage is no longer seriously debated, questions remain about the implications of the evidence for patterns of early hominin faunal exploitation and especially transport.
Given the importance of density-mediated attrition as a prime taphonomic factor, evaluating claims about early hominin carcass transport behavior involves generating predictions of bone element frequencies based on the differential densities of elements and comparing these predictions with actual observations. On the basis of early studies showing the importance of density-mediated attrition in affecting the frequencies of anatomical elements (Brain 1969; Binford and Bertram 1977), researchers have conducted numerous studies seeking to quantify the density of bone elements for various species of animal as a foundation for evaluating element frequencies within archaeological assemblages (Lyman 1984, 1994; Elkin 1995; Pavao and Stahl 1999). Early measurement of density reference samples, such as that of Binford and Bertram (1977), involved the calculation of density
for whole elements using water displacement. Later studies, such as that of Lyman (1994), measured density at more specific landmarks on bones using photon densitometry. The most recent work of Lam and colleagues (1999) has used computer-assisted tomography (CT) scanning of specific bone landmarks, resolving many of the technical problems associated with photon densitometry. Thus, the study by Lam and colleagues represents the most accurate measurement of bone density values yet published and is a vital resource in generating expectations for element frequencies based on density. Another problem with bone density reference samples has been the limited number of animal species studied so far. Early applications of bone density reference values frequently involved the use of one species, often either Binford and Bertram’s (1977) sheep or Lyman’s (1984) deer, to examine the archaeological representation of other unrelated species. Although some have argued that there is relatively little variation in bone density values across quadruped taxa (for example, Binford 1981), it is clear that this practice reduces the accuracy of modeling density values for animal bones in archaeological contexts. For studying medium-sized African bovid faunal remains, such as those present in high frequencies within the FLK 22 assemblage, Lam and colleagues’ study (1999) represents a significant improvement, because it reports density values for blue wildebeest (Connochaetes taurinus). This species is relatively closely related to many bovid species found at FLK 22 and other African archaeological sites. The new bone density values for blue wildebeest provided by Lam and colleagues offer an important opportunity to reexamine the effects of density-mediated attrition on element frequencies at early hominin archaeological sites.
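A test of this kind, regressing observed element frequencies on reference density values, can be sketched as follows. The density and % MAU numbers below are illustrative placeholders, not the published Lam et al. (1999) or Olduvai values:

```python
import numpy as np

# Hypothetical bone density values and observed %MAU figures for six elements
# (illustrative numbers only, not drawn from any published reference sample).
density = np.array([0.49, 0.55, 0.68, 0.72, 0.81, 0.90])
pct_mau = np.array([15.0, 22.0, 45.0, 60.0, 78.0, 95.0])

# Least-squares fit of %MAU on density; a strong positive relationship is the
# signature of density-mediated attrition.
slope, intercept = np.polyfit(density, pct_mau, 1)
r = np.corrcoef(density, pct_mau)[0, 1]

# Residuals flag elements over- or under-represented relative to density
# alone; large positive residuals are candidates for selective transport.
predicted = slope * density + intercept
residuals = pct_mau - predicted
```

A significant, high correlation between density and % MAU indicates that attrition has conditioned assemblage composition; the residuals then identify elements whose frequencies density cannot explain.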
To test the effects of density-mediated attrition on assemblage composition at FLK 22 and other Olduvai localities, I have performed a series of linear regressions comparing the frequencies of elements (in % MAUs) reported by Bunn and Kroll (1986; FLK 22), Potts (1988; DK 3, DK 2, FLK NN 3, FLK NN 2, FLK 22, and FLK North 6), and Monahan (1996; HWK E1-2, BK, and MNK) with the density values from Lam and colleagues (1999) for individual elements or element segments. Here, one aspect of the FLK 22 assemblage warrants further examination in terms of analytical considerations. This assemblage is heavily dominated by cranial elements and teeth, which have very high bone-density values. The high frequency of teeth is especially problematic, because teeth are extremely durable and often most difficult to relate to hominin activities. Even excluding teeth, however, the frequency of cranial specimens is still extremely high relative to other elements. Binford (1981) saw this pattern as one of the more significant aspects of assemblage patterning, arguing that it resulted
from head collecting by scavenging hominins, a possibility still worth considering, since head collecting has been observed among a range of modern foragers (see also Binford 1984). This observation may be due partly to a bias in terms of identification, with cranial pieces being easier to identify at certain levels than postcranial elements, especially given the degree of fragmentation at the Olduvai sites. However, regardless of the cause, the extremely high frequency of heads presents problems for regression analyses comparing element frequencies with expectations based on density, creating a misleading bimodal data distribution. For this reason, I have conducted the series of analyses discussed here both including and excluding heads. Figures 6.1a, b, and c, respectively, show the regressions for the Bunn and Kroll (1986; no heads), Potts (1988), and Monahan (1996) data. These analytical results clearly show that faunal assemblages from FLK 22 and the other Olduvai localities are heavily influenced by density-mediated element destruction, contrary to certain earlier claims. Yet, there are some aspects of the faunal patterning that may signify hominin carcass exploitation and transport behavior. One of the interesting
Figure 6.1a Graph showing the relationship between maximum bone element density (after Lam et al. 1999) and the percentage MAU of bone elements at FLK 22 (data from Bunn and Kroll 1986, heads excluded)
Figure 6.1b Graph showing the relationship between maximum bone element density (after Lam et al. 1999) and the percentage MAU of bone elements at FLK 22 (data from Potts 1988, heads included)
aspects of these data is the relatively high frequency of humeri and tibiae. Figures 6.2a, b, and c show the studentized residual values for element frequencies plotted against the adjusted predicted values for the linear regressions for the Bunn and Kroll (1986), Potts (1988), and Monahan (1996) data. The raw residuals for humeri range between two and four times the frequencies predicted by the density-mediated attrition regression models. In addition, the one Olduvai locality where element frequencies and bone density are not correlated at statistically significant levels (FLK NN 2) has the highest frequency of humeri relative to other bones. Humeri and (to a lesser extent) tibiae are the only elements that occur in significantly higher frequencies than would be expected as a function of bone density. One key possibility for this pattern is that the elevated frequencies of these elements resulted from dynamics of carcass transport. Other elements appearing at greater than expected (though not statistically significant) frequencies include mandibles and radioulnae, likewise meat- and marrow-rich animal parts. This pattern may indeed suggest some level of selection for certain elements on the part of hominins during processes of
Figure 6.1c Graph showing the relationship between maximum bone element density (after Lam et al. 1999) and the percentage MAU of bone elements (data from Monahan 1996, heads included)
carcass dismemberment and subsequent transport. Such selection would have resulted in the accumulation of meat- and marrow-rich elements, especially humeri and tibiae, in significantly higher frequencies relative to other animal parts. In sum, it is clear that density-mediated attrition is an extremely important dynamic for all the Olduvai localities, especially FLK 22. In spite of this problem, certain meat- and marrow-rich bone elements appear in significantly higher-than-expected frequencies, especially humeri and tibiae. Although further analysis and other lines of information would certainly help to clarify the causes of this pattern of assemblage composition, it could have resulted from selective transport of elements according to their nutritional productivity. If true, this patterning would imply that early hominins, through one set of foraging tactics or another, had access to rich bone elements and engaged in forms of transport behavior that resulted in their accumulation in higher frequencies relative to other animal parts. This set of findings contributes to a picture of early hominins at Olduvai having consistent access to high-ranked faunal resources.
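The studentized-residual screening behind Figures 6.2a, b, and c can be sketched as follows, with hypothetical density and % MAU values; one deliberately over-represented element stands in for the humerus pattern described above:

```python
import numpy as np

# Hypothetical regression of %MAU on bone density (illustrative values only);
# index 6 is deliberately inflated to mimic an over-represented element.
density = np.array([0.55, 0.68, 0.72, 0.81, 0.49, 0.90, 0.62, 0.77])
pct_mau = np.array([30.0, 40.0, 44.0, 52.0, 25.0, 58.0, 85.0, 47.0])

# Ordinary least squares via a design matrix with an intercept column.
X = np.column_stack([np.ones_like(density), density])
beta, *_ = np.linalg.lstsq(X, pct_mau, rcond=None)
resid = pct_mau - X @ beta

# Internally studentized residuals: residual / (s * sqrt(1 - leverage)).
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
leverage = np.diag(H)
s = np.sqrt(np.sum(resid ** 2) / (len(density) - 2))
studentized = resid / (s * np.sqrt(1 - leverage))

# Elements with |studentized residual| > 2 exceed density-based expectation.
outliers = np.where(np.abs(studentized) > 2)[0]
```

With these inputs only the inflated element is flagged, mirroring the way humeri and tibiae stand out above the density regressions at the Olduvai localities.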
Figure 6.2a Graph plotting the studentized residual values against the adjusted predicted values for the regression analysis of the Bunn and Kroll (1986) data
Figure 6.2b Graph plotting the studentized residual values against the adjusted predicted values for the regression analysis of the Potts (1988) data
Figure 6.2c Graph plotting the studentized residual values against the adjusted predicted values for the regression analysis of the Monahan (1996) data
A Comparative Approach to Animal Part Representation and Hominin Faunal Exploitation

As mentioned, it is certainly problematic that so much discussion and argumentation about early hominin faunal exploitation have centered on the FLK 22 assemblage alone. As Leakey (1971) recognized from the beginning, this assemblage is unique in terms of its size, age, taphonomic context, and degree of early hominin involvement. It cannot, however, be taken as a stand-in for all Lower and Middle Pleistocene hominin behavior everywhere, since there was obviously a great deal of variability in terms of geography, ecological context, and hominin foraging behavior across the Old World Paleolithic. The recognition and acceptance of the enormous cultural variability associated with modern foragers and its diverse environmental causes are at long last widespread (Kelly 1995; Binford 2001). Behavioral variability among chimpanzees (Pan troglodytes) across various ecological contexts in eastern and western Africa is also now well known (McGrew, Baldwin, and Tutin 1981; see also papers included in Boesch, Hohmann, and Marchant 2002). Furthermore, as discussed in the previous chapter, dynamics of environmental context and predator-prey ecology are crucial determining factors in terms of the hunting and scavenging behavior of
large-bodied carnivores. Recognizing this fact, we may compare a range of Lower and Middle Pleistocene faunal assemblages as an approach for documenting and analyzing the potential of both ecological and chronological variation.

Materials, Methods, and Analysis

Since FLK 22 may be unusual in the extent of hominin involvement in its accumulation, nonhominin dynamics of taphonomy may have played greater roles in the formation of most other important Lower and Middle Pleistocene faunal assemblages. When these sites are examined in comparative terms, questions emerge about how we may go about isolating the hominin behavioral signal inherent within assemblages that have had more extensive nonhominin input and how we go about identifying the variability expressed within these behavioral signals. One of the fundamental premises of the comparative study presented here is that patterns of element frequencies within faunal assemblages stem from a complex but limited set of interrelated variables, including aspects of geological taphonomy, modification by nonhuman carnivores, and various kinds of hominin activity. Furthermore, recognizing variability in terms of both hominin and nonhominin factors of assemblage composition necessitates the examination of a large sample of faunal assemblages spanning a wide range of chronological, geographical, and geological contexts. Table 6.1 presents the sample of faunal assemblages examined in this analysis (along with associated references). The sample includes seven ESA or Lower Paleolithic assemblages, four MSA or Middle Paleolithic assemblages, and seven control samples for use as frames of reference.
The control samples include two European Upper Paleolithic assemblages associated with specialized hunting and domestic processing/consumption; surface collection samples from Amboseli National Park, Kenya; a Kua campsite from the Kalahari; a Kua scavenged kill site; and values predicted by density alone (see Table 6.1 for references). The control samples represent a wide range of known human behavioral and taphonomic contexts and provide frameworks for the interpretation of the archaeological assemblages. The nature of this study necessitates the use of multivariate pattern-recognition statistical methods capable of reducing a large dataset with many cases and variables, so principal components analysis (PCA) was again selected as an appropriate form of factor analysis. The frequencies of elements were standardized by calculating % MAUs from published element counts. Given the constraints of published element frequencies for the sites in this analysis, I examined only Size Class 2 and 3 animals
Table 6.1 List of zooarchaeological cases considered in this comparative analysis

Zooarchaeological Case          Location      Age     Reference
Olduvai FLK 22                  Tanzania      ESA     Bunn and Kroll 1986
Duinefontein                    South Africa  ESA     Klein et al. 2007
Ambrona cervid                  Spain         LP      Klein 1987
Torralba cervid                 Spain         LP      Klein 1987
Untermassfeld cervid            Germany       LP      Kahlke and Gaudzinski 2005
Gesher Benot Ya’aqov            Israel        LP      Rabinovich et al. 2008
Hayonim                         Israel        LP      Stiner 2005
Kobeh                           Iran          MP      Marean and Kim 1998
Mauran                          France        MP      Farizy et al. 1994
Klasies River Size II           South Africa  MP      Klein 1976
Klasies River Size III          South Africa  MP      Klein 1976
Porc Epic                       Ethiopia      MP      Assefa, Lam, and Mienis 2008
Le Flageolet                    France        UP      Enloe 1993
Pincevent                       France        UP      David and Enloe 1993
Kua camp                        Botswana      Modern  Bartram and Marean 1999
Kua kill                        Botswana      Modern  Bartram and Marean 1999
Amboseli Size II                Kenya         Modern  Faith and Behrensmeyer 2006
Amboseli Size III               Kenya         Modern  Faith and Behrensmeyer 2006
Wildebeest density predictions  n/a           Modern  Lam, Chen, and Pearson 1999
from each assemblage. In the cases of Klasies River and the Amboseli surface collections, I treated the two size classes separately owing to their sufficiently large sample sizes. In the cases of Torralba, Ambrona, and Untermassfeld, I included only the cervids in order to adhere to the size class cutoff. From studies of these assemblages, I composed a dataset of % MAUs for each element for the purpose of exploring patterns of element frequencies. The main purpose of PCA as a data-reduction tool is to find patterns of covariation in the frequencies of elements. In other words, I hope to establish which elements appear together in assemblages as interrelated groups. By establishing statistical relationships between various elements within faunal assemblages, I hope to learn about the variables responsible for selective destruction or introduction of bones (that is, why some elements appear in elevated frequencies and others do not). I also use the PC scores for each variable for the purpose of clustering element frequencies. Finally, I also calculated a series of regression scores for each faunal assemblage by performing a varimax rotation of the PC solution. These scores, in essence, indicate how strongly each assemblage relates to a given PC, providing a way of evaluating which assemblages have relationships with the groupings of elements represented by the PC
loadings. These scores also are suitable for clustering the archaeological assemblages along with the control samples with known characteristics as a method for making inferences about past dynamics of site formation and hominin behavior.

Results

In this analysis, the PCA produced four PCs with eigenvalues greater than 1.0. The first PC explains 47.8% of the variance, the second PC 21.4%, the third 9.8%, and the fourth explains 7.2%. The PC loadings for each variable from the rotated solution are shown in Table 6.2, and the eigenvalues for each PC are given in Table 6.3. Figures 6.3 and 6.4 show a cluster analysis based on the PC loadings for various elements and a generalized quadruped skeleton representing the affiliation of elements with the four major PCs.

Table 6.2 Rotated PC matrix for comparative faunal assemblage data

Element      PC 1    PC 2    PC 3    PC 4
cranium      .312    .075   –.041    .860
mandible     .068    .654    .297    .405
cervical     .741    .056    .563    .097
thoracic     .882   –.037    .404   –.034
lumbar       .916   –.010    .347   –.006
innominate   .732    .252    .296    .073
sacrum       .961    .044    .084    .172
ribs         .956    .009    .060    .153
scapula      .326    .003    .907    .015
humerus      .437    .693    .325   –.246
radioulna    .565    .746    .062    .085
metacarpal  –.063    .900   –.060    .026
femur        .689    .454   –.220   –.327
tibia        .439    .567   –.140   –.599
metatarsal  –.440    .749   –.226   –.172

Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization. Rotation converged in 9 iterations.

The results of this part of the analysis suggest some important axes of variation in terms of the frequencies of bone elements in Lower and Middle Pleistocene animal bone assemblages. Foremost, PC 1 is composed of cervical, thoracic, lumbar, and sacral vertebrae, ribs, pelves, and femurs. The reason for the dominance of PC 1 has to do with the relatively uniform rarity of these elements in the assemblages
Table 6.3 Eigenvalues and percentages of variation explained for PCA of comparative faunal assemblage data

            Initial Eigenvalues            Extraction Sums of Sq. Loadings   Rotation Sums of Sq. Loadings
Component   Total   % of Var.   Cum. %     Total   % of Var.   Cum. %        Total   % of Var.   Cum. %
1           7.173   47.817      47.817     7.173   47.817      47.817        6.122   40.815      40.815
2           3.215   21.432      69.249     3.215   21.432      69.249        3.439   22.928      63.743
3           1.476    9.837      79.086     1.476    9.837      79.086        1.843   12.283      76.026
4           1.078    7.184      86.270     1.078    7.184      86.270        1.537   10.244      86.270

Extraction Method: Principal Component Analysis.
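The extraction-and-rotation workflow summarized in Tables 6.2 and 6.3 (standardize the % MAU matrix, extract PCs, retain those with eigenvalues above 1, rotate with varimax) can be sketched with synthetic data. The varimax routine below is a generic textbook implementation, not the specific software used in the study, and the input matrix is a random stand-in for the published dataset:

```python
import numpy as np

def varimax(loadings, tol=1e-6, max_iter=100):
    """Orthogonal varimax rotation (gamma = 1) of a loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L * (np.sum(L ** 2, axis=0) / p)))
        R = u @ vt
        crit = np.sum(s)
        if crit - crit_old < tol:
            break
        crit_old = crit
    return loadings @ R

rng = np.random.default_rng(0)
# Synthetic stand-in: 19 assemblages x 15 element %MAU values.
X = rng.random((19, 15)) * 100

Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize columns
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
eigvals = s ** 2 / (len(X) - 1)              # eigenvalues of the corr. matrix
explained = eigvals / eigvals.sum()          # proportion of variance per PC
loadings = Vt.T * np.sqrt(eigvals)           # element loadings on each PC

k = int(np.sum(eigvals > 1.0))               # Kaiser criterion: keep eig > 1
rotated = varimax(loadings[:, :k])           # rotated loading matrix
```

Because the rotation is orthogonal, each element's communality (its summed squared loadings across the retained PCs) is unchanged; rotation only redistributes variance among components to sharpen interpretation.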
[Dendrogram (average linkage between groups, rescaled distance cluster combine) grouping the fifteen skeletal elements by their PC loadings.]
Figure 6.3 Hierarchical cluster analysis of PC regression scores for element frequencies based on the analysis of selected animal bone assemblages
Figure 6.4 Carcass segments associated with PC regression clusters
considered in this analysis—the only exception being the Kua campsite (Bartram and Marean 1999). One clear possibility for this pattern is that it may stem from density-mediated attrition, because these elements are generally the least dense in the skeleton (Binford and Bertram 1977; Binford 1981; Lyman 1994; Lam, Chen, and Pearson 1999). The femur is an interesting partial exception to the low-density generalization concerning this PC (although the femur is also far less dense than the densest elements). Another interesting aspect of this PC is that all the elements are connected, raising the potential for some (for example, femora) to be included on this list as “riders.” The second PC is composed of mandibles, humeri, radioulnae, tibiae, and metapodials. Although many of these elements are quite dense (for example, mandibles and metapodials), this fact offers only a partial explanation of their high frequencies at sites, given the high frequency of less-dense elements (for instance, humeri, radioulnae, and tibiae). Because these elements share a common richness in meat and/or marrow, a combination of explanations may make the most sense. As discussed in the previous section, the elevated frequencies of these elements may have been the result of selective transport and introduction by hominins. The issue of “rider” elements is potentially problematic here, because the joints connecting the lower limb elements and foot bones are extremely strong. This suite of elements is by far the most difficult to interpret, as will become clearer when we discuss the clustering of assemblages considered in this analysis. The third PC is defined mainly by the presence of scapulae, which are rare in all the archaeological assemblages except Klasies River. This meat-rich element is characterized by low density, which at least partially accounts for its general scarcity within these assemblages.
It is also commonly destroyed by the activities of large nonhominin carnivores, although it is interesting that the other assemblage with a relatively high frequency of scapulae is the Kua scavenged kill site (Bartram and Marean 1999). One possibility here is that this element is present in elevated frequencies at sites where large carnivores were few in number, as is the case in the Kalahari today. Note also that the scapula acts as a biomechanical strut in the joint connecting the forelimbs and the axial skeleton, which is the weakest joint in the skeleton. As such, scapulae are weakly connected and easy to separate from both the humerus and the rest of the axial skeleton. Thus, as meat-rich and easily removable elements, scapulae are targeted as sites of disarticulation by both modern human hunters and nonhominin carnivores. The cranium is highly significant in terms of both density-mediated
attrition and its richness in fat from the brain (Binford 1981, 1984; Bunn and Kroll 1986; Blumenschine 1988). A great deal of ink has been spilled outlining the facts that (1) this dense element is difficult for nonhominin carnivores to destroy or exploit, (2) it is an important potential source of fatty brains for hominins, (3) it is resistant to various aspects of taphonomic destruction, and (4) it is the most easily identifiable element during the process of analysis. It is also notable that the mandible loads weakly on PC 4, which is not surprising given its high density and attachment to the cranium. Thus, PC 4 may be taken broadly to represent heads. It is important to recognize that high frequencies of these elements characterize many Lower and Middle Pleistocene assemblages and that this fact carries important implications in terms of taphonomy, identification bias, and hominin behavior. Despite this multivariate complexity, I believe that these clustered packages of elements offer useful units of analysis for examining the characteristics of archaeological faunal assemblages.

Clustering Faunal Assemblages

The PCA just described also offers a basis for comparing archaeological and actualistic assemblages through the calculation of PC regression scores for each individual case, as a means of parsing out patterns of hominin behavior from the complexity of taphonomic processes. Here, some might complain that this general approach combines assemblages with different formation contexts and degrees of hominin involvement. In other words, I could be accused of comparing apples and oranges. I would respond that this technique offers a way of confronting problems of taphonomic complexity through the identification of sites that stand out from the background of equifinality, as well as the kinds of modern control samples that they most resemble.
As the results presented in the following pages show, certain major sites do, in fact, differ in subtle but important ways from the products of purely nonhominin processes of accumulation and preservation. Furthermore, the nature of these differences holds promise for understanding variability in hominin faunal exploitation over space and time. Let me begin by briefly explaining the statistical procedures used. Through varimax rotation of the PC solution, I generated a series of regression scores for the individual faunal assemblages for the purposes of clustering. In essence, these scores indicate how strongly each faunal assemblage relates to each of the four PCs with eigenvalues greater than 1; Figure 6.5 shows a hierarchical cluster analysis based on PC regression scores. These results show a series of salient relationships between the archaeological and control samples.
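The clustering step can be sketched as follows, assuming SciPy is available; the scores here are random placeholders standing in for the actual PC regression scores of the nineteen cases:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
# Synthetic stand-in: one row of regression scores (four rotated PCs) per
# assemblage; the published scores are not reproduced here.
scores = rng.normal(size=(19, 4))

# Average linkage between groups on Euclidean distances, matching the
# dendrogram method named in the figures.
Z = linkage(scores, method="average", metric="euclidean")

# Cutting the tree into four groups assigns each assemblage to a cluster,
# analogous to reading cluster membership off the dendrogram.
groups = fcluster(Z, t=4, criterion="maxclust")
```

Plotting `Z` with `scipy.cluster.hierarchy.dendrogram` reproduces the kind of diagram shown in Figure 6.5, with control samples of known formation context serving as anchors for interpreting the archaeological cases that join their branches.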
[Dendrogram (average linkage between groups, rescaled distance cluster combine) of PC regression scores for all nineteen cases, grouping the assemblages with PCs 1–4 as discussed in the text.]
Figure 6.5 Hierarchical cluster analysis based on PC regression scores for various animal bone assemblages
The only case that has a strong relationship with PC 1 is the Kua campsite, with the Kua scavenged kill site having a weaker relationship. The characteristic of this site that causes it to relate so strongly with PC 1 is its relative evenness; in other words, it has close to the same frequency for each element. This pattern has a number of possible explanations: First, the observation of a modern forager campsite means that the dynamics leading to density-mediated attrition had not been acting on preservation at the time of the observation by the ethnoarchaeologist (Bartram and Marean 1999). Second, we know that these faunal remains were the result of hunting and the introduction of whole carcasses to the campsite. Similarly, the Kua scavenged kill site represents nearly whole carcasses that were subsequently acted on by human scavengers and where density-mediated attrition was not very strongly expressed. Perhaps the most important lesson to be learned from this comparison is that none of the archaeological assemblages resembles the Kua campsite assemblage very closely, and it seems that taphonomic processes resulting in density-mediated attrition are mostly responsible for this pattern. Although this conclusion may not be shocking, I argue that further consideration is warranted when one attempts to visualize the complex factors conditioning the formation and preservation of archaeological faunal assemblages. In many ways, the cluster of faunal assemblages associated with PC 2 is the most difficult to interpret. This cluster includes FLK Zinj,
Duinefontein, Mauran, and Pincevent. To some degree, density still offers a partial explanation of this pattern, since many of the sites show evidence for significant degrees of density-mediated attrition, especially FLK 22 (discussed earlier in this chapter; see also Faith and Behrensmeyer 2006). In contrast, density-mediated attrition does not offer a good explanation of the observed pattern at Pincevent, where several nondense elements are preserved in high frequency (David and Enloe 1993). Here, differential transport is the most parsimonious explanation, with some axial elements apparently left at kill sites and the PC 2 elements transported to Pincevent—a seasonal home base residential camp and jumping-off point for logistically structured hunting trips targeting caribou migration routes. Indeed, some amount of selective transport and/or destruction of elements by hominins makes sense for the other sites as well. Even Binford (1981) noted that several aspects of the observed pattern at FLK 22 cannot be explained in the absence of modification by hominins, suggesting instead that marrow-rich elements were selectively destroyed by hominins in the process of marrow extraction. Given the high frequencies of the elements constituting PC 2 at these sites and the relationship with Pincevent, it seems we should also keep an open mind toward some aspects of selective transport as an explanation of the patterns present at such sites as FLK 22. The only archaeological site associated with PC 3 is that of Klasies River, along with the control sample of the Kua scavenged kill site. Klasies River stands out from the other archaeological sites because it has relatively low frequencies of crania and mandibles (at least for Size Class 2 and 3 animals, in contrast with the larger size classes). Klasies River is also unique among the archaeological sites for its extremely high frequency of scapulae, which is why it groups with PC 3. 
This aspect of patterning at Klasies River is its main point of similarity with the Kua scavenged kill site, which otherwise has a much higher frequency of heads. It is difficult to draw firm conclusions concerning the high frequency of scapulae in these two cases, and a strong possibility exists that the two share this feature for different reasons. For hominins, the scapula represents a meaty element at the articulation between the front limbs and the cranial end of the tenderloin. For this reason, it receives disproportionate attention from butchers in a broad range of modern and archaeological contexts, which may explain its high frequency at Klasies River. However, it is a very delicate element (paper thin in places) and is frequently destroyed by carnivores and other agents of density-mediated attrition. As discussed, the Kua scavenged kill site shows relatively limited manifestations of density-mediated attrition, and this fact may explain the elevated frequency of scapulae, along with that of the other less dense axial elements.
In contrast, the sites associated with PC 4 are the easiest to interpret. This component represents high frequencies of crania and mandibles and explains very little of the total variation in element frequencies at these sites. The simple reason for this situation is that most of the sites have very high frequencies of heads, which is certainly not a new finding for Paleolithic zooarchaeology. In terms of the control samples, this fact is manifested in the grouping of the Amboseli assemblages, and of the element frequencies predicted by density, with this component. Although heads certainly have a great deal of economic value for their hominin consumers, this clustering demonstrates that there is relatively little in terms of element frequency at Gesher Benot Ya’aqov, Hayonim, Untermassfeld, Torralba, and Ambrona that differs from what might be thought of as “background” patterning. This is not to say that hominins did not interact with these bones—the cut mark and percussion morphologies on them are sufficient to rule that out. This pattern simply makes clear that equifinality is once again a significant problem in terms of the construction of inferences based on assemblage composition in these cases.

The final cluster of sites pairs Le Flageolet and Kobeh Cave and does not directly relate to any of the PCs. In fact, these two sites cluster together by virtue of their negative regression scores for PCs 3 and 4, which suggests that they are like Klasies River in having low frequencies of heads but unlike it in terms of their low frequencies of scapulae. They also share relatively high frequencies of limb bones, especially tibiae, which is in fact the most common element by MNE at both sites. Kobeh Cave is an interesting site to consider here, because it has received different methodological treatment from the other archaeological cases in the sample through the refitting of long bone shaft fragments (Marean and Kim 1998).
It is a distinct possibility that Klasies River might share the same general pattern with Kobeh if the same methods were applied there, and the high frequency of scapulae relative to humeri at Klasies River may point in this direction. Le Flageolet is regarded as a Solutrean residential campsite, where animal parts were processed and consumed away from initial kill locations (Enloe 1993). The emphasis on limb bones and the rarity of heads may be a signature of a particular kind of differential transport, which may also implicate Kobeh Cave as a residential location of consumption.

These patterns of faunal assemblage clustering and relationships with control assemblages offer some intriguing starting points for comparison and explanation. Perhaps the most striking aspect of this analysis is that it succeeded in clustering sites with common geographical, chronological, and taphonomic characteristics. This finding suggests that the patterns of element representation do, in fact, relate to a manageable set of dynamics
in terms of both hominin transport and nonhominin conditions of site formation and are thus useful for building inferences about the past.

Implications for Hominin Faunal Exploitation and Change over Time

Like the Olduvai Gorge case study, this multivariate comparative analysis suggests that most, if not all, major archaeological faunal assemblages are heavily conditioned by nonhominin taphonomic forces in terms of their accumulation and preservation. The sheer magnitude of the difference between the Kua campsite and all other archaeological assemblages underscores the fact that there are no straightforward cases that may be considered unambiguous examples of residential camps provisioned with hunted game that would be discernible on the basis of their faunal remains alone. Clearly, Paleolithic zooarchaeologists must continue the current program of disciplined research on the taphonomy of archaeological faunal assemblages, as well as the actualistic research on which this set of inferences is based. It is equally clear that certain sites stand out in terms of characteristics that may indicate strong hominin roles in their accumulation. Once again, FLK 22 seems unique for its age with respect to its evidence for hominin activities relative to other taphonomic dynamics. Against the backdrop of other Pleistocene assemblages, the animal-part profile at FLK 22 seems to indicate that hominins selectively introduced meat- and marrow-rich bone elements. In other words, early hominins at Olduvai acquired highly productive animal parts and systematically accumulated them at certain landscape locations. Again, this pattern is consistent with foraging activities focused on highly productive and high-ranked subsistence resources, and I will return to the implications of transport for site use dynamics in the next chapter, which deals with bone modification.
Perhaps the most interesting aspect of this analysis is the similarity in animal part profiles between FLK 22 and Pincevent. Although it is important to reiterate that certain aspects of the similarity between these two assemblages may have different causes, I am increasingly confident that they resemble one another at least partly because of dynamics of element transport. Magdalenian hunters at Pincevent, faced with the transport of caribou parts from the locations of mass kills (such as Verberie), made butchery decisions targeting the most efficient animal parts and transported those back to their residential bases. I would argue that Olduvai hominins, while almost certainly not faced with the same extreme conditions in terms of transport distance, targeted similar sets of elements during butchery activities and for largely the same reasons having to do with meat and marrow richness. Thus, the animal-part
profile evident at FLK 22 represents a very provocative archaeological pattern in terms of its implication for early hominin faunal exploitation. This pattern shows that faunal exploitation behavior focused on resources with very high return rates extends back to the very earliest periods of Pleistocene hominin evolution in the evolutionary core of the African Rift Valley. Although later sites differ in the specifics of the frequencies of certain elements, likely because of local ecological and taphonomic contingencies, it would also appear that they are consistent with very similar patterns of faunal exploitation. (Aspects of these similarities will become clearer in the discussion of butchery patterns presented in the next chapter.) Thus, this multivariate analysis would seem to suggest broad continuity in the patterns of medium-sized faunal exploitation across the entirety of the Lower and Middle Pleistocene.
How and When Did Faunal Acquisition Patterns Change?

Examining the nature and timing of later prehistoric shifts in faunal exploitation offers an alternative perspective on the patterns apparent during the ESA and MSA. The first major shifts seem to involve an increasing focus on small prey and other low-ranked and/or labor-intensive food resources. The time period in which this set of dynamics has received the most attention is the terminal Pleistocene and early Holocene, as an aspect of “post-Pleistocene adaptation” (Binford 1968). As a global phenomenon, this collective shift in foraging patterns witnessed a dramatic rise in the exploitation of small terrestrial game, fish, shellfish, and fowl, as well as of low-quality plant food resources with either annual availability or high turnover rates. Thus, there is a degree of irony in the fact that the achievement of increasing modernity was not marked by big game hunting, as has often been implied, but rather by a shift away from the kinds of predatory foraging behavior indicated by the evidence presented in this chapter.

These shifting foraging tactics are evident in many later prehistoric transitions. For example, in southern Africa it has long been argued that the transition from the MSA to the LSA was marked by much more intensive exploitation of small terrestrial game and of aquatic resources such as shellfish (Klein 1976, 1989; Klein and Cruz-Uribe 2000; Klein et al. 2004; Steele and Klein 2008; Klein and Steele 2013). From a technological standpoint, LSA faunal acquisition was apparently aided by poison-arrow hunting weapons, as well as by snares and other traps. Bows and poison arrows appear to have served as technologies for reducing risk in the hunting of larger game. More controversially, Klein (1989) has suggested that poison-arrow hunting also opened up possibilities in terms of the hunting of dangerous game animals, such
as cape buffalo and other Size Class 6 species (contra Faith 2008). Snares and other traps effectively reduce the handling costs associated with the acquisition of small game, allowing small terrestrial fauna to be taken more frequently, more reliably, and in larger quantities. Elsewhere in sub-Saharan Africa, communal net hunting serves similar functions in terms of the targeting and mass-collecting of small game, which might not otherwise constitute a viable subsistence resource (Lupo and Schmitt 2002, 2005; see also Lupo 2007 for a more general discussion). Thus, the southern African LSA, and especially its Holocene component, is characterized by various trajectories of subsistence intensification facilitated by more labor-intensive economic activities, lower overall foraging return rates, greater foraging risk, and increasingly elaborate tools and weapons.

Similar shifts in foraging behavior may be found around the globe at the boundary between the Pleistocene and Holocene. On the eastern Mediterranean coast, Stiner and colleagues (2000) found patterns astonishingly similar to those discussed for the African LSA associated with the later Upper Paleolithic and Epi-Paleolithic. They argue for pulses of population growth and concomitant subsistence intensification based on periods characterized by reductions in the size of shellfish and dramatic increases in the exploitation of small terrestrial game. Similar trends are also discernible in the Mesolithic of northwestern Europe (Price 1991), the Jomon of Japan (Habu 2004), the Archaic period of North America (Bayham 1979), the Initial Period of western South America (Jerardino et al. 1992), and even various late Holocene contexts of Australia (Hiscock 1994). Although these forms of subsistence intensification are most obvious at the boundary between the terminal Pleistocene and the early Holocene, they have also occurred in some isolated Upper Pleistocene contexts.
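The handling-cost logic invoked above for snares and traps can be expressed in the standard currency of diet-breadth models: the post-encounter return rate, or energy divided by pursuit plus processing time. All numbers below are illustrative assumptions, not published measurements; the point is only that collapsing pursuit time raises a small animal's return rate and thus its rank.

```python
# Post-encounter return rate (kcal per hour of handling), the standard
# currency in diet-breadth models. Values are illustrative assumptions
# meant only to show how snares change the ranking of small game.
def return_rate(kcal, pursuit_h, processing_h):
    return kcal / (pursuit_h + processing_h)

# A small animal taken by active pursuit: high pursuit cost per capture.
pursued = return_rate(kcal=1_500, pursuit_h=2.0, processing_h=0.5)
# The same animal taken from a snare line: pursuit time collapses to the
# time spent checking traps, apportioned per capture.
snared = return_rate(kcal=1_500, pursuit_h=0.25, processing_h=0.5)

print(f"pursued: {pursued:.0f} kcal/h")   # pursued: 600 kcal/h
print(f"snared:  {snared:.0f} kcal/h")    # snared:  2000 kcal/h
```

Under these assumed numbers the same prey animal more than triples in return rate once trapped rather than pursued, which is the sense in which traps make small game a viable subsistence resource.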
I have argued that certain aspects of the later MSA archaeological record from the Howiesons Poort show evidence for such forms of subsistence intensification, especially in terms of an increasing focus on a broad spectrum of small game (McCall and Thomas 2012). Others have argued for the use of snares, traps, or other similar technologies for dealing with the exploitation of small terrestrial fauna (Clark and Plug 2008; Wadley 2010). Likewise, Stiner and colleagues (2000) have suggested that earlier Paleolithic periods on the eastern Mediterranean coast were also characterized by forms of subsistence intensification similar to those witnessed during the terminal Paleolithic, although much reduced in amplitude. Even late Neanderthals in Western Europe seem to have undergone some mild forms of subsistence intensification shortly before the arrival of anatomically modern humans (Barton 2000; Brown, Finlayson, and Finlayson 2011). Although such precocious
occurrences of subsistence intensification are rare and much milder than those seen during the period of true post-Pleistocene adaptation, there is some evidence for important shifts in global hominin foraging ecology during the Upper Pleistocene, becoming more profound and widespread over time.

The next obvious question concerns what caused these forms of change to occur when and how they did. The canonical answer to this question with regard to the period of global post-Pleistocene adaptation, which would still seem largely valid, is that population increases in combination with richer postglacial environments encouraged new forms of subsistence organization (Binford 1968; Flannery 1973; Stiner, Munro, and Surovell 2000; Klein et al. 2004; Klein and Steele 2013). I would argue that this perspective (excepting its environmental dimension) also holds much promise in explaining earlier occurrences of intensified foraging economies. By 60 ka, early modern human populations were clearly expanding across the Old World, while technological changes were occurring in various regions at a rapid pace. It seems likely that these population movements and various forms of technological innovation accompanied new forms of subsistence behavior brought about by more densely packed populations. If true, this scenario holds some important implications for earlier hominin populations living during the Lower and Middle Pleistocene. For one thing, it corroborates the view that hominins during these periods lived at much lower population densities than did later Upper Pleistocene and early Holocene foragers. In addition, it brings into focus my view that increasingly sophisticated technologies and foraging tactics were not purely the result of hominins possessing ever more brain power but rather were the outcome of hominin populations living at higher densities and in overexploited environments.
Finally, it illustrates certain fallacies having to do with the role of big game hunting in hominin evolution. As a high-ranked subsistence activity typical of hominin populations living at low densities and in less heavily exploited environments, the hunting of medium-sized prey animals with simple technologies was commonplace. It is therefore inappropriate to take this particular form of behavior as a proxy for similarity with modern human foragers. If anything, it suggests that Lower and Middle Pleistocene hominin foraging ecology was organized in fundamentally different ways compared with that observed among modern forager groups.
Afterthoughts on Hunting Tactics and Technologies

The models of mobility, settlement, and technological organization presented in the previous section hold a few interesting implications
for the hunting tactics and weapons employed by Pleistocene hominins. To this point, I have argued that (1) the transition from the ESA to the MSA represented a major technological reorganization caused by new patterns of landscape use; and (2) there was not a major shift in faunal acquisition accompanying the transition from the ESA to the MSA, but instead hominins engaged in a similar set of economic activities tightly focused on resources with high return rates. If this is the case, we must ask how hunting tactics may have changed (or not changed) across this important organizational transition.

One simple hunting tactic that has received a great deal of attention over the last decade is persistence hunting (Bramble and Lieberman 2004; Liebenberg 2006; Pickering and Bunn 2007; Lieberman and Bramble 2007; Lieberman et al. 2009). This idea posits that the unique endurance running capabilities of modern humans had their origins in earlier members of the genus Homo, who hunted medium-sized fauna through persistence tactics, essentially running prey animals to death. It is indeed true that, in comparison with most other predators and prey animals, humans are capable of maintaining moderate running speeds over extremely long distances. In this regard, Lieberman and Bramble (2007) see this evolutionary process as responsible for modern human marathon-running capabilities. This perspective is attractive because it offers explanations for certain aspects of modern human anatomy and physiology, as well as for the only form of athletic behavior in which humans exceed other animal species.

This perspective is not without its detractors, however. For example, Pickering and Bunn (2007) have raised a number of concerns with this hypothesis (see also Lieberman and Bramble 2007 for rebuttal). On the one hand, they argue that persistence hunting would have required a knowledge of tracking that exceeded the cognitive capabilities of early Homo.
On the other hand, Pickering and Bunn have also argued that persistence hunting would have been tremendously inefficient relative to other hunting tactics. Furthermore, they offer these points as explanations for why persistence hunting is so rare among modern forager groups. It is true that persistence hunting is relatively unusual among modern forager groups, who generally employ complex technologies in targeting game of various sizes. However, I see other reasons for this pattern than those argued for by Pickering and Bunn (2007). I agree with them that persistence hunting is inefficient in some fundamental ways, yet I see it as being inefficient largely with respect to issues of carcass transport and the feeding of consumers at residential camps. The energetic costs of a persistence hunt conducted by a single hunter are trivial compared with the total energetic return of even a small prey animal. One might
imagine a lone hunter chasing an antelope on the African savanna and killing the antelope by bringing it into a state of hyperthermia after some considerable running distance. The hunter may have consumed at most a few thousand calories, but the antelope contains perhaps hundreds of thousands of calories. The hunter could then consume that animal where it died and greatly exceed the metabolic costs involved in its pursuit. The problem comes in returning the animal to the hunter’s residential camp, where other consumers await a meal. To begin with, a single hunter is quite limited in the amount of meat he/she can transport, and persistence hunting usually has the effect of driving prey animals long distances away from residential camps. Adding more hunters/porters to the equation increases bulk transport capabilities but also radically increases total energetic costs. For these reasons, having a number of hunters participate in persistence hunts, in combination with subsequent transport costs, makes this hunting tactic inefficient relative to others involving complex technologies, whereby corporate labor may be coordinated and carcass transport dynamics optimized. It is for this reason, I suspect, that persistence hunting is usually practiced only in circumstances where other technologies are unavailable or resource failure is imminent. In other words, it is often practiced as a last resort. For example, Hitchcock and Bleed (1997) have discussed several examples of persistence hunting in the Kalahari conducted when poison for poison-arrow hunting was unavailable. Working in the same region, I have also heard stories from Ju/’hoansi veterans of the Namibian war for independence in the 1980s who recalled practicing persistence hunting when supply lines were cut off and ammunition was scarce.
Indeed, Hitchcock and Bleed have demonstrated that persistence hunting, although unusual, is quite effective in terms of its success rates (see also Liebenberg 2006). From my perspective, persistence hunting is much in line with other hunting tactics that typify processes of subsistence intensification, since it effectively reduces the risk involved in hunting activities but dramatically lowers return rates. This situation, however, would be quite different if routed foraging mobility and settlement systems were employed. Under such circumstances, the limitations and costs associated with carcass transport would be completely eliminated. While each consumer would be required to move to the location where a prey animal died, each would likely be rewarded with a meal that would greatly exceed the costs associated with mobility. Lieberman and colleagues (2007) estimate the caloric content of a 200 kg ungulate at 240,000 calories. Even assuming group sizes at the extreme end of the range of modern forager variation, this is clearly enough food to provide large meals for all consumers,
even taking into account the costs of moving the animal carcass. Thus, routed foraging may have provided an organizational context in which the persistence hunting of medium-sized prey animals was much more efficient than it is among modern forager groups, who employ various versions of home base use systems. This is not to say that I think all early members of the genus Homo hunted with persistence tactics all the time. All predators employ a mix of hunting strategies selected according to differing hunting conditions, including characteristics of targeted prey animals, proximity of other predators, elements of environmental context, time of day, and so on. Rather, I am proposing that persistence hunting was one of a number of tactics employed by ESA hominins, alongside various ways of ambushing and disadvantaging prey and other methods involving the use of simple hunting weapons. However, I would also agree that persistence hunting occurred on a relatively common basis and that this was conceivably the evolutionary basis for modern human endurance-running capabilities. Endurance running may have had uses for early hominins beyond persistence hunting alone. The capability of moving around the landscape rapidly is a common feature of large-bodied carnivores that scavenge regularly, since it facilitates both the monitoring of the landscape for scavenging opportunities and the arrival at prey carcasses before other competing carnivores (see Brantingham 1998 for further discussion). Endurance running would have been useful in moving through environments and between locations where carcasses commonly occur, such as between waterholes during the dry season. Likewise, once carcasses were detected (through the observation of vultures or the calls of other predators), endurance running would have been an important way of arriving at carcasses before other potentially competing carnivores. 
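The transport arithmetic behind the routed-foraging argument above can be made explicit. The 240,000 kcal figure follows the chapter's citation of Lieberman and colleagues; the other cost parameters are illustrative assumptions, not measured values, and serve only to show the orders of magnitude involved when consumers move to the carcass rather than the reverse.

```python
# Back-of-envelope energetics of a persistence hunt under routed foraging.
# UNGULATE_KCAL follows the chapter's cited estimate; all other parameters
# are illustrative assumptions.
UNGULATE_KCAL = 240_000    # estimated edible energy in a 200 kg ungulate
HUNT_COST_KCAL = 3_000     # assumed cost of the chase for one hunter
WALK_COST_PER_KM = 60      # assumed kcal/km for a consumer walking to the kill
GROUP_SIZE = 30            # toward the large end of forager group sizes
DISTANCE_KM = 15           # assumed distance from camp to the kill site

# Routed foraging: consumers move to the carcass instead of moving the carcass.
total_cost = HUNT_COST_KCAL + GROUP_SIZE * DISTANCE_KM * WALK_COST_PER_KM
net_return = UNGULATE_KCAL - total_cost
per_consumer = net_return / GROUP_SIZE

print(f"total cost: {total_cost} kcal")        # total cost: 30000 kcal
print(f"net return: {net_return} kcal")        # net return: 210000 kcal
print(f"per consumer: {per_consumer:.0f} kcal")  # per consumer: 7000 kcal
```

Even with deliberately generous cost assumptions, the kill yields several thousand kilocalories per consumer after mobility costs, which is the quantitative sense in which routed foraging eliminates the transport problem that makes persistence hunting inefficient for home-base foragers.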
In fact, such accounts fit well with ethnographic observations of modern forager scavenging. For example, O’Connell and colleagues describe Hadza scavenging in this way:

Scavenging is a standard part of Hadza foraging. . . . All Hadza monitor the flight of vultures and listen carefully to the nighttime calls of lions and hyenas. While hunting, adult men often visit areas where lions have been active, especially during the dry season, when they are likely to be operating near the same water sources. If they suspect a possible scavenging opportunity, Hadza abandon other activities and move quickly to the spot, often at a run. (1988b: 357; emphasis added)
If we fancy that hominins achieved early access to animal carcasses through this sort of scavenging activity, then it seems likely that endurance running would have played an important role. And once again, it is important to point out that this sort of scavenging activity takes
on different organizational dynamics and holds numerous advantages within routed foraging mobility systems. With the transition to the residential site use pattern beginning with the MSA, it may have been the case that persistence hunting and scavenging began to decline in frequency relative to other hunting tactics. In other words, like many modern forager groups, hominins may have staged hunting trips to more effective ambush and/or disadvantaging locations in the vicinity of residential camps. Under such circumstances, the locations of kill sites and the dynamics of carcass transport would have been predictable and known a priori, enabling hunters to make informed decisions in order to optimize foraging return rates relative to carcass transport and other handling costs associated with hunting activities. It is also interesting to suppose that the origins of stone-tipped spear technology (and perhaps also the decline in Acheulean biface production) may have reflected increasing predictability of foraging tasks related to hunting activities and the need for more effective hunting weapons within ambush and disadvantaging contexts. It seems likely to me that this shift in the frequency of differing hunting tactics may have led to increasing risk in hunting activities, perhaps lowering rates of hunting success in the aggregate over time as prey densities in the vicinity of residential camps declined. I would also argue that the shift to the home base pattern of site use offered benefits in terms of foraging efficiency that offset any potential decreases in hunting success rates. Furthermore, I suggest that it was declining foraging efficiency within the routed foraging framework that brought about the origins of the residential mobility system with the transition from the ESA to the MSA in the first place. 
I return to this issue in the concluding chapter and ultimately argue that a combination of demographic pressure and environmental fluctuation created conditions that favored the transition from routed foraging to home base site use. For the time being, my main point is that the types of prey and hunting technology possessed by early hominins changed very little across the bulk of the Pleistocene. Instead, this transition may have merely brought about a subtle shift in the kinds of hunting tactics employed. Finally, it is also worth considering the implications of the origins of complex hunting weapon technologies, especially the bow and arrow and spear-thrower, occurring late in the Upper Pleistocene. These are examples of what Binford and Binford (1966) have called the “predatory revolution.” As I discussed earlier, the origins of increasingly complex hunting weaponry should not be taken as simply an indication of increasing cognitive sophistication (cf. Shea 2006; Sisk and Shea 2009). Rather, it tightly maps onto the dynamics of hunting risk and subsistence intensification discussed earlier.
Complex weapons are defined by much higher costs in terms of manufacture and maintenance, while also generally requiring the use of much more labor-intensive hunting tactics. I have already mentioned poison-arrow hunting, which has predominated from at least the early Holocene onward in southern Africa. In certain cases, such complex technologies are associated with strategies of risk reduction in environments in which prey is scarce and in which other foraging resources are prone to occasional failure. In other cases, such technologies are associated with risk reduction in environments with seasonally abundant prey but long periods of scarcity or absence. These sorts of technological considerations have been discussed for the Nunamiut by Binford (1977, 1978a, 1978b, 1979, 1980) and Bleed (1986). Archaeological examples of complex hunting weapons used for the purposes of risk reduction within highly seasonal environments are also evident during the Upper Paleolithic of Europe (especially the Magdalenian; Straus 1993) and possibly also in the manufacture of modular hunting weapons during the Howiesons Poort industry (McCall 2006b, 2007; McCall and Thomas 2012). In short, the development of these technologies should be taken as a marker of increasing hunting risk associated with dynamics of subsistence intensification; these technologies were likely associated with much larger human populations often living in marginal environments.

In comparative perspective, the contrast between the simple hunting weapons and tactics possessed by various hominin species throughout most of the Pleistocene and the costly complex technology appearing late in the Upper Pleistocene clearly holds important implications for the organization of hominin foraging systems.
As I have argued up to this point, this pattern would seem to imply that the hominins responsible for producing ESA and MSA stone tool industries lived in much lower population densities than did humans living from the late Upper Pleistocene through the present. This would also seem to indicate that ESA and MSA hominins lived under conditions in which the reduction of risk during hunting activities was not a dominant concern. In turn, this pattern implies that these hominins lived in environments in which either game was abundant enough so that risk reduction was not a significant concern or where other highly ranked foraging resources offered reliable alternatives—or likely both. As I argue further in the conclusion, Upper Pleistocene hominins began shifting their technologies as populations became increasingly packed in core regions of environmental productivity and as groups were pushed into increasingly marginal environments, often with profound levels of seasonality and resource scarcity.
Implications of Lower and Middle Pleistocene Faunal Assemblage Composition 241
Conclusion
This chapter began with a discussion of problems associated with density-mediated attrition and faunal assemblage equifinality by examining the case study of the FLK 22 assemblage from Olduvai Gorge. On the one hand, this case study showed that density-mediated attrition is a serious problem in terms of making inferences on the basis of the FLK 22 fauna. On the other hand, this case study also demonstrated that, in spite of such issues of taphonomic ambiguity and equifinality, certain meat- and marrow-rich elements occur in higher-than-expected frequencies. I have argued that this pattern implies a significant degree of early hominin involvement in the accumulation of this animal bone assemblage and also that early hominins had access to carcasses with significant quantities of nutritional resources, one way or another. Rather than focusing on the implications of these findings for the timing and ordering of early hominin access to carcasses relative to other carnivores within the hunting-and-scavenging debate, I have instead focused on the implications of these patterns of early hominin faunal exploitation and the ways in which they represented an economic activity with extremely high return rates—in fact, much higher than those generally associated with more recent hunting activities utilizing complex hunting technologies and labor-intensive hunting tactics. In sum, this and other analyses of the FLK 22 fauna should not be seen as indications of the relative modernity or humanity of the early hominins that helped to accumulate it but rather as evidence of a subsistence economy focused squarely on food resources with very high return rates. In a second case study, I used multivariate statistics to compare faunal assemblage patterning across a range of geographical contexts and time periods.
This comparative study implied that, while there is a great deal of variability in terms of taphonomic dynamics and the nature of hominin involvement in the formation of various Paleolithic sites, there is a great deal of continuity in faunal assemblage patterning across the bulk of the Pleistocene. In terms of the transition from the ESA to the MSA, I have argued for a shift in the frequencies of various hunting tactics employed, especially in terms of the prevalence of ambush and disadvantaging tactics. In contrast, there does not appear to have been a revolutionary appearance of new hunting weapons or tactics until late in the Upper Pleistocene, long after the transition from the ESA to the MSA. I take the results of this comparative case study to indicate that most of the Pleistocene was characterized by early hominins gaining access to fleshy carcasses through hunting using simple weapons and scavenging. I have also argued that these patterns point to hominins having regular access
to highly ranked subsistence resources, likely implying the exploitation of productive environments by populations living in low densities. In contrast, the subsequent emergence of increasingly complex hunting weapons and various associated forms of subsistence intensification was likely brought about by combinations of demographic pressure, the occupation of increasingly marginal environments, and increasing environmental overexploitation. Contrary to traditional views of the emergence of complex weapons and other foraging technologies, I have argued that this shift should not be taken simply as an outcome of increasing cognitive sophistication but rather as occurring within the context of these broader sets of organizational changes. Of course, this chapter has approached various issues of early hominin carnivory in extremely broad ways while ignoring many productive avenues of investigation in terms of prey choice (mortality profiles, species profiles, sex ratios, seasonality, and so forth). Critics may also make the case that the evidence presented in this chapter is not adequate to the task of proving its major points. I certainly acknowledge aspects of these shortcomings. My goal, however, was to present these case studies as a way of demonstrating the productivity of moving beyond the use of early hominin faunal exploitation patterns as proxies for cognitive capability or cultural sophistication. Such information is clearly better suited for making inferences concerning the foraging ecology of hominin populations, which is, in turn, a much firmer basis for building evolutionary theory.
Chapter 7
Implications of Lower and Middle Pleistocene Hominin Bone Modification Patterns
Chapter 6 discussed assemblage composition as a source of information about hominin faunal exploitation. The other main source of information concerning such exploitation is the observation of bone modification patterns. As with the evidence concerning assemblage composition, patterns of bone modification are complicated to document and often rendered ambiguous by various processes of taphonomy. This form of evidence has also been used frequently to assess the ways in which hominins acquired animal carcasses and the ordering of this access relative to other carnivores within the hunting-and-scavenging debate. In this chapter, I examine hominin patterns of bone modification across various regions of the Pleistocene Old World. Once again, I attempt to turn the tables on the research questions most often asked of these data, which originated within the hunting-and-scavenging debate and took on their current form in the 1980s. The use of bone modification data to examine issues resulting from the hunting-and-scavenging debate involves problems similar to those dealt with in the last chapter concerning assemblage composition. In short, researchers have sought to examine how hominins acquired carcasses based on modification patterns, which have long been thought to reflect the degree of carcass ravaging at the time of acquisition and therefore the order in which hominins gained access. To a somewhat lesser extent, bone modification patterns have also been used to assess the ways in which hominins segmented carcasses for the purposes of
transport and the ways in which they removed meat, marrow, and brain tissue for consumption. Last, some innovative new research has sought to use hominin butchery patterns to examine social issues surrounding carcass segmentation and food sharing. As with assemblage data, there is considerable complexity involved in examining bone modification data, largely resulting from the difficulties in developing adequate referential frameworks in combination with ever-present problems of taphonomy. Ethnoarchaeological studies of bone modification carried out among modern foragers have contributed enormously to our current capabilities of drawing inferences from archaeological data. Yet, if anything, these studies have shown how variable patterns of bone modification may be, as well as the ways in which nonhominin carnivore activity and density-mediated element deletion may complicate the use of this data source. In addition, as with the identification and quantification of bone elements, inconsistencies persist in terms of the ways in which bone modification patterns are analyzed, quantified, and reported, in spite of several decades of methodological growth and development. Within the confines of the hunting-and-scavenging debate, these problems have resulted in findings that have often been controversial and sometimes ambiguous. To get around these problems and limitations, I once more blur my eyes and take a broad, holistic, and general approach in looking for significant changes over space and time. Similarly, in moving beyond traditional test implications, such as the cognitive sophistication or cultural complexity of hominin butchers, I turn my attention to the ways in which bone modification patterns may have articulated with other aspects of foraging ecology and economic organization. To do this, I focus my efforts on understanding the ways in which the economic contexts of butchery activities affected resulting bone modification patterns. 
This approach, in turn, may act as a source of information about the foraging ecology of early hominins, as well as potential changes over space and time. First, I present a comparative review of research on Lower and Middle Pleistocene bone modification patterns, as well as a range of actualistic frames of reference. The results of this review show dramatic variability and inconsistency in the ways in which hominins in various places and times butchered animal carcasses (and, of course, the ways in which zooarchaeologists have studied bone modification). I argue that these patterns are not consistent with normative cultural butchery patterns at broad spatial or temporal scales, which should not come as a surprise in the cold light of the 21st century. In contrast, I make the case that these patterns can be explained only in terms of situational variability, suggesting that early hominins, governed by context-specific economic
circumstances, manifested a high degree of variability in butchery practices based on very small-scale and immediate contingencies. Finally, I discuss the case study of Qesem Cave presented by Stiner and colleagues (2009). This Acheulean Middle Pleistocene site shows some provocative evidence for carcass segmentation, element transport, and subsequent food preparation at a location separate from the initial site of carcass acquisition. However, these authors also show—using the orientations and morphologies of cut marks—that butchery activities were very unsystematic and irregular. They ultimately conclude that numerous individuals may have engaged in butchery activities at the same time. This situation is quite different from the social context in which butchery activities are conducted among modern human groups, and it perhaps indicates quite different forms of site use and social structure. Based on this analysis, I present a brief comparison between the cut mark data from Qesem Cave and the FLK 22 assemblage, showing that there are many points of similarity between the two sites. I also conclude that these patterns of bone modification are not consistent with a pattern of home base use but rather indicate more ephemeral uses of sites as either special activity areas or very short-term residential sites.
Pleistocene Bone Modification Variability
As discussed in Chapter 5, there are many factors that make the comparison of Paleolithic bone modification patterns and changes over space and time difficult. Some of these are methodological in nature, and others involve the usual suspects in terms of density-mediated element attrition, variability in bone fragmentation, and even the "shaft critique" (for example, Marean, Domínguez-Rodrigo, and Pickering 2004; contra Stiner 2002). This section examines some such problems in greater detail and presents a comparative analysis of bone modification patterns from a range of Pleistocene contexts. While the results of this analysis clearly illustrate the difficulties associated with analyzing and quantifying bone modification patterns in systematic and comparable ways, they also support my findings up to this point that (1) early hominins had access to rich nutritional resources through their faunal exploitation activities and that (2) there was relative continuity in faunal exploitation patterns over the bulk of the Pleistocene. Once again, this analysis suggests that there were only minor and subtle changes in hunting and scavenging activities in the transition from the ESA to the MSA. In part because of the inconsistency with which bone modification patterns are analyzed, quantified, and reported, the size of the sample considered in this section is significantly smaller than that discussed in the last chapter. I present a comparative analysis including both
ethnoarchaeological and archaeological samples. The ethnoarchaeological samples are those of the Hadza and the Nunamiut reported by O'Connell, Lupo, and colleagues (O'Connell and Hawkes 1988a; Lupo 1994; Lupo and O'Connell 2002) and Binford (1978, 1981), respectively. The archaeological samples include the ESA localities of BK, HWK, MNK, and FLK 22 at Olduvai Gorge; the later Acheulean sites of Gesher Benot Ya'aqov, Qesem Cave, and Hayonim Cave in Israel; the French Middle Paleolithic site of Combe Grenal; the MSA site of Die Kelders in South Africa; and the MSA site of Porc Epic in Ethiopia (Bunn and Kroll 1986; Marean et al. 2000; Assefa 2006; Stiner 2005; Stiner, Barkai, and Gopher 2009). This sample, while limited, covers a range of time periods spanning the Pleistocene. It also spans a range of environmental, taphonomic, and site use dynamics.
Further Thoughts on Analytical Problems in the Analysis of Cut Marks
In Chapter 5, I outlined a number of analytical problems in terms of the documentation, analysis, and quantification of bone modification patterns, as well as the construction of productive referential frameworks. The comparative analysis presented in this section may help sharpen some of this discussion, while also pointing a few ways forward in terms of shifting analytical procedures. A certain set of problems stems from the nature of faunal assemblages themselves and overlaps with the taphonomic issues discussed at length in Chapter 5. Bone element identification and fragmentation are yet again at the core of this set of difficulties. On the one hand, it does little good to identify cut marks (and other damage morphologies) on bones that cannot be accurately identified to element. For example, in the analysis of the cut marks on bone from Die Kelders, Marean and colleagues (2000) report that 70.9% of all cut marks occurred on bones that could not be identified.
Although most of these cut marks occurred on fragments that could be narrowed down to long bones, it is still a serious problem that less than a third of the assemblage's cut marks could be linked to a specific element. This situation also applies to the other assemblages presented in this chapter, including some of the ethnoarchaeological examples (see Binford 1981 for further discussion). Furthermore, most of these analyses fail to mention when in the bone identification process patterns of modification were documented. It may make a substantial difference if unidentified bones were not examined for damage morphologies and if a level of identification was associated with bone modification analysis. Fragmentation also plays a key role in affecting the analysis of cut marks and other damage morphologies. It should be obvious that it is easier to identify cut marks on whole bone than it is on fragmentary
bits and that cut marks become increasingly difficult to identify with greater fragmentation. Fragmentary assemblages also raise other potential complications, such as the double-counting of cut marks split between conjoining fragments. In addition, high degrees of fragmentation make the identification of bone elements themselves difficult, leading to the analytical problems just mentioned. Perhaps more important, fragmentation also creates problems for quantifying cut marks, since having a large number of identified fragments effectively decreases the proportion of cut marks relative to the total number of identified specimens (for instance, Lyman 1987). Thus, many researchers have eschewed the reporting of cut mark frequencies relative to the NISP count for a given element, preferring instead to report cut mark frequencies relative to MNE or MAU (Binford 1981; Marean et al. 2000; Assefa 2006). This approach, however, may have the opposite consequence of inflating quantifications of cut mark frequencies in situations of highly fragmentary assemblages with large numbers of identified bone fragments but low MNE counts. Finally, the usual suspects in terms of taphonomic issues and density-mediated element deletion also affect the analysis and quantification of cut mark patterning. For example, Binford (1981) makes a passing but daunting observation that certain cut mark counts for the Nunamiut bone assemblages may be inaccurate owing to the deletion of certain elements (and any cut marks that might have been on them) by dogs and other destructive agents operating along the lines of bone density. In this case, Binford is especially sensitive to this problem for the pelvis, scapula, and vertebra.
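To make the denominator problem concrete, here is a toy numerical sketch (the counts are invented for illustration, not data from any site discussed in this chapter) of how the same set of cut marked specimens yields very different frequencies depending on whether NISP or MNE serves as the denominator:

```python
# Hypothetical illustration: how fragmentation shifts cut mark frequencies
# depending on the denominator used (NISP vs. MNE).

def cutmark_rates(n_cutmarked_specimens, nisp, mne):
    """Return cut mark frequency as a percentage of NISP and of MNE."""
    return (100.0 * n_cutmarked_specimens / nisp,
            100.0 * n_cutmarked_specimens / mne)

# A femur sample broken into many shaft fragments: 12 cut marked specimens
# among 120 identified specimens (NISP), but only 8 minimum elements (MNE).
pct_nisp, pct_mne = cutmark_rates(12, nisp=120, mne=8)
print(f"% of NISP: {pct_nisp:.1f}")  # 10.0 -- fragmentation dilutes the rate
print(f"% of MNE:  {pct_mne:.1f}")   # 150.0 -- values over 100% are possible
```

A heavily fragmented element can thus show a deflated rate against NISP and an inflated one against MNE, which is the bias at issue in the passage above.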
It is also noteworthy that he declines to report cut mark counts for ribs, which were a problematic set of elements to deal with because of the intensity of dog ravaging under many circumstances.1 Such axial bone elements may be key in the determination of butchery patterns, given that high-quality meat masses attach to axial elements. Because there is great variability in the intensity of density-mediated destruction and other related taphonomic processes affecting assemblage composition, such processes may also play tremendously important roles in the assessment of bone modification patterns. The next set of analytical problems revolves around practices of quantification and reporting. This topic has already been touched on in terms of the ways in which fragmentation may create biases in terms of the reporting of cut mark frequencies relative to bone counts at various levels of identification. In terms of element counts, some researchers prefer to report cut mark frequencies relative to NISP counts (Bunn and Kroll 1986; Lupo 1994; Lupo and O’Connell 2002; Stiner 2005; Stiner, Barkai, and Gopher 2009), while others report them relative to MNE or MAU counts (Binford 1981; Marean et al. 2000; Assefa 2006).
Many researchers also report cut mark frequencies on long bones in terms of shafts and articular ends, although there are once again many ways of doing this and many levels of specificity. In my comparative analysis, I had to make some hard decisions about whether or not to consider the various ways of reporting cut mark frequencies as similar enough to warrant productive comparisons. After much soul searching, I eventually reasoned that they were, especially in cases where researchers differentiated between shafts and articular ends. My logic was that in such cases there would not be profound differences between MNE counts and NISP counts for specific elements, although I acknowledge that this situation is likely more applicable to certain elements (such as long bones) than others (for example, ribs). I also ruled out some potentially useful case studies when I was not convinced that this assumption was fulfilled satisfactorily. Another problem regards the ways in which cut mark frequencies have been quantified relative to individual elements or element segments. The standard practice, largely stemming from Binford’s (1981) influential early treatment of the subject, has been to report cut mark frequencies in terms of the percentage of cut marked specimens, elements, or element segments. Many have observed that this practice omits potentially useful information in terms of specimens that have multiple cut marks on them. Within the sample discussed shortly, for example, Assefa (2006) goes as far as reporting cut mark frequencies in terms of the mean number of cut marks per animal unit (cf. MAU counts). Although I largely agree with the logic behind this presentation, and Assefa is laudable for the detail with which his data are reported, this approach renders the Porc Epic data incomparable with the other samples considered in this study. 
In addition, while Assefa reports some more standard forms of cut mark frequency data, I found it impossible to standardize or convert the Porc Epic data into formats comparable with the other studies considered here. To do this, I would have needed the cut mark counts to be broken down by individual specimens in order to determine cut mark frequencies as percentages. In this case, it appears that good analytical intentions interfere with my goals of systematic comparison. These problems, while frustrating, could be easily resolved through several simple steps. A detailed and comprehensive solution would be to include as appendices spreadsheets containing detailed data for each individual cut marked specimen. This approach would allow for the consideration of cut mark frequencies as both percentages and mean counts per specimen, element, element segment, or animal unit. It could also be fruitfully combined with further detail in terms of bone landmarks, which would allow for the assessment of potentially problematic relationships with bone density using control datasets
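As a sketch of what such a per-specimen appendix might look like, and of how both the percentage and mean-count metrics could then be recomputed from it by later analysts, consider the following (the specimen records are invented for illustration):

```python
# Sketch of a per-specimen data format: each row records one specimen with
# its element and the number of cut marks observed (zero for unmarked
# specimens), so both metrics can be derived after the fact.
from collections import defaultdict

specimens = [
    # (specimen_id, element, n_cut_marks) -- invented values
    ("S1", "femur", 3), ("S2", "femur", 1), ("S3", "femur", 0),
    ("S4", "tibia", 2), ("S5", "tibia", 0), ("S6", "tibia", 0),
]

by_element = defaultdict(list)
for _, element, n in specimens:
    by_element[element].append(n)

for element, counts in by_element.items():
    pct = 100.0 * sum(1 for n in counts if n > 0) / len(counts)
    mean = sum(counts) / len(counts)
    print(f"{element}: {pct:.0f}% cut marked, {mean:.2f} marks/specimen")
```

Because the raw per-specimen counts are preserved, a reader could equally well recompute frequencies against MNE, MAU, or animal units, which is the point of publishing the appendix rather than only summary percentages.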
such as those presented by Lam and colleagues (1999). This approach would also facilitate the inclusion of other emerging forms of cut mark analysis, such as the observation of cut mark orientations and depths, in combination with GIS computer applications (for instance, Bello and Soligo 2008; Stiner, Barkai, and Gopher 2009). While the publication of such data sets in print may have been problematic, electronic publication renders this task quite feasible for enthusiastic researchers. At a minimum, more detail must be presented in terms of NISP counts for individual elements or element segments, MNE counts, and cut mark frequencies in terms of both means and percentages. These problems highlight the fact that, as with other types of zooarchaeological research, the analysis of bone modification patterns remains problematic in terms of getting all its participants on the same page.
Cut Marks by Anatomical Region
My approach in terms of the analysis of these cut mark data was to once more consider similarities with other actualistic control data sets as a way of examining the archaeological data, as well as potentially significant changes over time. In this respect, I include datasets from two ethnoarchaeological studies: the influential studies of the Nunamiut presented by Binford (1978, 1981) and the Hadza presented by O'Connell, Lupo, and colleagues (O'Connell and Hawkes 1988a, 1988b; Lupo 1994; Lupo and O'Connell 2002). The Nunamiut ethnoarchaeological data are important from both substantive and historical perspectives. On the one hand, the Nunamiut are a hunter-gatherer group that practices a particularly specialized form of caribou hunting, which, as I discuss shortly, articulates with some equally specialized and standardized butchery practices. On the other hand, Binford's (1981) discussion of analytical methods and taphonomic dynamics has influenced generations of later researchers.
The Hadza have served as an important case by virtue of their occupation of an eastern African savanna environment similar to that in which early members of the genus Homo emerged. The Hadza have also been a case of keen interest by virtue of the fact that they engage regularly in both hunting and scavenging activities, offering rare actualistic frames of reference for distinguishing between the two on the basis of archaeological patterning (Lupo 1994). For the purposes of this discussion, the Hadza studies are important in the sense that they have been consistently conducted by many researchers over a long period of time, having now amassed a considerable sample size with which to assess patterns of variation. Binford’s (1978, 1981) observations are very extensive and cover a range of site use contexts, which serve as frames of reference for different types of butchery activities. These contexts include, among
others, residential sites, hunting camps, and associated butchery stands where field processing of carcasses occurred before transport or caching. For a variety of practical reasons, while Binford's field observations focused on butchering stands associated with hunting camps, the faunal remains analyzed for cut marks were mainly collected from residential camps.2 Binford (1981: 98) also offers some important bits of wisdom based on discrepancies between these two contexts, observing that the cut marks observable on the bones from residential sites seemed to have little relationship with the butchery activities he witnessed. This observation was perhaps an early acknowledgment of problems of equifinality that pervade zooarchaeological research, including studies of bone modification, because this pattern stemmed from the temporal and spatial averaging of assemblage composition in terms of cut mark frequencies, as well as the usual suspects of taphonomy and density-mediated attrition. I return to this topic shortly. In terms of the Hadza bone assemblages, Lupo and O'Connell (2002) present the frequencies of cut marks for three site types: (1) a residential camp, (2) a hunting blind where animals are frequently ambushed, and (3) several butchering stands where carcasses were processed in the field before being taken to the residential camp. Bones were accumulated at residential camps through the transport of animal parts after initial carcass segmentation and processing at blinds or butchering stands (and were also further modified at residential sites). Bone accumulations at hunting blinds were primarily the remains of animals killed in ambush hunts, some parts of which were consumed locally and others transported to residential camps.
Finally, the butchering stands were locations to which the carcasses of mostly scavenged prey animals were transported from the location of initial acquisition (that is, a carnivore kill site) for processing before either local consumption or transport to residential sites. These data clearly hold great potential as frames of reference in representing a diversity of site use patterns, modes of bone accumulation and modification, and relationships with initial faunal acquisition strategies (the most obvious being the difference between hunting and scavenging; see Lupo 1994 for further discussion on this point). Comparing the Nunamiut and Hadza faunal assemblages offers an interesting starting point for this analysis. As one might predict, these two contexts are quite different from each other—however, not necessarily in the ways I might have expected. For the sake of this analysis, I have broken down the cut mark frequencies by anatomical region in terms of heads (cranial bones and mandible), axial bones (vertebrae, ribs, pelvis, and sacrum), upper limb bones (scapulae, humeri, and femora), and lower limb and foot bones (radioulnae, tibiae, fibulae, metapodials, podials, and phalanges). Figures 7.1 and 7.2 compare cut mark frequencies for
[Figure: y-axis "% cut marked" (0–0.6); x-axis carcass segments heads, axial, upper limbs, lower limbs + feet; series Kakinya and Bear]
Figure 7.1 Graph showing the percentages of cut marked bones for various carcass segments for the Kakinya and Bear sites
[Figure: y-axis "% cut marked" (0–0.9); x-axis carcass segments upper limb shaft, upper limb end, lower limb + feet shaft, lower limb + feet end; series residential camp, hunting blind, and butchery stands]
Figure 7.2 Graph showing the percentages of cut marked bones for various carcass segments for Hadza sites
these anatomical regions for the Nunamiut and Hadza samples. Binford (1978, 1981) goes to great lengths to illustrate the organizational variability responsible for the conduct of butchering activities at various Nunamiut sites, and one would hope that there would be differences in the butchering behavior of Hadza between residential camps, hunting blinds, and butchery stands dominated by scavenged fauna. Comparison of the Nunamiut and Hadza site groups, however, shows this situation to be more complex than what I might have thought.
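The anatomical groupings used in this analysis can be written as a simple lookup table keyed by element (following the element lists above); the aggregation function below is a hypothetical helper that pools per-element cut mark tallies into the four regions:

```python
# Element-to-region lookup for the four anatomical groupings used here.
REGION = {}
for elem in ("cranium", "mandible"):
    REGION[elem] = "heads"
for elem in ("vertebra", "rib", "pelvis", "sacrum"):
    REGION[elem] = "axial"
for elem in ("scapula", "humerus", "femur"):
    REGION[elem] = "upper limbs"
for elem in ("radioulna", "tibia", "fibula", "metapodial", "podial", "phalanx"):
    REGION[elem] = "lower limbs + feet"

def aggregate_regions(tallies):
    """tallies: {element: (n_cut_marked, n_total)} -> {region: % cut marked}."""
    marked, total = {}, {}
    for elem, (m, t) in tallies.items():
        r = REGION[elem]
        marked[r] = marked.get(r, 0) + m
        total[r] = total.get(r, 0) + t
    return {r: 100.0 * marked[r] / total[r] for r in marked}
```

For example, `aggregate_regions({"humerus": (4, 20), "femur": (6, 30)})` pools the two upper limb elements into a single figure of 20% cut marked.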
[Figure: dendrogram using average linkage (between groups); x-axis rescaled distance cluster combine (0–25); leaves Hadza camp, Hadza blind, Hadza butchering, Nunamiut Kakinya, Nunamiut Bear]
Figure 7.3 Graph showing a hierarchical cluster analysis of ethnoarchaeological sites based on cut mark frequencies for bones from various carcass segments
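The procedure behind Figure 7.3, average linkage between groups (UPGMA), can be sketched with SciPy as follows; the cut mark frequency vectors here are invented stand-ins, not the actual Nunamiut and Hadza values:

```python
# Average-linkage ("between groups") hierarchical clustering of sites on
# cut mark frequency vectors by anatomical region. The vectors are
# hypothetical placeholders chosen only to illustrate the method.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

sites = ["Hadza camp", "Hadza blind", "Hadza butchering",
         "Nunamiut Kakinya", "Nunamiut Bear"]
freqs = np.array([
    [0.10, 0.45, 0.60, 0.30],  # proportions cut marked: heads, axial,
    [0.08, 0.40, 0.55, 0.25],  # upper limbs, lower limbs + feet
    [0.02, 0.15, 0.58, 0.10],
    [0.35, 0.20, 0.30, 0.40],
    [0.30, 0.18, 0.28, 0.45],
])

# 'average' linkage on Euclidean distances is UPGMA, SPSS's
# "average linkage (between groups)"
Z = linkage(freqs, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
for site, lab in zip(sites, labels):
    print(site, "-> cluster", int(lab))
```

With these placeholder values, the two-cluster cut separates the three Hadza sites from the two Nunamiut sites, mirroring the group-level split the text describes.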
Figure 7.3 presents a hierarchical cluster analysis demonstrating that all sites belonging to either the Nunamiut or Hadza groups are more similar to one another than any is to a site belonging to the other group. One might have naively expected (or hoped) that sites with similar organizational dynamics—in this case, especially residential sites— would be similar in terms of their cut mark patterning. Likewise, one might have expected greater differences between different organizational contexts. For example, it might have been the case that the Hadza residential camps were more similar to Nunamiut residential camps than to Hadza scavenging butchery stands. The fact that these expectations are not met may have significant implications for our understanding of bone modification patterning and site formation processes. There are a couple of major possibilities for why this pattern may be the case. First, harkening back to more culture-historical theoretical orientations within archaeology, some may argue that these differences represent cultural modes of butchery behavior. In other words, the
Nunamiut and Hadza simply have different ways of butchering animals, in the same way that modern French, American, and Chinese butchers all segment various types of animal carcasses differently. Such a view, however, would ignore much of the sophistication offered by actualistic research on animal butchery since the 1970s. Specific to these cases, Binford (1978, 1981) and O'Connell, Lupo, and colleagues (O'Connell and Hawkes 1988a, 1988b; Lupo 1994; Lupo and O'Connell 2002) all go to great lengths to show that, although ethnographic interviewing regarding some idealized butchery pattern might elicit some form of "cultural" response (that is, the "Hadza way" to butcher a zebra), contextual variability is far more important in shaping how butchers approach their immediate tasks. To a degree, this situation is also apparent in looking at the Nunamiut and Hadza data. For example, for the Hadza, all cases are similar in terms of the frequencies of cut marks on upper limb bones. However, the butchery stands have much lower frequencies of cut marks on heads, axial elements, and lower limbs and feet. Although taphonomic factors likely play a major role in shaping this aspect of the observed patterning, there are real dynamics in terms of what bones Hadza foragers access through scavenging that are responsible for the differences between hunted and scavenged faunal assemblages (see also Lupo 1994) and also between residential camps, hunting blinds, and butchery stands. Likewise, Binford (esp. 1978) offers detailed descriptions of the organizational differences between residential sites that might account for the observed variation between the Kakinya and Bear residential camps. These are important differences in terms of archaeological patterning that hold great implications in terms of the construction of inferences on the basis of faunal assemblages.
In contrast, if these differences do, in fact, stem from contrasting cultural dispositions in terms of butchery behavior, then this conclusion would offer little substantive insight with which to approach the archaeological record of Pleistocene animal exploitation, in which such cultural norms would be mostly unknowable (O’Connell 1995). The other major possibility is more subtle, although I find it more likely. It could also be the case that differences in terms of depositional context and site formation processes account for the observed variability between the Nunamiut and Hadza cases. On the one hand, the Nunamiut residential sites are (1) relatively permanent (at the scale of decades, at least), (2) they contain many bone traps with qualities promoting preservation over reasonably long time spans (for example, adjacent to or underneath buildings), and (3) they are located at high latitudes with little direct sun and with low temperatures. These qualities of Nunamiut residential sites mean that bones likely were better preserved and lasted
254 Chapter 7
longer before collection by Binford or his colleagues. On the other hand, dogs were an ever-present agent, preferentially destroying bones according to well-known sets of principles in terms of bone density and nutritional composition. In addition, as Binford noted repeatedly, short of destroying elements or element portions, dog (and other carnivore) ravaging can erase human-produced bone modification morphologies. Thus, the Nunamiut bone assemblages are heavily time-averaged in terms of their compositional patterning, likely spanning several decades of accumulation while also showing fairly extreme signs of dog-derived bone attrition and modification. A close inspection of the Nunamiut data shows very low frequencies of axial bones relative to appendicular elements and, as Binford (1981) noted, this is a pattern that is not immediately attributable to human butchery patterns. This situation is quite different at the Hadza sites. Here, the destructive forces acting on bone assemblages are much more profound, affecting both the dynamics of bone preservation and, in turn, the nature of the samples of bones collected by ethnoarchaeologists. Bones deposited at open-air sites on the eastern African savannas are subject to intense direct sunlight, high temperatures, extreme temperature fluctuations, high levels of insect and microbial activity, and (of course) the activities of the many carnivores common to the region. In addition, the Hadza sites studied by O'Connell, Lupo, and colleagues (O'Connell and Hawkes 1988a, 1988b; Lupo 1994; Lupo and O'Connell 2002) were used or occupied for much shorter durations (at most, at the scale of years) and in much more ephemeral ways. Thus, bone collections represent much shorter time spans and are therefore much less time-averaged. Likewise, because bones were generally collected by ethnoarchaeological fieldworkers shortly after deposition, they were much less subject to taphonomic element deletion.
This pattern can also be seen within the Hadza data, which tend to have much higher frequencies of axial elements relative to appendicular elements, as well as relatively low frequencies of crania and mandibles. For these reasons, I suspect that it may be unfortunately true that much of the difference between these two important ethnoarchaeological datasets may derive more from dynamics of taphonomy than from human butchery behavior. This problem actually becomes more evident when other archaeological data are considered. Figure 7.4 shows a hierarchical cluster analysis including the larger sample of archaeological cases based on cut mark frequencies by anatomical region. The results of this analysis show certain expected patterns. For example, the Olduvai localities all cluster together, with BK, HWK, and MNK being most similar and FLK 22 diverging slightly. This finding is consistent with various findings that FLK 22 shows by far the most hominin involvement in bone
Figure 7.4 Graph showing a hierarchical cluster analysis (average-linkage dendrogram) of archaeological and ethnoarchaeological sites based on cut mark frequencies for bones from various carcass segments. Cases included: BK, MNK, HWK, FLK 22, Gesher Benot Ya'aqov, Hadza camp, Hadza blind, Hadza butchering, Nunamiut Kakinya, Nunamiut Bear, and Combe Grenal.
accumulation and modification of any of these localities (Leakey 1971; Bunn and Kroll 1986; Potts 1988; Blumenschine 1995; Monahan 1996; Capaldo 1997; Domínguez-Rodrigo 1997). It also, however, shows that in spite of this potential variability in terms of hominin activities, all the Olduvai bone assemblages considered here are similar to one another. In addition, it is interesting that the Gesher Benot Ya'aqov sample clusters with the Olduvai assemblages, since all were accumulated by early hominins during the ESA or Lower Paleolithic and all share ecological similarities in belonging to the arid grassland or savanna biomes of eastern Africa and the Near East. Finally, as I discuss shortly, they may all also share important qualities in terms of their site formation processes. The Hadza assemblages cluster together and are almost equally divergent from both the Olduvai/Gesher Benot Ya'aqov cluster and the Nunamiut/Combe Grenal cluster. This finding underscores the
fact that these groups of cases have substantial differences in terms of environmental, organizational, and taphonomic contexts. As mentioned earlier, the hunting blind and residential camp sites are more similar to each other than either is to the Hadza scavenging butchery stand sites. This pattern once again suggests that recognizably different patterns of butchery behavior can be distinguished among these sites on the basis of their animal bone assemblages. However, relative to the other archaeological cases considered here, this signal seems to be dwarfed by other factors, which may not be immediately attributable to hominin behavioral dynamics. This situation has the unfortunate implication that even behavioral differences at the scale of those manifested at residential camps, hunting blinds, and butchery stands may be relatively minor in comparison with the broader conditions of environmental context and taphonomy that seem to drive the variability described by this statistical analysis. Finally, it is also interesting that the two Nunamiut residential camps and the Combe Grenal samples all cluster together. For one thing, this similarity may speak to the importance of methodological idiosyncrasy on the part of individual analysts, since all three of these samples were analyzed by Binford (1981). They are also similar in terms of their prey species, with all three dominated by caribou (reindeer, in the case of Combe Grenal), and they share cold-weather environmental contexts. There may also be some similarities in terms of the issues of time-averaging and taphonomy already discussed. As a rock shelter, Combe Grenal shows variable but substantial degrees of time-averaging based on the length of time marked by the accumulation of the layers containing cultural material (Binford, personal communication 2005). Likewise, while neither of the Nunamiut camps is a rock shelter, they share some qualities in terms of the ways in which bones were accumulated and preserved.
What accounts for these statistical patterns, and what do they mean in terms of hominin faunal exploitation patterns? First, it is evident that the Olduvai localities and Gesher Benot Ya’aqov cluster together by virtue of their low overall cut mark frequencies. The Olduvai cases range between 2.6 and 8.5% of the total assemblage cut marked, and Gesher Benot Ya’aqov has 14.4% cut marked (both cases are percentage by MNE). Second, they cluster together by virtue of the relative evenness of their distribution across the anatomical regions. Next, the Nunamiut cases and Combe Grenal cluster together by virtue of having low overall cut mark frequencies (15.5%, 9.7%, and 6.9%, respectively) but an uneven distribution across the anatomical regions. All three show higher frequencies of cut marks on heads, followed by axial elements, then by upper limbs, and last by lower limbs. Finally, the Hadza cases are marked by high overall cut mark frequencies, ranging between 28.3 and 41.0%, with a relatively even distribution across the postcranial anatomical regions but low frequencies of cut marks on heads.
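The grouping behind these patterns can be sketched in a few lines of code. The following is a minimal, dependency-free illustration of average-linkage agglomerative clustering of sites by per-segment cut mark frequencies. The frequency vectors are hypothetical placeholders chosen only to mimic the qualitative patterns described in the text (low and even for the Olduvai/Gesher Benot Ya'aqov cases, high for the Hadza, head-heavy for the Nunamiut); they are not the published values.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def average_linkage(points, n_clusters):
    """Merge the two clusters with the smallest mean pairwise distance
    until only n_clusters remain (average-linkage agglomeration)."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

sites = ["BK", "MNK", "HWK", "FLK 22", "Gesher Benot Ya'aqov",
         "Hadza camp", "Hadza blind", "Nunamiut Kakinya"]

# Hypothetical proportions cut marked per segment:
# heads, axial, upper limbs, lower limbs + feet
freqs = [
    [0.03, 0.04, 0.06, 0.03],  # low, even Olduvai-like pattern
    [0.03, 0.03, 0.05, 0.03],
    [0.04, 0.04, 0.06, 0.04],
    [0.05, 0.10, 0.12, 0.06],
    [0.08, 0.15, 0.16, 0.10],
    [0.10, 0.35, 0.40, 0.30],  # high, postcranially even Hadza-like pattern
    [0.08, 0.33, 0.38, 0.28],
    [0.25, 0.15, 0.10, 0.05],  # head-heavy Nunamiut-like pattern
]

for members in average_linkage(freqs, n_clusters=3):
    print(sorted(sites[m] for m in members))
```

With these illustrative inputs the procedure recovers the three groupings described in the text: an Olduvai/Gesher Benot Ya'aqov cluster, a Hadza cluster, and an isolated Nunamiut case.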
These patterns warrant some further consideration in terms of their implications for hominin faunal exploitation. It is reasonable to consider why the Olduvai localities have such low cut mark frequencies. As has often been suggested, this may be the case for the BK, HWK, and MNK localities by virtue of the limited role of hominins in accumulating these bone assemblages (Monahan 1996). If hominins never interacted with significant portions of the bone assemblages, then they could not have deposited cut marks on the bones, lowering the overall frequency of cut marks. This argument becomes more problematic for FLK 22, which has long been thought to represent an assemblage accumulated primarily by hominins. Indeed, while FLK 22 has an overall cut mark frequency of 8.5%, which is nearly twice that of the next most cut marked Olduvai assemblage, its cut mark patterning does not seem consistent with an assemblage produced exclusively through either hominin hunting or scavenging. The elephant in the room in this situation is the significant role of density-mediated bone attrition and other destructive taphonomic forces at FLK 22, which are illustrated in greater detail in the previous chapter. Two dynamics appear likely to me in explaining the low frequencies of cut marks in this case: (1) destructive taphonomic forces effectively deleted cut marked elements and/or obscured cut marks, perhaps doing so in a biased manner; (2) there was more nonhominin involvement in bone accumulation and destruction in this case than is generally recognized. If I am correct, does this conclusion mean that the FLK 22 cut mark frequency data are useless in assessing faunal exploitation patterns? Although taphonomy complicates this picture, it is likely that the FLK 22 patterning holds significant evidence concerning hominin faunal acquisition behavior. Figure 7.5 shows cut mark frequencies by anatomical region for the Olduvai assemblages. 
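The quantity plotted in Figure 7.5 is simply the share of cut marked specimens within each carcass segment. A minimal sketch of the calculation, using invented MNE counts rather than the actual Olduvai data:

```python
# Share of cut marked specimens per carcass segment, as in Figure 7.5.
# The MNE counts below are invented for illustration only.
counts = {
    # segment: (cut marked MNE, total MNE) -- hypothetical values
    "heads":              (2, 40),
    "axial":              (9, 60),
    "upper limbs":        (12, 75),
    "lower limbs + feet": (3, 55),
}

for segment, (marked, total) in counts.items():
    print(f"{segment}: {100 * marked / total:.1f}% cut marked")

# The overall assemblage frequency pools all segments together
overall = sum(m for m, _ in counts.values()) / sum(t for _, t in counts.values())
print(f"overall: {100 * overall:.1f}% cut marked")
```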
At FLK 22, the fact that cut marks occur on limb bones in higher frequencies than on heads or axial elements articulates well with some of the findings of the previous chapter. Humeri, scapulae, and tibiae all appear at higher-than-expected frequencies according to bone density, and it is the anatomical segments that contain these elements that have the highest cut mark frequencies. Indeed, the pelvis (which I included here with the axial skeleton) is the most abundant element relative to the predictions of bone density, and it also has the highest frequency of cut marking (26.9%). Thus, the assemblage composition and cut mark data would seem to match up with strikingly complementary results. The combined implication of these analyses is that early hominins were preferentially accumulating meat and marrow-rich upper limb bones and in processing these deposited the greatest cut mark loads. More broadly, this comparative study of bone modification once more
Figure 7.5 Graph showing the percentages of cut marked bones for various carcass segments (heads, axial, upper limbs, lower limbs + feet) for the Olduvai archaeological localities (FLK 22, BK, HWK, MNK)
shows that the FLK 22 faunal assemblage is quite complex in terms of its site formation history, but it does also offer evidence that early hominins, through their faunal acquisition activities, had access to highly productive nutritional resources in terms of meat and marrow. A quick examination of some of the other Paleolithic sites helps to illustrate that this was true over large portions of the Pleistocene. Figures 7.6 and 7.7 show cut mark frequencies by anatomical region for

Figure 7.6 Graph showing the percentages of cut marked bones for various carcass segments for Gesher Benot Ya'aqov
Figure 7.7 Graph showing the mean cut mark frequencies for bone elements belonging to various carcass segments
Gesher Benot Ya'aqov and Porc Epic Cave, respectively (the latter of which I have excluded from this analysis so far because Assefa 2006 reports its cut mark frequencies in terms of mean cut marks per animal unit). The two figures appear similar while also recalling certain aspects of the data from FLK 22. In fact, Gesher Benot Ya'aqov and Porc Epic differ from FLK 22 primarily in having higher overall frequencies of cut marks and having higher frequencies of cut marks on axial elements. As discussed, both these patterns may be greatly attributable to taphonomic dynamics, with these sites having better preservation and sometimes being less affected by carnivore ravaging and/or density-mediated attrition. At least on the basis of cut mark placement by anatomical region, this analysis would seem to suggest a great deal of continuity from early contexts, such as FLK 22, to MSA contexts, such as Porc Epic. Although the evidence may not be unequivocal in terms of the hunting-and-scavenging debate, it does suggest that early hominins enjoyed access to nutritionally rich carcass segments, supporting my previous arguments concerning Pleistocene faunal acquisition strategies as highly ranked subsistence opportunities.

Cut Marks on Shafts versus Articular Ends

The relative frequency of cut marks on long bone shafts versus articular ends is another way of assessing the nature of hominin faunal exploitation. While this research tactic has played a prominent role in the hunting-and-scavenging debate, it remains a key method for many researchers for putatively distinguishing defleshing and
dismemberment activities. For example, in their study of the FLK 22 fauna, Bunn and Kroll (1986) took the relatively high frequencies of cut marks on long bone shafts as a sign that hominins had early access to fleshy carcasses through hunting activities. Although actualistic research has raised questions about the various implications of cut marks on shafts and articular ends (Abe et al. 2002; Lupo and O'Connell 2002), this data source remains central to the examination of hominin bone modification patterns. This approach, however, is also not without its analytical problems. Once again, some of these stem from inconsistencies in the ways in which cut mark locations are reported. As Abe and colleagues (2002) observe, it is ideal to present cut mark locations in terms of articular ends and shafts in as much detail as possible, since small differences may hold great implications in terms of different butchery activities. For example, a cut mark placed on a true articular surface produced through limb dismemberment may be located quite close to a cut mark produced during defleshing activities in which a cutting stroke might target a muscle insertion near the articular end. Here, Abe and colleagues offer a GIS-based approach for this sort of analytical observation and quantification, although this approach has not proved practical in all research settings. There are also different degrees of detail in terms of how zooarchaeologists deal with the placement of cut marks on long bones. A number of studies considered in this analysis break down cut mark locations in terms of epiphyses, near-epiphyses, and midshaft (Monahan 1996; Marean et al. 2000; Lupo and O'Connell 2002; Assefa 2006). Many earlier studies, however, distinguish only between shafts and articular ends (for instance, Bunn and Kroll 1986).
For this reason, I have collapsed categories and consider distinctions between only shafts and articular ends, though I acknowledge that this approach may be missing important information. More important, there are also major problems of taphonomy and element identification associated with the study of cut mark locations on long bone elements. Here, the shaft critique rears its head again (Marean and Kim 1998), because of the greater durability of long bone shafts relative to articular ends. In addition, short of the actual destruction of long bone portions, carnivore chewing is often focused on the articular ends of long bones in an effort to both consume greasy cancellous bone tissues located at epiphyses and gain access to marrow contained in medullary cavities (Binford 1981). In such cases, carnivore chewing may obscure or erase cut marks on articular surfaces without totally destroying them. Finally, hominin activities may also play a role in this situation, with bone cracking for marrow perhaps profoundly affecting
the identifiability of cut marks on shafts relative to articular ends. Thus, many of the usual suspects of taphonomy, including density-mediated attrition, fragmentation, and carnivore ravaging, may significantly bias observed frequencies of cut marks on various long bone portions. For the purposes of the current study, it is once more useful to first consider the implications of actualistic studies on the subject. Figure 7.8 shows the frequencies of cut marks on shafts and articular ends for both upper limb (humeri and femora) and lower limb (radii, ulnae, tibiae, and metapodials) bone elements for the Hadza samples. Again, there is statistically significant variation among the three site use contexts, which is driven mainly by the divergent patterning associated with the butchery stands. While the residential camps and hunting blinds share a relatively even distribution of cut marks between shafts and articular ends, the butchery stands are characterized by much higher frequencies of cut marks on articular ends than on shafts. The butchery stand assemblages are also characterized by higher frequencies of cut marks on meaty upper limb elements relative to lower limb and foot bones. Although there are many potential reasons for this pattern, it is likely mostly attributable to the goals of the butchers at butchery stands in terms of segmenting carcasses for transport back to residential camps. In other words, limb disarticulation was a primary activity, whereas defleshing was much less prominent. The further removal of masses of flesh for consumption occurred subsequently at residential camp sites, accounting for the more even distribution of cut marks between articular ends and shafts.

Figure 7.8 Graph showing the percentages of cut marks on shafts versus ends for long bones at the Hadza sites (residential camps, hunting blinds, and butchery stands)
This patterning suggests two important points. First, in this case at least, the distribution of cut marks on long bones seems not to directly relate to dynamics of hunting versus scavenging. Instead, butchery goals in terms of carcass segmentation for transport seem to play a much more important role in determining cut mark location. Second, it would seem that the location of cut marks on long bone portions is more useful in assessing differences in butchery activities in terms of dismemberment and defleshing than for making inferences concerning the amount of flesh remaining on carcass elements. For such reasons, this type of data would appear to be quite effective in distinguishing between different forms of site use and the different kinds of butchery activities they brought about. Thus, although this kind of information does not articulate very well with the issues revolving around the hunting-and-scavenging debate, it holds substantial potential for recognizing variability in site use patterns and special activity areas, both of which are key to building a better understanding of the organization of prehistoric foraging systems. Turning our attention to the archaeological cases, we notice some interesting patterns. Table 7.1 reports cut mark frequencies for upper and lower long bone shafts and articular ends. Figure 7.9 shows the results of a hierarchical cluster analysis based on the raw percentages of cut marked upper and lower limb shafts and ends; Figure 7.10 shows the results of a cluster analysis based on the ratios of cut marks on upper limb ends relative to upper limb shafts and lower limb ends relative to lower limb shafts, which I did to standardize the data so that differences in overall cut mark frequencies were not the primary factor in determining clustering. The cluster analysis based on raw cut mark frequencies (Figure 7.9) groups all the Hadza cases together, with the butchery stands being the outlying case within the cluster.
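The end/shaft standardization is straightforward to reproduce. This sketch recomputes the ratio columns of Table 7.1 from the shaft and end frequencies reported there (as noted in the text, the Porc Epic values are mean cut marks per animal unit rather than percentages):

```python
# Recomputing the end/shaft ratio columns of Table 7.1 from the raw shaft
# and end cut mark frequencies reported there. Porc Epic values are mean
# cut marks per animal unit rather than percentages.
raw = {
    # site: (upper shaft, lower shaft, upper end, lower end)
    "Hadza camp":           (0.33, 0.36, 0.46, 0.51),
    "Hadza hunting blind":  (0.27, 0.27, 0.50, 0.63),
    "Hadza butchery stand": (0.26, 0.19, 0.90, 0.50),
    "FLK 22":               (0.17, 0.12, 0.22, 0.17),
    "Gesher Benot Ya'aqov": (0.20, 0.10, 0.04, 0.05),
    "Die Kelders":          (0.436, 0.30, 0.364, 0.18),
    "Porc Epic":            (2.63, 1.91, 0.0, 0.28),
}

# Dividing ends by shafts removes overall cut mark intensity, so that only
# the relative placement of marks drives the subsequent clustering.
ratios = {site: (ue / us, le / ls) for site, (us, ls, ue, le) in raw.items()}

for site, (upper, lower) in ratios.items():
    print(f"{site}: upper end/shaft = {upper:.2f}, lower end/shaft = {lower:.2f}")
```

A ratio near 1 indicates an even distribution of cut marks across the element, values well above 1 indicate concentration at articular ends (the Hadza butchery stand pattern), and values well below 1 indicate concentration on shafts.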
In addition, all the archaeological cases cluster together, with Die Kelders being the outlier.

Table 7.1 Ratios of cut mark frequencies on long bone ends versus shafts for selected zooarchaeological cases

Site                    Upper limb  Lower limb  Upper limb  Lower limb  Upper end/   Lower end/
                        shaft       shaft       end         end         shaft ratio  shaft ratio
Hadza camp              0.33        0.36        0.46        0.51        1.39         1.42
Hadza hunting blind     0.27        0.27        0.50        0.63        1.85         2.33
Hadza butchery stand    0.26        0.19        0.90        0.50        3.46         2.63
FLK 22                  0.17        0.12        0.22        0.17        1.29         1.42
Gesher Benot Ya'aqov    0.20        0.10        0.04        0.05        0.20         0.50
Die Kelders             0.436       0.30        0.364       0.18        0.83         0.60
Porc Epic               2.63        1.91        0           0.28        0            0.15

(Porc Epic values are mean cut marks per animal unit rather than percentages.)

In spite of
Figure 7.9 Graph showing a hierarchical cluster analysis (average-linkage dendrogram) of archaeological and ethnoarchaeological sites based on the raw percentages of cut marks on long bone shafts versus ends. Cases included: Hadza camp, Hadza blind, Hadza butchery, FLK 22, Gesher Benot Ya'aqov, and Die Kelders.
some significant differences between the Hadza assemblages, they group together largely because of their overall high frequencies of cut marks on limb bones. The close similarity of FLK 22 and Gesher Benot Ya’aqov mirrors the findings already presented in this section based on cut mark frequencies by anatomical region. Finally, Die Kelders, which also has high overall frequencies of cut marks, appears as the outlier of this group owing to its high frequency of cut marks at long bone shafts relative to articular ends, which is the opposite of the pattern seen at the Hadza sites. The cluster analysis based on ratios of cut marks on upper and lower limb bone ends to shafts shows some other dimensions of this case. For one thing, this approach allowed me to tentatively include the Porc Epic data, which were reported in terms of mean number of cut marks per animal unit rather than percentages of cut marked element portions. First, the Hadza butchery stands appear as an extreme outlier here, since this is the only case in which cut marks appear in much higher frequencies on limb bone ends relative to shafts. Next, Gesher Benot Ya’aqov, Porc Epic, and Die Kelders cluster together by virtue of having high frequencies of cut marks on limb bone shafts relative to articular ends. Finally, the Hadza residential camp and hunting blind assemblages
Figure 7.10 Graph showing a hierarchical cluster analysis (average-linkage dendrogram) of archaeological and ethnoarchaeological sites based on the ratios of cut marks on long bone ends versus shafts. Cases included: Hadza camp, FLK 22, Hadza blind, Gesher Benot Ya'aqov, Die Kelders, and Hadza butchery.
cluster with FLK 22 on account of relatively even ratios of cut marks on long bone shafts and articular ends. What do these patterns mean? First, the high frequencies of cut marks on shafts at Gesher Benot Ya'aqov, Porc Epic, and Die Kelders are probably attributable to the primacy of defleshing activities relative to limb disarticulation. As the most extreme case, Porc Epic has virtually no cut marks at articular ends, and this pattern is somewhat puzzling. It may mean that (1) dismemberment activities occurred elsewhere before the transport of limbs to the Porc Epic site, (2) large segments of carcasses were accumulated whole and were defleshed without much joint disarticulation, (3) taphonomic forces obscured cut marks at articular ends, or (4) some combination of the first three occurred. Of these four possibilities, the second and third seem most likely, given that field processing and transport in the Hadza cases seem to result in much higher frequencies of cut marks at articular ends than what is evident at Porc Epic. Although I excluded the cut mark data from Size Class 1A and 1 animals, the Porc Epic assemblage is still dominated by Size Class 2 and 3 prey animals, which are likely small enough to have been transported whole without previous carcass segmentation. In any case, the patterns associated
with Gesher Benot Ya'aqov, Die Kelders, and Porc Epic imply access to nutritionally rich carcass portions and high frequencies of defleshing activities relative to dismemberment. The even proportions of cut marks on shafts and articular ends at the Hadza residential camp, hunting blind, and FLK 22 also form a provocative pattern. From this, one might jump to the simplistic conclusion that FLK 22 was a residential campsite, as has often been suggested. While there are many lines of evidence that suggest that FLK 22 was not a home base site, there may be several reasons for its similarity to the Hadza sites. In the previous chapter, I argued for some amount of selective transport of certain meat and marrow-rich bone elements at FLK 22. This approach would have necessitated disarticulation before transport in concert with further defleshing activities, which were perhaps performed at the FLK 22 site. This kind of butchery behavior would account for its similarity with the Hadza sites, although the actual site use dynamics were quite different from one another. Also, the taphonomic dynamics evident at FLK 22 might have lowered the numbers of observed cut marks on articular ends. Thus, FLK 22 may have initially had more similarity with the Hadza butchery stand than its current archaeological characteristics might suggest. More important, these statistical patterns once again corroborate the suspicion that early hominins were systematically accumulating nutritionally rich bone elements at FLK 22.

Implications for Change over Time

In this analysis, FLK 22 once again stands out as a particularly important case because of its early date and its striking archaeological patterning.
Based on the analyses in the previous chapter and here, it is apparent that early hominins at FLK 22 (1) had access to the meat- and marrow-rich bone elements of medium-sized prey animals and (2) selectively transported and accumulated these elements in higher-than-expected frequencies relative to predictions based on bone density. In terms of the foraging ecology of early hominins, this finding suggests that faunal acquisition represented an extremely efficient, productive, and highly ranked subsistence opportunity. In terms of site use patterns, these data pointing to limb dismemberment as a common activity may also lend some support to the argument that FLK 22 was a location to which animal parts were selectively transported from the locations of initial carcass acquisition, yet they do not necessarily imply home base use. Skeptics might well ask what the difference here really is, and I am partly sympathetic to this question. However, I would argue that FLK 22 has more in common with what O'Connell and colleagues (2002) refer to as "near-kill accumulations," which are locations near animal acquisition sites to which carcass parts were transported to avoid confrontations
with other carnivores and to find water or shade. Furthermore, this type of site use is consistent with the routed foraging mobility system I have argued for earlier in this book. Of the remaining archaeological sites considered in this analysis, Gesher Benot Ya'aqov would seem to be most comparable with FLK 22, with similarities in terms of cut mark frequencies broken down by anatomical region (though suitable data are unavailable for Die Kelders and Porc Epic). However, Gesher Benot Ya'aqov has a significantly higher proportion of cut marks on long bone shafts relative to articular ends, suggesting that dismemberment was not as common a butchery activity as was defleshing. This observation may suggest the accumulation of whole or nearly whole carcasses in this context, which is easy to imagine given that the dataset provided by Rabinovich and colleagues (2008) deals only with fallow deer—a small enough animal to transport without much carcass segmentation. Were larger animals included in this analysis, the cut mark patterning at Gesher Benot Ya'aqov might more closely resemble that observed at FLK 22. In terms of thinking about site use patterns at Gesher Benot Ya'aqov, special activity area use rather than true home base use once again would seem to make sense. As Alperson-Afil and colleagues (2009) have demonstrated, there is a surprising degree of spatial organization and segregation of activity areas at the Gesher Benot Ya'aqov site (which FLK 22 basically lacks). I return to this issue later, but for now it is worth observing that these data are consistent with a series of ephemeral uses of the site in which hominins butchered and consumed meat from animal carcasses while also engaging in some stone knapping activities and consuming a variety of other food resources.
It is also the case that this site is located on the fringes of a freshwater paleo-lake shore, which would have been an attractive feature for hominins consuming animal carcasses and other food resources. In short, although Gesher Benot Ya’aqov clearly has better spatial resolution and less time-averaging in terms of its Middle Pleistocene archaeological remains, it may well have many contextual and hominin behavioral features in common with FLK 22. The MSA assemblages considered in this analysis also have much in common in terms of cut marks focused on meaty anatomical regions and on long bone shafts instead of articular ends. At a minimum, this situation again demonstrates that MSA hominins also had access to nutritionally rich carcass portions and that dismemberment activities were less common relative to defleshing. Issues of site use patterns and bone accumulation processes in these cases are also quite complex. In the case of Die Kelders, which is characterized by relatively high frequencies of large prey such as eland (Taurotragus oryx), frequencies of cut marks on articular surfaces would seem to be consistent with a fair amount
of limb dismemberment activity, since eland would be too large to transport as whole carcasses. In combination with other spatial data, Die Kelders would seem to be a fairly clear case of a residential site to which animal carcass portions were selectively transported and accumulated. However, the Porc Epic data, which are dominated by smaller prey species, would seem to imply the transport and accumulation of relatively whole carcasses, while the defleshing activities at Porc Epic appear to have been fairly similar to those at FLK 22, Gesher Benot Ya'aqov, and Die Kelders. Thus, there are several general conclusions that may be drawn from this comparative analysis:

1. Much of the observed variation within these datasets seems to stem from interrelated issues of taphonomy, density-mediated attrition, assemblage fragmentation, problems associated with the shaft critique, and inconsistencies in terms of analytical procedures. Likewise, variability in conditions such as the environmental context of sites and size characteristics of prey clearly influenced hominin butchery activities and bone modification patterns, perhaps in many cases more than any particular aspect of foraging behavior that might be intrinsic to a particular time period (or stage of hominin evolution, although I would avoid thinking in such terms). These problems obviously limit the detail with which past patterns of hominin behavior may be inferred, and they are especially important to remember when individual sites, such as FLK 22, are used as proxies for large units of time and space in the interest of building evolutionary theory.

2. In spite of these problems, there are some aspects of archaeological patterning that relate to actualistic data in ways that imply certain forms of faunal acquisition, butchery, transport, and accumulation.
Although the application of such actualistic referential frameworks to these archaeological cases is not totally straightforward owing to the analytical ambiguities discussed earlier, they are useful in making inferences at very general scales. 3. There is little change over time in terms of faunal acquisition, butchery, and bone accumulation that is securely demonstrable on the basis of these data. With that said, it appears that all the sites considered here at which hominins were the primary bone accumulators offer evidence indicating that hominins routinely acquired nutritionally rich carcass portions of medium-sized and sometimes large-bodied prey. In other words, although there are some aspects of these data that obviously warrant further scrutiny, there are no unequivocal changes apparent in the transition from the ESA or Lower Paleolithic to the MSA or Middle Paleolithic.
Alternative Directions in the Analysis of Bone Modification Patterns

The comparative analysis offered in this chapter illustrates some of the difficulties and ambiguities associated with the use of cut mark data to make inferences concerning hominin faunal exploitation activities. Although more precise and technologically sophisticated analytical approaches may help in this situation, it may well be the case that site-specific and idiosyncratic components of situational context shaped archaeological patterns of bone modification as much as the broader ecological and economic dynamics of interest to zooarchaeologists. In the wake of shifting analytical interests on this topic, more attention is now afforded to aspects of bone damage morphology that have previously been neglected. I have already mentioned studies examining the depth and cross-sectional shape of cut marks as an approach for inferring the sorts of tools used in butchery activity, butcher skill, and even, according to some, cognitive capabilities (Bello and Soligo 2008). I also believe that these sorts of techniques hold much promise in adding to our analytical toolkit, though appropriate referential frameworks are badly needed to fulfill their great potential. Similarly, work such as that of Lemke (2013) aimed at categorizing the morphologies of cut marks and other forms of bone damage may add further nuance and depth to our observations, widening the range of phenomena that may be examined through these sorts of zooarchaeological analysis. For now, I focus on one specific manifestation of these new directions in the study of cut mark shape and orientation—that of Stiner and colleagues’ (2009) work at the Middle Pleistocene site of Qesem Cave in Israel, dating between 400 and 200 ka. 
In this study, Stiner and colleagues (2009) present data concerning the orientation and placement of cut marks on the faunal remains of fallow deer at Qesem Cave. In comparison with cut marks from later Middle and Upper Paleolithic sites in the region, as well as ethnographic and ethnoarchaeological observations of modern foragers, they found that the cut marks tend to occur in much higher frequencies and to be more randomly oriented. Stiner and colleagues (2009: 13211) characterize these cut mark patterns as “heavy-handed” and in a state of relative “disorder” in comparison with those produced by later human groups. As they observe, these patterns would seem to be consistent with less skilled butchers, multiple butchers working simultaneously, repeated interruptions of butchering episodes, or some combination of these factors. In any case, such an approach would contrast with the butchery behavior of modern human foragers, in which butchery tends
to be conducted by one or a few skilled individuals in a systematic sequence of operations and as an entrée to subsequent social food sharing, cooking, and consumption activity. This study may imply some important behavioral dynamics inherent within later Middle Pleistocene hominin groups. On the one hand, there is fairly unequivocal evidence that early hominins (1) hunted prey animals such as fallow deer, (2) transported them away from the initial kill site to a secondary location where processing and consumption took place, (3) butchered carcasses intensively and thoroughly, and (4) cooked resulting nutritional resources with controlled fires. On the other hand, micromorphology and other analytical approaches to site structure suggest that Qesem Cave had relatively structured activity areas focused on hearths in ways that are similar to ethnographically observed forager groups. All these features have been taken, at one time or another, as markers of behavioral modernity and were once thought to have originated within anatomically modern human populations in the late Upper Pleistocene (and for that matter, in Western Europe). Among other things, this evidence suggests that late Middle Pleistocene hominins created manifestations of site structure and subsistence behavior that resemble those of much later human groups, yet the cut mark orientation data suggest substantially different carcass butchery dynamics and, therefore, food-sharing practices. To begin with, this evidence seems to support decoupling the various archaeological phenomena that have been traditionally used as markers of behavioral modernity. Although certain aspects of site use behavior are consistent with canonical definitions of modernity, the cut mark data would seem to imply radically different forms of social organization in terms of the butchering, sharing, and consumption of animal carcasses. 
While Stiner and colleagues (2009) are appropriately cautious in their speculations concerning the ways in which the Qesem Cave hominins interacted with fallow deer carcasses, one can imagine a disorganized group of consumers removing masses of flesh for their own consumption, rather than the (often complex) process of meat distribution that occurs among modern forager groups. Indeed, other large-bodied carnivore species that regularly hunt medium- to large-sized prey animals frequently engage in this type of interaction with carcasses, and it may not be beyond the pale to suppose that late Middle Pleistocene hominins did so as well. In certain respects, this approach to defleshing and consuming carcasses is also similar to the ways in which great ape species interact with food resources, although they clearly lack a parallel in terms of the package size of these sorts of prey animal. In short, while Stiner and colleagues (2009) are rightly tentative in their conclusions, awaiting both additional
actualistic referential frameworks and comparable archaeological data from other Paleolithic contexts, this data source offers a novel window on early hominin faunal consumption patterns. Furthermore, while these patterns are quite different from our observations of modern forager groups, they are well within our range of expectations based on various nonhuman points of comparison. In their initial accounts of the cut marks on the faunal remains from FLK 22, Bunn and Kroll (1986) show several clear cases of complex superpositions of cut marks with different orientations (as well as superposition with carnivore toothmarks; for example, Figure 7.11). This complexity in terms of overlapping cut marks with variable orientations has long been noted as a feature of the FLK 22 faunal assemblage (see also Domínguez-Rodrigo 1997), although its significance has not generally been recognized. In fact, beginning with Potts and Shipman (1981), the apparent “heavy-handed” (borrowing the language of Stiner and colleagues 2009), deep, and disorganized cut mark patterns have been taken more as an index of the intensity of early hominin butchery behavior (see also Bunn and Kroll 1986; Blumenschine 1995; Capaldo 1997; Domínguez-Rodrigo 1997). This assumption may be true, yet it is also worth considering these patterns in terms of their implications for the social dynamics associated with butchery, sharing, and meat consumption. Once again, while more data are sorely needed to make any substantive statements on this subject, I might predict patterns that would support substantially different forms of social interaction with animal carcasses.
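The “disorder” that Stiner and colleagues (2009) describe, like the overlapping and variably oriented cut marks at FLK 22, lends itself in principle to quantification with standard circular statistics. The sketch below is purely illustrative and is not drawn from any of the studies cited here: the function and the angle values are hypothetical, chosen only to show how a mean resultant length could serve as an index of orientation order versus disorder. Cut marks are axial data (a mark at 10° is indistinguishable from one at 190°), so angles are doubled before averaging, a standard device in circular statistics for undirected lines.

```python
import math

def circular_dispersion(angles_deg):
    """Mean resultant length R for a set of orientations in degrees.

    R near 1 indicates tightly aligned marks; R near 0 indicates
    randomly oriented ("disordered") marks. Angles are doubled to
    treat the marks as axial (undirected) data.
    """
    n = len(angles_deg)
    c = sum(math.cos(2 * math.radians(a)) for a in angles_deg) / n
    s = sum(math.sin(2 * math.radians(a)) for a in angles_deg) / n
    return math.hypot(c, s)

# Hypothetical orientation samples (degrees from the bone's long axis)
aligned = [88, 90, 91, 92, 89, 90]     # systematic butchery strokes
scattered = [5, 47, 93, 130, 168, 22]  # "heavy-handed" disorder

print(round(circular_dispersion(aligned), 2))    # near 1.0
print(round(circular_dispersion(scattered), 2))  # far lower
```

On this kind of index, consistently low values for an assemblage, relative to later Paleolithic or ethnoarchaeological baselines, would be the quantitative analogue of the qualitative “disorder” discussed above, though any such comparison would also have to control for superposition, fragmentation, and observer consistency.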
Afterthoughts on Site Use Patterns and Food Sharing

The analyses presented in Chapters 6 and 7 have offered good evidence that early hominins had access to meat- and marrow-rich bone elements, that they transported these elements preferentially from their locations of initial acquisition, and that they accumulated them at secondary locations of some sort or another. This fact, by itself, has been taken by many modern researchers as an indicator that early hominins used home base sites in pretty much the sense implied by the Washburn-Isaac synthesis (Washburn and Lancaster 1968; Isaac 1978a). Once again, the FLK 22 site and other faunal assemblages from Bed 1 at Olduvai Gorge are at the center of the debate. Other Lower and Middle Pleistocene sites with striking spatial patterning and evidence for the controlled use of fire have also been argued to have been home bases, including Gesher Benot Ya’aqov (Goren-Inbar et al. 2004; Alperson-Afil, Richter, and Goren-Inbar 2007; Alperson-Afil et al. 2009). However, I would argue that a great deal of caution is warranted in the evaluation of these claims.
Figure 7.11 Repeated/conflated cut marks on bones from FLK 22 (micrographs from Bunn and Kroll 1986)
The question concerning whether the FLK 22 site was an early hominin home base goes back to the initial fieldwork of Mary Leakey (1971) and came to be a centerpiece of the hunting-and-scavenging debate (Binford 1981, 1984, 1987; Isaac 1983; O’Connell and Hawkes 1988a,
1988b, O’Connell et al. 2002; Potts 1988, 1992; Schick and Toth 1993; Blumenschine 1995; Capaldo 1997; Domínguez-Rodrigo 1997, 2003). I have already argued, given other forms of archaeological data, that Lower Pleistocene assemblages of this sort were not home bases but rather special activity areas used within routed foraging mobility systems. However, we need to consider whether the selective animal part transport argued for here should be taken as evidence for home base use. Furthermore, if such transport does not indicate home base site use, then what types of behavior does it indicate, and what are its implications for our understanding of early hominin foraging ecology and social systems? To begin with, it may be taken as a given that all large-bodied carnivores engage in some form of carcass transport. As an anecdotal example, Figure 7.12 shows a lioness (Panthera leo) I observed at the Kgalagadi Transfrontier Park, South Africa, dragging the torso of a gemsbok (Oryx gazella), which it scavenged from an earlier kill, to the cover of a nearby shade tree. Even though the transport costs were high in this case relative to the amount of meat remaining, the lioness still transported the carcass to shade and cover to avoid confrontation with other carnivores. Such patterns of carcass transport and site use by carnivores are characterized by Blumenschine and colleagues (1994) as a “refuge” strategy. Such sites have much in common with the “near-kill” accumulation model of site use proposed by Binford (1981, 1984), which is a key element of routed foraging mobility systems.
Figure 7.12 A lioness scavenges a gemsbok carcass from a lion kill in the Kgalagadi Transfrontier Park, dragging the torso to the cover of a shade tree
Furthermore, drawing on Hadza ethnoarchaeological data, O’Connell and colleagues (2002) have demonstrated that such sites have strategic functions for modern foragers living in similar environments in sub-Saharan Africa today. To be clear: there are a number of aspects of Binford’s (1981, 1984) near-kill accumulation model that would seem to be incorrect in light of all of the accumulated data. Binford viewed near-kill accumulations as the result of early hominins transporting animal parts scavenged from carcasses that were already heavily ravaged by previous predators. Again, in the cold light of the 21st century, there would seem to be ample evidence suggesting that, one way or another, early hominins had access to carcasses with large masses of flesh remaining. I now strongly doubt that FLK 22 and other similar early hominin faunal accumulations were the result of marginal scavenging activities, and, therefore, this version of the near-kill accumulation model seems unwarranted. However, as O’Connell and colleagues (2002) state, the use of such sites still makes sense for modern foragers who have access to intact carcasses through both hunting and scavenging activities. Likewise, they also point out that, among the Hadza, such sites are consistently situated in shady locations near water sources and have high levels of taxonomic diversity of prey animals—both of which are also characteristic of FLK 22. Finally, O’Connell and colleagues also point out that the Hadza never use such sites as residential camps and/or sleeping places, likely because of the risk of predation, and such locations are not where food-sharing behavior of the sort implied by the Washburn-Isaac evolutionary models takes place. There are also some important differences between Hadza near-kill accumulations and FLK 22, many of which have manifested themselves in the analysis presented in this chapter. 
By virtue of being locations at which carcasses were initially field processed for further transport to residential camps, the Hadza butchery stands are characterized by the scarcity or absence of certain nutritionally rich bone elements and high frequencies of cut marks on long bone articular ends relative to shafts. The obvious difference here is that sites such as FLK 22 were apparently not locations of dismemberment and processing in advance of subsequent transport to residential camps. Rather, they were the locations of consumption, serving as locations to which carcass segments were transported from their locations of initial acquisition. Thus, they show a combination of cut mark patterns resulting from both dismemberment and defleshing activities. In short, when all their archaeological characteristics in terms of landscape location, taphonomy, spatial structure, and stone tool/faunal assemblage composition are considered, sites like FLK 22 represent a combination of activities that do not have clear analogues among modern forager groups (O’Connell et al. 2002;
Domínguez-Rodrigo 2003; Domínguez-Rodrigo and Barba 2006) but that make a great deal of sense in terms of the comparative ethology and ecology of other large-bodied carnivore species. Some other lines of archaeological research on the FLK 22 assemblage also speak to these issues in some important ways. For example, several studies have made progress in understanding the environmental context of FLK 22 (Ashley et al. 2010; Blumenschine et al. 2012a). Such studies have shown that this site was indeed located in a riparian woodland in association with a water source. Likewise, the work of Blumenschine, Njau, and colleagues (Blumenschine et al. 2012a; Njau and Blumenschine 2012) has demonstrated the presence of carnivore feeding traces on important hominin fossils from Olduvai Bed 1 sites, including FLK 22. Their environmental reconstructions show that the FLK 22 site, by virtue of its exceptional topographic and environmental characteristics within the broader Olduvai landscape, was attractive to both large-bodied nonhominin carnivores and early hominins. This context would have made it an extremely dangerous and unsuitable site for a residential camp/sleeping place. On another front, Faith and colleagues (2009) have also offered evidence based on relative element abundances that, although FLK 22 is exceptional among Olduvai’s Bed 1 sites in showing evidence for the selective transport of carcass segments, its transport distances were minimal in comparison with MSA and Middle Paleolithic sites, where home base site use is much more certain. This evidence may well reflect the kinds of transport decisions facing hominins in removing carcass portions from sites of acquisition to nearby secondary consumption locations. As Faith and colleagues point out, by themselves, short transport distances say little about the ways in which early hominins used FLK 22. 
In my view, however, they complement a growing body of contextual evidence suggesting nonresidential site use patterns and, in particular, near-kill refuge strategies of the sort consistent with routed foraging mobility systems. Other ESA and Lower Paleolithic sites have begun to enter into this discussion because of certain aspects of spatial patterning and evidence for the controlled use of fire. Of definite significance to the analysis in this chapter, Gesher Benot Ya’aqov is one such site that has received a great deal of attention (Goren-Inbar et al. 2004; Alperson-Afil et al. 2007, 2009; Alperson-Afil and Goren-Inbar 2010). To begin with, if the evidence from Gesher Benot Ya’aqov does indeed indicate the controlled use of fire by early hominins, then this use is significantly older than the next oldest known cases. In addition, although researchers working on the use of fire at Gesher Benot Ya’aqov have presented admirably detailed spatial and geological data (esp. Alperson-Afil and Goren-Inbar
2010), it may be worth questioning certain aspects of the interpretation of this site as a home base centered on hearths. A number of authors have criticized the conclusion that the distribution of burned microartifacts at Gesher Benot Ya’aqov indicates the presence of so-called phantom hearths (Alperson-Afil et al. 2009). For example, certain authors complain that no micromorphological analysis of sediments was performed (Sandgathe et al. 2011; Berna et al. 2012), a complaint that is perhaps to be expected, given that this technique has increasingly become the coin of the realm in determining the presence of anthropogenic ash at Paleolithic archaeological sites. But, as Pickering (2012) has pointed out, the pitfalls associated with distinguishing the hominin-controlled use of fire from other types of burning at open-air sites such as Gesher Benot Ya’aqov remain serious in the absence of well-defined hearth features (see also James 1989 for an earlier discussion). In short, there are still some good reasons for moderate skepticism with regard to such claims for the early controlled use of fire. More important, even if the evidence from Gesher Benot Ya’aqov does indicate the presence of “phantom” hearths used by early hominins there, associated patterns of site use remain complex and not necessarily consistent with home base site types. In their assessment of spatial patterning, Alperson-Afil and colleagues (2009) take the discrete concentrations of various artifact types to indicate residential site use. For one thing, however, the evidence presented concerning the use of “phantom” hearths suggests that they were not used very systematically or repeatedly over time. Rather, they seem to have been used ephemerally during a relatively short-term period of site use. For another, it is difficult to see why artifact concentrations of the sort documented by Alperson-Afil and colleagues (2009) necessarily indicate residential site use rather than other types of activities. 
As O’Connell’s (1987) ethnoarchaeological research in Australia has shown, to accurately diagnose spatial structure resulting from long-term residential site use in open-air contexts, excavation sizes of 300–400 m² would be necessary. In contrast, I propose keeping an open mind to the possibility that Gesher Benot Ya’aqov was an activity area at which a number of tasks were performed. To me, the evidence suggesting the conflation of several consecutive short-term and ephemeral uses of the site is consistent with the patterns that might be expected of special activity areas within the routed foraging mobility system. It seems that Gesher Benot Ya’aqov was a location to which animal carcasses were transported for processing and consumption, which possibly involved the use of fire. It seems possible that other locally available food resources were consumed and some knapping activity was conducted. Because of this evidence, I might imagine scenarios in which hominins acquired the carcasses of
prey animals and transported them to the Gesher Benot Ya’aqov site, where water and other attractive features were located. Carcasses were then butchered (perhaps using debitage produced on site) and consumed (perhaps in association with the ephemeral fire features). Finally, some retooling activities occurred, which, in combination with the production of flakes for performing local economic activities, explains the presence of knapping debris. Furthermore, it is once again apparent that the arguments for certain kinds of residential site use have been posed as proxies for the precocious cognitive and cultural sophistication of hominins in this particular case. For example, Alperson-Afil and colleagues conclude their study of spatial organization at Gesher Benot Ya’aqov with this sentiment:

The evidence from Gesher Benot Ya’aqov suggests that early Middle Pleistocene hominins carried out different activities at discrete locations. The designation of different areas for different activities indicates a formalized conceptualization of living space, often considered to reflect sophisticated cognition and thought to be unique to Homo sapiens. . . . Modern use of space requires social organization and communication between group members, and is thought to involve kinship, gender, age, status, and skill. (2009: 168)
As with much of the hunting-and-scavenging debate, the particular forms of behavior and site use discussed in this case are not viewed as most significant in their own right but rather as a proxy for other complex social phenomena. Likewise, as is the case with many elements of the hunting-and-scavenging debate, it is difficult to see how any of the spatial patterns identified at Gesher Benot Ya’aqov relate to issues such as kinship and gender, which are difficult enough to address on the basis of much more recent archaeological sites with more straightforward taphonomic histories and closer connections with modern ethnographic contexts. These kinds of argument are also substantially related to issues of food sharing and their implications for hominin evolutionary scenarios. Beginning with Isaac (1978a, 1978b), the mere occupation of home base sites in combination with the predation of large animals was taken as sufficient evidence for social food sharing of the sort practiced by modern foragers. For example, Clark characterizes this logic as follows: “It is incontrovertible that parts of a number of different animals were brought together at one place, and there is every reason to accept that these archaeological collections were made by hominids at favourable strategic places providing other resources and protection where the meat could be processed and shared” (1996: 323, emphasis added). Although I suppose that I agree with the first part of this statement, I do not think
it is at all self-evident that social food sharing automatically follows from the acquisition of large packages of food such as the carcasses of large-bodied ungulates in combination with the occupation of home bases. I have two main reasons for holding this opinion. First, much of the importance of food sharing within the Washburn-Isaac synthesis was based on observations of modern hunter-gatherer groups, especially the Ju/’hoansi and the Hadza (Lee 1968, 1979; Wiessner 1982, 2002; Woodburn 1982; Hawkes, O’Connell, and Blurton Jones 2001; Marlowe 2010). In these cases, food sharing between nonkin individuals is seen as predicated on egalitarian social systems and strong social norms promoting equality in terms of access to resources. Furthermore, egalitarian social systems of this sort have been assumed to characterize prehistoric hominin forager groups going back into our deep evolutionary past (Isaac 1968, 1978; Lee 1968, 1979; Washburn and Lancaster 1968), and this assumption remains prevalent today. More recent considerations of modern hunter-gatherer variability, however, have shown that egalitarian social systems of this sort are certainly not universal and may actually be somewhat exceptional. In fact, studies such as that of Wiessner (2002) help to demonstrate that egalitarian social systems and reciprocity practices originate in the context of forager groups living in marginal environments with much larger populations than would likely have been present in Paleolithic times (see also Enloe 2003 for more detailed discussion). Under such circumstances, egalitarian social structures and food-sharing practices serve to mitigate risky subsistence practices, such as hunting in such marginal environments as the modern Kalahari, where prey animals are overexploited and game densities are low. 
Furthermore, such risky subsistence practices may take on their real importance only during periods in which other more reliable subsistence resources fail, which may occur on a scale of decades. Thus, I would argue that the egalitarian sharing practices of groups like the Ju/’hoansi are not the fossilized remains of the innate behaviors of our hominin ancestors. Instead, they are the outcomes of stressed populations living in sparse environments, existing primarily to reduce risk during periodic resource failures (which are almost never observed by anthropologists). This puts them squarely within the realm of fairly extreme “post-Pleistocene adaptation,” and therefore these forms of behavior are a poor basis for modeling the behavior of early members of the genus Homo. In addition to this ethnographic critique, there are also archaeological reasons for doubting the prevalence of such food-sharing practices prehistorically, even within Paleolithic contexts where home base use has been more firmly demonstrated. For one thing, as Enloe (2003) has discussed at length, there are substantial analytical problems in terms of
recognizing food sharing on the basis of archaeological remains. Based on a comparative reading of the ethnoarchaeological literature, the approach advocated by Enloe involves the contextualization of faunal remains through fine-grained spatial mapping, taphonomic analysis, and carcass refitting. Furthermore, very few archaeological sites have appropriate characteristics in terms of their formation processes for food sharing to be at all discernible. Enloe presents the French Magdalenian site of Pincevent as a case study in this discussion with the obvious implication that very few archaeological sites are at all like it in terms of their site formation processes. For these reasons, if prehistoric food sharing is to move beyond its current status as an axiomatic expectation based on the large package size of hunted food resources, radically innovative archaeological methods grounded in strong actualistic research are required. In addition, there are also a number of cases that would seem to offer fairly clear evidence that early hominin butchering and sharing patterns were not analogous with those of modern human foragers. This chapter has already discussed the case of Qesem Cave, which might benefit now from some further consideration. On the one hand, there may be stronger evidence that Qesem Cave was a residential site. This evidence includes micromorphological studies of sediments showing the repeated use of fire features over fairly long periods of time and other complex forms of spatial patterning (Karkanas et al. 2007; Stiner, Barkai, and Gopher 2009). If these interpretations are correct, this would be an interesting pattern, given that Qesem Cave is a terminal Lower Paleolithic site whose assemblages are dominated by elongated debitage and contain virtually no bifaces. 
On the other hand, these studies show that, in spite of a range of complex spatial patterning, animal butchery practices associated with sharing behavior were structured in ways profoundly different from what is known of modern forager groups. Although Stiner and colleagues (2009) make the case that some basic form of delayed food consumption and sharing is indicated by the Qesem Cave evidence, they are equally clear about the divergence of this patterning from the forms of social food sharing found in both later prehistoric and modern forager contexts. Based on my examination of a variety of evidence, I think it makes sense to decouple social food sharing from both home base use and the acquisition of large fauna. The origins of this theoretical expectation go back to a phase of anthropological theory in which modern foragers were seen as watered-down versions of those that lived during the Paleolithic and were therefore the outcome of early phases of hominin evolution (see Binford 2001 for a longer discussion of this so-called founder’s effect in the study of modern foragers). In this framework, egalitarian social structures and sharing practices were seen as the natural state of
“pure” foraging groups and were absent only in modern hunter-gatherer populations that had been tainted by recent contact with farming societies. Over the last half century, this view has been turned on its head through a program of ecologically oriented research, which has instead suggested that egalitarianism and associated varieties of food sharing serve very important roles in mediating risk in terms of subsistence practices. Thus, the striking forms of reciprocity witnessed among such modern foragers as the Ju/’hoansi and Hadza need not necessarily have deep antiquity, nor is reciprocity automatically implied by the beginnings of large game hunting and/or home base use. Furthermore, until adequate methods of recognizing food sharing in the archaeological record are in place (cf. Enloe 2003), the role of this type of behavior in our evolutionary theory remains largely untestable.
Conclusion

This chapter has examined patterns of bone modification, and especially cut mark data, as a tactic for understanding hominin faunal exploitation and butchery patterns. The comparative analysis presented here complements those presented in the previous chapter and suggests several conclusions: (1) a combination of methodological and taphonomic issues makes the use of cut mark data difficult and the resulting inferences ambiguous relative to important questions in the development of evolutionary theory; (2) both ESA and MSA hominins had access to the meat- and marrow-rich carcass segments of medium-sized prey animals, and hominins selectively transported and accumulated these elements at certain sites, leaving butchery damage that bespeaks dismemberment in certain cases but defleshing more universally; (3) although the methodological and taphonomic limitations mentioned earlier complicate this examination, there are no major changes over time before the Upper Pleistocene in terms of faunal acquisition strategies; and (4) although there may be some changes in terms of carcass transport behavior and the social dynamics shaping butchery activities, currently available datasets are not adequate to demonstrate these changes in clear-cut ways. Further methodological developments, many of which may involve emerging analytical approaches and technologies, are necessary before these sorts of issues can be resolved, although these new techniques clearly offer tremendous promise for inferring the economic practices and foraging ecology of Paleolithic populations in organizational perspective. Furthermore, the literature on this topic remains pervaded by evolutionary implications regarding the cultural and cognitive sophistication of early hominins. Many such studies attach their significance to
280 Chapter 7
claims concerning the antiquity and implied modernity of certain types of behaviors, including big game hunting, home base use, food sharing, controlled use of fire, the spatial segregation of activities within sites, and the like. I continue to believe that the data considered in these studies are not adequate to the task of assessing the modernity or sophistication of early hominins, whose behavior was only as complex as it needed to be. In contrast, these data are appropriate to the examination of a range of issues in terms of the organization of subsistence behavior and foraging behavioral ecology. Although such issues may not be perceived as being as exciting as putative evidence for early complex behavior, I strongly believe that they provide a sound basis for building ecologically grounded evolutionary theory.
Chapter 8
Alternative Perspectives on Hominin Biological Evolution and Ecology
So far this book has considered archaeological evidence associated with our Pleistocene hominin ancestors in the interest of learning about the organization of their economic practices and foraging ecology. This pursuit, of course, is not the sole, ultimate goal of paleoanthropology. Rather, it is a means to the ends of building theory concerning the evolutionary dynamics associated with Pleistocene hominin populations, as well as the evolutionary scenarios through which modern human populations emerged. However, I find it problematic that there are so few considerations of evolutionary theory within paleoanthropology that seriously integrate archaeological data and the ecological inferences drawn from these data. There are two main arenas of evolutionary theoretical discussion that have been brought to bear on the issues considered in this book. The first set revolves around the biological origins of Homo erectus and its descendants—characterized by major increases in brain size and body size, as well as a reorganization of the hominin body plan into its (more or less) modern human form. Here, the Washburn-Isaac synthesis represents one of the earliest and grandest attempts at integrating data from the fossil and archaeological records while also making use of comparative primate ethology. This theoretical scenario has also had some important adjuncts that make use of the expensive tissue hypothesis, aimed at explaining shifting hominin energetics in terms of organ sizes and the quality of nutritional resources (Aiello and Wheeler 1995; Aiello
Before Modern Humans: New Perspectives on the African Stone Age by Grant S. McCall, 281–322 © 2015 Left Coast Press, Inc. All rights reserved. 281
and Wells 2002). Among other things, such studies have offered a firmer footing for thinking about changes in brain size and hominin diet with the origins of Homo erectus. Likewise, some alternative theoretical scenarios, such as the “grandmothering” hypothesis (O’Connell, Hawkes, and Blurton-Jones 1999), have challenged important elements of conventional wisdom on this topic. The second set of frequently debated evolutionary questions revolves around the origins of anatomically modern humans and the fate of other Upper Pleistocene hominin species, especially the Neanderthals. Rather than being principally ecological in its orientation, this debate has focused almost exclusively on using certain skeletal features as a way of establishing relationships of ancestry and descent among our various hominin ancestors. Within the last two decades, in large part because of advances in both modern and ancient genetic studies, there has been a growing consensus that anatomically modern humans originated early in the Upper Pleistocene in sub-Saharan Africa, likely in the northern Rift Valley. These populations subsequently expanded out of sub-Saharan Africa during the later Upper Pleistocene, apparently swamping other archaic populations such as the Neanderthals (although moderate interbreeding seems likely). While advocates of the multiregional continuity perspective continue to offer useful insights, the growing and diversifying evidence on this subject has provided many solid points of agreement on Upper Pleistocene population movements and histories. The first of these debates focuses on very large-scale change and may indeed involve a saltational evolutionary event. The second of these debates centers on relatively small-scale skeletal characteristics, especially those having to do with cranial robusticity but also those perceived as having major implications for issues of ancestry and descent.
There are, however, some important anatomical changes that occurred within Pleistocene hominin populations after the emergence of Homo erectus and separate from issues associated with scenarios of modern human origins that hold important ecological and evolutionary implications. This chapter focuses on two of these dynamics: shifting body size and brain size. Body size has long been recognized as a biological characteristic with profound implications stemming from dynamics of metabolic energetics, climate, social organization, and other ecological variables. Furthermore, there are clear shifts in body size over the course of the Pleistocene that may offer important information for thinking about patterns of hominin ecology. Using variability in modern human forager groups as a frame of reference, I argue that the shifts toward smaller body size across the later Middle and Upper Pleistocene indicate larger and increasingly packed human populations. I also make the case that the denser populations
indicated by these changes in body size were the crucial dynamic causing the various forms of subsistence intensification discussed earlier in this book. Specifically, I propose that the first major decrease in body size corresponds with the transition from the ESA to the MSA, along with the origins of the modern pattern of residential site use. This pattern lends credence to my view that this transition was among the earliest major organizational changes related to economic intensification and was brought about by larger hominin populations. Next, I examine changes in brain size within Pleistocene hominin populations. In this case, I use data on nonhominin carnivores and primate species as frameworks for considering the causes of the changes in brain size seen within hominin populations. Using these data, I show that hominin brain size increased as foraging practices diversified, intensified, and took on greater strategic complexity. This evolutionary trend also had implications for levels of technological sophistication and, ultimately, the complexity of social dynamics in terms of the use of language and the structuring of social roles in terms of gender, age, status, and the like. However, I argue that the increasing sophistication of cultural behavior and complexity of social structures were not the ultimate causal factors in bringing about larger brain size. Instead, I propose that such changes in social behavior were consequences of larger brain sizes, which had other evolutionary roots in terms of changing ecological roles and contexts. Finally, in concluding this chapter, I consider the implications of my findings in terms of body size and brain size for the origins of Homo erectus at the beginning of the Pleistocene and then anatomically modern humans in the Upper Pleistocene. 
With regard to brain size, I argue that the initial increase in brain size seen in the transition from the australopithecines to early members of the genus Homo represents a major ecological reorientation associated with the occupation of increasingly arid and seasonal environments. I briefly consider some potential ways in which robusticity relates to the variables influencing body size, and I discuss the implications of reductions in skeletal robusticity evident within hominin lineages over the course of the Pleistocene and especially prominent in anatomically modern human populations. Specifically, I propose that robusticity declined as population sizes increased. This brought about greater evenness in the proportion of females to males within mating structures, reducing competition between males for mates while also favoring small body size for energetic reasons. Although direct functional reasons for the reduction of skeletal robusticity may be difficult to identify with any degree of certainty, they appear to map onto a range of other changes in hominin subsistence and reproductive ecology.
Pleistocene Changes in Hominin Body Size

Body size has important consequences for the ability of organisms to deal with a wide range of ecological circumstances, and it is strongly influenced by a number of key evolutionary forces. Among the factors influencing body size are Bergmann’s (1847) and Allen’s (1877) rules, which respectively state that organism body size increases and appendages shorten in order to optimize the surface area-to-volume ratio as an adaptation to cold environments. Numerous studies of modern human populations have shown that Bergmann’s and Allen’s rules strongly influence variation in body size (Coon, Garn, and Birdsell 1950; Schreider 1950, 1964; Newman 1953; Roberts 1953; Hiernaux 1968; Hiernaux, Rudan, and Brambati 1975; Eveleth and Tanner 1976; Ruff 1991, 1994, 2000; Katzmarzyk and Leonard 1998), and these principles have also been suggested for fossil hominins (Trinkaus 1981; Ruff 1991, 1994; Ruff and Walker 1993; Holliday 1997). After more than a half-century of research on the topic, there is relative certainty about the correlation between human body size and climatic conditions. The effect of nutrition on body size has also long been understood in both clinical and anthropological contexts (Froehlich 1970; Eveleth and Tanner 1976; Lee 1979; Kennedy 1984; Stinson 1992; Leonard and Robertson 1994; Ruff 2002; Foster et al. 2005). Such studies confirm the intuitive supposition that various nutritional deficiencies and/or poor health may lead to reduced body size—a pattern evident in secular increases in historical human body size with improved food availability and health care, mainly in the Western industrial world. In addition to the simple availability of food, nutritional quality also affects body size. Specifically, protein and fat intake appears to facilitate the achievement of large body size in human populations (for example, Froehlich 1970).
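The geometric intuition behind Bergmann's rule can be made concrete with a toy calculation (my illustrative sketch, not the book's): treating a body as a sphere of radius r, the surface area-to-volume ratio falls as 3/r, so larger bodies expose relatively less surface for heat loss.

```python
import math

def sa_to_volume_ratio(radius):
    """Surface area-to-volume ratio of a sphere of the given radius (equals 3 / r)."""
    surface_area = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface_area / volume

# Doubling linear size halves the relative surface available for heat loss,
# which is the logic by which cold climates are held to favor larger bodies.
for radius in (1.0, 2.0, 4.0):
    print(f"radius {radius}: SA/V = {sa_to_volume_ratio(radius):.2f}")
```

Real bodies are not spheres, of course, and Allen's rule concerns appendage proportions rather than overall scale, but the inverse scaling of relative surface area with size holds for any fixed shape.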
Clearly, aspects of nutrition and health influence the growth and development of organisms and therefore their ultimate body size. Finally, there is also a substantial literature implicating energetics as an evolutionary determinant of body size, since organisms with smaller body size have lower metabolic rates. Some of this is based on principles of island biogeography, in which organisms undergo evolutionary changes in body size as a response to new regimes of food availability and competition (for example, the small hominins, small elephants, large rats, and giant lizards on the Indonesian island of Flores; MacArthur and Wilson 1967; Kalmar and Currie 2006; Whittaker and Fernández-Palacios 2007; Quammen 2012). The evolution of organism body size is considered to be the result of balancing the metabolic costs of larger body size with its benefits in terms of inter- and intraspecies competition of various sorts (Lindsey 1966; Lindstedt and Boyce 1985; Peters 1986;
Millar and Hickling 1990; Blanckenhorn 2000). This line of research also suggests that periods of starvation are crucial in limiting organism body size, with small individuals having a higher probability of survival under such dire circumstances. Thus, there is an apparent evolutionary relationship between organism energetics and body size. A conceptual difficulty inherent in the study of body size is the conflation of climatic conditions and other aspects of environments that directly influence nutrition and energetics. For example, in cold environments, plant productivity is reduced, and foragers tend to consume higher-protein and higher-fat diets (Kelly 1995; Binford 2001). This situation could potentially mimic the effects of larger body size brought about according to Bergmann’s rule. Environmental conditions also have profound consequences for forager economic strategies, settlement systems, and a range of other characteristics—each of which could itself influence body size. In this section, I examine Binford’s (2001) data concerning the stature and body mass of modern foragers to evaluate the relative effect of such variables. Here, I use partial correlation as a statistical technique for examining the relative influence of environmental, economic, and behavioral variables on modern forager body size.

Constructing Frames of Reference: Examining Modern Human Forager Body Size

Among the vast range of data arrayed by Binford (2001)1 for the purposes of cross-cultural comparison relative to environmental datasets are values for body size in terms of mean stature and body mass. There are more data available concerning stature, since most of these data were collected by early ethnographers in the first half of the 20th century, who might have found the transport of a measuring tape less cumbersome than a scale. The range of cultural and environmental data provided by Binford offers many possibilities in terms of explaining variability in modern forager body size.
Of the explanations favored by Binford (2001), climatic variables remain the preferred ones. Indeed, this pattern is evident in some simple bivariate analyses. Figures 8.1a and 8.1b show the relationships between mean annual temperature and forager body mass and stature, respectively (body mass: r2 = 0.427, p < 0.001; stature: r2 = 0.094, p = 0.119). These analyses show that, as with other animal species, foragers with larger body size are to be found at higher latitudes with colder temperatures, although the relationship between climate and stature is much weaker than that with body mass. Figures 8.2a and 8.2b show the relationship between diet in terms of the percentage of calories from
Figure 8.1a Graph showing the relationship between mean annual temperature and mean male body mass for selected modern forager groups

Figure 8.1b Graph showing the relationship between mean annual temperature and mean male stature for selected modern forager groups
Figure 8.2a Graph showing the relationship between percentage of calories attained from hunting and fishing and mean male body mass for selected modern forager groups

Figure 8.2b Graph showing the relationship between percentage of calories attained from hunting and fishing and mean male stature for selected modern forager groups
hunting and fishing and body mass and stature, respectively (body mass: r2 = 0.486, p < 0.001; stature: r2 = 0.346, p = 0.001). These relationships are also clearly direct and strong, showing that foragers with larger body size consume greater quantities of protein-rich animal food sources. In fact, this variable would seem to explain patterns of stature much more effectively than does climate. Finally, Figures 8.3a and 8.3b show the relationships between population density and body mass and stature, respectively (body mass: r2 = 0.382, p = 0.001; stature: r2 = 0.153, p = 0.044). These relationships are clearly strong and inverse, showing that foragers with larger body size tend to be found in contexts with lower population densities. Of course, latitude, mean annual temperature, percentage of calories from animal sources, and population density all have obvious relationships with one another. As Binford (2001) demonstrated, foragers living at high latitudes eat more animal foods and live at lower population densities by virtue of low plant productivity; the inverse of this relationship also applies. Given this information, how do we go about pulling apart these highly autocorrelated relationships and determining the ultimate rather than proximate causes of variation in body size? One statistical approach for solving these sorts of problems is partial correlation. This technique is useful for evaluating the strength of relationships between variables while, in a manner of speaking, holding other variables constant. In this analysis, partial correlation allows me to evaluate the relationships between the independent variables of climate, diet, and population density—and the dependent variables of body size—while holding other potentially confounding independent variables constant.
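The logic of partial correlation can be sketched in a few lines of code. This is my illustration with synthetic data, not Binford's actual dataset; the residual-regression method shown is one standard way to compute a first-order partial correlation (correlate the residuals of each variable after regressing out the control).

```python
import numpy as np

def partial_corr(x, y, control):
    """Correlation of x and y with the linear effect of a single
    control variable removed from each (first-order partial correlation)."""
    x, y, control = (np.asarray(v, dtype=float) for v in (x, y, control))

    def residuals(v):
        # residuals of a simple linear regression of v on the control variable
        slope, intercept = np.polyfit(control, v, 1)
        return v - (slope * control + intercept)

    return np.corrcoef(residuals(x), residuals(y))[0, 1]

# Synthetic demonstration: population density is confounded with diet,
# but is given its own (hypothetical) negative effect on body mass.
rng = np.random.default_rng(42)
diet = rng.uniform(0.0, 1.0, 30)                  # % calories from hunting/fishing
density = 50 * (1 - diet) + rng.normal(0, 5, 30)  # persons per unit area
mass = 45 + 20 * diet - 0.1 * density + rng.normal(0, 1, 30)
print(round(partial_corr(mass, density, diet), 3))
```

The same residualization extends to several controls by regressing on all of them at once, which is effectively what the partial correlations reported in Tables 8.1 through 8.3 do for one control at a time.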
Tables 8.1, 8.2, and 8.3 present matrices of partial correlation results for the various variables considered in this analysis, holding mean annual temperature, percentage of food derived from hunting and fishing, and population density constant, respectively. These tables include some interesting results. First, body mass remains correlated with population density at statistically significant levels when both mean annual temperature and percentage of food derived from hunting and fishing are held constant. Stature, in contrast, shows similar patterns of correlation but fails to reach statistical significance. The latter phenomenon is, no doubt, at least partly due to issues of small sample size. Second, and more surprisingly, neither body mass nor stature correlate with mean annual temperature at statistically significant levels when population density is held constant. Finally, body mass and stature correlate with percentage of food derived from hunting and fishing at statistically significant levels when both mean annual temperature and population density are held constant. Conversely, body mass and stature do not correlate with mean annual temperature at statistically significant
Figure 8.3a Graph showing the relationship between population density and mean male body mass for selected modern forager groups

Figure 8.3b Graph showing the relationship between population density and mean male stature for selected modern forager groups
Table 8.1 Partial correlation of mean male body mass and stature with percentage of calories attained through hunting/fishing and population density, holding mean annual temperature constant (partial r, with two-tailed significance in parentheses; df = 24 for all off-diagonal cells)

                    Stature        Body Mass      % Hunting/Fishing  Pop. Density
Stature             1.000          .872 (.000)    .573 (.002)        –.291 (.150)
Body Mass           .872 (.000)    1.000          .409 (.038)        –.456 (.019)
% Hunting/Fishing   .573 (.002)    .409 (.038)    1.000              .241 (.236)
Pop. Density        –.291 (.150)   –.456 (.019)   .241 (.236)        1.000
Table 8.2 Partial correlation of mean male body mass and stature with mean annual temperature and population density, holding percentage of calories attained through hunting/fishing constant (partial r, with two-tailed significance in parentheses; df = 24 for all off-diagonal cells)

                    Stature        Body Mass      Mean Ann. Temp.    Pop. Density
Stature             1.000          .723 (.000)    .265 (.191)        –.324 (.106)
Body Mass           .723 (.000)    1.000          –.264 (.192)       –.656 (.000)
Mean Ann. Temp.     .265 (.191)    –.264 (.192)   1.000              .488 (.011)
Pop. Density        –.324 (.106)   –.656 (.000)   .488 (.011)        1.000
Table 8.3 Partial correlation of mean male body mass and stature with mean annual temperature and percentage of calories attained through hunting/fishing, holding population density constant (partial r, with two-tailed significance in parentheses; df = 24 for all off-diagonal cells)

                    Stature        Body Mass      Mean Ann. Temp.    % Hunting/Fishing
Stature             1.000          .813 (.000)    –.147 (.474)       .556 (.003)
Body Mass           .813 (.000)    1.000          –.515 (.007)       .726 (.000)
Mean Ann. Temp.     –.147 (.474)   –.515 (.007)   1.000              –.761 (.000)
% Hunting/Fishing   .556 (.003)    .726 (.000)    –.761 (.000)       1.000
levels when percentage of food derived from hunting and fishing is held constant. In addition, while body mass does correlate with population density at statistically significant levels when percentage of food derived from hunting and fishing is held constant, stature does not. These patterns seem to suggest several important relationships responsible for determining body size. While all three sets of variables clearly play important roles in influencing body mass and stature, population density and diet may actually be more important than climate alone. Next, although diet seems to be a somewhat stronger factor in promoting large body size, the fact that body mass remains strongly correlated with population density when percentage of calories derived from hunting and fishing is held constant suggests that it has an important and independent effect. Finally, from a methodological perspective, these data may also suggest that both body mass and stature are imperfect indices of other aspects of the human body plan, such as bi-iliac breadth, torso shape, and relative limb lengths, all of which may be more directly sensitive to the effects of Bergmann’s and Allen’s rules. In contrast, body mass remains the best index of metabolic dynamics and thus may reflect evolutionary responses to energetic constraints and diet more directly.
Fossil Patterns and Evolutionary Implications

Contrary to what many may have expected, hominin body size seems to have reached its peak during the Middle and early Upper Pleistocene. In his review, for example, Ruff (2002) isolates three major periods of change: first, there is a period of significant body size increase in the transition from the australopithecines to the early members of the genus Homo (see also Ruff, Trinkaus, and Holliday 1997). Although Holliday (2012) has recently demonstrated that initial early Homo individuals were somewhat small relative to modern human populations, later Pleistocene hominin populations saw body size increase to levels significantly surpassing those of modern human populations. Ruff (2002: 216) estimates that Pleistocene hominin populations living before 50 ka had body sizes 10–20% larger than those of modern humans. Next, Ruff identifies a significant decline in hominin body size after 50 ka, which was associated with the expansion of anatomically modern human populations out of Africa and the widespread origins of increasingly complex foraging technologies. Figure 8.4 is a box plot showing body size data presented by Ruff and colleagues (1997)
Figure 8.4 Graph showing mean estimated body mass for hominins of various ages (time periods: LP, MP, UP, and modern)
and discussed further by Ruff (2002). (I have excluded cold-adapted European Neanderthals for the sake of the current discussion.) In explaining these patterns, Ruff (2002) focuses on Bergmann’s and Allen’s rules as explanations for the increase in body size witnessed in the lead-up to the Middle Pleistocene, since this is the period in which hominin populations first lived at extratropical latitudes outside of Africa. I am skeptical of this explanation for several reasons. First, given fluctuation in Pleistocene climate according to glacial cycle dynamics, it is not clear that these hominin populations really lived in environments that would have been cold enough to spark adaptations in terms of Bergmann’s and Allen’s rules. Many Middle Pleistocene contexts at relatively high latitudes in Europe have faunal remains from animal species that we would today tend to think of as tropical or subtropical. For example, Schreve and colleagues (2002) have shown the presence of Middle Pleistocene hominin populations, unusual combinations of fauna today mostly associated with sub-Saharan Africa, and temperate floral communities all alive and well during Oxygen Isotope Stage (OIS) 9 in Essex, England. Even Neanderthals, which do show clear signs of adaptations according to Bergmann’s and Allen’s rules (for instance, Trinkaus 1981), occupied Europe in what might be described as a spatial/temporal mosaic, responding to fluctuations in glacial cycle geography (Hublin and Roebroeks 2009). Most evidence suggests that Neanderthals lived in a range of temperate environments with variable associated ecological communities (López García et al. 2012), which are not comparable to modern human populations living in the Arctic (Stewart 2005). Second, although postcranial remains are frustratingly scarce, it is clear that Middle Pleistocene hominins living in sub-Saharan Africa were very large.
For example, using regression formulae derived from modern human samples, McHenry (1992) found that the Bodo cranium, belonging to a transitional species between Homo erectus and modern humans and dating to around 600 ka, would predict a body size as great as that of a male gorilla. Although it is obviously unlikely that this hominin was actually the size of a gorilla, there are good reasons for believing that the hominins to which skulls such as those from Bodo, Daka, Kabwe, Ndutu, Saldanha, and Orange River belonged were extremely large relative to recent human populations. Bergmann’s and Allen’s rules offer no explanations for large body sizes in these cases, since these hominin populations lived squarely within the arid tropical and subtropical zones of sub-Saharan Africa. Furthermore, broadly contemporaneous crania from temperate regions of Eurasia are not any larger than these African examples. For these reasons, I would argue
that other explanations must be sought to explain this pattern of large Middle Pleistocene hominin body size. Finally, explanations for the apparent reductions in body size seen at the close of the Upper Pleistocene also seem inadequate to their task, especially in light of the fact that this pattern cross-cuts all latitudinal zones (Ruff 2002). Such explanations are quite varied in their orientations. It has been argued that this post-50 ka reduction in body size resulted from the development of hunting weapon technologies, such as the bow and arrow, that removed the need for large body size and associated physical strength. To me, this scenario seems unlikely given that such complex weapon technologies are intermittent in their appearance and are prevalent only from the terminal Pleistocene onward. More plausible explanations have focused on nutritional stress and overcrowding associated with subsistence intensification and ultimately the origins of agricultural economies (Frayer 1980; Ruff, Trinkaus, and Holliday 1997; Formicola and Giannecchini 1999; Ruff 2002), which are prone to periodic disastrous failures (Cohen and Armelagos 1984). Still, while nutritional explanations make clear sense for Holocene populations engaged in the risky business of complex foraging and agricultural economies and where other health problems were prevalent, they fail to explain the patterns seen among earlier Pleistocene hominins.

Theoretical Considerations

The fact that modern human forager body size shows strong relationships with population density and diet has some important implications for thinking about Pleistocene patterns of hominin body size and change over time. When Bergmann’s and Allen’s rules are excluded, these other explanations for the apparent surge in body size leading up to the Middle Pleistocene and its subsequent decline beginning in the late Upper Pleistocene seem warranted.
Furthermore, I propose that they hold great potential in helping us reconstruct patterns of hominin ecology and its change over the course of the Pleistocene. Before considering the implications of the prehistoric patterns in light of the modern human forager data presented here, I must discuss in greater detail some of the potential explanations for the observed variability in body size. The reasons for the relationships between percentage of calories derived from hunting and fishing, levels of protein intake, and body size operate according to biological principles that have been well understood since the work of Froehlich (1970) and even date back to the early work of Boas (1912) on the reality of racial distinctions. Beginning with Frayer (1980), reductions in body size have been linked with risky subsistence practices that often entail periodic episodes of starvation. Although such
discussions have generally revolved around early agricultural systems, which have been dubbed “the worst mistake in the history of the human race” (Diamond 1987), many aspects of intensified foraging economies typical of post-Pleistocene adaptation are also prone to profound resource failures. Under circumstances of starvation, individuals with small body size and concomitant low metabolic requirements have a structural advantage that promotes survival. A more recent historical example helps illustrate this phenomenon. For example, Grayson (1990) has conducted a detailed analysis of the demographic characteristics of the 86 members of the Donner Party wagon train, which was stranded by early heavy snowfall while they were traveling west through the Sierra Nevada of California in 1846. Grayson showed that an individual’s likelihood of survival was primarily dependent on age and sex, with subadults and females surviving in much higher frequencies than adult males. Here, the smaller body sizes and lower metabolic rates of females and children seem to have played a key role in their survival under conditions of starvation. It is easy to imagine how such circumstances might favor selection for individuals with small body size, thus causing actual genetic changes to occur in early hominins (in addition to the results of such periods of starvation on the growth and development of the subadults involved). How do these sorts of food stress and starvation dynamics relate to population density? Among modern forager populations, there are several clear ways in which this works. First and most important, densely packed populations eliminate certain strategies in terms of maintaining foraging efficiency and reducing risk associated with major resource failures stemming from such events as droughts, plant disease outbreaks, and the like. Such strategies typically involve movement and/or the use of alternate residences with kin or reciprocity partners (Kelly 1995; Binford 2001). 
During “normal” periods of foraging, groups may maintain high foraging return rates by moving on once high-ranked subsistence opportunities begin to decline. Packed populations, however, are frequently prevented from making use of this strategy, because potential alternative foraging patches are already being exploited by other human populations. As population densities increase, foraging groups must solve subsistence problems by moving down the ladder of ranked food resources, making use of lower-quality resources requiring greater handling costs and also carrying more risk. For these reasons, packed populations frequently depress the availability of high-ranked food resources that have slow turnover rates and thus depend on annual plant resources that are prone to failure. Large human population densities may over-exploit potential prey animals, whose populations may take generations to recover, or they may exhaust
296 Chapter 8
perennial plant food resources that take long periods of time to grow and develop, such as various plant species with underground storage organs. When these low-quality plant food resources fail, there are often few available back-up food sources, since such low-quality annual foods were last resorts to begin with. In addition, by virtue of their territorial circumscription and the focus on low-ranked food resources, packed populations are also more prone to starvation during periods of resource failure. In areas of low population density, an individual or group might simply move to a region that was unaffected by an environmental disaster to find food. In more social terms, individuals or families might also seek to live with other family members or reciprocity partners who happen to live in an unaffected area. Packed populations prevent the use of this strategy, because neighboring territories are already full and thus offer little additional food. In approaching these phenomena, Binford (2001) has identified what he refers to as the “packing threshold,” which is effectively the level of population packing at which major social organizational changes start to occur, such as those seen during post-Pleistocene adaptation. Binford’s packing threshold is 9.028 people per 100 km². In examining the body-size data, we see that this packing threshold structures the relationship between body size and population density. Forager groups with population densities below the packing threshold (n = 16) have a mean body mass of 62.4 kg, whereas groups above the packing threshold (n = 11) have a mean body mass of 51.7 kg (t = -3.719, p = 0.001). Thus, the mean for the sample below the packing threshold is quite close to the means for Pleistocene hominins offered by Ruff and colleagues (1997; Ruff 2002), whereas the sample above the packing threshold is below the confidence intervals for all the Pleistocene hominin groups.
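The body-mass comparison just described is, in statistical terms, a two-sample t-test. The following is a minimal sketch of how such a comparison works; the sample values are hypothetical stand-ins chosen only to mimic the reported group means (roughly 62 kg below the threshold, roughly 52 kg above), not Binford's actual observations:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    def mean(x):
        return sum(x) / len(x)
    def var(x):
        m = mean(x)
        return sum((v - m) ** 2 for v in x) / (len(x) - 1)  # sample variance
    # Standard error of the difference between the two sample means.
    se = math.sqrt(var(a) / len(a) + var(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical forager body masses (kg); NOT the ethnographic data themselves.
above_threshold = [50.2, 52.8, 51.0, 52.5, 51.9, 52.1]   # packed groups
below_threshold = [60.1, 63.5, 61.8, 64.2, 62.0, 63.0]   # unpacked groups

t = welch_t(above_threshold, below_threshold)
# t comes out strongly negative, mirroring the direction of the reported
# result: smaller mean body mass above the packing threshold.
```

Welch's version of the statistic is used here because it does not assume equal variances in the two groups; the negative sign of t simply reflects that the first sample (packed groups) has the smaller mean.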
For both modern forager groups and apparently also those from the late Upper Pleistocene onward, there is a case to be made that population packing and periodic episodes of starvation created selective pressures for smaller body size. Finally, there may well be a nutritional aspect of this phenomenon in terms of growth and development. I have already discussed the evidence for the consumption of high-quality diets by early hominins, which would have been rich in proteins and fat. Likewise, if I am correct about such hominin populations living in low population densities, focusing on highly ranked subsistence resources, and maintaining simple back-up strategies for dealing with various resource failures, then episodes of starvation would have been extremely rare. This dietary pattern would have been a recipe for large body size in hominin populations, and it resembles the nutritional conditions that
Alternative Perspectives on Hominin Biological Evolution and Ecology 297
have recently brought about historical surges in body size in Western industrial societies. In contrast, conditions of episodic starvation in combination with low-quality diets might have represented epigenetic factors limiting the development of large body size outside a selective context. In this sense, the foraging ecology of Pleistocene hominins might have allowed individual growth and development to proceed to levels that were not realized in later prehistoric time periods (or, for that matter, among many Third World populations today).

Afterthoughts on Increasing Hominin Body Size in the Lower Pleistocene

A final issue I want to consider briefly is why hominin body size increased from the origins of the genus Homo through the Lower and Middle Pleistocene. Within the Washburn-Isaac synthesis, increasing body size was seen as a key adaptive feature (see Washburn 1950 for an early discussion of this phenomenon) that facilitated the origins of carnivory and its consequent evolutionary effects. In this respect, it was the dramatic increase in body size from the diminutive australopithecine grade to the large-bodied specimens of Homo erectus that was seen as significant. Recent studies, such as that of Holliday (2012), have shown that patterns of change in Lower Pleistocene hominin body size are more complex than was once thought. Specifically, recent discoveries suggest that, rather than leaping saltationally between the australopithecine grade and Homo erectus, body size continued to increase steadily over the course of the Lower and early Middle Pleistocene.
This pattern is provocative for several reasons. First, by showing that significant increases in body size continued after the initial appearance of Homo erectus, new fossil finds and analyses demonstrate that increases in body size have ecological and evolutionary significance beyond the origins of the genus Homo. Second, evolutionarily significant changes in body size did not stop with the origins of Homo erectus but instead continued within all Quaternary hominin populations, including Holocene modern humans. Although I have argued that late Pleistocene and Holocene declines in body size indicate the first major periods of sharply increasing population density and foraging intensification, it is also worth investigating the increase in hominin body size at the onset of the Pleistocene. There are several possibilities that should be explored here, each of which holds significant implications in terms of the ecology of Lower and Middle Pleistocene hominin populations. Larger body size clearly has the effect of changing the characteristics of the prey animals that predators are capable of hunting. Much of the early literature on the
origins of Homo erectus body size and shape, including that couched within the Washburn-Isaac framework, focused on the implications of increased strength associated with larger body size in terms of attacking prey (Ardrey 1961, 1976; Isaac 1968; Washburn and Lancaster 1968; Tiger and Fox 1971; Hill 1982). Thus, larger body size was seen as helping hominins make the transition from hunted australopithecines to early Homo hunters, if I may be pardoned for paraphrasing Brain (1981). A more sophisticated take on the implications of body size for mammalian predators involves the issue of range size and mobility. As Holliday (2012) has argued recently, all known mammalian carnivores show a strong relationship between body size, territory size, and ranging distances (Martin 1981; McHenry 1994; O’Connell et al. 1999; see also Gittleman 1985 for a comparative review of mammalian carnivores). This model would imply that, as hominin body size increased across the Lower and early Middle Pleistocene, hominin ranging and foraging territory sizes were also on the rise. This relationship is useful in terms of thinking about both the initial surge in hominin body size with the origins of the genus Homo and subsequent increases over the course of Lower and Middle Pleistocene. For now, however, I limit my current discussion to the latter. A putative increase in territory size and ranging distance indicated by increasing body size during the Acheulean articulates with a number of other inferences offered in earlier chapters. To begin with, I argued earlier that the bifacial stone tool technologies typical of the Acheulean industry during the Lower and Middle Pleistocene represented a set of strategies for coping with extreme mobility patterns and large territory sizes. Furthermore, I suggested that such extreme mobility patterns were instrumental aspects of routed foraging settlement systems. 
I also made the case that certain changes over time within the Acheulean industry itself, such as more efficient bifacial thinning and reductions in handaxe size, related to the peak extremes of mobility experienced before the transition to residential site use with the origins of the MSA. If I am correct in my linkage of Acheulean technologies with mobility, the timing of these patterns of archaeological change would appear to link with trends toward larger body size. The relationship between increasing hominin body size and the Acheulean archaeological record is also a point astutely made by Holliday (2012). Finally, one must also consider the implications of body size and sexual dimorphism for hominin mating structures. In the absence of energetic constraints, larger body size benefits individuals during episodes of intraspecies competition, which, among mammals, often takes the form of male-male competition for breeding access, especially within harem mating structures (for example, Greenwood 1980). In
addition, low population densities and rich resource availability have also been documented as causes of asymmetries between male and female populations in terms of mating structures (see Emlen and Oring 1977 for a widely cited discussion). In contrast, denser populations and scarcer food resources seem to encourage more evenness between male and female populations in breeding contexts. A brief anecdote may help illustrate some of these points. On a recent trip to Scotland, I visited the National Museum in Edinburgh and noticed a striking display on the fossil record of early Holocene red deer (Cervus elaphus) and, more specifically, this species’ change in body size over time. The museum display showed that the initial early Holocene red deer populations endemic to Scotland were substantially larger than their modern descendants, perhaps twice as large. As Clutton-Brock and Guinness (1982) have argued, when red deer have access to rich feeding environments, as they did in early postglacial Scotland, body size increases owing to the selective benefits imparted to males during episodes of mating competition. As Holocene deer population sizes increased and feeding environments became more restricted, deer body size reduced toward modern levels as energetic constraints began to trump the selective benefits of large body size during episodes of male-male competition. Furthermore, this natural experiment repeated itself during the 19th century, when red deer were introduced to New Zealand. Here again, in the context of rich feeding environments, body size increased radically within a short period of time (Caughley 1971). As Millar and Hickling (1990) discuss, such examples clearly demonstrate the role of energetics and periodic episodes of starvation in the evolution of mammalian body size.
I hold that changes in hominin body size over time may well reflect the balancing of the benefits of large body size within reproductive contexts and the benefits of small body size during periods of starvation, as Millar and Hickling (1990) have argued for mammals. Although I am not happy with the application of the term monogamy to later hominin populations (or modern humans, for that matter), it seems equally clear that early hominin populations were characterized by significantly greater levels of sexual dimorphism and competition between males within breeding contexts (Brace 1973; Frayer 1980; Frayer and Wolpoff 1985; Lovejoy 1981; McHenry 1994; Aiello and Key 2002; Ruff 2002; Spoor et al. 2007; Anton 2012; though see objections by Plavcan 2012). Lower and early Middle Pleistocene increases in hominin body size may well have been tied to increased sexual dimorphism (for example, Arsuaga et al. 1997; Ruff, Trinkaus, and Holliday 1997), although the scarcity of female hominin fossil remains frustrates the evaluation of this idea (Ruff 2002).
If this is the case, the possibility remains that these patterns indicate substantial differences in social structure and mating systems on the part of earlier hominin grades. In this scenario, larger body size would have resulted from the general lack of limitation on energy budgets through efficient foraging of highly ranked food resources and competition between males within asymmetrical mating systems. Finally, such forms of social competition might have resulted in interpersonal violence, as continues today among modern human populations, and this violence might have been partially responsible for the high levels of skeletal trauma focused on craniofacial regions among our Pleistocene hominin ancestors (see McCall and Shields 2008 for more detailed discussions). In short, there are several likely reasons for the increase in hominin body size in the Lower and Middle Pleistocene and its subsequent decline at the close of the Pleistocene. What these explanations share is the implication that the Middle Pleistocene hominins responsible for making Acheulean technology lived in very large territories, had very low population densities, and employed efficient foraging strategies focused on rich food resources. Furthermore, late Upper Pleistocene declines in body size would seem to map onto other important cultural phenomena, including the development of complex weapons and other technologies for reducing risk during hunting activities, and ultimately the whole menagerie of post-Pleistocene adaptations in foraging behavior and social structure. Simply put, after around 50 ka, life got harder for hominins as their populations expanded, and reductions in body size likely resulted from this phenomenon.
In contrast, Middle Pleistocene hominins maintained large body sizes through the regular availability of high-quality nutritional resources and dealt with potential problems of foraging risk through nontechnological means, especially through mobility within very large territories.

Afterthoughts on Robusticity

The issue of robusticity in Pleistocene hominins is notoriously complex and troublesome, and I hesitate to address it now; it is beyond the scope of this discussion to go into this issue in any real detail. There are, however, a few aspects of the robusticity phenomenon that may be relevant to the ecological issues examined in this chapter and more broadly in this book. Because of the general decline in hominin robusticity over the course of the Pleistocene (Ruff et al. 1993), its features have been used as a marker of the primitive condition within the hominin lineage for the purposes of phylogeny and even chronology. Furthermore, since certain species, such as Neanderthals and Asian Homo erectus, have distinctive combinations of robust skeletal features, robusticity has also been
important for problems of fossil taxonomy. With that said, the exact causes of skeletal robusticity and the implications of its change over time remain obscure, while potentially holding profound implications for early hominin evolutionary dynamics. I do not intend to be definitive about this subject, given its complexity and the controversy about its interpretation. Yet, there are a few generalizations that deserve attention in light of the ecological dynamics discussed in this chapter. Most generally, hominin robusticity seems to be directly correlated with the other archaeological and biological patterns that, I have argued, indicate low population densities and rich resource availability. It is therefore conceivable that robusticity is related to population density, although perhaps in an indirect fashion and in association with a range of other autocorrelated variables. How and why would this be the case? Robusticity clearly holds some form of relationship with body size and may also play an important role in the kinds of intraspecies competition already discussed. Although body size is not, by itself, nearly sufficient to explain complex patterns of skeletal robusticity (and especially craniofacial robusticity), it is apparent that there is some correlation, and robusticity likely holds implications for forms of competition. Postcranial robusticity has been argued to indicate higher levels of strength on the part of early hominins, acting as a mechanical response to stresses produced by larger and/or more powerful musculature (Ruff et al. 1993). If this is the case, it might imply that early hominins had larger masses of metabolically expensive muscle tissue, which was no doubt important for subsistence activities but perhaps more so for intraspecies competition.
Robust early hominins could afford these expensive tissues by virtue of their consistent access to high-quality nutrition, and greater muscularity gave them clear selective benefits in terms of breeding competition. In contrast, as hominin population densities increased and high-quality food availability diminished, skeletons became smaller and less robust in response to energetic constraints. Returning to the Scottish red deer example mentioned earlier, the early Holocene deer were also more skeletally robust in addition to having large body size. In this case, robusticity and body size vary together, and both reduced over the course of the Holocene in concert with increasing feeding stress. Elsewhere, Brink (2005) has shown an almost identical link between body size, skeletal robusticity, and population density in the evolutionary history of black wildebeest (Connochaetes gnou) in southern Africa. These studies illustrate the complexity of ecological factors affecting the evolution of body size and skeletal structure, and they corroborate my arguments that energetic issues related to population density are key factors in this equation. Thinking more philosophically
for a moment, we can also see that no evolutionary biologist would refer to contemporary red deer or black wildebeest as “anatomically modern.” Instead, such studies suggest that animal species may have variable trajectories of anatomical change in terms of body size and robusticity that are evolutionarily determined by a host of factors, scaling with both population density and reproductive dynamics. Although this scenario may not account for hominin craniofacial robusticity very well,2 other phenomena having to do with demography and gene flow may. For example, the “accretion” model of Neanderthal craniofacial robusticity argues for low population densities and the genetic isolation of Neanderthals in western Eurasia (Hublin 2009; Hublin and Roebroeks 2009). Within this scenario, after Middle Pleistocene hominin populations reached western Eurasia, certain elements of their craniofacial robusticity became amplified through processes of bottlenecking and genetic drift. Over time, certain features came to distinguish Neanderthals from other contemporaneous hominin populations, which could be explained through genetic isolation. Larger and more genetically connected populations would have seen less distinction between regions and more similarity in terms of patterns of robusticity at a global scale—a pattern that characterizes more recent human populations quite well. Finally, certain lines of recent research on craniofacial robusticity have begun to consider possibilities in terms of the genetic linkage of robust features with other aspects of both anatomy and behavior. For example, Franciscus and colleagues (2013) argue that late Pleistocene hominins and especially anatomically modern humans were, in a sense, “self-domesticated.” By this, they mean that increasingly social Upper Pleistocene hominin populations became much less prone to aggression in the same way that dogs and other domestic animals have departed from the behavior of their wild ancestors. 
As has been documented in recent animal breeding experiments, such as the domestication of silver foxes (Vulpes fulvus) in Siberia, intensive selection against aggressiveness may also manifest itself in terms of unanticipated anatomical changes that are genetically linked with the neurological and endocrine systems (see Trut 1999 for discussion). As human breeders selected for nonaggressive, tame foxes, significant changes occurred in terms of both pelage and skeletal structure. These external anatomical features appear to be tied to the production of such hormones as adrenaline, which stimulate aggressive behavior. Thus, as foxes were selected for nonaggressive behavior, the adrenal system changed, which had consequences for the foxes’ pelts and skeletal structures. As Trut (1999) suggests, this fox domestication experiment obviously holds clear implications for the human domestication of other animal
species. In addition, Franciscus and colleagues (2013) suggest that as shifting hominin patterns of social structure began to favor reductions in aggressive behavior, anatomical changes like reductions in craniofacial robusticity may have occurred through linkage with neurological and endocrine system changes. Furthermore, many of these anatomical changes are inherently neotenic and effectively involve the retention of adolescent features into adulthood. Neoteny is clearly involved in the gracilization of later hominin populations, especially from the late Pleistocene onward. For this reason, there is a great deal of circumstantial support for the proposition that craniofacial robusticity reduced in concert with lower levels of aggressiveness among increasingly social hominin populations. This perspective offers a concrete framework for building a more sophisticated understanding of the relationship between craniofacial robusticity, aggressiveness, and breeding competition. It also offers an increasingly workable linkage with variables influencing hominin body size. In terms of hominin evolutionary dynamics, several phenomena are worth considering. First, as hominin mating structures changed in the context of greater population densities, craniofacial robusticity may have reduced as the incidence of male-male competition declined. Second, as hominin population densities increased and foraging systems intensified, patterns of social interaction built around cooperation may have been favored, thereby inducing reductions in aggressive behavior. For example, all modern foragers rely on intricate social networks in terms of such things as the patterns of sharing that underlie cooperative foraging activities and other risk-reduction strategies. Among Pleistocene hominins, these types of social dynamics may have created selective pressures favoring reductions in aggressive behavior, which in turn led to reductions in robusticity.
More specifically, the adoption of residential site use patterns, which I proposed took place around the Middle-to-Upper Pleistocene boundary, may have been a major Rubicon in terms of hominin social behavior. This shift may have had consequences in terms of both shifting mating structures and more complex cooperative foraging behavior.3 Thus, this cultural transition may have had important consequences for biological phenomena related to robusticity. Here, I would point to the suggestive timing of these changes, with clear reductions in robusticity occurring with the onset of the Upper Pleistocene. In sub-Saharan Africa, early anatomically modern humans with substantially reduced robusticity emerged around 200 ka, which would be shortly after the transition to MSA residential site use patterns and mobility systems. It would also not seem to be a coincidence that anatomically modern humans, defined to a great extent by a reduction in robusticity, emerged
in the northern Rift Valley of eastern sub-Saharan Africa, where hominin populations were likely densest and where many significant cultural changes happened earliest. I would argue that the same ecological dynamics that brought about the cultural changes discussed in this book, especially the origins of residential site use, also brought about the origins of hominin populations with reduced robusticity. In this sense, we may think about certain hominin biological characteristics as having been organizationally related to foraging ecology in much the same way as the technological traditions discussed in the first section of this book.
Implications of Increasing Brain Size among Pleistocene Hominin Populations

Encephalization is obviously the key feature that distinguishes the genus Homo, and especially its later members, from the rest of the animal kingdom. Large brain size is also seen as the main facilitator of all forms of hominin cultural behavior, including technology, language, and complex social structures. However, as the expensive tissue hypothesis has helped to illustrate, larger brains represent a complex set of tradeoffs in terms of energetic costs and behavioral benefits (Aiello and Wheeler 1995; Aiello and Wells 2002; Ruff, Trinkaus, and Holliday 1997). While larger brains hold clear advantages in terms of cognition, brains are very metabolically expensive, and many of the same ecological dynamics previously discussed concerning body size are also relevant here. Once again, rather than considering the implications of brain size for the relative modernity of various hominin species, we should recognize that such information holds valuable clues about a host of ecological dynamics and evolutionary issues. This section begins by examining patterns of change in Pleistocene hominin brain size. On the basis of currently available data, there would seem to be (1) a general pattern of increasing brain size across the Lower and Middle Pleistocene and (2) a moderate inflection point during the early Middle Pleistocene at which time rates of change increase. As in the last section, I argue that certain aspects of this pattern indicate efficient foraging and the reliable availability of rich food resources. In presenting a comparative review of mammalian carnivore encephalization, however, I offer additional inferences concerning hominin foraging behavior. Specifically, I show that among other mammalian carnivore taxa, large brain size correlates with dynamics including diverse and seasonally dependent diets and large and environmentally diverse habitat ranges.
I also show that other unusual patterns typify the most encephalized mammalian carnivores, such as the mixture of hunting, scavenging, and plant-based feeding habits. I also
briefly discuss the fact that many bird species show similar correlations between brain size and ecological variables, while engaging in basic forms of tool use behavior. Among other implications, this comparative case study suggests that social explanations of hominin brain size increase, including those inherent within the Washburn-Isaac synthesis, may be somewhat overblown and presentist in orientation. I close this section by discussing the idea that Lower and Middle Pleistocene periods of encephalization, in effect, set the stage for the complex forms of social behavior seen during the African MSA and common to all living human groups today. In other words, hominin brain size increased because of shifts in foraging ecology and rich resource availability; large hominin brains were later co-opted for complex social behaviors after a significant set of cultural transitions experienced during the Upper Pleistocene. Thus, such complex social behavior was not caused by large hominin brain size but rather resulted from a combination of new forms of social interaction seen during the Upper Pleistocene, with large brains resulting from earlier periods of shifting foraging ecology.

Reviewing Patterns of Hominin Encephalization

Modern human populations have brains about three times larger than those of our australopithecine ancestors, who had brains only slightly larger than those of extant ape species. Increasing brain size is, therefore, one of the main anatomical changes seen with the origins of the genus Homo and within the genus Homo over the course of the Pleistocene. The Washburn-Isaac scenario clearly involved increasing brain size, which it linked with more complex forms of cooperative social behavior, more sophisticated technologies, and rich nutritional resources resulting from hunting activities (Washburn and Lancaster 1968; Isaac 1978a).
Thus, larger brain size related to big game hunting in a number of intrinsic ways and was a basic building block of human culture. There are several problems with this view beyond those discussed in earlier chapters of this book. One of these is that the currently available fossil record shows more complex patterns of hominin encephalization than was once thought. For example, small-brained members of the early genus Homo, such as the Dmanisi crania dating to 1.77 ma (Rightmire 2004), show that not all early Homo populations were particularly large-brained. Likewise, the relatively large brain size of certain late australopithecines, such as those from Sediba (Berger et al. 2010), shows that brain size increase had already begun during the late Pliocene and was not unique to the genus Homo. Finally, comparisons of cranial remains from other Lower and Middle Pleistocene contexts
show more complexity in secular patterning than what might once have been thought (Ruff, Trinkaus, and Holliday 1997; Ruff 2002; Lee and Wolpoff 2003; Rightmire 2004). Ruff and colleagues (1997) have gone as far as to argue that brain size increased little across the Lower and early Middle Pleistocene, inflecting only in the later Middle Pleistocene, and also declining after around 50 ka in concert with body size. In addition, Relethford (2001) offered a reanalysis of the data presented by Ruff and colleagues (1997), suggesting that there was a fairly rapid increase in hominin brain size after around 700 ka. Not everyone agrees with this view, however. For example, Lee and Wolpoff (2003) used resampling statistical methods in a reanalysis of the data from Ruff and associates (1997), finding no evidence for any inflection in the rate of brain size increase (which they also take as evidence supporting a single-species view of evolution within the genus Homo). Although these views agree that significant periods of encephalization occurred late in the Pleistocene, the nature and timing of this change have remained subjects of ongoing debate. Of relevance to earlier discussions in this book, it has sometimes been proposed that the achievement of large brain size during the later Middle Pleistocene played a role in aspects of archaeological patterning during the Acheulean industry. For example, Stout (2002) argues that the finely made handaxes of the Acheulean industry postdating 500 ka could not have been made without social structures of teaching and learning, obviously requiring the use of language. Here, Stout implicated large later Middle Pleistocene hominin brains as a major factor in promoting both the technical skills required in knapping finely made handaxes and the use of language necessary for teaching and learning.
The inflected view of hominin encephalization lends further credence to the idea that the Middle Pleistocene was a period of cultural innovation (Ruff, Trinkaus, and Holliday 1997; Relethford 2001), while the gradualist view suggests that cultural changes occurred more slowly and at stable rates over most of the Pleistocene. I would also argue that this period of encephalization holds consequences for our understanding of hominin ecology. As with increasing body size, encephalization implies major metabolic shifts, as brains are extremely energetically expensive organs. Aiello and Wheeler (1995) have made the case that encephalization was, in a sense, paid for by reductions in the size of other tissues (especially guts). Yet, in combination with larger body size, there apparently were increases in the overall metabolic rates of hominins, necessitating the acquisition of more and higher-quality food resources. Thus, Pleistocene hominin encephalization may hold the same set of implications just discussed for increasing body size: that hominins had efficient foraging habits focused on high-return and
Alternative Perspectives on Hominin Biological Evolution and Ecology 307
low-risk food resources. If there was indeed an inflection in encephalization rates during the Middle Pleistocene, this would align with a number of other significant cultural and biological changes already discussed in this book. Is there still strong evidence for such an inflection in encephalization rates during the latter half of the Middle Pleistocene or, as Relethford (2001) suggests, after 700 ka? Figure 8.5 shows the distribution of reconstructed cranial capacities for the 94 Pleistocene hominins discussed by Ruff and colleagues (1997). All researchers agree that this distribution has the appearance of an inflection in the Middle Pleistocene. Lee and Wolpoff (2003), however, propose that this appearance of curvilinearity is largely an artifact of the dependence of rates of change on absolute brain size: the same proportional increase applied to one large brain and one small brain will produce two very different absolute amounts of change. The apparent later Middle Pleistocene inflection in encephalization rates could therefore simply reflect the fact that brains were larger than they had been in the Lower Pleistocene. Lee and Wolpoff deal with this problem by logarithmically transforming their data and through resampling
[Figure: scatter plot with cranial capacity (cc) on the y-axis (750–1750) and age (ka) on the x-axis (1200–0)]
Figure 8.5 Graph showing cranial capacities for Pleistocene hominin fossils of various ages
methods in order to deal with unevenness in sample sizes across time periods.

While I agree with several key points made by Lee and Wolpoff (2003), I believe that their analysis omits some important details. One is the fact that brain size and body size have a strong allometric relationship, and body size clearly changed over the course of the Pleistocene in complex ways. In an early seminal consideration of this problem, Martin (1981) proposed the calculation of an "encephalization quotient," or EQ, a technique for standardizing brain sizes to body sizes within terrestrial vertebrate taxa. Martin offers the following formula for calculating EQ:

EQ = brain mass (g) / (11.22 × body mass (kg)^0.76)

Comparing brain size to body size is difficult for Pleistocene hominins, since the postcranial elements necessary for estimating body mass (especially bi-iliac breadth) are almost never found in association with intact crania. As Ruff and colleagues (1997) originally observed, this significantly complicates the analysis of encephalization patterns. However, some interesting patterns emerge when these brain size data are considered from various perspectives. I began by averaging brain sizes according to the time period categories for body size presented by Ruff and associates (1997). Figure 8.6 presents, for each Pleistocene time period, (a) mean raw cranial capacity values, (b) mean natural log values of brain size (after Lee and Wolpoff 2003), (c) mean EQ values, and (d) mean natural logs of these EQ values. Figure 8.6a once again gives the visual impression of an inflection in encephalization rates in the later Middle Pleistocene. However, following Lee and Wolpoff's methods, the natural log values for brain size show very steady change. Yet both the mean EQ values and the natural logs of these mean EQ values once again seem to suggest a later Middle Pleistocene inflection in encephalization rates. 
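The EQ standardization and the logic behind Lee and Wolpoff's log transformation can be sketched in a few lines of Python. The brain and body masses below are illustrative round numbers of my own, not measurements from actual fossils:

```python
import math

def eq(brain_mass_g: float, body_mass_kg: float) -> float:
    """Martin's (1981) encephalization quotient: observed brain mass
    divided by the brain mass expected for a terrestrial vertebrate
    of the given body mass."""
    expected_brain_mass_g = 11.22 * body_mass_kg ** 0.76
    return brain_mass_g / expected_brain_mass_g

# Two hypothetical lineages with the same *proportional* brain-size
# increase (40 percent), starting from different absolute sizes:
small_before, small_after = 800.0, 1120.0    # grams
large_before, large_after = 1000.0, 1400.0   # grams

# The absolute changes differ (320 g versus 400 g)...
print(small_after - small_before, large_after - large_before)

# ...but the natural-log differences are identical, which is why a
# log transform removes the dependence of rates on absolute brain size.
print(math.log(small_after) - math.log(small_before))
print(math.log(large_after) - math.log(large_before))

# EQ then standardizes brain size to body size; for example, a
# hypothetical 1300 g brain in a 65 kg body:
print(round(eq(1300.0, 65.0), 2))  # ≈ 4.85
```

An EQ of 1.0 corresponds to the brain mass expected for a given body mass; values above 1.0 indicate relative encephalization.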
Thus, although not completely unequivocal, these data continue to suggest to me that rates of encephalization did, in fact, accelerate in the Middle Pleistocene before reaching their modern levels in the Upper Pleistocene. It is not my purpose to adjudicate between gradualist single-species and multi-species punctuated equilibria models of evolution within the Pleistocene genus Homo. In addition, I generally disagree with Lee and Wolpoff (2003) that an inflection in encephalization rates would imply multiple evolutionary processes responsible for brain size increase (and therefore multiple species of Pleistocene hominins). For me, this pattern may well indicate a single set of selective forces acting on hominin brain size, although perhaps with varying ecological constraints and limitations. In its close correspondence with patterns of change in body
[Figure: bar chart with mean cranial capacity (0.00–2000.00 cc) on the y-axis and time period (1.15–1.9 Ma, 600 ka–1.15 Ma, 400–550 ka, 200–250 ka, 100–150 ka) on the x-axis]