CAMBRIDGE STUDIES IN BIOLOGICAL AND EVOLUTIONARY ANTHROPOLOGY
CLARK SPENCER LARSEN
Bioarchaeology: Interpreting Behavior from the Human Skeleton
SECOND EDITION
Now including numerous full color figures, this updated and revised edition of Larsen’s classic text provides a comprehensive overview of the fundamentals of bioarchaeology. Reflecting the enormous advances made in the field over the past 20 years, the author examines how this discipline has matured and evolved in fundamental ways. Jargon free and richly illustrated, the text is accompanied by copious case studies and references to underscore the central role that human remains play in the interpretation of life events and conditions of past and modern cultures, from the origins and spread of infectious disease to the consequences of decisions made by humans with regard to the kinds of foods produced, and their nutritional, health, and behavioral outcomes. With local, regional, and global perspectives, this up-to-date text provides a solid foundation for all those working in the field.
Clark Spencer Larsen is the Distinguished Professor of Social and Behavioral Sciences and Chair of the Department of Anthropology at The Ohio State University in Columbus, Ohio. His research is focused primarily on biocultural adaptation in the last 10 000 years of human evolution, with particular emphasis on the history of health, well-being, and lifestyle. He collaborates internationally in the study of ancient skeletons in order to track health changes since the late Paleolithic. He is the author of 200 scientific articles and has authored or edited 30 books and monographs.
Cambridge Studies in Biological and Evolutionary Anthropology

Consulting editors
C. G. Nicholas Mascie-Taylor, University of Cambridge
Robert A. Foley, University of Cambridge

Series editors
Agustin Fuentes, University of Notre Dame
Sir Peter Gluckman, The Liggins Institute, The University of Auckland
Nina G. Jablonski, Pennsylvania State University
Clark Spencer Larsen, The Ohio State University
Michael P. Muehlenbein, Indiana University, Bloomington
Dennis H. O’Rourke, The University of Utah
Karen B. Strier, University of Wisconsin
David P. Watts, Yale University

Also available in the series

53. Technique and Application in Dental Anthropology Joel D. Irish & Greg C. Nelson (editors) 978 0 521 87061 0
54. Western Diseases: An Evolutionary Perspective Tessa M. Pollard 978 0 521 61737 6
55. Spider Monkeys: The Biology, Behavior and Ecology of the Genus Ateles Christina J. Campbell 978 0 521 86750 4
56. Between Biology and Culture Holger Schutkowski (editor) 978 0 521 85936 3
57. Primate Parasite Ecology: The Dynamics and Study of Host-Parasite Relationships Michael A. Huffman & Colin A. Chapman (editors) 978 0 521 87246 1
58. The Evolutionary Biology of Human Body Fatness: Thrift and Control Jonathan C. K. Wells 978 0 521 88420 4
59. Reproduction and Adaptation: Topics in Human Reproductive Ecology C. G. Nicholas Mascie-Taylor & Lyliane Rosetta (editors) 978 0 521 50963 3
60. Monkeys on the Edge: Ecology and Management of Long-Tailed Macaques and their Interface with Humans Michael D. Gumert, Agustin Fuentes, & Lisa Jones-Engel (editors) 978 0 521 76433 9
61. The Monkeys of Stormy Mountain: 60 Years of Primatological Research on the Japanese Macaques of Arashiyama Jean-Baptiste Leca, Michael A. Huffman, & Paul L. Vasey (editors) 978 0 521 76185 7
62. African Genesis: Perspectives on Hominin Evolution Sally C. Reynolds & Andrew Gallagher (editors) 978 1 107 01995 9
63. Consanguinity in Context Alan H. Bittles 978 0 521 78186 2
64. Evolving Human Nutrition: Implications for Public Health Stanley Ulijaszek, Neil Mann, & Sarah Elton (editors) 978 0 521 86916 4
65. Evolutionary Biology and Conservation of Titis, Sakis and Uacaris Liza M. Veiga, Adrian A. Barnett, Stephen F. Ferrari, & Marilyn A. Norconk (editors) 978 0 521 88158 6
66. Anthropological Perspectives on Tooth Morphology: Genetics, Evolution, Variation G. Richard Scott & Joel D. Irish (editors) 978 1 107 01145 8
67. Bioarchaeological and Forensic Perspectives on Violence: How Violent Death is Interpreted from Skeletal Remains Debra L. Martin & Cheryl P. Anderson (editors) 978 1 107 04544 6
68. The Foragers of Point Hope: The Biology and Archaeology of Humans on the Edge of the Alaskan Arctic Charles E. Hilton, Benjamin M. Auerbach, & Libby W. Cowgill 978 1 107 02250 8
Bioarchaeology: Interpreting Behavior from the Human Skeleton
SECOND EDITION
CLARK SPENCER LARSEN The Ohio State University, USA
CAMBRIDGE UNIVERSITY PRESS
University Printing House, Cambridge CB2 8BS, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314-321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi - 110025, India
79 Anson Road, #06-04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge. It furthers the University’s mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9780521547482

© Clark Spencer Larsen 2015
First edition © Cambridge University Press 1997

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 1997
Second edition 2015

A catalogue record for this publication is available from the British Library

Library of Congress Cataloging in Publication data
Larsen, Clark Spencer.
Bioarchaeology : interpreting behavior from the human skeleton / Clark Spencer Larsen. - Second edition.
pages cm. - (Cambridge studies in biological and evolutionary anthropology)
ISBN 978-0-521-83869-6 (Hardback) - ISBN 978-0-521-54748-2 (Paperback)
1. Human remains (Archaeology) 2. Human skeleton-Analysis. I. Title.
CC77.B8L37 2015
930.1-dc23
2014031787

ISBN 978-0-521-83869-6 Hardback
ISBN 978-0-521-54748-2 Paperback

Additional resources for this publication at www.cambridge.org/Larsen

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.
For Chris and Spencer
and
In memory of George J. Armelagos (1936-2014), visionary scientist, bioarchaeologist, friend, and mentor
CONTENTS

Preface to the Second Edition  page xi
Preface to the First Edition  xv

1 Introduction  1

2 Stress and deprivation during growth and development and adulthood  7
  2.1 Introduction  7
  2.2 Measuring stress in human remains  8
  2.3 Growth and development: skeletal  9
  2.4 Growth and development: dental  25
  2.5 Skeletal and dental pathological markers of deprivation  30
  2.6 Adult stress  57
  2.7 Summary and conclusions  64

3 Exposure to infectious pathogens  66
  3.1 Introduction  66
  3.2 Dental caries  67
  3.3 Periodontal disease (periodontitis) and tooth loss  78
  3.4 Nonspecific infection and disruption  86
  3.5 Specific infectious diseases: treponematosis, tuberculosis, and leprosy  96
  3.6 Specific infectious diseases: vectored infections  111
  3.7 Summary and conclusions  112

4 Injury and violence  115
  4.1 Introduction  115
  4.2 Skeletal injury and lifestyle  116
  4.3 Intentional injury and interpersonal violence  130
  4.4 Medical care and surgical intervention  168
  4.5 Interpreting skeletal trauma  172
  4.6 Summary and conclusions  177

5 Activity patterns: 1. Articular degenerative conditions and musculoskeletal modifications  178
  5.1 Introduction  178
  5.2 Articular joints and their function  179
  5.3 Articular joint pathology: osteoarthritis  179
  5.4 Nonpathological articular modifications  204
  5.5 Nonarticular pathological conditions relating to activity  206
  5.6 Summary and conclusions  212

6 Activity patterns: 2. Structural adaptation  214
  6.1 Bone form, function, and behavioral inference  214
  6.2 Cross-sectional geometry  215
  6.3 Histomorphometric biomechanical adaptation  246
  6.4 Behavioral inference from external measurements  247
  6.5 Summary and conclusions  255

7 Masticatory and nonmasticatory functions: craniofacial adaptation to mechanical loading  256
  7.1 Introduction  256
  7.2 Cranial form and functional adaptation  256
  7.3 Dental and alveolar changes  270
  7.4 Dental wear and function  276
  7.5 Summary and conclusions  300

8 Isotopic and elemental signatures of diet, nutrition, and life history  301
  8.1 Introduction  301
  8.2 Isotopic analysis  302
  8.3 Elemental analysis  347
  8.4 Methodological issues in bioarchaeological chemistry  355
  8.5 Summary and conclusions  356

9 Biological distance and historical dimensions of skeletal variation  357
  9.1 Introduction  357
  9.2 Classes of biodistance data  362
  9.3 Biohistorical issues: temporal perspectives  368
  9.4 Biohistorical issues: spatial perspectives  389
  9.5 Summary and conclusions  401

10 Bioarchaeological paleodemography: interpreting age-at-death structures  402
  10.1 Introduction  402
  10.2 Reconstructing and interpreting age-at-death profiles: it has been mostly about mortality  404
  10.3 Paleodemographers adopt the life table for age structure analysis  406
  10.4 Addressing the assumptions of paleodemography  408
  10.5 New solutions to interpreting age-at-death profiles in archaeological skeletal series: it is really mostly about fertility not mortality  410
  10.6 The elephant in the room: age estimates in archaeological skeletons  418
  10.7 Summary and conclusions  419

11 Bioarchaeology: skeletons in context  422
  11.1 Framing the contextual record  422
  11.2 Framing the problems and questions: it is all about the hypothesis  424
  11.3 Ethics in bioarchaeology  428
  11.4 Bioarchaeology looking forward  429

References  433
Index  593

Color plates are to be found between pp. 320 and 321
PREFACE TO THE SECOND EDITION
It has been more than 15 years since the publication of the first edition of Bioarchaeology: Interpreting Behavior from the Human Skeleton. The response following its publication in 1997 was overwhelmingly positive – in reviews and comments to me from virtually every corner of the globe. I credit Robert Benfer for convincing me that a synthesis paper I wrote for Michael Schiffer’s book series, Advances in Archaeological Method and Theory (Larsen, 1987), should be expanded into a book-length treatment of the field. He made the case to me that such a book would serve to define what bioarchaeologists do and give bioarchaeology a sense of identity and mission. Since the publication of the first edition, I have been thrilled to see how the field has matured and evolved, the increasing scientific rigor, the extraordinary volume of work published, the high quality of the literature, the appeal that it has had for new and upcoming generations of bioarchaeologists, the development of new directions and advances, and the impressive increase in international and multidisciplinary collaborative research programs. With regard to new directions, we have seen expansion in areas relating to links between the social and biological, what some call “social bioarchaeology” (Agarwal & Glencross, 2011; Gowland & Knüsel, 2006), and facets of it relating to identity, gender, and social and cultural forces that leave their impression on the skeletal body (Knudson & Stojanowski, 2008, 2009; Larsen & Walker, 2010; Sofaer, 2006). In addition, there have been at least two books published with Bioarchaeology as the primary title, one providing a historical overview with reference to the United States (Buikstra & Beck, 2006) and the other focusing on practice (Martin et al., 2013). The advances in methods for the study of ancient skeletal and dental tissues have expanded our understanding of past population health and lifestyle in ways unfathomable or just on the horizon when the previous edition of the book was published. As shown throughout the present volume, applications of the study of ancient DNA to mobility and residence, disease diagnosis, and biology generally are breathtaking (Kaestle, 2010). The advances made in genome-wide and sequencing technology have given access to remarkable amounts of data, providing new insights and perspectives on the human experience in the past. Similarly, imaging technology has developed at a remarkable pace (Chhem & Brothwell, 2007; Schultz, 2001). These advances have played a central role in the increasingly interdisciplinary orientation of bioarchaeology (Armelagos, 2003; Zuckerman & Armelagos, 2011). Fundamental to the development of bioarchaeology is its comparative approach and its grounding in the scientific method and its approach to discovery and problem solving. These strengths provide perspective on present conditions, such as the human–environment interaction, evolution and adaptation, and success and failure, and understanding of who we are today.
When I wrote the first edition, I had in mind a comprehensive volume, a synthesis outlining what had been accomplished and future directions. So much has been written since the first edition that this new edition does not attempt to consider all advances that have appeared since the mid-1990s. Rather, I have focused on key developments in areas that have more fully progressed in the last couple of decades, as well as new and emerging areas, drawing on my own experience and what has excited me most in bioarchaeological inquiry. In addition, I provide a new stand-alone chapter on paleodemography. Demographic structure of past populations provides insights into age profiles. More immediate to bioarchaeology, age structure of archaeological skeletal series gives important context for interpreting the variation seen in virtually all parameters discussed in this book, ranging from diet and dietary change over the life course to reconstruction of lifestyle and activity via skeletal morphology and degenerative articular pathology. I well understood the potential of paleodemography while I wrote the first edition, but frankly, I thought that the area of study was in such disarray that I regarded a stand-alone chapter as premature and confusing. Since then, however, there have been considerable advances made in paleodemography, especially regarding the meaning of age structure for understanding population dynamics and what is similar and different in comparing age structure of the dead with vital statistics based on the living. I also provide discussion of challenges that were presented in the concluding chapter of the first edition, such as sample representation, the “osteological paradox,” global perspectives, cultural patrimony, and the new world of genomics and its importance to bioarchaeology and the study of the human past. Finally, my own experience in bioarchaeology has widened greatly since I wrote the first edition, especially resulting from the experience gained as codirector of two large collaborative research projects, the Global History of Health Project and the Çatalhöyük Bioarchaeology Project, and a field school in Medieval archaeology and bioarchaeology (Field School Pozzeveri). Major funding from the US National Science Foundation for the global project, the National Geographic Society and the Templeton Foundation for the Çatalhöyük project, and the Italian government for the field school and associated research program made all of this work possible. The preparation of the second edition of Bioarchaeology was an effort that could have been completed only with a considerable amount of help. I received advice on what the new edition should include or not include from Rimas Jankauskas, Dale Hutchinson, Jackie Eng, Gwen Robbins Schug, Mike Pietrusewsky, George Milner, Sam Stout, Richard Scott, Graciela Cabana, Dan Temple, George Armelagos, Tracy Betsinger, Maria Smith, Debbie Guatelli-Steinberg, Marc Oxenham, Joel Irish, Marin Pilloud, Charlotte Roberts, Chris Stojanowski, and Kim Williams. I owe a debt of gratitude to colleagues and students who read and commented on individual chapter drafts. Thanks go especially to Helen Cho, Giuseppe Vercellotti, Charlotte Roberts, Christina Torres-Rouff, Margaret Judd, Pat Lambert, Tiffiny Tung, Michele Buzon, Bonnie Glencross, George Milner, Chris Knüsel, Evan Garofalo, Chris Ruff, Libby Cowgill, Brigitte Holt, Marina Sardi, Rolo
González-José, Noreen von Cramon-Taubadel, Lesley Gregoricka, Sharon DeWitte, Julia Giblin, Jess Pearson, Laurie Reitsema, Rob Cook, Annie Katzenberg, Margaret Schoeninger, Christine White, Tracy Prowse, Mike Pietrusewsky, Chris Stojanowski, Joel Irish, Marin Pilloud, Brian Hemphill, Leslie Williams, Ann Stodder, Séb Villotte, and Britney Kyle. In addition, I benefited from advice from Haagen Klaus, Dan Temple, Josh Sadvari, and Kathryn Marklein, who read the entire manuscript and offered many substantive and helpful comments relating to content and clarity. I thank Tracey Sanderson, formerly of Cambridge University Press, for approaching me to write the second edition, and her successor, Martin Griffiths, for sticking with me over the years of writing. Thanks also go to Ilaria Tassistro at the Press for her assistance and skill as we moved the manuscript through the production process and to Jeanette Mitchell for her excellent copy-editing. I acknowledge the hard work by Sarah Martin and Kathryn Marklein in preparation of the bibliography. I thank all of my friends and colleagues who provided photographs and other figures. Those who are familiar with the first edition will note the considerable expansion of the number of figures, to include many color images of pathological conditions and other elements of morphology and biological variation. In addition, I have increased the number of data and analysis graphs, largely in order to help readers visualize research results discussed in the text. For their support in providing photographs and graphs, thanks go especially to Chris Ruff, Haagen Klaus, Kate Pechenkina, Tomasz Kozłowski, Valerie DeLeon, Sam Blatt, Megan Brickley, Rachel Ives, Leslie Williams, Sam Scholes, Cory Maggiano, Pat Lambert, Dale Hutchinson, George Milner, Charlotte Roberts, Jesper Boldsen, Eileen Murphy, Kate Domett, Scott Haddow, Bonnie Glencross, Tim White, John Verano, Tiffiny Tung, Margaret Schoeninger, Deborah Bolnick, Shannon Novak, Séb Villotte, Chris Knüsel, Evan Garofalo, Jim Gosman, Richard Scott, Chris Schmidt, Melissa Zolnierz, Lesley Gregoricka, Chris Stojanowski, and Joel Irish. Kathryn Marklein provided considerable time and effort toward the development of the electronic files of the more than 160 graphs, line drawings, and photographs. I thank my parents, the late Leon Larsen and Patricia Loper Larsen, for introducing me at a very young age to old things and the past. I thank my undergraduate professors at Kansas State University, especially my mentor and advisor, Patricia O’Brien, and Professors William Bass and Michael Finnegan, and at the University of Michigan, my PhD mentor and advisor, Milford Wolpoff, and Professors Stanley Garn, Frank Livingstone, Loring Brace, David Carlson, Michael Zimmerman, and Roberto Frisancho for their inspiration and the training I received under their collective direction. My fellowship stints at the Smithsonian Institution, undergraduate and graduate, were strongly influential in the development of my interests in bioarchaeology. I am especially grateful to Douglas Ubelaker, Dale Stewart, Lawrence Angel, and Donald Ortner for their many stimulating discussions, opportunities for research, and advice. Since the publication of the first edition of Bioarchaeology, I moved to the Department of Anthropology at The Ohio State University. At Ohio State, I have
been privileged to work with an extraordinary faculty and group of graduate students, and to have access to superb research and teaching facilities. I am grateful to the institution, my colleagues, and students for the stimulating intellectual environment that helped to make this book possible.

Columbus, Ohio
May 1, 2014
PREFACE TO THE FIRST EDITION
The writing of this book was fostered by my involvement in a series of interdisciplinary research programs undertaken in the southeastern (Florida and Georgia) and western (Nevada) United States. I thank my collaborators, colleagues, and friends who have been involved in this exciting research. With regard to fieldwork, the following individuals and projects figured prominently in the development of this book: David Hurst Thomas on St. Catherines Island, Georgia; Jerald Milanich and Rebecca Saunders on Amelia Island, Florida; Bonnie McEwan at Mission San Luis de Talimali in Tallahassee, Florida; and Robert Kelly in the western Great Basin, Nevada. A number of individuals deserve special thanks for their valuable contributions to the study of human remains from these regions: Christopher Ruff, Margaret Schoeninger, Dale Hutchinson, Katherine Russell, Scott Simpson, Anne Fresia, Nikolaas van der Merwe, Julia Lee-Thorp, Mark Teaford, David Smith, Inui Choi, Mark Griffin, Katherine Moore, Dawn Harn, Rebecca Shavit, Joanna Lambert, Susan Simmons, Leslie Sering, Hong Huynh, Elizabeth Moore, and Elizabeth Monahan. I thank the Edward John Noble Foundation, the St. Catherines Island Foundation, Dr. and Mrs. George Dorion, the Center for Early Contact Period Studies (University of Florida), the National Science Foundation (awards BNS-8406773, BNS-8703849, BNS-8747309, SBR-9305391, SBR-9542559), and the National Endowment for the Humanities (award RK-20111-94) for support of fieldwork and follow-up analysis. Research leave given to me during the fall of 1991 while I was on the faculty at Purdue University and a fellowship from Purdue’s Center for Social and Behavioral Sciences during the spring and summer of 1992 gave me a much needed breather from teaching and other obligations in order to get a jump-start on writing this book. Preparation of the final manuscript was made possible by generous funding from the University of North Carolina’s University Research Council. I acknowledge the support – institutional and otherwise – of the University of North Carolina’s Research Laboratories of Anthropology, Vincas Steponaitis, Director. A number of colleagues provided reprints or helped in tracking down key data or literature sources. I especially thank John Anderson, Kirsten Anderson, Brenda Baker, Pia Bennike, Sara Bon, Brian Burt, Steven Churchill, Trinette Constandse-Westermann, Andrea Drusini, Henry Fricke, Stanley Garn, Alan Goodman, Gisela Grupe, Donald Haggis, Diane Hawkey, Brian Hemphill, Frank Ivanhoe, Anne Katzenberg, Lynn Kilgore, Patricia Lambert, Daniel Lieberman, John Lukacs, Lourdes Márquez Morfín, Debra Martin, Christopher Meiklejohn, Jerome Melbye, György Pálfi, Thomas Patterson, Carmen Pijoan, William Pollitzer, Charlotte Roberts, Jerome Rose, Christopher Ruff, Richard Scott, Maria Smith, Dawnie Steadman, Vincas Steponaitis, Erik Trinkaus, Christy Turner, Douglas Ubelaker, John Verano, Phillip Walker, and Robert Walker.
Various versions of individual chapters and parts of chapters were read by Kirsten Anderson, Brenda Baker, Patricia Bridges, James Burton, Stephen Churchill, Robert Corruccini, Marie Danforth, Leslie Eisenberg, Alan Goodman, Mark Griffin, Gary Heathcote, Brian Hemphill, Simon Hillson, Dale Hutchinson, Anne Katzenberg, Lyle Konigsberg, Patricia Lambert, Christine Larsen, George Milner, Susan Pfeiffer, Mary Powell, Charlotte Roberts, Christopher Ruff, Shelley Saunders, Margaret Schoeninger, Mark Spencer, Mark Teaford, and Christine White. Ann Kakaliouras, Jerome Rose, and Phillip Walker generously donated their time in the reading of and commenting on the entire manuscript. I am indebted to all of the readers for their help in improving the clarity, organization, and content of the book. The organization of the bibliographic computer database was completed by Elizabeth Monahan. Patrick Livingood helped in the preparation of figures. I thank the following colleagues for providing photographs and figures: Stanley Ambrose, Kirsten Anderson, David Barondess, Brian Hemphill, Charles Hildebolt, Dale Hutchinson, George Milner, Mary Powell, Christopher Ruff, Richard Scott, Scott Simpson, Holly Smith, Mark Teaford, Erik Trinkaus, Phillip Walker, and Tim White. A book like this is not written without a supportive press. I thank the Syndicate of the Cambridge University Press and the Editorial Board of the Cambridge Studies in Biological Anthropology – Robert Foley, Derek Roberts, C. G. N. Mascie-Taylor, and especially, Gabriel Lasker – for their encouragement and comments, especially when I proposed the idea of writing the book and what it should contain. Most of all, I thank Tracey Sanderson, Commissioning Editor of Biological Sciences at the Press, for her help throughout the various stages, from proposal to finished book.

Chapel Hill, North Carolina
August 28, 1996
1 Introduction
Many thousands of archaeological human skeletons are currently housed in various institutional repositories throughout the world. Some of these collections are extensive: The Natural History Museum in London holds some 10 000 cataloged individuals, the Smithsonian Institution has at least 30 000 skeletons, and the Bavarian State Collection more than 50 000 sets of human remains (Loring & Prokopec, 1994; McGlynn, personal communication; Molleson, 2003). These and many other major collections around the world started during the nineteenth century, mostly for purposes of collecting crania and other remains for investigations of racial classification (Larsen & Williams, 2012; Little & Sussman, 2010) or as “curiosities” without any manner of context (Taylor, 2014). While these motives for collection have largely disappeared, these collections in Europe, North America, and elsewhere still provide today an essential foundation for the study of past human variation and evolution. The importance of the collections lies in the fund of biological information they offer for interpreting lifeways of past peoples and for developing an informed understanding of the history of the human condition generally. One could argue that earlier generations of anthropologists may not have appreciated the enormous value of human remains for interpreting the past, especially in view of the high volume of excavated remains versus the remarkably low frequency of analysis and publication on those studied by biological anthropologists (Steele & Olive, 1989). Today, however, human remains have become a key element of archaeological research and have contributed to a burgeoning understanding of past population dynamics in general and the human condition in particular. This is especially the case for developing and testing hypotheses and drawing inferences about the human experience relating to diet, health, and lifestyle at all levels, from the individual (various in Stodder & Palkovich, 2012; and see Knudson, Pestle et al., 2012; Tiesler & Cucina, 2006) to large swaths of the globe (Steckel & Rose, 2002; Verano & Ubelaker, 1992; White, 1999). In addition, the study of ancient human remains is important in the ongoing discussions in anthropology about social organization, identity, and the linkages between health and gender and health and status (various authors in Gowland & Knüsel, 2006; Grauer & Stuart-Macadam, 1998; Knudson & Stojanowski, 2009). The study of these remains has played a fundamental role in investigating key adaptive shifts in recent human evolution, such as the foraging-to-farming transition and European exploration and colonization during the post-Columbian era (Bocquet-Appel & Bar-Yosef, 2008; Cohen, 1989; Cohen & Armelagos, 1984; Cohen & Crane-Kramer, 2007; Klaus, 2014a; Lambert, 2000a; Larsen, 2006; Larsen & Milner, 1994; Pinhasi & Stock, 2011), and documentation and interpretation of specific disease histories (Palfi et al., 1999; Powell & Cook, 2005; Roberts & Buikstra, 2003; Roberts et al., 2002). Especially in the last decade or so, regional analysis has provided compelling insights into human adaptation in
broad perspective from a diverse range of settings globally (Domett, 2001; Fitzpatrick & Ross, 2011; Hanson & Pietrusewsky, 1997; Hemphill & Larsen, 2010; Hutchinson, 2002, 2004, 2006; Judd, 2012; Lambert, 2000a; Larsen, 2001; Mushrif-Tripathy & Walimbe, 2006; Ortner & Frohlich, 2008; Oxenham & Tayles, 2006; Pechenkina & Oxenham, 2012; Pietrusewsky & Douglas, 2002a,b; Robbins Schug, 2011b; Roberts & Cox, 2003; Ruff, 2010a; Schepartz et al., 2009; Steckel & Rose, 2002; Stodder, 2008; Tung, 2012a; Weber et al., 2010; Whittington & Reed, 1997; Williamson & Pfeiffer, 2003; Wright, 2006). These regional studies have provided new perspectives on variation in human adaptation, including the foraging-to-farming transition. For example, the experience of this transition in the western hemisphere has shown a general decline in health, linked in part to maize farming. By contrast, rice farming in Asia may have promoted better health, especially in relation to oral health (Domett, 2001; Domett & Tayles, 2007; Douglas & Pietrusewsky, 2007; Oxenham, 2006). It is especially exciting to see the development of insights into past lives and lifestyles in the regional context. In the mid-1980s, the record of poor health in the later prehistoric American Southwest was beginning to emerge (Merbs & Miller, 1985). Building on this record, discussions of conflict, its causes and consequences, and the crucial importance of human remains in these discussions have developed, creating a forum for anthropologists and other social scientists to engage in developing new solutions to long-standing problems. In particular, it is through regional and continental comparisons that we are beginning to understand the patterns and prevalence of violence and its effects on society, demography, and health (Schulting & Fibiger, 2012). Skeletal remains offer an important source of information for the study of human variation. Archaeological skeletons from specific localities are more homogeneous both genetically and in terms of the environments from where they came than are dissecting room or anatomical skeletal series. Skeletons from the latter contexts are from many populations and highly diverse circumstances. The use of archaeological series becomes especially important when making conclusions about intra-population variability for a range of topics where sex and age may be important factors. Various surveys and manuals of human osteology and application to archaeological settings are available (Baker et al., 2005; Bass, 2005; DiGangi & Moore, 2013; Roberts, 2009; Scheuer & Black, 2000, 2004; Schwartz, 2006; Ubelaker, 1999; White et al., 2012). In order to address the incompatibility of different researchers’ methods and results, “standards” for skeletal data collection have been developed (Buikstra & Ubelaker, 1994). Although dealing with the interpretive role in the study and documentation of human remains, these works serve primarily as “how to” guides for bone identification and skeletal analysis and not as resources for the investigation of broader issues in biological anthropology and sister disciplines. The present book focuses on the relevance of skeletal remains to the study of the human condition and human behavior generally; namely, how skeletal and dental remains derived from archaeological settings reveal life history at both the individual and population levels. It does not advocate a reliance on only human remains in order to tell the whole story of the human condition. 
Rather, human remains represent a part of the broad sweep of data derived from past settings. Human remains do not simply augment other data sources, archaeological or historical. Rather, they provide perspectives and understandings of past societies that pertain to human biology that simply may not be visible in other sources (Perry, 2007).
The goal of this book is to provide a synthesis of bioarchaeology – the study of human remains from archaeological contexts. Although the term was first applied to the study of animal remains from archaeological settings (Clark, 1972), the focus of study then surfaced with reference to the study of human remains in the regional “bioarcheological investigation” of the lower Illinois River valley, an ambitious and innovative research program directed by Jane Buikstra (1977a). This set into motion the future course of the field (Buikstra, 2006; Knüsel, 2010; Larsen & Walker, 2010). The field emphasizes integrative, interdisciplinary study. By doing so, the wide breadth of bioarchaeology has engendered cross-fertilization between different disciplines, contributing to its method and theory in approaching a wide diversity of problems. The field recognizes the inextricable connection between biology and culture – one simply would not exist without the other (Goodman, 2013). Just as human remains are a crucial component of study, it is the context of these remains that provides us with meaning and substance. Context is a broad term that includes all potential sources, archaeological and otherwise, such as burial and social inference, diet, climate, living conditions, and all else that is inferred or documented that may inform our understanding of the people the skeletons represent. The enormous potential of bioarchaeology for understanding the past is growing (Buikstra & Beck, 2006; Katzenberg & Saunders, 2008; Larsen & Walker, 2010; Martin et al., 2013; Stojanowski & Duncan, 2014). I believe that this is the case for two key reasons. First, in contrast to earlier work that emphasized description, often poorly connected to any manner of context, there is a growing interest in the central role that human remains play in understanding patterns and trends in past societies. Second, the centrality of human remains for understanding past biological, social, and behavioral dynamics has motivated an emphasis on integrative research strategies, resulting in excavations of human remains having clear agendas and questions that guide both their recovery and their study. Today, there is a very different research profile, one that emphasizes the links between biological variation, health and well-being, and behavior viewed broadly. This contrasts with the descriptive orientation of earlier generations of osteologists, but it was the work of these predecessors that provided many of the tools that served to form the field (see Buikstra & Beck [2006] for an historical treatment of bioarchaeology). This book takes a largely population perspective. However, individual-based case studies are discussed, especially because collectively they help to build a picture of variability in earlier societies. The population approach is critical for characterizing patterns of behavior, lifestyle, disease, and other parameters that form the fabric of the human condition. The discussion in the following pages also underscores the importance of culture and society in interpreting population characteristics. Dietary behavior, for example, is highly influenced by cultural and social norms of behavior. If an individual is taught that a specific food is “good” to eat, then the consumption of that food item becomes fully appropriate in that cultural context. Food is also an important indicator of a person’s place in society – access to resources is influenced not only by where one lives but also by one’s identity and location within a society.
Hence, cultural and social factors play an essential role in determining diets of individual members of a social group or society. As I have made clear, the book is fundamentally driven by context and especially by the placement of the biological record of skeletons in their archaeological context, recently called “contextualization,” or the location of human remains within relevant archaeological and
historical frameworks (Knudson & Stojanowski, 2009; Thompson et al., 2014), and treatment of the body in funerary settings (Duday, 2009). Contrary to the assertion that those who study human remains have largely disengaged bones from context, whereby the physical anthropologist “does not necessarily need the archaeologist once the archaeologist has excavated bone” (Goldstein, 2006:376–377), there has been a growing movement focusing on the links between archaeology and physical anthropology in the development of research programs globally. Characterizing bioarchaeology as involving a disengagement of bodies from place misrepresents the vast majority of research cultivated in this exciting discipline, especially given the enormous progress of the field today. Unlike many of the aforementioned guides to osteological analysis, this book is not a book about methods. Certainly, methodological developments make possible much of the discussion presented in the following chapters (see various authors in Katzenberg & Saunders, 2008). Moreover, technological advances, such as imaging, have provided new insights unimaginable a decade ago (various in Chhem & Rühli, 2004). However, this book focuses on how human remains inform our understanding of the past. By doing so, this book is intended to feature the various insights gained about human behavior and biology rather than to describe or evaluate specific methods and techniques of skeletal analysis. This approach is central to the biocultural perspective offered by anthropologists – we must seek to envision past populations as though they were alive today, and then ask what information drawn from the study of skeletal tissues would provide understanding of them as functioning, living human beings and members of populations. Nor is this book a critical review, highlighting the shortcomings of the field or what bioarchaeologists should be doing, but are not.
Building on the study of human remains, the unifying theme in this book is behavioral inference. My discussion of behavior is not limited to physical activity; rather, it is considered in a wider perspective, including (in order of appearance in the book) physiological stress, exposure to pathogenic agents, injury and violence, physical activity, masticatory and extramasticatory uses of the face and jaws, dietary reconstruction and nutritional inference, population history,
and social factors and how they influence health and lifestyle. Simply, almost everything available in the study of human remains has behavioral meaning. Bioarchaeology is represented throughout the world. This book draws upon a sample of this record in illustrating important points and issues. The book deals with all regions of the globe, but especially those areas that have been studied in depth by bioarchaeologists. One of the exciting advances in the field in the last decade or so is the proliferation of bioarchaeological treatments outside North America, especially Europe, Asia, and the Pacific (Domett, 2001; Lynnerup, 1998; Murphy, 2002, 2003; Ortner & Frohlich, 2008; Oxenham & Tayles, 2006; Papathanasiou, 2001; Pechenkina & Oxenham, 2013; Pietrusewsky & Douglas, 2002b; Rife, 2012; Roberts & Cox, 2003; Ruff, 2010a; Schepartz et al., 2009; Whittington & Reed, 1997; and many others). Reflecting this increasingly international bioarchaeology are the international, multidisciplinary collaborations around the world, including in Latin America, Europe, Asia, and Africa. Various points made in the book are addressed by contrasting and comparing data sets from skeletal assemblages representing human populations from different levels of sociopolitical complexity and differing subsistence regimes. Because of the vagaries of dietary reconstruction in the archaeological past, anthropologists usually characterize human groups broadly, using terms such as “foragers” or “farmers.” The reader should recognize that these terms are often overly simplistic and do not convey the underlying complexity of human adaptive systems adequately. Nevertheless, these categories help us to understand broad behavioral and adaptive features of different groups better, and therefore, are the starting points to facilitate yet more specific and contextual reconstructions and interpretations of past lifeways. Of far more importance to the focus of this book is that these contrasts and comparisons add an important dimension to the growing discussion in anthropology oriented toward the understanding of the causes and consequences of adaptive and behavioral shifts in the past and present. A fundamental barrier to understanding health in today’s populations is the very narrow temporal window in which they have been studied. The prevalence records for osteoarthritis and oral degenerative conditions, for example, are limited largely to the last several decades when large-scale epidemiological studies of living populations were first undertaken (Arden & Nevitt, 2006; Blinkhorn & Davies, 1996; Issa & Sharma, 2006; Jordan et al., 2007; Pilot, 1998; Sreebny, 1982; Woodward & Walker, 1994), representing approximately only the last 0.1% of the history and evolution of our species. Skeletal and oral degenerative conditions have been investigated in great detail in biomedical, experimental, molecular, and behavioral studies of human populations for this 0.1% window of time. However, these studies are limited in scope, focusing primarily on remarkably gross categories of human variation. They are deficient because they underrepresent and mischaracterize human biological variation, reducing the variation to simple dichotomous (or other limited) comparisons having little to do with real biological or social variation. Analysis of the recent biomedical and epidemiological literature characterizes variation according to “race” (e.g., White vs. Black vs. Hispanic), geography (mostly United States and western Europe), sex (men vs.
women), diet (e.g., access to refined sugar), and socioeconomic status based on income level (upper-, middle-, and lower-income), in addition to the very narrow temporal window (Allen, 2010; Dominick & Baker, 2004; Jordan et al., 2007; Pilot, 1998). The (mis)characterization of human variation has important implications for understanding an increasingly diverse society in the United States, far more
diverse than the racial categories so prevalent in biomedical research would seem to imply. By expanding understanding of diversity, in terms of both broad temporal and geographic perspectives, bioarchaeology as an anthropological subdiscipline provides a more informed understanding of health in the present through a consideration of health in the past. Anthropologists, economic historians, and other social scientists have long recognized that humans in the past and present are extraordinarily diverse in their food consumption practices, social habits, workloads, and other behavioral characteristics that collectively characterize health and lifestyle. The biomedical and epidemiological literature on degenerative conditions is known from a limited, if not insufficient, record. Moreover, this record is limited to the study of populations that have mitigated some of the conditions that influence the prevalence of degenerative conditions most strongly, namely industrialized societies having access to healthcare, adequate nutrition, and labor-saving technology. For example, prevalence of dental caries in many developed countries has been reduced due to the introduction of fluoride in drinking water, which is certainly the case in the United States (McDonagh et al., 2000). In addition, owing to work-saving technology, workload has greatly declined in developed countries especially. If physical activity is the primary factor in determining prevalence and pattern of osteoarthritis, for example, then one could predict a decline in its prevalence, especially in recent societies. That is, as advances in technology essentially replace what the body used to do in work and activity generally, we should expect to see long-term trends in terms of a decline in osteoarthritis. This hypothesis has not been tested using the kind of systematic approach offered by bioarchaeology. Bioarchaeology offers a unique opportunity to study a much more diverse sampling of humanity, namely the last 10 000 years of more than seven million years of human evolution (about 0.14% of that span), greatly extending the time framework for characterizing the biocultural context surrounding some of the most important changes and developments in the evolutionary history of our species. Arguably, it is this small percentage of our evolution where crucial developments occurred that set the stage for the rise of modern civilization, namely farming, appearance of complex societies, urbanization, and industrialization. Many studies of human remains from archaeological contexts focus on a single population without actively linking the analysis of these remains to the context (climate, diet, time, culture, settlement, and economic system) from which they derive. A central aim of bioarchaeology is to establish a comprehensive record of skeletal and dental conditions in relation to prevalence and pattern to develop an understanding of behavior and the costs and consequences of particular lifestyle circumstances and conditions. Bioarchaeology provides an unmatched record of health and life conditions in the past, thereby extending our understanding of diversity in geography, cultures, and time that is simply not possible with sole reliance on limited health attributes of living populations. Human skeletal and dental tissues are remarkable records of lives led in the past, what Stanley M. Garn referred to as “a rich storehouse of individual historical events” (1976:454).
This book provides a tour of the vast holdings in this storehouse, displaying the knowledge gained about earlier peoples based on the study of their mortal remains.
2 Stress and deprivation during growth and development and adulthood

2.1 Introduction
Physiological disruption resulting from impoverished environmental circumstances – “stress” – is central to the study of health and well-being and the reconstruction and understanding of health, adaptation, and behavior in both earlier and contemporary human societies. Stress involves disruption of homeostasis, or the maintenance of a constancy of conditions that keep the body’s internal environment stable. Stress is a product of three key factors, including (1) environmental constraints, (2) cultural systems, and (3) host resistance. Goodman and others (Goodman, 1991; Goodman & Armelagos, 1989; Goodman & Martin, 2002; Goodman, Martin et al., 1984, 1988; Klaus, 2012; Martin et al., 1991) modeled the interaction of these factors at both the individual and population levels (Figure 2.1). This model of health and context emphasizes the role of environment in providing both the resources necessary for survival and the stressors that may affect the health and well-being of a population, yet includes the profound influence of culture in health outcomes. Cultural systems serve as protective buffers from the environment, such as shelter and clothing as buffers against temperature extremes. Cultural systems can be highly effective at facilitating behaviors necessary for extraction of important nutrients and resources from the environment, thus supporting the ability to maintain stability. It appears impossible for the full spectrum of stressors in an environment to be buffered against; some inevitably slip through the filters of the cultural system. In these instances, the individual may exhibit a biological stress response observable at the tissue level (bones and teeth). Importantly, stress, buffering systems, and tissue-level responses are not linked by a simplistic, linear relationship. Instead, they can interact with and influence other variables within other levels of the model. Physiological disruption feeds directly back into environmental constraints and cultural systems. This model makes clear that health is a key variable in the adaptive process. Biological stress has significant functional consequences for the individual and for the society in which they are living. Elevated stress and associated disruption of homeostasis may lead to a state of functional impairment, resulting in diminished cognitive development and work capacity. The reduction in work capacity can be detrimental if it impedes the acquisition of essential resources (e.g., dietary) for the maintenance of the individual, the population, and the society. If individuals of reproductive age are affected by poor health, then decreased fertility may be the outcome. Ultimately, the success or failure of a population and its individual constituents to mitigate stress has far-reaching implications for behavior and the functioning of society. Stress and developmental instability in early life, prenatal and postnatal, have important implications for health outcomes in later life. David Barker’s research on chronic diseases in middle age reveals that individuals with low birth weight and nutritional deprivation in early
[Figure 2.1 Biocultural model for interpreting stress, linking environmental constraints and culturally induced stressors through cultural buffering systems (technological and behavioral adaptations) and host resistance (genome, developmental pathways, and epigenetic inputs) to physiological disturbance (growth disruption, disease, death) and population-level outcomes (increased morbidity and epigenetic effects, decreased work and reproductive capacity, sociocultural disruption and instability), with feedback loops of biological adjustment and adaptation and behavioral alteration. This model emphasizes the primacy of environmental constraints and cultural influences on outcomes in health and well-being. (Adapted from Klaus, 2012; reproduced with permission of author and University Press of Florida.)]
life are more predisposed to earlier death and greater frequency of chronic disease, including cardiovascular disease, hypertension, and type 2 diabetes, than are individuals with normal birth weight and sufficient prenatal and early natal nutrition (Barker, 2001, 2012; Barker et al., 2002; Beltrán-Sánchez et al., 2012; Hales et al., 1991; Harding, 2001; and see Jovanovic, 2000; various in Henry & Ulijaszek, 1996; and others). Similarly, low birth weight infants show lower bone mass as adults than infants with normal birth weight (Antoniades et al., 2003; Cooper et al., 1995; Gluckman et al., 2008). Experimental evidence in laboratory animals suggests that poor prenatal environments are also a risk factor for growth stunting over the life course in general (Dancause et al., 2012). While the prenatal environment does not determine health consequences in adulthood, it appears to have a profound role to play in programming circumstances in later life that promote poor health. A growing record supports Barker’s developmental origins hypothesis regarding the profound influence of intrauterine stresses in early life on later health, morbidity, and mortality.
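Returning to the biocultural model in Figure 2.1, its structure can be read as a pipeline: environmental and culturally induced stressors pass through cultural buffering systems, and whatever slips through is met by host resistance before registering as physiological disturbance in skeletal and dental tissue. The minimal sketch below (in Python) makes only that ordering concrete; the stressor names, magnitudes, and linear weighting are invented for illustration and are not part of the model as published.

```python
def stress_response(stressors, buffering, host_resistance):
    """Schematic reading of the Figure 2.1 pipeline: stressors the
    cultural system fails to buffer are met by host resistance, and
    the remainder registers as physiological disturbance (e.g., growth
    disruption recorded in bones and teeth)."""
    # Fraction of each stressor that slips through the cultural filter.
    unbuffered = {name: level * (1.0 - buffering.get(name, 0.0))
                  for name, level in stressors.items()}
    # Host resistance dampens the aggregate insult that got through.
    disturbance = sum(unbuffered.values()) * (1.0 - host_resistance)
    return unbuffered, disturbance

# Invented magnitudes on a 0-1 scale, for illustration only.
stressors = {"resource_shortfall": 0.6, "pathogen_load": 0.8}
buffering = {"resource_shortfall": 0.7, "pathogen_load": 0.2}
unbuffered, disturbance = stress_response(stressors, buffering,
                                          host_resistance=0.5)
print(unbuffered)            # stressors that slipped through the buffers
print(f"{disturbance:.2f}")  # aggregate physiological disturbance
```

The feedback loops in the figure, in which disturbance erodes work capacity and thereby weakens the buffering system itself, would correspond to iterating such a function over time; the sketch makes no quantitative claim.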
2.2 Measuring stress in human remains
Biological anthropologists employ a variety of skeletal and dental stress indicators that can be measured empirically. Use of multiple indicators gives a comprehensive understanding of stress and adaptation in past populations (Goodman & Martin, 2002). The multiple-indicator
approach stems from the recognition that health is a composite of nutrition, disease, and other aspects of life history. Simply put, simultaneous study of multiple, but independent, markers of stress provides a more reliable and informed understanding of composite health as documented in a skeletal series. Contrary to medical models of health, stress and disease (see Chapter 3) represent a continuum rather than a presence-versus-absence phenomenon, with respect to both the population and the individuals that comprise it.
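To make the multiple-indicator approach concrete, the following minimal sketch (in Python, with entirely hypothetical burial identifiers, indicator names, and scores) tabulates how many independent stress markers co-occur in each skeleton and the prevalence of each marker across a series:

    # Minimal sketch of a multiple-indicator tabulation (hypothetical data).
    # Each skeleton is scored present (1) / absent (0) for several
    # independent stress markers; the composite score is simply the count.

    INDICATORS = ["enamel_hypoplasia", "porotic_hyperostosis",
                  "periosteal_reaction", "growth_stunting"]

    skeletons = {
        "Burial 1": {"enamel_hypoplasia": 1, "porotic_hyperostosis": 0,
                     "periosteal_reaction": 1, "growth_stunting": 0},
        "Burial 2": {"enamel_hypoplasia": 1, "porotic_hyperostosis": 1,
                     "periosteal_reaction": 0, "growth_stunting": 1},
    }

    # Composite score per individual: number of markers present.
    for burial, marks in skeletons.items():
        score = sum(marks[i] for i in INDICATORS)
        print(f"{burial}: {score}/{len(INDICATORS)} indicators present")

    # Prevalence per indicator across the series.
    for ind in INDICATORS:
        n_present = sum(s[ind] for s in skeletons.values())
        print(f"{ind}: {n_present}/{len(skeletons)} affected")

Because the markers are scored independently, a composite count of this kind preserves the continuum view of stress described above rather than forcing a present/absent diagnosis.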
2.3 Growth and development: skeletal
2.3.1 Growth rate
Rate or velocity of growth in humans shows considerable variation in the comparison of different human populations, present and past (Bogin, 1999; Hoppa & FitzGerald, 1999; Ruff et al., 2013; Saunders, 2008). Nevertheless, human growth is punctuated by two intensive periods of activity from birth through adolescence. The first involves high growth velocity during infancy, falling off after the first two years of life. The second involves another marked increase during adolescence, after which velocity declines to zero when epiphyseal fusion of the long bones (femur, tibia, fibula, humerus, radius, and ulna) and other skeletal elements is complete, marking full skeletal maturity (Crews & Bogin, 2010).

Growth rate is widely recognized as a highly sensitive indicator of the health and well-being of a community or population (Bogin, 1999; Eveleth & Tanner, 1990; Foster et al., 2005; Lewis, 2007). Various factors affect growth, such as genetic influences, growth hormone deficiencies, and psychological stress (Bogin, 1999; Eveleth & Tanner, 1990; Gray & Wolfe, 1996; Ruff et al., 2013), but the preponderance of evidence underscores stressors produced by adverse environments, especially poor nutrition, in shaping growth and development (Foster et al., 2005; Leonard et al., 2000; Moffat & Galloway, 2007). Infectious disease can also contribute to poor growth, such as episodic diarrheal disease and circumstances involving poor sanitation and suboptimal living conditions that ultimately reduce nutrition at the cellular level (Cardoso, 2007; Jenkins, 1982; Martorell et al., 1977; Moffat, 2003). Nutrition and disease have a synergistic relationship whereby poorly nourished juveniles are more susceptible to infection, while infectious disease reduces the ability of the body to absorb essential nutrients (Keusch & Farthing, 1986; Scrimshaw, 2003, 2010; Scrimshaw et al., 1968).

Children raised in impoverished environments in developing nations or in stressed settings in developed nations generally are small in size for their age (Bhargava, 1999; Bogin, 1999; Crooks, 1999; Eveleth & Tanner, 1990; Foster et al., 2005). Among the best-documented populations are the Mayan Indians of Mesoamerica, who show retarded growth in comparison with reference populations from less adverse settings (Crooks, 1994). In Guatemala City, Guatemala, well-fed upper-class children are taller than poorly nourished lower-class children (Bogin & MacVean, 1978, 1981, 1983; Johnston et al., 1975, 1976). Moreover, whereas lower-class children grow markedly more slowly, upper-class children grow at rates comparable to those of Europeans. The cumulative differences between Mayan and European children are especially pronounced for the period preceding adolescence, suggesting that growth during the early years of childhood is the most sensitive to environmental disruption in
comparison with other life periods (Bogin, 1999). During adolescence, genetic influence on growth is more strongly expressed than in childhood (Bogin, 1999; Bogin & Loucky, 1997).

Juveniles have been growing taller over much of the twentieth century in industrialized countries and in some developing nations. This secular trend in growth is related to a variety of environmental and cultural changes, including improvement in food availability and nutrition, sanitation, reduction of infectious disease, and increased access to Western healthcare. As the environment improves, growth increases. On the other hand, declines in growth velocity are well documented, especially during periods of dietary deprivation in wartime settings, famines, and economic crises (Eveleth & Tanner, 1990; Himes, 1979; Komlos, 1994). This link between growth status and environment is well documented via analysis of historical data. Comparisons of heights of British school children from various regions and economic circumstances for the period of 1908 to 1950 show that children were generally shorter in areas experiencing high unemployment (e.g., Glasgow, Scotland) than in regions with more robust economies (Harris, 1994). These differences were especially pronounced during the severe economic depression in the late 1920s, when the nutritional and general health of children of unemployed parents declined. Similarly, growth velocity and attainment per age increased in the post-World War II period following the amelioration of negative socioeconomic conditions (Cardoso & Gomes, 2009; Malina et al., 2010; Tanner et al., 1982). In post-1945 Poland, relatively greater increases in growth were documented in higher socioeconomic groups (Bielicki & Welon, 1982).

Beginning with Francis Johnston’s pioneering work on childhood growth based on the study of archaeological skeletons (Johnston, 1962, 1969), a range of studies have extended our understanding of growth rates in past societies. These studies show that the general pattern of juvenile growth in archaeological populations is broadly similar to that in living populations (Armelagos et al., 1972; Boldsen, 1995; Edynak, 1976; Hillson et al., 2013; Hoppa, 1992; Huss-Ashmore, 1981; Johnston, 1962; Lewis, 2002; Mays, 1999; Merchant & Ubelaker, 1977; Molleson, 1995; Ribot & Roberts, 1996; Ruff et al., 2013; Ryan, 1976; Saunders, 2008; Sciulli & Oberly, 2002; Storey, 1992a, 1992b; Sundick, 1978; Walimbe & Gambhir, 1994; Walker, 1969; and see later). The congruence of growth in past and living groups suggests that there has not been a change in the general pattern of growth in recent human evolution (Saunders, 2008). That is, patterns and processes of growth in known human populations appear to have been present for at least the last 10 000 years of our evolutionary history, and likely longer.

Some populations appear shorter for age than others. Analysis of juvenile long bones from prehistoric North America reveals evidence of growth retardation in agricultural and mixed subsistence economies versus foragers. In children younger than six years of age in the prehistoric lower Illinois River valley, matching of femur length to dental age reveals growth suppression in late prehistoric (Late Woodland period) maize agriculturalists in comparison with earlier foragers (Middle Woodland period) (Cook, 1979, 1984). Cook (1984) concluded that the decline in growth was due to a decrease in nutritional status with the shift to a protein-poor maize diet.
Children short for their age during the later prehistoric period tend to express a higher frequency of stress indicators (e.g., porotic hyperostosis, enamel defects) than children who are tall for their age, lending further support to nutritional deficiency as a prime factor contributing to growth retardation. Lallo (1973; and see Goodman, Lallo et al., 1984) found, in addition, a decrease in the growth of femur, tibia, and humerus diaphyseal lengths and circumferences during the
Mississippian period (AD 1200–1300) in comparison with earlier periods (AD 950–1200) in the central Illinois River valley. Dietary change during this time involved a shift from mixed foraging and farming to intensive maize agriculture. Growth during the period between two and five years of age was especially slow, which Goodman and coworkers (1984) conclude reflects an increase in physiological stress due to poorer nutrition and the presence of other stressors during the later prehistoric occupation of the region.

The impact of increased stress loads due to the combined effects of undernutrition, European-introduced infectious disease (e.g., smallpox, measles), warfare, and increased social disruption has been investigated in the late prehistoric and contact-era Arikara Indians of the upper Missouri River valley (Jantz & Owsley, 1984a, 1984b, 1994a; Owsley & Jantz, 1985). Matching of long bone lengths (femur, tibia, humerus, radius) to dental age in perinatal (late fetal/early neonatal) and other juvenile skeletons reveals that late postcontact era (AD 1760–1835) Arikara juveniles are smaller than early postcontact (AD 1600–1733) juveniles, suggesting declining health status as European influence and encroachment into the region by other tribes increased. Comparison of postcontact Arikara with healthy upper middle-class Euroamericans from Denver, Colorado, confirms a pattern of slower growth in early childhood in the Native American group (Hillson et al., 2013; Ruff et al., 2013) (Figure 2.2). In contrast, growth in young children from Neolithic Çatalhöyük, Turkey, is strongly similar to the Denver growth study sample, indicating relatively better nutrition and health overall, a pattern consistent with other skeletal indicators (Hillson et al., 2013; Ruff et al., 2013) (Figure 2.2).
[Figure 2.2 plot: estimated stature (cm) versus age (years).]
Figure 2.2 Fitted curves for estimated juvenile statures for protohistoric Arikara (open
circles and dashed lines), Neolithic Çatalhöyük (solid diamonds and solid line), and modern Denver population (asterisks and dotted line). (From Ruff et al., 2013; © 2012 Wiley Periodicals, Inc.)
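Comparisons such as those in Figure 2.2 express juvenile stature-for-age in archaeological samples against a modern reference. A minimal sketch of that calculation in Python, assuming hypothetical reference means and standard deviations (the studies themselves fit growth curves to longitudinal reference data such as the Denver series; Maresh, 1970):

    # Sketch: archaeological juvenile statures as z-scores against a
    # modern reference. All numbers below are hypothetical.

    # Reference stature-for-age: age (years) -> (mean cm, SD cm).
    reference = {2: (86.0, 3.5), 6: (116.0, 5.0), 10: (138.0, 6.5)}

    # Archaeological individuals: (dental age, estimated stature in cm).
    individuals = [(2, 82.0), (6, 108.5), (10, 131.0)]

    for age, stature in individuals:
        mean, sd = reference[age]
        z = (stature - mean) / sd
        print(f"age {age:>2}: stature {stature:.1f} cm, z = {z:+.2f}")

Consistently negative z-scores across age classes would indicate the kind of systematic growth suppression seen in the protohistoric Arikara curve.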
The interaction between stress and population mobility has been examined in the comparison of Late Archaic-period foragers from the Carlston Annis Bt-5 site, Kentucky (3992–2655 BC) and Late Woodland foragers from the Libben site, Ohio (AD 800–1100) (Mensforth, 1985). Archaeological evidence indicates that Late Archaic populations were highly mobile and exclusively dependent on wild plants and animals. In contrast, maize was consumed by the Libben population, but it was of minor dietary significance. For both groups, nutrition appears to have been adequate (Mensforth, 1985).

Comparisons of tibia lengths reveal a general similarity between the two groups from birth to six months and from four years to 10 years. For juveniles aged six months to four years, Libben tibiae are shorter than Bt-5 tibiae. The growth period between six months and four years – the period differing most between the Bt-5 and Libben populations – is highly sensitive to metabolic disruption. During this period, an infant undergoes weaning, involving the shift from breast milk, a relatively stable, nutritious food source, to exclusive solid food, often less stable in availability, less digestible, and less nutritious. Passive immunities derived from breast milk are lost during weaning (Popkin et al., 1986). These immunities are crucial for early health and well-being because the child’s immune system is not fully developed until after five years of age (Newman, 1995). Consistent with this outcome of compromised immunity, Mensforth (1985) found a high prevalence of nonspecific periosteal infections in the Libben infants, suggesting that high levels of infectious disease in infancy and young childhood contributed to growth retardation. Thus, in comparison with the Bt-5 population, the Libben population experienced the effects of greater sedentism and community size that fostered poor sanitation, elevated infectious disease, and poor health.

Comparison of archaeological samples with a modern reference population (Denver, Colorado; Maresh, 1970) confirms the presence of growth rate suppression in children from archaeological settings. That is, calculation of the average percentage of adult size attained in successive age groups for the major long bones reveals a slow-down in most settings (Humphrey, 2003; the calculation is sketched below), consistent with the general finding that children in archaeological contexts are shorter than their modern counterparts (Lewis, 2007; Mays, 1999).

Lovejoy and coworkers (1990) argue that widespread infection was the chief cause of growth retardation in the Libben setting. They suggest that inflammation would result in increased production of cortisol, the major natural glucocorticoid, which limits growth and the availability of amino acids. Thus, elevation of infection in the Libben population may have had a strong influence on growth generally (Lovejoy et al., 1990).

Historic-era skeletal series furnish important insights into stress in the recent past. Saunders and coworkers (1995, 2002; Humphrey, 2003) analyzed growth data available from a large series of juvenile remains from the St. Thomas’ Anglican Church cemetery in Belleville, Ontario. The cemetery was used by a predominantly British-descent population during the period of 1821 to 1874.
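A minimal sketch of the percentage-of-adult-size calculation referenced above (Humphrey, 2003), with hypothetical femur lengths:

    # Sketch: percent of adult size attained by age class
    # (femur lengths in mm; all values hypothetical).
    mean_adult_femur = 445.0

    juvenile_means = {1: 185.0, 4: 260.0, 8: 330.0, 12: 390.0}

    for age, length in sorted(juvenile_means.items()):
        pct = 100.0 * length / mean_adult_femur
        print(f"age {age:>2}: {pct:.1f}% of adult femur length attained")

Plotting these percentages against age for different skeletal series makes a growth slow-down visible as a curve that lags the reference at the same ages.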
Comparisons of femur length from juveniles buried in the cemetery with a tenth-century Anglo-Saxon series from Raunds, England, and modern growth data from Denver, Colorado (Maresh, 1970), indicate a strong similarity in the overall pattern of growth among the three groups (Figure 2.3). The two cemetery samples are temporally separate, but share general ethnic origins with the modern US population. Figure 2.3 shows that the St. Thomas’ series is slightly shorter than the modern series for age. The Raunds series is considerably shorter than either of the other groups, which is to be expected given the inferior living standards of tenth-century England.
[Figure 2.3 plot: diaphyseal length (mm) versus age (years).]
Figure 2.3 Fitted curves for femoral diaphyseal length for the nineteenth-century
St. Thomas’ Church cemetery (dotted line), tenth-century Raunds Anglo-Saxon skeletons (dashed line), and twentieth-century Denver, Colorado, living population (solid line). (From Saunders & Hoppa, 1993; reproduced with permission of John Wiley & Sons, Inc.)
With regard to the St. Thomas’ skeletons, Saunders and coworkers suggest that juveniles died from acute causes and not chronic conditions (e.g., chronic infections or chronic undernutrition) that would result in a decrease in skeletal growth. Children younger than two years of age had slightly lower growth rates than those of modern twentieth-century populations. They regard this as perhaps representing stresses associated with poor maternal health and prenatal growth.

Similarly, on the north coast of Peru, comparison of late pre-Hispanic- and postcontact-period femoral growth velocities reveals markedly slower growth among indigenous children, especially at ages 5, 10, and 12 years, in the colonial Lambayeque Valley (Klaus & Tam, 2009). This decrease in the rate of growth takes place in a setting of introduced European disease, increased prevalence of other skeletal stress markers, poor diets, and sociopolitical marginalization.

Analysis of juvenile cortical bone growth via measurement of cortical thickness provides a complementary source of information to long bone lengths. In living populations, deficiencies in cortical bone mass are present in groups experiencing undernutrition (Agarwal, 2008; Agarwal & Glencross, 2011; Cooper et al., 1995; Frisancho, Garn et al., 1970; Garn, 1970; Garn et al., 1964; Gluckman et al., 2008; Himes, 1978; Himes et al., 1975). The pioneering investigation by Garn and coworkers (1964), for example, showed that malnourished Guatemalan children have reduced cortical bone in comparison with well-nourished reference groups. Although bone lengths increased during periods of growth recovery, cortical thickness continued to show deficiencies due to earlier episodes of bone loss. Thus, growth recovery
may involve an increase in bone length (and attained height), but not bone mass (Antoniades et al., 2003; Huss-Ashmore, 1981; Huss-Ashmore et al., 1982).

Cortical bone mass is a sensitive indicator of environmental disturbance in archaeological settings. Comparison of femoral cortical thickness from Middle Woodland (Gibson site) and Late Woodland (Ledders site) series from west-central Illinois reveals a reduction in bone mass in young children (24–36 months), the presumed time of weaning and increased dietary stress (Cook, 1979). In contrast, Hummert and coworkers (Hummert, 1983; Hummert & Van Gerven, 1983; Van Gerven et al., 1985) documented cortical bone deficiencies exclusively in older children from the early to late Christian periods in Sudanese Nubia (c. AD 550–1450). Long bone lengths of Nubians are shorter in the early Christian period than in the late Christian period, which may be due to nutritional deficiencies and bacterial and parasitic infections (Hummert, 1983; Hummert & Van Gerven, 1983). Increasing political autonomy during the later Christian period may have served to improve living conditions, resulting in better growth status and health generally. Cortical bone mass continued to be deficient in the later period, however, indicating that stress was present throughout the Christian period, both early and late. Unlike the long bone lengths, which show a recovery during adolescence, there was a continued decrease in cortical bone mass in older children, suggesting that growth in long bone length continued at the expense of cortical bone maintenance (Hummert, 1983; and compare with Garn et al., 1964). Comparison of individuals interred in high-status brick vaults versus low-status grave pits in a nineteenth-century setting in Birmingham, England, revealed that the former had greater cortical thickness for age than the latter (Mays, Brickley et al., 2009). These differences likely reflect greater access to adequate nutrition in higher-status individuals in this early industrial setting.

Analysis of bone mass in living and past populations underscores an important point about the plasticity of human skeletal tissue and its relationship with growth and development: skeletal tissue adapts in a highly dynamic way to the physiological requirements of the living body. These requirements are informed by the environment, including biological, social, and cultural influences over the life course of the individual, from conception through infancy, childhood, adolescence, and adulthood (and see Agarwal & Beauchesne, 2011). This life course perspective is motivated by the understanding that the skeleton reflects the lived experience of the individual at all stages of life. This contextualized approach provides a powerful tool for understanding variation in bone mass and the implications for risk of fracture and debilitation (Glencross & Agarwal, 2011; and see Chapter 4).
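Cortical bone measurements of the kind used in these studies are commonly summarized as a cortical index from paired shaft measurements (Garn, 1970). A minimal sketch, with hypothetical example values:

    # Sketch: Garn-style cortical index from midshaft measurements.
    # T = total shaft width, M = medullary width (both mm);
    # cortical index = 100 * (T - M) / T. Example values hypothetical.

    def cortical_index(total_width: float, medullary_width: float) -> float:
        """Percentage of the shaft diameter occupied by cortical bone."""
        return 100.0 * (total_width - medullary_width) / total_width

    print(cortical_index(9.0, 4.0))  # thicker cortex -> about 55.6
    print(cortical_index(9.0, 6.0))  # thinner cortex -> about 33.3

A falling index alongside adequate bone length is the measurable signature of growth in length maintained at the expense of cortical mass, as described for the Nubian series above.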
2.3.2 Terminal adult height (stature)
As with bone mass, there is a substantial body of evidence drawn from the study of living populations showing a strong relationship between growth suppression due to poor environmental conditions during childhood and attainment of adult body size. In this regard, growth-retarded children should develop into short-statured adults. The study of living populations provides some support for this conclusion. Comparison of growth of undernourished Thai children with American (US) children reveals that despite a longer period of growth in the former (by about one year), the reduction in growth over their lifetimes resulted in shortened
terminal height (Bailey et al., 1984; and see Bogin & MacVean, 1983; Frisancho, Newman et al., 1970; Satyanarayana et al., 1980).

The close ties between environmental stress – especially due to poor nutrition – and adult height are abundantly documented in research developing out of a growing interest in anthropometric history (Floud et al., 1990; Heyberger, 2007; Komlos, 1994, 2009; Steckel, 1995, 2005; Steckel & Floud, 1997; Tatarek, 2006; Vercellotti et al., 2011; and many others). Originally inspired by controversy over the health and well-being of enslaved African-Americans (Steckel, 1979), current research has broadened greatly to include a range of other populations in North America, Europe, and Asia (Fogel et al., 1983; Komlos, 1994; Steckel, 1995). Evidence from a wide range of recent historical populations indicates that stature variability can be explained in large part by environmental factors (Steckel, 1995). This evidence shows that terminal height is a product of nutritional adequacy and, to a lesser extent, disease history. Individuals with adequate nutrition tend to reach their genetic growth potential; those with poor nutrition do not.

Genetic factors are also important in determining terminal height. For example, well-off Japanese reach only the fifteenth height percentile of well-off British (Tanner et al., 1982), and genomic studies reveal associations with height (Becker et al., 2011). Climate may be a mediating factor in determining height, but stature shows little correlation with latitude in the comparison of a wide range of human populations (Ruff, 1994a). Of much more importance to the issue of climate is body breadth, which plays a crucial role in determining the ratio of body surface area to body mass in hot and cold climates (Ruff, 1994a).

Like childhood growth, there is a temporal trend of stature increase with economic and nutritional improvement (Boldsen, 1995; Floud, 1994; Greulich, 1976; Komlos & Kriwy, 2002; Malina et al., 2005; Özer et al., 2011; Yagi et al., 1989; and many others) and decline during times of hardship and deprivation (Bogin & Keep, 1999; Fogel et al., 1983; Kemkes-Grottenthaler, 2005; Kimura, 1984; Leonard et al., 2002; Price et al., 1987; Steegmann, 1985). These analyses show that although growth and height have a genetic component, environmental factors play a profound role in these outcomes.

Terminal height data for historical populations are drawn from various archival sources, including military records (Bielicki & Welon, 1982; Komlos, 1989; Mokyr & Ó Gráda, 1994; Sandberg & Steckel, 1987; Steegmann, 1985, 1986; Steegmann & Haseley, 1988), military preparatory schools (Komlos, 1987), prison inmates (Riggs, 1994; Tatarek, 2006), enslaved African Americans (Steckel, 1979, 1986, 1987), and voter registrations (Wu, 1994). Analysis of these data sets by economic historians reveals temporal trends in stature that can be linked with changing economic conditions relating to nutritional adequacy in particular and health status in general.

Terminal stature in Euroamerican populations shows significant variability in relation to time, geography, and socioeconomic status. Over the last several centuries, marked improvements in health and nutrition have been documented. Popular convention holds that adult stature has increased during and after the Colonial period in North America. Steckel (1994) analyzed stature data for American-born Euroamerican male soldiers for the period of 1710 to 1950.
Contrary to convention, twentieth-century Euroamerican males are not appreciably taller than their predecessors living in the eighteenth century (Steckel, 1995). Skeletons from archaeological contexts offer an important complementary data set for stature analyses based on archival sources. Comparison of stature estimates derived from
measurements of long bones shows little change from the pre-modern (1675–1879) to the modern (1950–1975) period in the United States (Angel, 1976; Larsen, Craig et al., 1995) (Table 2.1). Analysis of military archival records shows a slight rise in stature (1710–1830) in European-descent, American-born males, followed by a marked decline that continues for most of the nineteenth century (Costa & Steckel, 1997). This decline appears to coincide with the movement of the population from rural to urban settings and with increasingly poor sanitation. Simply, the height reductions reflected the health declines of the newly urbanized people in the United States. Following the passage of sanitation laws in cities in the late nineteenth century, improved nutrition, and healthier environments overall, there is a steady rise in stature. The skeletal record for stature in the comparison of medieval and early modern Europeans reveals a reduction in height, reflecting reduced quality of life, exposure to pathogens, and decline in nutrition in the shift from rural to urban settings (Kemkes-Grottenthaler, 2005).

In the New World, the transition to agriculture involved the adoption of maize as a key component of subsistence. There are several negative aspects of maize that potentially could lead to growth disruption and reduced height in native populations in the Americas. Although maize meets energy requirements, it is deficient in the essential amino acids lysine, isoleucine, and tryptophan (FAO, 1970; Whitney & Rolfes, 2011). Because of these amino acid deficiencies, maize provides a poor protein source. Niacin (vitamin B3) in maize is chemically bound, which reduces the bioavailability of this nutrient to the consumer. In maize-based diets, iron absorption is very low (Ashworth et al., 1973), methionine and phenylalanine are minimally represented, and the leucine–isoleucine ratio is inadequate.

The nutritive value of maize is altered by the preparation techniques used to transform it into food. Many native New World societies enhance the nutritional content of maize via alkali-processing (Katz et al., 1974; Stahl, 1989). The addition of alkali promotes the availability of niacin during digestion (FAO, 1953). Some evidence suggests that these treatment protocols actually promote dystrophic effects (Huss-Ashmore et al., 1982). Additionally, removal of the pericarp (bran) in the grinding process decreases the nutritive value of maize; important minerals and some fiber are removed if the pericarp is winnowed from the maize. If the aleurone, the protein- and niacin-rich layer, and the bran are removed simultaneously, important nutrients are also lost (FAO, 1953; Rylander, 1994). Thiamine content, too, is affected by the manner in which the maize is processed.

The study of temporal series of archaeological remains, especially in the comparison of New World foragers with later farming populations, reveals trends that are consistent with declining nutritional quality in both maize consumers and populations dependent on other plant domesticates. Comparisons of prehistoric Georgia coastal foragers (pre-AD 1150) with later maize farmers (AD 1150–1550) indicate reductions in stature of about 3% for adult females and 1% for adult males (Larsen, 1982; Larsen, Crosby et al., 2002).
Similar reductions in other New World settings are documented in the American Midwest (Perzigian et al., 1984; Sciulli & Oberly, 2002; but see Cook, 1984, 2007), Mesoamerica (Haviland, 1967; Márquez Morfín & del Ángel, 1997; Nickens, 1976; Saul, 1972; Stewart, 1949, 1953a; but see Danforth, 1994; Márquez Morfín et al., 2002; Wright & White, 1996), and Peru (Pechenkina, Vrandenburg et al., 2007). Comparisons of agricultural populations with other settings indicate relatively short statures in Mesoamerica (Storey, 1992a; Danforth, 1999), Ecuador (Ubelaker, 1994), and Peru (Pechenkina, Vrandenburg et al., 2007), which are linked with chronic malnutrition.
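Stature values such as those compiled in Table 2.1 are estimated from long bone measurements with population-specific regression equations; the equations differ by sex and ancestry, and the one sketched below (Trotter & Gleser, 1952, for “white” males) is a widely cited example rather than the method of every study discussed here:

    # Sketch: stature estimation from maximum femur length using the
    # Trotter & Gleser (1952) "white" male equation:
    #   stature (cm) = 2.38 * femur length (cm) + 61.41  (+/- ~3.3 cm)

    def stature_from_femur_male(femur_cm: float) -> float:
        return 2.38 * femur_cm + 61.41

    femur = 46.5  # hypothetical maximum femur length in cm
    print(f"estimated stature: {stature_from_femur_male(femur):.1f} cm")
    # -> about 172.1 cm, in the range of the male means in Table 2.1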
Table 2.1 Euroamerican statures

Sample              Description               Dates        N     Stature¹  Reference

Males
Patuxent            Maryland rural            1658–1690    6     170       King & Ubelaker, 1996
Colonial-Civil War  Various                   1675–1879    21    173       Angel, 1976
Belleview           Georgia rural             1738–1756    3     170       Rathbun & Scurry, 1991
Old Quebec          Colonial military         1746–1747    30    173       Cybulski, 1988; Saunders, 1991
Walton              Connecticut rural         1750–1830    5     176       Bellantoni et al., 1997
Fort William Henry  British military          1755–1757    14    177       Steegmann, 1986
Fort Laurens        US military               1779         13    172       Sledzik & Sandberg, 2002
Bradford’s Company  Colonial military         1812–1814    180   174       Saunders, 1991
Snake Hill          US military               1814         23    174       Sledzik & Sandberg, 2002
St. Thomas’         Ontario village           1820–1860    127   172       Saunders et al., 2002; Sledzik & Sandberg, 2002
Prospect Hill       Ontario immigrants        1824–1879    17    173       Pfeiffer et al., 1989
Harvie              Ontario rural             1825–1894    5     171       Saunders & Lazenby, 1991
Highland Park       New York poorhouse        1826–1863    94    172       Higgins et al., 2002
Cross               Illinois rural            1829–1849    5     175       Larsen, Craig et al., 1995
Uxbridge            Massachusetts poorhouse   1831–1872    9     170       Sledzik & Sandberg, 2002; Wesolowsky, 1989
Mt. Gilead          Georgia rural             1832–1849    5     172       Wood et al., 1986
Voegtly             Pennsylvania town         1833–1861    32    170       Ubelaker & Jones, 2003
Glorieta Pass       US military               1862         24    173       Sledzik & Sandberg, 2002
Little Big Horn     US military               1876         8     176       Sledzik & Sandberg, 2002
West Point cadets²  US military               1840s–1870s  334   172       Komlos, 1987
Modern US²          General population        2003–2006    2331  178       McDowell et al., 2008

Females
Patuxent            Maryland rural            1658–1690    5     161       King & Ubelaker, 1996
Colonial-Civil War  Various                   1675–1879    7     160       Angel, 1976
Walton              Connecticut rural         1750–1830    8     166       Bellantoni et al., 1997
Harvie              Ontario rural             1825–1894    4     161       Saunders & Lazenby, 1991
Highland Park       New York poorhouse        1826–1863    64    161       Higgins et al., 2002
Cross               Illinois rural            1829–1849    6     163       Larsen, Craig et al., 1995
Mt. Gilead          Georgia rural             1832–1849    3     162       Wood et al., 1986
Voegtly             Pennsylvania town         1833–1861    14    160       Ubelaker & Jones, 2003
Modern US²          General population        2003–2006    2477  163       McDowell et al., 2008

¹ Mean values in cm.
² Documentary/living statures.
[Figure 2.4 plot: mean Z-scores ± SE for males and females at Beiliu, Jiangzhai, and Shijia (Yangshao), Kangjia (Longshan), and Xicun (Dynastic).]
Figure 2.4 Variations in long bone lengths between Yangshao and Longshan cultures, as
expressed in average Z-scores. (From Pechenkina et al., 2002; reproduced with permission of authors and John Wiley & Sons, Inc.)
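The Z-scores plotted in Figure 2.4 standardize each site’s long bone lengths against a pooled mean and standard deviation. A minimal sketch, with hypothetical femur lengths and site names drawn from the figure:

    # Sketch: site-average z-scores for a long bone measurement,
    # standardized against the pooled sample (values hypothetical).
    from statistics import mean, pstdev

    sites = {
        "Jiangzhai": [448.0, 452.0, 460.0],  # femur lengths, mm
        "Kangjia":   [433.0, 440.0, 438.0],
        "Xicun":     [429.0, 436.0, 431.0],
    }

    pooled = [x for values in sites.values() for x in values]
    m, s = mean(pooled), pstdev(pooled)

    for site, values in sites.items():
        z_scores = [(x - m) / s for x in values]
        print(f"{site}: mean z = {mean(z_scores):+.2f}")

Standardizing males and females separately, as the figure’s paired symbols suggest, would avoid conflating sexual dimorphism with inter-site differences.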
In Mesoamerica, prehistoric heights are greater than in modern Maya (Danforth, 1999; Márquez Morfín & del Ángel, 1997), which appears to be related to increased reliance on maize and associated nutritional decline, especially in the last 500 years.

Other archaeological settings show reduction in stature in the shift to agricultural economies or agricultural intensification. Analysis of height data shows that foragers or incipient farmers in settings in East Asia were taller and more robust than their farming descendants (Bulbeck & Lauer, 2006; Kennedy, 1984; Temple, 2008; but see Clark et al., 2014). With regard to millet farmers in the Wei and Yellow River area of China, earlier less intensive farming populations are considerably taller than later more intensive farmers (Pechenkina et al., 2002, 2007, 2013) (Figure 2.4). Similarly, comparisons of skeletal series from the Upper Paleolithic through the Neolithic in western Europe indicate a general reduction in average stature (Bennike & Alexandersen, 2007; Meiklejohn et al., 1984; Meiklejohn & Babb, 2011; Roberts & Cox, 2003, 2007; but see Jacobs, 1993). In some settings, reduction in stature coincided with resource intensification, with either agriculture (Pechenkina et al., 2002, 2013; Van Gerven et al., 1995) or foraging (Ginter, 2011). However, analysis of the record for Europe reveals no change in the trajectory of stature reduction with the foraging-to-farming transition (Meiklejohn & Babb, 2011). Nevertheless, the overall record suggests a link involving diet, resource acquisition, physiological stress, and terminal height.

Much of the research on body size in children and adults in archaeological settings is oriented toward tracking the consequences of adaptive transformations, primarily from foraging to farming; relatively little is known about other dietary transitions. The
consequences of change in dietary focus not involving agriculture are manifested in temporal comparisons of native populations from the Santa Barbara Channel region of southern California (Lambert, 1993; Walker, 2006; Walker & Thornton, 2002). In this region, populations shifted their dietary emphasis from terrestrial resources – especially plant foods – to marine resources after 500 BC (Glassow, 1996). Over the period of 5500 BC to AD 1250, stature was reduced by about 7 cm (Walker & Thornton, 2002; and see Lambert, 1993, and compare with Temple, 2008). Height reduction was fostered by decline in health due to the combined effects of declining nutrition and elevated infectious disease. Protein, mostly derived from fish, was abundant, but other important nutrients may have been lacking in the diets of later prehistoric populations. During the latest prehistoric period (post-AD 1250), there was a rebound in stature, possibly due to improved living conditions.

Similar trends in stature reduction have been documented in the Central Valley of interior California (Ivanhoe, 1995; Ivanhoe & Chu, 1996). Comparisons of populations spanning the period of 3000 BC to the mid-nineteenth century reveal statistically significant reductions in stature for both females and males (2.2% and 3.1%, respectively). Archaeological evidence indicates an increased reliance on acorns. Thus, stature reductions likely reflect nutritional stress owing to a focus on carbohydrates and a narrowing of the dietary spectrum in later prehistory in this setting.

While the foraging-to-farming transition shows a general pattern of stature reduction (Mummert et al., 2011), stature reductions identified in archaeological contexts are not universal. A number of regions show either no change, an increase, or a high degree of regional variability in stature (Danforth, 1994; Douglas & Pietrusewsky, 2007; Roberts & Cox, 2007; Temple, 2011). In the lower Illinois River valley, there is no clear trend of stature change in the comparison of early prehistoric through late prehistoric periods (Cook, 1984). This is especially significant because it indicates that reduced juvenile height in this setting did not result in reduced adult stature in later prehistoric agricultural groups. Likewise, temporal comparisons of stature in a diversity of archaeological populations – from Ontario, the northern Great Plains, Peru, and Chile – show no change in stature with the shift in adaptive strategies involving agriculture (Allison, 1984; various in Cohen & Crane-Kramer, 2007; Cole, 1994; Katzenberg, 1992; Steckel & Rose, 2002). From c. 8250 yBP to the colonial period in Ecuador, there is no evidence of stature decline despite increases in physiological stress (Ubelaker, 1994). All groups in Ecuador are relatively short-statured; thus stress (including poor nutrition) may have been severe throughout the entire sequence (Ubelaker, 1994).

The influence of nutritional deprivation on human growth and terminal height and/or long bone length is revealed in the study of components of past groups that may have been differentially buffered against stress. Comparison of elite and non-elite adults from Old World settings shows evidence of greater height in elites than in non-elites (Angel, 1975, 1984; Becker, 1993; Buzon, 2006; Vercellotti et al., 2011). In a Maitas-Chiribaya (c. 2000 yBP) population from northern Chile, shaman males are taller than other, non-elite males, which may indicate better health and resources in the former (Allison, 1984).
High-status adult males in some Mesoamerican populations appear to be taller than low-status individuals or the general population (Haviland, 1967; Helmuth & Pendergast, 1986–1987; Storey, 1998; but see Wilkinson & Norelli, 1981). Likewise, elite males are taller than non-elite adults in several contexts in the prehistoric Southeast, Midwest, and Southwest United States (Buikstra, 1976a; Cook, 1984; Hatch & Willey, 1974; Malville, 2008; Powell, 1988). These apparent status
differences in attained height suggest that elites may have had nutritional advantages, such as greater access to animal protein, resulting in greater attained height than in non-elite individuals (Malville, 2008). Many of the elite distinctions are in adult males, suggesting that the burden of stress may fall primarily on adult males in ranked societies, at least as it is exhibited in attained height. This discussion points to the critical importance of understanding body size in relation to resource availability and social context.

Analysis of elite and non-elite individuals from San Michele’s church in Trino Vercellese, Italy (c. AD 750–1300) shows considerable variation in stature – high-status adults are taller than low-status adults – that is clearly linked to relative social position in this Medieval setting (Porro et al., 1999). Additional analysis of stature, body mass, and body proportions in this series reveals significant differences in high-status adults (Vercellotti et al., 2011). In particular, the comparison of body segments revealed significant differences between low- and high-status males: distal lower limb length and body mass differ significantly between high- and low-status individuals. High-status men had longer distal lower limbs and were taller than low-status men. However, the taller, high-status men had the shortest relative lower limb length, indicating that height is not necessarily driven by limb length. Rather, other factors, such as trunk size, may have a more important influence on height, at least for members of a population having the most positive living conditions.

The association between stature and limb length in males is not shared by females in this setting. This may be due to greater susceptibility to growth disruption in males than in females. The argument for greater buffering in females is also supported by the greater prevalence of enamel hypoplasias in men, especially low-status men, than in women in this setting.
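The body-segment comparisons described above reduce to simple ratios of limb segment lengths to overall height. A minimal sketch with hypothetical values (not taken from Vercellotti et al., 2011):

    # Sketch: relative distal lower limb length by status group
    # (all measurements hypothetical; cm).

    def relative_tibia(tibia_cm: float, stature_cm: float) -> float:
        """Tibia length as a fraction of estimated stature."""
        return tibia_cm / stature_cm

    groups = {
        "high status": {"stature": 172.0, "tibia": 38.5},
        "low status":  {"stature": 165.0, "tibia": 36.0},
    }

    for label, d in groups.items():
        r = relative_tibia(d["tibia"], d["stature"])
        print(f"{label}: tibia/stature = {r:.3f}")

Distal limb segments are often regarded as especially sensitive to early-life conditions, which is why status contrasts in such ratios are considered informative.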
2.3.3 Cranial base height
Biological anthropologists note specific patterns of variability in skull base height (auriculare–basion or porion–basion distances) in selected samples, which Angel (1982) suggests are linked to nutritional adequacy during the years of growth and development. Poorly nourished individuals should have flatter cranial bases (called “platybasia”) than well-nourished individuals due to relatively greater deformation of supporting bone in response to the weight of the head and brain: the “weakening of the bone from nutritional deficiencies decreases its ability to resist gravitational pull, therefore inhibiting upward growth of the skull…Thus the amount of compression in this area should give an indication of nutritional status” (Angel, 1982:298). Angel tested his hypothesis by comparing skull base heights from skeletal series representing nutritionally disadvantaged and advantaged populations. These comparisons revealed that the advantaged group has much higher cranial bases than the disadvantaged group, which Angel concludes “fits a nutritionally caused mechanical weakening of bone supporting a heavy head” (1982:302).

The study of archaeological remains from the eastern Mediterranean Basin indicates variation in cranial base height that Angel (1984; Angel & Olney, 1981) attributed to nutritional quality: crania from populations experiencing nutritional deprivation are platybasic, whereas crania from populations or segments of populations (e.g., Middle Bronze Age “royalty”) with nutritionally adequate diets are not.
The link between cranial base height and nutritional quality may be more apparent than real, however. Cranial base cartilages, like epiphyseal cartilages of limb bones, are primary cartilages. Primary cartilages have intrinsic growth capabilities that are characteristically resistant to compressive loading. This suggests that a model invoking compression as a causal factor in determining cranial base form is incorrect. Therefore, the phenomenon of cranial base flattening, while interesting, is largely unexplained.
2.3.4 Pelvic morphology
Severe vitamin D deficiency (rickets in children and osteomalacia in adults), caused by prolonged lack of exposure to sunlight or inadequate intake of foods with vitamin D (e.g., eggs and oily fish), weakens maturing bone because the rapidly forming protein matrix does not mineralize sufficiently. This results in pelvic deformation due to the forces created by body weight and gravity (Angel, 1975, 1978a, 1982, 1984; Angel & Olney, 1981; Brickley & Ives, 2008; Brickley et al., 2005, 2007; Greulich & Thoms, 1938; Nicholson, 1945; Thoms, 1947, 1956; Thoms et al., 1939; Walker et al., 1976). Pelvic inlet deformation is characterized by a reduction in the anteroposterior diameter relative to the mediolateral diameter (called “platypellism”). Flattening of the pelvis is well documented in clinical populations (Brickley & Ives, 2008; Greulich & Thoms, 1938; Nicholson, 1945; Thoms, 1947; Vieth, 2003) and in modern anatomical samples comparing lower- and middle-class groups from the United States (Angel, 1982). For example, British women who were young children during the war years of 1914–1918 have flattened pelvic inlets (Nicholson, 1945). Presumably, these women had relatively poor nutrition during those years. Consistent with the relationship between growth and nutritional status, women with flattened pelves also tend to be short-statured.

Comparisons of pelvic inlet form between earlier and later (or modern reference) populations suggest improvements in nutritional health in several settings, including the eastern Mediterranean (Angel, 1984; Angel & Olney, 1981), North America (Angel, 1976), and Sudanese Nubia (Sibley et al., 1992). Preliminary evidence shows differences in pelvic shape by social status group. Low-status adult females from the Middle Woodland (Klunk and Gibson Mound groups) period in the lower Illinois River valley have flatter pelvic inlets than high-status adult females (Brinker, 1985). These differences appear to reflect better nutrition in the high-status women than in the low-status women.

Other aspects of pelvic morphology may also be linked to negative environmental factors. Sciatic notch widths are appreciably larger in nutritionally stressed eighteenth- and nineteenth-century British from St. Bride’s Church, London, than in better-fed twentieth-century Americans (Walker, 2005). Rickets was a severe health problem in industrial England, and is well documented in the St. Bride’s Church population. Archival documents indicate that the births of St. Bride’s individuals with wide sciatic notches occurred during cold months of the year, the period when rickets was especially prevalent. Analysis of a contemporary population from Spitalfields, London, reveals that individuals with rickets had also been exposed to extremely cold temperatures during the first year of life (Molleson & Cox, 1993). Thus, the relatively wide sciatic notches in the St. Bride’s Church population appear to be remnants of early childhood stress (Walker, 2005).
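Pelvic inlet flattening is typically quantified with a brim index, the anteroposterior (conjugate) diameter expressed as a percentage of the transverse diameter. A minimal sketch; the category cutoffs follow the classical Turner scheme and should be taken as approximate conventions:

    # Sketch: pelvic brim index = 100 * AP / transverse diameter (mm).
    # Approximate classical cutoffs: platypellic < 90,
    # mesatipellic 90-94.9, dolichopellic >= 95.

    def brim_index(ap_mm: float, transverse_mm: float) -> float:
        return 100.0 * ap_mm / transverse_mm

    def classify(index: float) -> str:
        if index < 90.0:
            return "platypellic (flattened inlet)"
        if index < 95.0:
            return "mesatipellic"
        return "dolichopellic"

    idx = brim_index(100.0, 125.0)  # hypothetical inlet diameters
    print(f"brim index {idx:.1f}: {classify(idx)}")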
2.3.5 Long bone diaphyseal form
Pronounced bowing of the lower limb long bones is another skeletal deformation in rickets and osteomalacia. As with the pelvis, most bowing deformities occur during the first several years of life when the skeleton is undergoing rapid growth, especially between ages six months and three years (Brickley & Ives, 2008; Stuart-Macadam, 1989). Thus, most femoral diaphyseal deformations associated with severe vitamin D deficiency seen in adults likely occurred during childhood. However, deformation can occur in adulthood, such as with bending of the proximal femoral diaphysis and pelvic deformity (Brickley et al., 2007).

Vitamin D deficiency became highly prevalent during the Industrial Revolution, especially in large, densely populated towns and cities in Europe. Culturally influenced avoidance of sunlight (e.g., excess clothing, infant swaddling) can also decrease vitamin D synthesis, as documented in Asia and North Africa (Fallon, 1988; Kuhnke, 1993). Increased availability of vitamin D-enriched foods and reduced air pollution resulted in a virtual disappearance of the disease in industrialized nations during the twentieth century.

Skeletal evidence of rickets is uncommon prior to the Medieval period in Europe. In Medieval and later skeletal samples from Europe, a number of long bone deformities – especially severe bowing of upper and lower limb bones – have a rachitic origin (Gejvall, 1960; Giuffra, Vitiello et al., 2013; Mays et al., 2006; Mays, Ives et al., 2009; Møller-Christensen, 1958; Molleson & Cox, 1993) (Figure 2.5). Extreme bowing of lower limb bones of an eight-year-old recovered from an early nineteenth-century African American cemetery in Philadelphia probably resulted from rickets (Angel et al., 1987). A significant proportion of adult males and females in the same population – 35% and 20%, respectively – show bowing resulting from childhood growth disturbance. Similar patterns of long bone bowing are documented in a nineteenth-century African American series in North Carolina (Lambert, 2006) and in Old World contexts from the Iron Age site of Mahujhari, India (Kennedy, 1984), and from Mesolithic, Bronze Age, and Medieval Europe (Giuffra, Vitiello et al., 2013; Meiklejohn & Zvelebil, 1991).

The study of long bones from a large cemetery in nineteenth-century Birmingham, England, revealed a high prevalence of skeletal deformities due to rickets. The condition contributed to reduced stature, especially for individuals over two years of age, due to the combination of femoral bowing and endochondral bone deficiency (Mays, Brickley et al., 2009). This investigation links lesion prevalence (rickets) and growth analysis.

Rickets is often associated with vitamin D deficiency in social contexts involving less advantaged members of communities. The study of the remains of the very wealthy and elite Medici family of sixteenth-century Italy reveals, however, that children (newborn to five years old) interred in the Medici burial crypt of the Basilica of San Lorenzo have deformed limb bones, both upper and lower. These deformations in both upper and lower limbs reflect modifications occurring when these children were first crawling and subsequently walking (Giuffra, Vitiello et al., 2013). In addition to dietary deficiencies, historical sources indicate an emphasis on swaddling – wrapping infants in heavy layers of cloth – and sequestration in homes well away from sunlight.
Thus, even children of the most advantaged members of society, living in a region with considerable sunlight, were at considerable risk of developing rickets.
Figure 2.5 Rickets in the left ulna (left) and radius (right) of a juvenile (four to five years old); Medieval period, Gruczno, Poland. (From Kozłowski, 2012; photograph by Tomasz Kozłowski.)
Prehistoric evidence of rickets in the Americas is exceedingly rare, but a pattern of femoral and tibial shaft deformation in at least one setting in the American Southwest has been identified. This appears to be associated with a segment of a late prehistoric society that had experienced relatively elevated levels of physiological stress generally (Palkovich, 2008).

Flattening of femoral and tibial diaphyses has been documented in numerous archaeological skeletal samples worldwide (see Chapter 6). The primary indices measuring the degree of flatness of femora and tibiae include the meric index (anteroposterior flattening of the proximal femoral diaphysis), the pilasteric index (mediolateral flattening of the femoral
diaphysis), and the cnemic index (mediolateral flattening of the tibial diaphysis at the nutrient foramen). Some attribute diaphyseal flattening to nutritional stress (Adams, 1969; Angel, 1984; Buxton, 1938). Buxton (1938) asserted that less bone is required in the construction of a diaphysis if it is flattened rather than round. He viewed the temporal trend toward rounder diaphyses as representing an increase in the amount of bone, inferring a decline in nutritional deficiency in recent “civilized” populations. Structural analysis of long bone diaphyses reveals, however, that flattening is related not to the amount of bone present but to the manner in which it is distributed when viewed in cross-section (Ruff, 2008). Mechanical loading, not nutritional stress, is the primary determinant of the flatness of long bone diaphyses (see Chapter 6). Nutritional deprivation and other physiological stressors certainly have an influence on the amount of bone, but the relationship between nutritional status and diaphyseal shape is unsubstantiated.
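The three indices just defined are simple ratios of anteroposterior (AP) to mediolateral (ML) diameters taken at standard locations. A minimal sketch (measurements hypothetical; published cutoff conventions vary somewhat by author):

    # Sketch: standard diaphyseal shape indices (diameters in mm).

    def meric_index(ap_subtroch: float, ml_subtroch: float) -> float:
        """Proximal femur: AP * 100 / ML; low values (roughly < 85)
        are conventionally called platymeric."""
        return 100.0 * ap_subtroch / ml_subtroch

    def pilasteric_index(ap_midshaft: float, ml_midshaft: float) -> float:
        """Femoral midshaft: AP * 100 / ML; high values indicate a
        pronounced pilaster (posterior buttress)."""
        return 100.0 * ap_midshaft / ml_midshaft

    def cnemic_index(ml_foramen: float, ap_foramen: float) -> float:
        """Tibia at the nutrient foramen: ML * 100 / AP; low values
        indicate a flattened (platycnemic) shaft."""
        return 100.0 * ml_foramen / ap_foramen

    print(f"meric: {meric_index(24.0, 31.0):.1f}")            # ~77.4
    print(f"pilasteric: {pilasteric_index(30.0, 27.0):.1f}")  # ~111.1
    print(f"cnemic: {cnemic_index(22.0, 34.0):.1f}")          # ~64.7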
2.3.6 Vertebral neural canal size
The effects of catch-up growth on stature and long bone lengths are problematic for documenting the stress history of an individual during the growth years. An individual may be stressed early in life, but amelioration of negative conditions (e.g., improvement in nutritional status) during later juvenile years may obliterate evidence of growth disruptions that occurred earlier in life. In the Dickson Mounds, Illinois, series, for example, although juvenile growth became stunted in the transition to intensive farming for the period of AD 950 to 1300, no appreciable reductions occurred in adult height (Lallo, 1973). Thus, adult heights in this population are uninformative about juvenile stress. The temporal similarity of stature in the Dickson Mounds series may simply be due to growth recovery.

Vertebral growth provides a means of addressing the problem of growth stress identification not possible with attained height. At the time of birth, growth of the vertebral neural canal is approximately 65% complete; full size is reached by about four years of age (Clark, 1988; Clark et al., 1986). Vertebral body height, in contrast, continues to grow into early adulthood, well into the third decade of life. Thus, early and late stress in the life history of the individual is represented in the respective sizes of the vertebral neural canal and vertebral body height in adult skeletons. If there is a reduction in canal size but not in vertebral height, then catch-up growth likely occurred following early stress (prior to four years of age). If both neural canal size and vertebral body height are small, then stress was likely present throughout most of the years of growth and development, certainly after four years of age and possibly into adulthood (Clark, 1988; Clark et al., 1986).

Analysis of thoracic and lumbar vertebrae from the Dickson Mounds site reveals that growth of the neural canal was completed prematurely, but growth in vertebral body height continued through the juvenile years into adulthood (Clark et al., 1986). This growth pattern suggests that stress amelioration in later childhood accounts for the similarity in adult long bone lengths and stature in the earlier and later populations from the Dickson Mounds site.

Young adults (15–25-year age group) in the Dickson Mounds series have significantly smaller vertebral neural canals than older adults (25+ years) (Clark et al., 1986). Similarly, in the Medieval-period Fishergate House series, United Kingdom, individuals aged
17–25 years have significantly smaller vertebral neural canals than older adults (Watts, 2011). This size variation suggests a link between skeletal development and conditions that predisposed these individuals to earlier death. Simply, individuals stressed during the juvenile years of growth and development are likely to die earlier than individuals who are either not stressed or less stressed.
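The inferential logic described in this section can be written as a simple decision rule. The sketch below uses z-scores against a sample mean, with an illustrative threshold of z < -1 that is a hypothetical choice, not a published criterion:

    # Sketch: classifying growth-stress history from two vertebral
    # dimensions. Neural canal size is fixed by ~4 years of age;
    # vertebral body height keeps growing into early adulthood.

    def stress_history(canal_z: float, body_height_z: float) -> str:
        small_canal = canal_z < -1.0
        short_body = body_height_z < -1.0
        if small_canal and short_body:
            return "stress likely persisted through the growth years"
        if small_canal:
            return "early stress (before ~4 years) with later catch-up growth"
        if short_body:
            return "later-childhood or adolescent stress"
        return "no marked skeletal evidence of growth stress"

    print(stress_history(canal_z=-1.6, body_height_z=0.2))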
2.4 Growth and development: dental
2.4.1 Dental development rate
Dental development comprises two components: formation of crowns and roots, and eruption of teeth. Dental development overall is less sensitive to environmental constraints than is skeletal development (Cardoso, 2007; Demirjian, 1986; Smith, 1991). The relatively greater resistance of dental tissues to environmental insults has been demonstrated by the observation that various stressors influencing stature and bone age have a relatively small effect on dental development (reviewed in Cardoso, 2007; Smith, 1991). The high degree of genetic control over dental development serves to minimize the effects of poor environmental circumstances (Cardoso, 2007; Garn et al., 1965; Holman & Yamaguchi, 2005; Moorrees & Kent, 1981; Smith, 1991).

Tooth formation rates are relatively free of environmental influence (e.g., nutrition), which is indicated by low correlations between tooth formation and skeletal age, stature, relative body weight, and fatness, and by the lack of any kind of secular trend (Smith, 1991). Eruption rates and timing, however, are more responsive to environmental factors, such as caries experience, tooth loss, and especially poor nutritional status (Alvarez, 1995; Alvarez & Navia, 1989; Alvarez et al., 1988, 1990; Gaur & Kumar, 2012; Holman & Yamaguchi, 2005; Oziegbe et al., 2014; Ronnerman, 1977). For example, eruption and exfoliation of deciduous teeth are delayed in nutritionally deprived children in comparison with well-nourished children from Cantogrande, Peru (Alvarez, 1995; Alvarez et al., 1988; and see Barrett & Brown, 1966; Cardoso, 2007). Additionally, unlike formation, eruption timing shows some correlation with body size (Garn et al., 1960; McGregor et al., 1968).

It is not possible to identify delays in dental development timing in archaeological series based on teeth alone, because age-at-death must be determined by comparing the archaeological dentitions with some standard based on individuals of known age (Moorrees et al., 1963). However, relative differences between dental and skeletal development may provide some insight into growth stress. Comparison of skeletal age and dental age in Medieval-period skeletons from Sudanese Nubia reveals that most individuals (70.5%) have skeletal ages that are younger than their dental ages (Moore et al., 1986). These relative differences indicate that skeletal growth may have been retarded. Dietary reconstruction suggests that growth retardation may have been due to nutritional deprivation, a finding that is consistent with other skeletal indicators of stress (e.g., porotic hyperostosis: Moore et al., 1986). Analysis of skeletal and dental development where age is known confirms that dental development is less subject to environmental perturbation than is skeletal development. For example, Cardoso’s (2007) analysis shows that individuals of lower socioeconomic status in an urban setting in Portugal have more delayed skeletal development than dental development, by as much as half a year.
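The skeletal-versus-dental age comparison used in the Nubian study can be sketched as a per-individual difference; the identifiers and ages below are hypothetical:

    # Sketch: skeletal minus dental age per individual. A negative
    # difference (skeletal younger than dental) suggests retarded
    # skeletal growth. All data hypothetical.

    individuals = [
        {"id": "B-07", "dental_age": 8.0, "skeletal_age": 6.5},
        {"id": "B-12", "dental_age": 5.0, "skeletal_age": 5.0},
        {"id": "B-19", "dental_age": 11.0, "skeletal_age": 9.0},
    ]

    delayed = 0
    for ind in individuals:
        diff = ind["skeletal_age"] - ind["dental_age"]
        if diff < 0:
            delayed += 1
        print(f"{ind['id']}: skeletal - dental = {diff:+.1f} years")

    print(f"{100 * delayed / len(individuals):.1f}% skeletally delayed")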
2.4.2 Tooth size
Like bone size, tooth size involves a complex interplay between environment and heredity. Unlike skeletal elements, tooth crowns do not remodel once fully formed. Thus, teeth provide a record of size well in advance of the adult years. Tooth size appears to be highly heritable, indicating that variation between and within human populations can be explained mostly by genetic differences (see Chapter 7; and see Kieser, 1990). Twin studies reveal that as much as 80% to 90% of observed variation in tooth size is due to genetic influence for anterior teeth and 60% for posterior teeth (Townsend et al., 1994). Other estimates of heritability vary widely (Kieser, 1990), but most workers agree that environmental influences on tooth size are significant, albeit small (Dempsey et al., 1996; Dempsey & Townsend, 2001; Garn et al., 1965; Garn, Osborne et al., 1979; Hughes & Townsend, 2013; Potter et al., 1983; Townsend, 1980, 1992; Townsend & Brown, 1978). Thus, tooth size represents a measure of deviation from genetic growth potential in response to some stressor or stressors (Apps et al., 2004; Bailit et al., 1968, 1970; Evans, 1944; Garn, Osborne et al., 1979; Garn et al., 1980; Goose, 1967). Placental insufficiency, low birth weight, maternal health status, nutritional status, and a variety of genetic and congenital defects (e.g., Down's syndrome, cleft palate, prenatal rubella, congenital syphilis) are linked with reduced tooth size (Apps et al., 2004; Cohen et al., 1979; Garn & Burdi, 1971; Garn, Osborne et al., 1979; Goodman, 1989; Seow & Wan, 2000). Understanding the influence of nutrition on tooth size is hampered by the paucity of data holding genetic factors constant in situations of variable nutritional quality. Nonetheless, these findings are consistent with experimental research on laboratory animals showing tooth size reduction in response to developmental disruptions and nutritional deprivations (Bennett et al., 1981; Holloway et al., 1961; Paynter & Grainger, 1956; Riesenfeld, 1970; but see Murchison et al., 1988).

Prehistoric maize agriculturalists from coastal Georgia post-dating AD 1150 have smaller teeth than did their foraging predecessors (Larsen, 1982, 1983). Tooth size was reduced in both the permanent and deciduous dentitions, which may reflect an increase in physiological stress due to declines in dietary quality and health status generally. Tooth size reduction in the primary dentition suggests a negative change in maternal health status and placental environment, as deciduous teeth form in utero. Given the relatively narrow temporal window of tooth size reduction in this and other populations with the shift from foraging to farming or farming intensification (Bulbeck & Lauer, 2006; Christensen, 1998; Coppa et al., 1995; Hinton et al., 1980; Meiklejohn & Zvelebil, 1991; Pinhasi & Meiklejohn, 2011; y'Edynak, 1989), these changes likely indicate an increase in stress that accompanied this transition. In contrast, Lunt (1969) documented a temporal increase in permanent tooth size from Medieval times to the present in Denmark, which is attributed to improved dietary conditions in later times (and see Lavelle, 1968). Dental size decrease or increase in Holocene populations cannot be explained fully by nonevolutionary factors. In prehistoric Nubian populations, there is a relatively greater reduction in posterior tooth size than in anterior tooth size, which Calcagno (1989) attributes to a selective advantage for smaller posterior teeth in caries-prone agriculturalists.
These findings underscore the complexity of tooth size, requiring consideration of both extrinsic and intrinsic circumstances in specific settings. The hypothesis that members of a population who suffer most from illness and physiological stress are more likely to die at an earlier age than are other (healthier) members of a
Table 2.2 Juvenile and adult permanent tooth size (buccolingual; mm) from Santa Catalina de Guale, St. Catherines Island, Georgia (Adapted from Simpson et al., 1990: Table 5-1)

                     Juvenile                Adult
Tooth             N   Mean    SD          N   Mean    SD    % Difference (a)

Maxillary
  I1             16   7.66   0.56        33   7.48   0.40      −2.4
  I2             23   6.94   0.39        37   6.91   0.36      −0.4
  C              28   8.59   0.66        55   8.64   0.47       0.6
  PM1            34  10.12   0.59        70  10.09   0.49      −0.3
  PM2            25   9.77   0.56        72   9.89   0.64       1.2
  M1             38  11.93   0.68        77  12.14   0.51       1.7
  M2             21  12.09   0.67        85  12.01   0.68      −0.7

Mandibular
  I1             20   5.84   0.38        22   5.89   0.33       0.8
  I2             27   6.23   0.40        47   6.34   0.38       1.7
  C              32   7.51   0.57        77   7.85   0.53       4.3 (c)
  PM1            37   8.09   0.46        95   8.30   0.44       2.5 (b)
  PM2            33   8.42   0.52        95   8.63   0.47       2.4 (b)
  M1             45  11.11   0.49        72  11.24   0.52       1.2
  M2             31  10.76   0.57        87  10.76   0.61       0.0

(a) Computed by the formula: 100 − [100 (min. mean/max. mean)].
(b) P < 0.05 (Student's t-test).
(c) P < 0.01 (Student's t-test).
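As a worked check of footnote (a), using the maxillary M1 means in Table 2.2: 100 − [100 × (11.93/12.14)] = 1.7, matching the tabulated value. The differences are evidently signed negative where the juvenile mean exceeds the adult mean (e.g., maxillary I1: 7.66 versus 7.48 mm, tabulated as −2.4), so positive entries mark tooth types that are smaller in juveniles.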
population has been tested by the comparison of permanent tooth size of juveniles and adults in different settings in the American Southeast, namely in the late prehistoric Averbuch series from the Middle Tennessee River valley (Guagliardo, 1982a) and colonial Spanish missions (Simpson et al., 1990; Stojanowski et al., 2007). Both populations were sedentary maize agriculturalists exhibiting skeletal evidence of high levels of physiological stress and poor health. In these settings, juveniles have smaller permanent teeth than adults. In the Santa Catalina series, nine of 14 tooth types examined are smaller in juveniles than in adults (Table 2.2). The other tooth types show either no difference or slightly smaller size in adults. All statistically significant differences are for smaller juvenile teeth. In these samples, juvenile–adult size differences are more common in mandibular teeth than in maxillary teeth, suggesting that the lower dentition may be more developmentally sensitive to stress than is the upper dentition.

These studies indicate the failure of teeth to reach their growth potential in circumstances involving increased stress. This conclusion lends support to Sagne's (1976) hypothesis that Medieval-era Swedes who died young received suboptimal nutrition during the years of dental development, resulting in smaller teeth (and see Lunt, 1969). These investigations suggest that individuals with small teeth – those who were most stressed – had a reduced lifespan, which is consistent with evidence based on vertebral neural canal size, height
(Kemkes-Grottenthaler, 2005), and other dental indicators (e.g., enamel defects: Goodman & Armelagos, 1988). As with vertebral neural canal size and height, it is unlikely that small teeth led to reduced longevity. Rather, size reduction is symptomatic of environmental stress that contributed to both smaller teeth and early death.
2.4.3 Fluctuating and directional asymmetry
Beginning with the work of developmental geneticists in the 1940s, a consensus has emerged that bilateral structures normally developing as mirror images of each other will develop differently in response to environmental instability (Kieser, 1990). Van Valen (1962) suggested that one type of asymmetry – which he called fluctuating asymmetry – reflects the inability of body tissues, when confronted with stress, to follow their normal bilateral growth pathways. Thus, in settings involving elevated stress, teeth and other bilateral structures fail to develop evenly on both sides. Support for this hypothesis is provided by the study of laboratory animals exposed to induced stress (e.g., hypothermia, blood loss, heat, cold, diabetes, audiogenic stress). Stressed animals display an increase in fluctuating asymmetry in a variety of bilateral anatomical structures, including dental and skeletal tissues (Albert & Greene, 1999; Klingenberg, 2003; Kohn & Bennett, 1986; Møller & Swaddle, 1997; Nass, 1982; Palmer & Strobeck, 2003; Polak, 2003; Richtsmeier et al., 2005; Sciulli et al., 1979; Siegel & Mooney, 1987; Siegel et al., 1977).

The study of odontometric fluctuating asymmetry in living, archaeological, and paleontological samples presents mixed and sometimes contradictory results (Barrett et al., 2012; Bassett, 1982; Black, 1980; Corruccini et al., 2005; Doyle & Johnston, 1977; Guatelli-Steinberg et al., 2006; Harris & Nweeia, 1980; Kieser & Groeneveld, 1998; O'Connell, 1983; Suarez, 1974; Townsend & Brown, 1980; Townsend & Farmer, 1998). Left–right tooth size differences are present in archaeological samples, including the Archaic-period Indian Knoll (Kentucky), late prehistoric Campbell site (Missouri), and contact-era Larson site (South Dakota) (Perzigian, 1977). The Indian Knoll dentitions are the most asymmetric, which Perzigian (1977) attributes to poor diet in comparison with the late prehistoric and contact-era agriculturalists. His interpretation of a decrease in dietary quality in comparing prehistoric foragers and farmers runs counter to the conclusions drawn by many bioarchaeologists working in the Eastern Woodlands – namely, that nutritional stress is more pronounced in farmers than in foragers (Larsen, 1995). Thus, the pattern of decreasing asymmetry in these groups remains unexplained. Temporal comparisons of a series of populations from prehistoric Paloma, Peru reveal a trend of decreasing asymmetry over time, and in this setting, substantive skeletal and dental evidence indicates improving health over time (Benfer, 1984).

Using simulation sampling, Smith and coworkers (1982; and see Garn, Cole et al., 1979) assert that the amount of asymmetry is highly sensitive to sample size. They argue that sample sizes of several hundred individuals are required in order to detect meaningful differences between populations in fluctuating asymmetry. Similarly, Greene (1984) found that the confounding effect of measurement error can both obscure real differences and artificially create others.
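The sampling and measurement-error cautions raised by Smith and coworkers (1982) and Greene (1984) are easy to demonstrate numerically. The following minimal simulation sketch is not drawn from either study – the parameter values and the simple additive error model are illustrative assumptions – but it shows how observer error inflates apparent fluctuating asymmetry and how unstable estimates become at small sample sizes:

import numpy as np

rng = np.random.default_rng(42)

# Illustrative (hypothetical) parameters, in millimeters:
TRUE_FA_SD = 0.10    # SD of real left-right developmental noise
MEAS_ERR_SD = 0.08   # SD of a single observer measurement
N_STUDIES = 1000     # simulated "studies" per sample size

def observed_fa(n_individuals):
    """Mean absolute left-right difference, as measured with error."""
    true_diff = rng.normal(0.0, TRUE_FA_SD, n_individuals)
    err_left = rng.normal(0.0, MEAS_ERR_SD, n_individuals)
    err_right = rng.normal(0.0, MEAS_ERR_SD, n_individuals)
    return np.abs(true_diff + err_left - err_right).mean()

# Expected |L - R| in the absence of measurement error:
print(f"true mean |L-R|: {TRUE_FA_SD * np.sqrt(2.0 / np.pi):.3f} mm")

for n in (30, 300):
    fa = np.array([observed_fa(n) for _ in range(N_STUDIES)])
    print(f"n = {n:3d}: mean observed FA = {fa.mean():.3f} mm, "
          f"SD across studies = {fa.std():.3f} mm")

Under these assumed values, measurement error alone inflates the mean left–right difference by roughly half again its true value, and the study-to-study spread at n = 30 is about three times that at n = 300 – the two effects that, respectively, Greene and Smith and coworkers warn can obscure real differences between populations or manufacture spurious ones.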
Kieser and coworkers (1986a, 1986b; and see Kieser, 1990) analyzed fluctuating dental asymmetry as an indicator of environmental disruption in highly stressed Lengua Indians presently inhabiting the Chaco region of Paraguay. Application of Euclidean map analysis, a statistically powerful approach whereby each dentition is considered an independent variable, produces a measure of asymmetry computed by dividing the sum of Euclidean distances for tooth antimeres by the product of the mean individual tooth size and the number of tooth pairs (rendered symbolically below). Comparisons with well-nourished, disease-free Whites reveal much greater asymmetry in the Lengua population. Younger, more acculturated Lengua with better diets and greater access to Western healthcare show lower asymmetry values than more traditional Lengua experiencing elevated stress. Using the same methodology, Townsend and Farmer (1998) determined asymmetry scores in a sample of South Australian children. Most children are healthy and have correspondingly low asymmetry scores. A few individuals with low birth weight have relatively high asymmetry scores.

Similar patterns of fluctuating asymmetry as a biological indicator of developmental instability have been documented in other hard tissues. Based on the earlier finding that environmental stress was likely higher in the early Christian period than in the late Christian period in Nubia (and see earlier), DeLeon (2007) tested the hypothesis that there is a decrease in fluctuating asymmetry in the craniofacial skeleton. Her analysis of landmark coordinate data revealed a pronounced decrease in the level of fluctuating asymmetry in the later population compared to the earlier population (Figure 2.6). Interestingly, the lateral landmarks show more asymmetry than the medial landmarks, supporting the notion that developmental instability is trait specific and not random. Similarly, epiphyseal union in the earlier, more highly stressed population shows considerably more bilateral asymmetry than that found in the later population (Albert & Greene, 1999), also consistent with other bioarchaeological evidence for a decline in stress in the temporal sequence in Nubia. The presence of fluctuating asymmetry in dental and craniofacial tissues would suggest the likelihood of poor health outcomes in adults. Indeed, analysis of nineteenth- and early twentieth-century remains from Portugal with known causes of death reveals that adults who died from degenerative diseases (e.g., chronic heart disease, diabetes) had higher rates of fluctuating asymmetry than those with other causes of death (e.g., infectious disease) (Weisensee, 2013).

Directional asymmetry is another pattern of bilateral variation that has been identified in analysis of tooth size in human populations. This pattern is characterized by larger teeth on one side of the dental arch than on the other. Directional asymmetry is infrequently reported for human populations (Ben-David et al., 1992; Boklage, 1987; Harris, 1992; Lundström, 1948; Mizoguchi, 1986; Sharma et al., 1986; Townsend & Farmer, 1998). Harris (1992) detected directional asymmetry in a large sample of permanent teeth of Euroamerican adolescents, with a consistently greater left dominance in one dental arch. Directional asymmetry is unexplained by current models, but may be an indicator of developmental instability arising from stress (Harris, 1992).
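Stated symbolically (a reconstruction from the verbal description above, not the notation published by Kieser and coworkers), the Euclidean map-based index takes the form

\[
\mathrm{FA} = \frac{\sum_{i=1}^{k} d(L_i, R_i)}{\bar{m}\,k},
\]

where d(L_i, R_i) is the Euclidean distance between the measurements of the i-th left–right pair of antimeres, \bar{m} is the individual's mean tooth size, and k is the number of antimere pairs scored. Larger values indicate greater departure from bilateral symmetry, scaled so that individuals with large teeth or more scorable pairs are not automatically scored as more asymmetric.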
The strong environmental basis of directional (and fluctuating) asymmetry is inferred from the observation of low intraclass correlations between monozygotic and dizygotic twins (Townsend, 1992). Additionally, tests for spurious genetic variance indicate a virtual lack of evidence for a significant genetic basis.
Figure 2.6 Fluctuating asymmetry observed in Early and Late Period Kulubnarti, Sudan (the figure's panels show craniofacial landmark configurations – LAM, NAS, PTP, AST, and GPF – for each period). More fluctuating asymmetry is observed in the Early Period sample, indicating elevated physiological stress. (From DeLeon, 2007; reproduced with permission of author and John Wiley & Sons, Inc.)
2.5 Skeletal and dental pathological markers of deprivation

2.5.1 Anemia and abnormal cranial porosities
Anemia (“without blood”) is a pathological deficiency in red blood cells or hemoglobin, the key protein that chemically binds with oxygen, transporting it to the body tissues. Thus, with a deficiency in either red blood cells or hemoglobin, an individual has a reduced availability of
oxygen. The outcome of such a deficiency and associated compromised access to oxygen is reduced physical activity in general and work performance in particular (Crouter et al., 2012; Haas, 2006). There are three causes of anemia – blood loss, impaired production of red blood cells, and increased red blood cell destruction – or any combination of these conditions. Some of the anemias are genetic, such as thalassemia, sickle cell anemia, nonspherocytic hemolytic anemia (e.g., glucose-6-phosphate dehydrogenase deficiency [favism], pyruvate kinase deficiency), spherocytosis, and rarely, hereditary elliptocytosis. In living and past populations, the majority of the anemias are acquired, caused by either blood loss or nutritional deficiencies. There are regions of the world, however, where genetic anemias are highly prevalent, reflecting natural selection in areas where malaria is endemic. Angel (1966a) made the case that pathological conditions he documented in the Mediterranean basin were largely genetic in origin. DNA sequence analysis of an eight-year-old from the site of el Akhziv, Israel, revealed that the child was homozygous for β-thalassemia (Filon et al., 1995), thus providing some support for Angel's hypothesis. However, it now seems unlikely that the record of cranial porosities in the region is largely due to genetic causes (and see later).

In the normal individual, the rate of red blood cell production is balanced by the rate of red blood cell destruction. The balance between production and destruction of red blood cells requires a number of key micronutrients, including especially the essential amino acids, iron, and vitamins A, B12, B6, and folic acid (Martini & Ober, 2001; Walker et al., 2009). Iron is an essential element of hemoglobin, thus serving as a key component of oxygen transport. Iron deficiency is the most common cause of anemia, owing to blood loss, iron-poor diets, and gastrointestinal malabsorption of the element. The absorption of iron is dependent upon its source, from either heme or non-heme foods. Generally, heme sources of iron are efficiently absorbed, with meat being among the best (Baynes & Bothwell, 1990). Iron in meat does not require processing in the stomach, and the amino acids derived from digestion of meat help to enhance iron absorption (Wadsworth, 1992). Iron bioavailability of non-heme sources is highly variable, but plant sources are generally poorly absorbed. Various substances found in plants inhibit iron absorption, such as phytates in many nuts (e.g., almonds, walnuts), cereals (e.g., maize, rice, whole wheat flour), and legumes (Baynes & Bothwell, 1990). Unlike protein in meat, plant proteins (e.g., in soybeans, nuts, lupines) inhibit iron absorption. Tannates found in tea and coffee also reduce iron absorption significantly (Hallberg, 1981).

A number of foods enhance iron bioavailability, especially when consumed in combination with non-heme iron. For example, ascorbic acid promotes iron absorption (Baynes & Bothwell, 1990; Hallberg, 1981; Wadsworth, 1992). Citric acid from various fruits and lactic acid from fermented cereal beers are implicated in promoting iron absorption (Baynes & Bothwell, 1990). Absorption of non-heme iron (e.g., in maize) is enhanced considerably by concurrent consumption with meat and fish (Layrisse et al., 1968; Navas-Carretero et al., 2008). If either absorption or consumption of iron or the other micronutrients is low, red blood cell production is reduced, resulting in anemia.
In response to anemia, the body first steps up its production of red blood cells (Stockmann & Fandrey, 2006). If this first response fails to alleviate the shortfall, the regions of the skeleton responsible for red blood cell production are activated, resulting in an expansion of the cranial vault marrow (diploë). This elevated red blood cell production occurs at the expense of the ectocranial surfaces, resulting in resorption
Figure 2.7a Porotic hyperostosis on frontal and parietals; Santa Maria de los Yamassee, Amelia Island, Florida. (Photograph by Mark C. Griffin.) (A black and white version of this figure will appear in some formats. For the color version, please refer to the plate section.)
of the bone and a characteristic porosity called porotic hyperostosis, especially involving the cranial vault bones (Figure 2.7). Marrow expansion, therefore, is clearly caused by anemia as a response to deficiencies in red blood cells and/or hemoglobin (Moseley, 1974; Ortner, 2003; Schultz, 2001). However, only some forms of anemia result in the characteristic pathognomonic porotic lesions of the cranial vault. That is, only the anemias that can sustain elevated production of red blood cells are likely to cause porotic hyperostosis. Walker and coworkers (2009) argue that iron deficiency anemia per se results in decreased production of red blood cells – anemia certainly, but not the kind of anemia that could produce cranial porosities. That is, iron-deficiency anemia is a condition of reduced and inadequate blood production and is insufficient to produce the kind of marrow hypertrophy associated with porotic hyperostosis and cribra orbitalia. Specifically, two forms of anemia lead to elevated red blood cell production and the cranial marrow expansion resulting in porotic hyperostosis, namely megaloblastic and hemolytic anemias. Megaloblastic anemia is commonly caused by chronic nutritional deficiencies and malabsorption of vitamin B12 or folic acid, or both. In both forms of anemia, there is an overproduction of red blood cells, resulting in expansion of the marrow (Walker et al., 2009).
Figure 2.7b Histological section from Figure 2.7a. The linear orientation and general morphology of diploic cavities are consistent with iron deficiency anemia. (Schultz et al., 2001; reproduced with permission of authors and University Press of Florida.) (A black and white version of this figure will appear in some formats. For the color version, please refer to the plate section.)
Similar lesions found in the roof areas of the eye orbits, called cribra orbitalia, are also frequently observed in archaeological remains (Figure 2.8).

There is an extensive body of bioarchaeological research documenting porotic hyperostosis, largely beginning with Angel's (1966a, 1967, 1971a, 1978b, 1984) systematic study of a large series of skeletal remains on a regional and population basis. Based on his study of some 2200 archaeological crania from the eastern Mediterranean region, principally Greece, Cyprus, and Turkey, Angel proposed that porotic hyperostosis resulted from hereditary hemolytic anemias, especially thalassemia or sickle cell anemia. In this setting where malaria is endemic, individuals who are heterozygous for sickle cell anemia or thalassemia have a selective advantage over normal homozygous individuals lacking the sickle cell or thalassemia genes. Carriers show lower infection rates by malarial parasites (genus Plasmodium), thus enjoying greater protection from malaria.

Other regional studies dealing with large samples of skeletal remains showed that cranial porosities in past populations are likely due to a variety of nongenetic factors. In Wadi Halfa,
Figure 2.8 Cribra orbitalia; Santa Catalina de Guale de Santa Maria, Amelia Island, Florida. (From Larsen, 1994; photograph by Mark C. Griffin; reproduced with permission of John Wiley & Sons, Inc.) (A black and white version of this figure will appear in some formats. For the color version, please refer to the plate section.)
Nubia, in the Nile Valley, a high prevalence of orbital lesions (21.4%) for the Meroitic (350 BC–AD 350), X-group (AD 350–550), and Medieval Christian (AD 550–1400) periods has been reported (Carlson et al., 1974). Reconstruction of the environmental context based on archaeological, historical, and ethnographic evidence indicates that several factors likely contributed to anemia. Milled cereal grains (millet, wheat), the focus of diet in this setting, contain very little iron and are high in phytate. Additionally, as with populations currently living in the Nile Valley, hookworm disease and schistosomiasis were likely highly endemic. These factors, combined with chronic diarrhea that is also highly prevalent in the region today, are consistent with the presence of anemia.

Further to the south in the Nile Valley, a high prevalence of porotic lesions (45%) has been reported at the Medieval-period Kulubnarti site (AD 550–1500) (Mittler & Van Gerven, 1994). Early- and late-period Kulubnarti juveniles have remarkably high prevalences (94% and 82%, respectively: Van Gerven et al., 1995). Like the Nubian groups down river at Wadi Halfa, the Kulubnarti population suffered the ill effects of anemia owing to nutritional deficiencies and other negative influences of sedentism and unhealthy living conditions. Analysis of demographic profiles of individuals with and without lesions indicates that those with porotic crania are associated with a shortened life expectancy, with differences greatest in the subadult years (and see Blom et al., 2005). There is a decline in porotic lesion prevalence from 51.8% to 39.0% from the early to late Christian periods (AD 550–750 to AD 750–1500). This apparent improvement in iron status coincides with improvements in health
generally that arose following increased political autonomy and improved living conditions (Mittler & Van Gerven, 1994). Circumstances involving an association between anemia and increasing or elevated stress have been documented in Medieval and seventeenth-century Tokyo (Hirata, 1990), prehistoric Iran and Iraq (Rathbun, 1984), third-century BC Carthage (Fornaciari et al., 1981), Neolithic Levant and Greece (Eshed et al., 2010; Papathanasiou, 2005; Papathanasiou et al., 2000), third-century AD Moldavia (Miritoiu, 1992), and Romano-British, Medieval, and eighteenth- to nineteenth-century Britain (Grauer, 1993; King et al., 2005; Lewis, 2002; Molleson & Cox, 1993; Stuart-Macadam, 1991; Sullivan, 2005; Wells, 1982). All of these settings have good contextual evidence for elevated environmental stress, but the particular circumstances for anemia are region specific. For example, causative factors for the high prevalence of porotic lesions in the Roman-period Poundbury Camp, and likely in other British populations, included parasitism, infectious disease, and perhaps lead poisoning (Stuart-Macadam, 1991). High prevalences in eighteenth- to nineteenth-century urban London appear to be linked with shifts in weaning practices, poor living conditions, and low maternal health status (Lewis, 2002; Molleson & Cox, 1993). Indeed, comparisons of temporal series in England reveal that industrialization had the highest negative impact on children's health, more so than events in any other period (Lewis, 2002). On the other hand, improved environments result in the decline of prevalence of porotic lesions. For example, decrease in prevalence in modern Japan reflects decreased crowding, reduction in infectious diseases, and improved hygiene (Hirata, 1990; Temple, 2010).

The most abundant data on porotic lesions are available from the New World, especially North America. In the American Southwest, cranial porosities are highly prevalent, especially in late prehistoric and historic-era populations (Akins, 1986; El-Najjar & Robertson, 1976; El-Najjar et al., 1975, 1976, 1982; Hooton, 1930; Kent, 1986; Martin et al., 2001; Palkovich, 1980, 1987; Stodder, 1994, 2006; Stodder & Martin, 1992; Stodder et al., 2002; Walker, 1985; Walker et al., 2009; Zaino, 1967, 1968). Among mostly late prehistoric Puebloan samples studied by El-Najjar and collaborators (El-Najjar et al., 1976) from Canyon de Chelly, Chaco Canyon, Inscription House, Navajo Reservoir, and Gran Quivira, porotic lesions were documented in 34.3% of individuals. At Chaco Canyon alone, some 71.8% of individuals display the characteristic lesions. Similarly, high prevalences have been reported from late prehistoric- and contact-period sites, including San Cristobal (90%), Hawikku (84%), Black Mesa (88%), Mesa Verde (70%), Dolores (82%), Casas Grandes (46%), and La Plata Valley (40%) (Stodder, 1994; Weaver, 1985). There are some Southwestern samples that have relatively low prevalences (e.g., 16% for Navajo Reservoir children; see Martin et al., 1991). Martin and coworkers (1991) note that comparisons of data collected by different researchers are problematic because of the varying methods used in identification and recording of porotic lesions. For example, some researchers may include slight pitting when analyzing their data sets, whereas others may not. Unfortunately, this distinction is only rarely noted in bioarchaeological reports, regardless of geographic or cultural setting.
El-Najjar (1976) links the elevated levels of abnormal cranial porosities in the American Southwest and other regions of the New World to the effects of over-reliance on maize in conjunction with food processing techniques that may contribute to iron deficiency. Specifically, he regards the presence of phytate – an iron inhibitor – as well as lime treatment as decreasing the nutritional value of maize.
Analysis of archaeological samples from other maize agriculturalists in the New World provides mixed support for El-Najjar's dietary hypothesis. Relatively high prevalences of porotic lesions (>15%–20%) are present in agriculturalists in the American Midwest (Cook, 1984; Garner, 1991; Goodman, Lallo et al., 1984; Lallo et al., 1977; Milner, 1983, 1991; Milner & Smith, 1990; Perzigian et al., 1984), Southeast (Boyd, 1986; Danforth et al., 2007; Eisenberg, 1991a, 1991b; Hutchinson, 2002, 2004; Lambert, 2000b; Larsen, 2006; Larsen, Crosby et al., 2002; Larsen & Sering, 2000; Larsen et al., 2007; Parham & Scott, 1980; Rose et al., 1984), and Northeast (Magennis, 1986; Pfeiffer & Fairgrieve, 1994), as well as a range of other settings in Mesoamerica and South America (Blom et al., 2005; Cohen et al., 1994; Hodges, 1989; Hooton, 1940; Hrdlička, 1914; Saul, 1972; Trinkaus, 1977; Ubelaker, 1984, 1992a, 2002; White et al., 1994; Wright, 2006). For some regions where skeletal remains of foragers (or less intensive agriculturalists) have been compared with agriculturalists, there are clear temporal increases in porotic lesion prevalence (Cook, 1984; Lallo et al., 1977; Perzigian et al., 1984; Rose et al., 1984; but see Hodges, 1989). However, skeletal series from large, late prehistoric Mississippian centers in the American Southeast (Blakely, 1980; Larsen et al., 2007; Powell, 1988, 1989), contact-era part-time maize agriculturalists in the Great Plains (Miller, 1995), a large urban center in Mesoamerica (Storey, 1992a), and the coastal desert of Peru and Chile (Allison, 1984) all display low prevalences of cranial porosities.

The dietary hypothesis presented by El-Najjar does not account for the relatively high frequencies of porotic lesions in some foraging populations. A number of Pacific coastal foraging groups with access to iron-rich marine resources have high prevalences of cranial porosities. Moderate levels of porotic lesions are present in precontact and contact-era Northwest Coast populations (Cybulski, 1977, 1992, 1994, 2006; Keenleyside, 1998, 2006). In this setting, European-introduced diseases may have prevented adequate iron metabolism during the contact period (Cybulski, 1994). Presence of porotic lesions prior to contact indicates that there may have been other important factors, such as blood loss and parasitism (Cybulski, 1994, 2006).

Late prehistoric foragers from the islands and mainland of the Santa Barbara Channel region of California have higher prevalences than earlier foragers, increasing from 12.8% in the Early period to 32.1% in the Late Middle period (Lambert, 1994; Lambert & Walker, 1991; Walker, 1986, 2006). Late period populations living on islands located furthest from the mainland coast have an extraordinarily high prevalence of porotic lesions (73.1% on San Miguel Island). Walker and Lambert suggest that water contamination explains the elevated prevalence of the condition. High prevalence of porotic lesions in island populations coincides with a period of increasing sedentism and population size and a shift from terrestrial to marine diets. In the Late period, groups became concentrated around a limited number of water sources. As a result, diarrhea-causing enteric bacteria may have contaminated these water sources. Ethnographic evidence indicates that island populations preferred eating raw (versus cooked) fish (Walker, 1986), thus also increasing their chances of acquiring parasitic infections.
Prevalences of porotic lesions in prehistoric Australian foragers are consistently high in tropical/subtropical environments and low in desert environments (Webb, 1995). For example, in southeastern Australia, prevalences range from 62.5% (…

…35 years of age: 10.3% of frontal bone trauma), suggesting the violence may have led to early death. Most depressed fractures are either circular or oval, and were probably caused by the impact of either hand-held rocks or clubs. In addition, fragments of obsidian blades embedded in some of the cranial fractures indicate the use of other types of weapons. Extensive healing indicates that very few of these injuries were fatal, which suggests that violence was generally not intended to have a lethal outcome (compare with Lambert, 1994, 2002c; Walker, 1989; and authors cited earlier). The arrival of Europeans resulted in the introduction of firearms, which may have contributed to a shift of these intentions from nonlethal injury to homicide. For example, small lead pellets from a gunshot wound were embedded in the frontal and left parietal bones of a historic-period adult. The bone tissue surrounding the entry wounds is fully healed, suggesting that the victim survived for a period of time following the attack. Most other victims of firearms were likely not as fortunate as this individual.

In summary, the skeletal evidence of trauma on Easter Island is only partially consistent with folklore documenting frequent and violent interpersonal conflict, a perception held for much of the Pacific but not especially well documented bioarchaeologically for this vast area of the world (DeGusta, 2000; Scott & Buckley, 2010). Contrary to this record, bioarchaeological analysis indicates that violence resulted in nonlethal injury rather than widespread death (Owsley et al., 1994). Additionally, the practice of cannibalism is not confirmed by the study
of skeletal remains. Thus, the folklore of Easter Islanders overemphasizes warfare, violence, and cannibalism.
4.3.9 Nubia and the Nile Valley
Conflicts in the Nile Valley – the setting of Egypt and Sudan – during the age of the rise of complex societies are a matter of legend. One of the best bioarchaeological records is from Sudanese Nubia, representing a range of localities and times (Alvrus, 1999; Buzon & Judd, 2008; Buzon & Richman, 2007; Judd, 2002, 2004, 2006; Judd & Irish, 2009; Kilgore et al., 1997; Smith & Jones, 1910) that provide a record for conflicts within Nubia (Judd, 2006) and with colonizers from Egypt (Buzon & Richman, 2007). Comparison of rural with urban individuals reveals a distinctive difference between the two, suggesting a difference in risk and lifestyle. That is, urban individuals have more cranial injuries, likely reflecting the availability and use of hand-held implements, such as walking sticks. However, aside from gross comparisons of fracture prevalence, there is surprisingly little information on temporal trends in violent trauma, whereas temporal comparisons reveal important trends in accidental injury patterns in recent humans (also see later).

The earlier record from the region shows a highly elevated prevalence of injury. At Semna South (c. 2000 BC) and Kerma (c. 1750–1500 BC), postcranial fracture prevalences exceed 20% (Alvrus, 1999; Judd, 2002). Many of the individuals in these settings had multiple injuries, mostly arising from accidental causes (Judd, 2002). For example, in a rural Kerma culture sample (2500–1550 BC) of 55 adults, 80% had an injury, with 62% having at least two injuries. This is higher than in the urban sample at Kerma, strongly suggesting that lifestyle and risk of trauma due to either occupational or environmental circumstances were higher in rural settings (Judd, 2006). Of those individuals with injuries, 51% of females and 71% of males had multiple injuries.

The remarkably high level of endemic violence in Nubia was also fueled by external forces, including repeated attacks from Egypt, due largely to the fact that Nubia controlled key trade routes, a circumstance that promoted a long history of raiding into Nubia. Kerma was able to resist these forays for centuries, finally being defeated in c. 1520 BC (Buzon & Richman, 2007). While warfare was an important policy of control, Egypt appears to have shifted its preference from raiding and battle to diplomacy and incorporation of Kerman Nubians into Egyptian government. Buzon and Richman (2007) tested the hypothesis that this shift in political strategy would result in lower levels of violent confrontation and injury than what Judd (2004) documented for earlier periods involving conflict between Nubia and Egypt. Their analysis of remains from the Egyptian colonial settlement of Tombos (1550–1050 BC) in northern Sudan revealed a remarkably lower prevalence of violence-related cranial injury in comparison with the earlier Kerma population, decreasing from 11.2% at Kerma to 1.4% at Tombos. Analysis of social and political circumstances strongly suggests that this change in violence-related injury was due to alterations in Egyptian policies in Nubia.

In the Wadi Halfa region of Sudanese Nubia, fracture prevalence increased in the Christian period (AD 550–1300) relative to earlier periods (Armelagos, 1969; Armelagos et al., 1981). Within the Christian period in Kulubnarti, Nubia, there is a general increase in fracture prevalence, from 18% in the early (AD 550–750) to 23% in the late (AD 750–1550) period
(Burrell et al., 1986; Kilgore et al., 1997; Van Gerven et al., 1995). Upper limb fractures show an especially pronounced increase (30%) (Van Gerven et al., 1995). The increase in skeletal trauma in juveniles and old adults is more pronounced than in other age groups in the late Christian period at Kulubnarti. These age-specific increases in fracture prevalence may be due to the elevated risks of living in two-story houses in the late period versus one-story houses in the early period (Burrell et al., 1986). Access to the living area on the second story in late-period houses was gained by use of a retractable ladder, which may have caused falls and other types of accidents (Burrell et al., 1986). A significant portion of Kulubnarti individuals (27%) possesses multiple injuries. This unusually high prevalence of accidental injuries, coupled with the presence of numerous and severe fractures, reflects the hazards of living in very difficult terrain. Unlike Lower Nubia to the north, cultivated areas in Upper Nubia are highly constricted and are limited to small pockets of flat land immediately adjacent to the Nile River. Individuals living at Kulubnarti would have been exposed to difficult walking conditions on a daily basis. The adoption of defensible architecture (e.g., two-story houses) later in the Christian period may also have placed individuals at increased risk of injury. Most Kulubnarti fractures are in the forearm (75% of fractures) (Kilgore et al., 1997). Aggressive interactions may have contributed to some of these, because a push and subsequent fall could have resulted in forearm fractures. The record suggests that the difficult terrain and nearly continuous conflict between Egypt and Nubia resulted at various times in a high risk of injury and death, arising from accidents and interpersonal conflict.
4.3.10 Australian foragers
Cranial trauma in prehistoric native Australian populations provides a compelling picture of violence and injury in a wide range of geographical, ecological, and cultural settings (Webb, 1989, 1995). Most regions of prehistoric Australia have relatively elevated frequencies of cranial trauma, especially depressed fractures (Table 4.2). Most of the injuries are well healed, indicating that the attacker's intentions may have been to injure rather than kill the victim.

Many studies of human populations worldwide document the higher proportion of violence directed at males (Gurdjian, 1973; Jiménez-Brobeil et al., 2009; Lahren & Berryman, 1984; Novak, 2006, 2014; Owsley, 1994; Robb, 1997b; Walker, 1989; and many others; but see Wilkinson & Van Wagenen, 1993), which likely reflects the central role of men in the violent resolution of conflicts in most human societies. This sex-specific pattern of head injury contrasts with that of Australia. In virtually all samples throughout the continent, adult females show a consistently higher prevalence of cranial injury than adult males, thus contributing to the greater prevalence in females than males overall (Table 4.2). Some of the sex differences in specific localities are slight, but many series show a remarkably strong disparity between the sexes. For example, in the south coastal Swansport sample, 39.6% (21/53) of females and 19.3% (11/57) of males exhibit cranial trauma. For the few skeletal series (4/22) where males have more cranial injuries than females, the differences are not statistically significant.

The disparity in cranial injury between males and females is not restricted to prevalence alone. For virtually all regions of Australia – regardless of ecological or cultural setting – women show a predominance of depressed fractures on right parietal and occipital bones. This pattern suggests that attacks came from behind the victim, perhaps while
Table 4.2 Cranial trauma by region in Australia. Percentages of crania are shown (Adapted from Webb, 1995: Table 8-2)

                      N    One lesion (%)   Two lesions (%)   Three lesions (%)

Males
  Central Murray    247        13.4              3.6                0
  Rufus River       122        26.2              7.4                0.8
  South Coast       138        14.5              4.4                0.7
  Desert            132        16.7              3.8                1.5
  Tropics            92         6.6              0                  0
  East Coast        133        23.3              6.8                1.5

Females
  Central Murray    151        19.9              4.0                1.3
  Rufus River        83        27.7              8.4                2.4
  South Coast       123        31.7             10.6                4.9
  Desert             51        33.3             11.8                5.9
  Tropics            62        24.2              9.7                4.8
  East Coast         86        32.6             10.5                3.5
fleeing the attacker. Adult males show a different pattern of injury location from that of women: for the entire series, more left than right parietal bones are fractured. This pattern suggests that male violence usually involved face-to-face confrontations. It is not uncommon for an individual in the Australian samples to have two, three, or even four cranial bones that display depressed fractures (Table 4.2). Consistent with the sex differences in the prevalence of crania affected, more women than men have multiple injuries. The general pattern, therefore, indicates that violence and aggression were directed more at women than men in prehistoric foraging societies throughout Australia. Overall, the pattern of higher injury in females than males evokes the conclusion that the record may represent continent-wide domestic violence – females being injured by male associates or partners. Indeed, in modern societies, millions of women globally are injured by males in domestic contexts (Novak, 2006).

In Australia, violence was not limited to prehistoric societies. Ethnographers observe that violence is a common occurrence and a part of everyday discourse (Burbank, 1994). Unlike in Western societies, such as the United States, where fighting – and especially aggression against women – is viewed as deviant behavior, physical aggression among native Australians is considered an accepted if not legitimate form of social interaction (Burbank, 1994; Myers, 1986). Burbank (1994) provided detailed observations on physical aggression in men and women in an aboriginal group living in Arnhem Land (Northern Territory). Her study shows that both men and women were heavily involved in confrontations. However, the majority of aggressors and their victims were adult females. These observations of both deceased and living native Australians reveal a striking consistency in behavior between prehistoric and contemporary populations. Women played a key role in aggressive encounters, and not simply as victims of attack. This work underscores the important notion that violence, domestic or other, is not the sole purview of males, certainly in this setting and likely others around the world (Guliaev, 2003).
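Sex differences of the magnitude reported for Swansport are readily screened with a simple contingency-table test. The following sketch is an illustration only – it is not an analysis performed by Webb (1995), and it assumes SciPy is available – applying Fisher's exact test to the Swansport counts quoted earlier (21 of 53 females and 11 of 57 males affected):

from scipy.stats import fisher_exact

# Swansport cranial trauma counts (Webb, 1995), quoted earlier:
# 21 of 53 females and 11 of 57 males exhibit cranial trauma.
table = [[21, 53 - 21],   # females: affected, unaffected
         [11, 57 - 11]]   # males: affected, unaffected

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.4f}")

For these counts the test returns p < 0.05, consistent with the "remarkably strong disparity" between the sexes described above; applied to series with only small male excesses, the same test would be expected to yield the nonsignificant results the text reports.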
4.3.11 Southeast Asia: Iron Age Cambodia
The record of bioarchaeology in Southeast Asia reveals little evidence of violence and associated trauma. However, new analyses of the Iron Age (350 BC–AD 200) Phum Snay cemetery in northwest Cambodia reveal increased violence-related injury and competition for arable land and other resources, in many ways providing a context for the later emergence of complex societies and state systems involved in widespread conflict, such as Angkor after AD 800 (Domett et al., 2011). In this setting, 30 of 128 individuals display cranial trauma, with most in adult males (66%). The injuries comprise blunt-force (depression fractures) and blade-induced trauma. Some of the trauma is perimortem, clearly resulting in death, whereas some is healed. A number of individuals have multiple injuries in varying degrees of healing. None of the postcranial regions shows violence-related injuries. Thus, in this setting, combatants focused their attacks on the heads of opponents. Like other series described previously (e.g., Norris Farms), this one shows that violence occurred over a period of time. That is, some of the victims have clear evidence of previous confrontations, which they survived. Unlike Norris Farms, however, the victims are predominantly adult males. Moreover, a number of the males were associated with military paraphernalia, such as weapons in their graves. While the cemetery certainly includes victims of violence, it also reflects a focus on militarism and purposeful arming. The placement of this series in this time and place supports the emerging understanding that armed conflict arising from competition and social friction appeared in the region centuries before the development of complex social systems in Southeast Asia.
4.3.12 Northern European violence
There is a rich bioarchaeological record of violence and confrontation in Europe, especially for post-Pleistocene western and central Europe. Like the settings described earlier, the social, political, and economic contexts are almost always related to competition for resources generally, but local circumstances may also feature prominently.
Scandinavia and western Europe

The historical record of violence and warfare is abundant for northern and western Europe. Systematic studies of violence have been produced for several areas of northern Europe, including Scandinavia, and especially prehistoric and early historic Denmark. Analysis of human remains reveals evidence of traumatic injury, decapitation, and mutilation. Like much of the history of paleopathology, these studies are largely descriptive, having focused on single or few individuals (Bennike, 1991a, 1991b). Relying primarily on remains dating from the Mesolithic (c. 8300–4200 BC) to the Middle Ages (to AD 1536), Bennike (1985) identified patterns of injury in Denmark. These patterns involve a predominance of cranial trauma – principally depressed fractures on anterior cranial vaults – in mostly adult males, indicating face-to-face violent interactions.

Folklore and historical accounts emphasize a high prevalence of violence during the Viking period (AD 800–1050). Bennike's (1985) assessment clearly indicates that the Mesolithic and Neolithic periods were far more violent than the Viking period: Mesolithic crania display the highest prevalence of trauma (43.8%), which is markedly reduced in the Neolithic (9.4%), Iron Age (4.7%), Viking period (4.3%), and Middle Ages (5.1%). Violence is
well illustrated by the presence of projectile injuries, sword and axe cuts, cranial depressed fractures, and decapitation (Bennike, 1985; Ebbesen, 1993; Kannegaard Nielsen & Brinch Petersen, 1993). At the Mesolithic site of Bøgebakken, a bone projectile was found lodged between the second and third thoracic vertebrae of an individual. In fact, all projectile wounds at this and many other Danish sites are found in the thoracic and head regions, revealing the lethal intentions of the attackers.

Placement of Denmark in the larger regional context of western and central Europe reveals that the Neolithic and the foraging-to-farming transition was a relatively turbulent period in comparison with later times, and not only in Denmark (Fibiger et al., 2013; Smith, 2014; various in Schulting & Fibiger, 2012). The reasons for violence in this broad expanse are varied. For example, the co-occurrence of elevated violence-related injuries in the early Neolithic (c. 5000 BC) and deteriorating environmental circumstances across the region suggests the possibility of negative social, cultural, and economic factors (compare with Orschiedt & Haidle, 2012; Teschler-Nicola, 2012; Wahl & Trautmann, 2012). At least one Neolithic series, from Talheim, Germany, represents a massacre (Wahl & Trautmann, 2012). The posterior position of traumatic injuries for many of the 34 individuals documented indicates that they were attacked from behind while fleeing from aggressors. With a few exceptions, most cranial injuries (e.g., depressed fractures) throughout the region are healed, suggesting that, as among southern California coastal foragers, inflicted injuries were not intended to have a lethal outcome (Walker, 1989; and authors cited earlier). In Fibiger and coworkers' (2013) analysis of cranial trauma in Neolithic Denmark, males generally have more anterior and left-sided trauma than females, indicating the involvement of men in face-to-face encounters.

In interments dating to the Middle Ages, the heads of victims had been removed and placed between their legs. The reasons for this unusual treatment are unclear, but during the Middle Ages the practice was associated with criminals in order to prevent their return following death (Bennike, 1985; and see later). Decapitation and other forms of head and neck trauma were likely more common than is indicated by the skeletal evidence alone. A number of Neolithic and Iron Age bog corpses show evidence of decapitation and strangulation, the latter of which may have involved hanging (Bennike, 1985; Glob, 1971). Owing to the relatively small number of projectile- and weapon-related deaths, it is not possible to identify a pattern of decrease or increase in violence-related death in Denmark (Bennike, 1985). However, there is a shift from the use of projectile weapons to axes and swords in the Iron Age (Bennike, 1985). The lethal nature of this new weaponry for the enemies of Danes is demonstrated in at least one battle site (see later).

Violence in western Europe during the Mesolithic and Neolithic was likely highly localized, with a relatively higher prevalence in some regions than in others (Bennike, 1985; Frayer, 1997; Jiménez-Brobeil et al., 2009; Orschiedt et al., 2003; Papathanasiou et al., 2000; various in Schulting & Fibiger, 2012). For western Europe as a whole, violent trauma was frequent during the Neolithic and Mesolithic (Schulting & Fibiger, 2012; but see Constandse-Westermann & Newell, 1984; Roksandic et al., 2006; Schulting, 2006).
Increases in population density and social complexity and resource circumscription during the period suggest the potential for an increase in hostilities. Moreover, there are clear instances of violence predating the Neolithic, indicating possible processes underlying violence in the region: for example, at the Mesolithic site of Ofnet (Germany), two pits contain the
skulls of 32 individuals. The skulls show perimortem depressed fractures and cutmarks. Some of the skulls have articulated upper cervical vertebrae, a number of which display cutmarks suggesting slitting of the throat and decapitation.
Battle of Wisby

The Middle Ages in Europe involved a tremendous upsurge in armed conflict between many warring city-states and various confederations of states. Today, across Europe, the vestiges of walls surrounding villages and towns remain, a lasting record of violence and the threat of violence during this time. Only rarely, however, have battle sites from these conflicts resulted in documented archaeological remains. One important exception is the battle site of Wisby, located on the island of Gotland in the Baltic Sea. Hundreds of skeletons excavated at the battle site present some of the grim details of preindustrial warfare in northern Europe.

The city of Wisby was invaded in 1361 by an army led by the Danish king, Waldemar (Thordeman, 1939). Over the course of a single day, the poorly organized peasant forces defending the city were decisively defeated and massacred by the king's highly disciplined army. Estimates indicate that some 1800 Gotlanders were killed in this battle (Ingelmark, 1939). Archaeological excavations of three common graves at Wisby yielded an enormous sample of human skeletal remains (n = 1185). Analysis of these remains reveals that only males were victims, but the age distribution was extraordinarily varied, ranging from adolescents to very old adult males (Ingelmark, 1939). Consistent with research completed on skeletons from other Middle Ages archaeological sites (compare with Bennike, 1985), most of the injuries resulted from the use of cutting weaponry, especially swords and axes (n = 456). A significant minority of injuries was from projectiles (n = 126).

Skeletal wounds are variable, ranging from scratches and nicks to dismemberment. The latter, for example, is illustrated by the presence of severed hands and feet, partial limbs, and complete limbs. One individual illustrates the intensity of battle: the lower halves of both left and right tibiae and fibulae are completely severed. The lower legs are affected more than any other area of the body: about two-thirds (65%) of cutting trauma involved tibiae. Ingelmark (1939) observed that the focus on the lower limbs likely reflects the use of shields and protective clothing for the body trunk, leaving the legs especially vulnerable to injury. Sword blows directed at the lower legs typically resulted in the slicing and chipping of bone on the anterior crests of tibiae. Poor protection of the heads of individuals in the Gotlander army is indicated by the large number of cranial injuries, some of which involve extremely deep cuts. The heads of some groups of Gotlander soldiers may have been better protected than others. For example, only 5.4% of crania are injured in common grave no. 3; this frequency contrasts with cranial injury prevalences of 42.3% and 52.3% in common graves no. 1 and no. 2, respectively. The majority of cranial wounds are on the left side of the head, which fits the expected pattern of injury sustained from a weapon held by a right-handed individual during a face-to-face encounter (Courville, 1965).

The presence of males of all ages suggests that the majority of the male population in and around Wisby was recruited for the defense of the city. Analysis of pathological conditions suggests that virtually anyone who could walk (and even those who could not) was drafted. Ingelmark (1939:192) remarked on the "good many morbid processes" present in the skeletal
assemblage of battle victims. Many vertebrae have pronounced osteoarthritis, and at least four individuals have extensive vertebral tuberculosis. One individual displays a completely ankylosed (fused) knee. The angle of flexion (about 55°) greatly disabled the individual: running was impossible for this victim. A number of individuals show well-healed but poorly aligned femoral neck fractures, which limited their ambulatory capabilities. These observations, combined with other health problems – including skeletal infections and numerous healed, but malaligned, limb fractures (n = 39) – also point to reduced efficiency on the battlefield. The defending army, therefore, was not composed of a group of robust males in their peak years of fighting prowess. Like many of the skeletal samples discussed in this chapter from both New and Old World contexts, these victims of the massacre were members of a population not unfamiliar with violence during their lifetimes (compare with Milner et al., 1991; Willey, 1990). The pattern of injury in the Wisby series is similar in many respects to the pattern of injury documented in more detail from the Towton battle site dating to AD 1461, a century after Wisby. There, some 28 000 of an estimated 100 000 combatants died on a single day (Fiorato et al., 2000; Knüsel, 2014). Analysis of 37 of the casualties from a mass grave revealed clear indicators of perimortem trauma, ranging from single to multiple (up to nine) injuries (Novak, 2000). Interestingly, nine of the 29 crania examined had well-healed trauma, likely derived from previous battle-related injuries. Most of the injuries were on the left side of the face and vault, indicating blows struck by a right-handed person landing the business end of the weapon on the victim in a face-to-face confrontation. The injuries include sharp (blade), blunt, and projectile wounds (Figure 4.19). They contrast with the romanticized version of Medieval battle as honor and pageantry, and show clearly that those who inflicted them were well trained in combat methods and practice. The patterns provide a useful record for evaluating the consequences of conflict in this key setting prior to the use of firearms in modern warfare (Cunha & Silva, 1997; Düring, 1997; Giuffra, Baricco et al., 2013; Mitchell et al., 2006; Šlaus et al., 2010, 2012; Weber & Czarnetzki, 2001).
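The left-side predominance reported at both Wisby and Towton lends itself to a simple statistical check: under the null hypothesis that blows land on either side of the skull with equal probability, a binomial (sign) test indicates whether the observed bias exceeds chance. The sketch below is purely illustrative – the counts are hypothetical stand-ins, not the published Wisby or Towton tallies – and assumes Python with SciPy available.

```python
# Illustrative sketch: testing whether left-sided cranial wounds are
# overrepresented, as expected for blows from right-handed attackers in
# face-to-face combat. Counts here are hypothetical placeholders;
# substitute the observed tallies for a given assemblage.
from scipy.stats import binomtest

left_wounds = 21        # hypothetical left-sided cranial wounds
lateralized_total = 29  # hypothetical total of clearly lateralized wounds

# Null hypothesis: a wound is equally likely on either side (p = 0.5);
# the one-sided alternative asks whether left-side wounds predominate.
result = binomtest(left_wounds, lateralized_total, p=0.5, alternative='greater')
print(f"left-side proportion: {left_wounds / lateralized_total:.2f}")
print(f"one-sided p-value: {result.pvalue:.4f}")
```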
Beheading in Britain

In the upper Thames valley, a high frequency of decapitation and prone burial in Romano-British (third to early fifth centuries AD) cemeteries (Cassington, Radley, Stanton Harcourt, Queensford Mill, and Curbridge) has been documented (Harman et al., 1981). In total, 15.3% show evidence of beheading. Analysis of other Romano-British cemeteries indicates that this practice was part of a widespread behavior during this period (Anderson, 2001; Bush & Stirland, 1988; Montgomery et al., 2011; Philpott, 1991; Tucker, 2014; Wells, 1982). During the following Anglo-Saxon period (fifth to tenth centuries AD), beheadings continued, and victims were included in execution cemeteries (Buckberry, 2008). Reminiscent of the decapitations from Denmark (Bennike, 1985; and see earlier), the heads of a number of burials had been purposefully placed between and associated with the legs of the deceased (Anderson, 2001; McKinley, 1993). The simultaneous occurrence of decapitation and prone burial in Romano-British and Anglo-Saxon cemeteries suggests a probable connection between the two practices. The demographic composition of decapitated and prone skeletons shows a selectivity for adults, suggesting that execution may have been the primary motive. Review of historical, archaeological, and folklore literature indicates other possibilities, such as prevention of the deceased from walking or communicating, sacrifice, and deprivation of the soul, whether for sacrificial purposes or for
Figure 4.19 Sharp force trauma to the face associated with a bladed weapon; Towton, United Kingdom. (From Novak, 2000; reproduced with permission of author and Oxbow Books.)
punishment for some wrongdoing (Harman et al., 1981). Decapitation and/or prone burial may have been a “final form of indignity inflicted on the corpse of an individual in consequence of particular characteristics or offenses during life. But it seems more probable that both were believed to have some effect on the subject in an after life” (Harman et al., 1981:168). The manner of beheading is indicated by the location and pattern of cutmarks on affected skeletal elements. Severing of the head was usually done in the upper neck region. Damage to the anterior surfaces of cervical vertebrae in some individuals and the posterior surfaces in others indicates that the beheading blow was delivered from in front or behind at various times, and probably with a variety of tools (Harman et al., 1981; and see McKinley, 1993; McWhirr et al., 1982). Detailed analysis of a beheading victim from Hertfordshire shows a series of at least six carefully placed cuts, including three cuts on the anterior odontoid process, superior body, lower body, and right inferior articular process (McKinley, 1993). The narrowness of the cuts indicates that the decapitation was completed with a narrow blade administered as blows to the neck.
4.3.13 European invasion of the Americas
The patterns of violence and warfare discussed previously for Europe set the context for understanding the bioarchaeological record of violence in the postcontact, colonial-era Americas. When Europeans began exploration of the New World in the late fifteenth century,
they brought with them a weapons technology that facilitated their (mostly rapid) conquest of native populations. The early expeditions were violent affairs at times, resulting in brutal treatment of natives (Weber, 1992). Although these tactics seem repulsive now, they were well within the bounds of behavior regarded as fully acceptable for European males during the Middle Ages. Historical literature and accounts of violent confrontations (e.g., Wisby) indicate that conflict behavior between European males was often excessively cruel (Weber, 1992). The study of native remains dating to the early period of contact with Europeans has provided a new dimension to understanding the nature of the interactions between these groups.
La Florida

The study of hundreds of human skeletal remains from colonial-era Spanish sites in the American Southeast (Georgia, Florida) and Southwest (New Mexico and Arizona) provides an important perspective on violent confrontation, especially during the sixteenth and seventeenth centuries. In Spanish Florida (present northern Florida and coastal Georgia), the region named La Florida by Juan Ponce de León in 1513, short-term encounters between native populations and Spaniards occurred during the exploration period (c. 1513–1565), followed by long-term, sustained contact during the mission period (1565–1704) (McEwan, 1993). The earliest contacts frequently resulted in hostile interactions and deaths of both Europeans and natives (Varner & Varner, 1980). The later mission period was relatively peaceful; long periods of calm were occasionally punctuated by native revolts violently put down by Spanish military forces (Hann, 1988). Analysis of skeletal remains from both periods of Spanish occupation in the region has produced only limited evidence of violent interactions. In skeletons from the Tatham Mound site on the Gulf coast of western Florida – the probable location of Hernando de Soto’s visit in 1539 – perimortem trauma caused by the impact of metal weapons is present in 17 skeletal elements (Hutchinson, 1991, 1996, 2006). The most dramatic examples of cut bones include the severed acromion process of a right scapula and a left humerus diaphysis cut through 60% of the midshaft (Figure 4.20). Neither bone shows evidence of healing. Other long bones in the sample have multiple cutting injuries around diaphyseal perimeters. In total, the pattern of damage appears to be associated with purposeful dismemberment (Hutchinson, 1996). It is possible that Indians inflicted the injuries using captured Spanish weapons, but the early date of the site (early sixteenth century) makes this unlikely; the injuries were more probably inflicted by Spaniards (Hutchinson, 1996). The pattern of injury is also consistent with European battle tactics. For example, the orientation of the scapular cut is consistent with the practice of removing the fighting arm of the opponent (Hutchinson, 2006). Only one other skeleton in Spanish Florida shows evidence of violent confrontation. A single high-status male from Mission San Luis de Talimali (AD 1656–1704) likely died from a gunshot wound, but it is not possible to identify the perpetrator, Indian or Spaniard (Larsen, Huynh et al., 1996). Thus, based on the study of violent trauma in skeletal remains, the legends suggesting unusually cruel treatment of natives – at least as indicated by metal-edged or firearms weaponry – are not substantiated. Rather, the form of violence that has been well documented in this setting is what Klaus (2014a) described as bioarchaeological evidence of structural violence (see later).
Figure 4.20 Cut adult humerus (a) and scapula (b); Tatham Mound, Florida. (From Hutchinson & Norr, 1994; reproduced with permission of authors and John Wiley & Sons, Inc.)
American Southwest

There is a highly visible record of violence in precontact Southwestern populations, much of which highlights episodes of stress and cannibalism (possibly motivated by starvation and heightened resource stress generally). This record is closely linked with periods of violence as shown by an abundant skeletal record of injury and violent trauma, sometimes in association with processed remains (Turner & Turner, 1999; White, 1992). The presence of fortifications and other defensive architecture and high frequencies of traumatic injury (including injury resulting from violence) in some later prehistoric sites suggests that confrontations were frequent (Stodder & Martin, 1992). Cranial trauma in the American Southwest shows an increase in prevalence during the late prehistoric period, which continued into the historic, mission period (Bruwelheide et al., 2010; Stodder, 1990, 1994; Stodder & Martin, 1992). Archaeological and historical evidence shows that the high frequency of cranial trauma in the historic period can be attributed to confrontation between Spaniards and Indians as well as among Pueblos, and between Pueblos and non-Pueblo native groups. The study of skeletal remains reveals that most cranial injuries are in adult males (Stodder, 1994). At San Cristobal and Hawikku, sites with significant contact-period skeletal assemblages, very high frequencies of cranial injuries have been reported: 20% of males at San Cristobal and 17% at Hawikku have cranial trauma (Stodder, 1994). Paleopathological markers of stress (e.g., dental defects) indicate that nutritional and other health disruptions were generally widespread during the late prehistoric and contact periods, which may have contributed to fostering intra- and inter-group hostilities during this time (Stodder, 1994; Stodder et al., 2002). At Pecos Pueblo, the prevalence of cranial trauma, largely in the form of depressed fractures, in the mission population (42.1%) is considerably higher than in the prehistoric ancestral population and most other samples in North America where data have been collected systematically (Bruwelheide et al., 2010).
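Because only the site-level percentages are quoted here, any formal comparison of these prevalences requires the underlying counts. As a hedged illustration – with wholly invented denominators, since the sources report the raw data – a two-proportion z-test can be sketched as follows:

```python
# Illustrative sketch with invented counts: comparing cranial trauma
# prevalence between two samples via a two-proportion z-test.
import math

def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """z statistic for the difference between two independent proportions."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts chosen to match the quoted percentages:
# 40/95 (42.1%) at mission-period Pecos versus 24/120 (20%) at San Cristobal.
z = two_proportion_z(40, 95, 24, 120)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference at alpha = 0.05
```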
4.3.14 North American military campaigns
The fight for political domination of vast areas of North America, especially after the seventeenth century, is indicated by the many military campaigns involving confrontations between warring European nations prior to American independence, between the fledgling United States and the United Kingdom, and between native populations and various Euroamerican or European interests.
Fort William Henry

During the North American French and Indian War (called the Seven Years War in Europe), France and Great Britain fought over control of the vast territories of the northeastern United States and Canada. During the summer of 1757, the British garrison surrendered Fort William Henry, at the southern end of Lake George, New York, to French and Canadian troops and their Native American allies (Starbuck, 1993). As part of the conditions of surrender, British soldiers and dependents were allowed to leave the fort and return to British-controlled territory under French protection. The French-allied Native American warriors felt slighted that scalps and other prizes of warfare would not be forthcoming. In retaliation, they killed the remaining British troops at the fort, then proceeded to kill or capture the hundreds of civilians and soldiers under the care of the French troops.
Analysis of the remains of five adult males buried in a mass grave within the fort indicates clear evidence of violence-related injuries, bearing testimony to the historical and fictionalized (Cooper, 1919) accounts of the battle (Liston & Baker, 1996). Four of the five show perimortem trauma to the legs reflecting injuries received during the siege of the fort. None of these injuries are healed, suggesting that the men died during or shortly after the siege. The leg trauma was not lethal, but it was serious enough to prevent their departure from the fort. These skeletons also display a range of perimortem trauma that likely represents the injuries resulting in death. One individual shows a series of four cutmarks on the posterior surface of the odontoid process of the second cervical vertebra, a pattern suggesting that this soldier had been beheaded from behind. Another individual displays a series of radiating fractures through the face and frontal bone, indicating crushing of the skull with a blunt object. All five individuals show notches, slices, and gashes in skeletal elements of the anterior and posterior trunk (e.g., scapula, ribs) and pubic region. The morphology of the cutmarks evinces the use of both knives and axes in the mutilation of the victims.
Snake Hill

Some of the most intense fighting between the British and Americans during the War of 1812 took place in the frontier region between Lake Ontario and Lake Erie along the Niagara River (Whitehorne, 1991). During the four-month period in 1814 when American troops successfully captured and held Fort Erie on the Canadian side of the river, heavy siege and combat resulted in the deaths of hundreds of soldiers from both the British and American armies. Archaeological excavations at the battle site of Snake Hill, Ontario, resulted in the recovery of the skeletal remains of American soldiers from burial and medical waste pits (Owsley et al., 1991; Thomas & Williamson, 1991). Demographic assessment of the complete or nearly complete skeletons indicates that most were young adult males, aged 15 to 29 years; seven soldiers were older than 30 years at death. Half (50%) of the individuals in the sample had fractures caused by damage from firearms projectiles. The general lack of healing in most cases indicates that these wounds were usually fatal. The highest percentage of fractures involved ribs (28%; 7/25), followed by femora (25%; 7/28) and crania (9.1%; 2/22). Locational assessment of skeletal wounds indicates that most injuries (69.8%) were above the waist. With regard to the total number of noncranial and nonvertebral traumatic injuries (n = 53), about twice as many fractures occurred on the left side (54.7%; 30/53) as on the right side (26.4%; 14/53) of battle victims. This pattern may reflect handedness or body postures during the battle (Owsley et al., 1991). Cause of death is especially apparent for several victims. For example, a young adult died of a massive head injury in which a firearm projectile passed through the left and then the right parietal bones. This individual also had a large, completely healed cranial depressed fracture from an earlier injury. Other individuals had facial bones fractured by firearms projectiles and shattered long bones.
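The element-specific frequencies quoted above follow directly from the count/denominator pairs given in the text (fractured elements divided by observable elements), as the short calculation below confirms:

```python
# Recomputing the Snake Hill element-specific fracture frequencies from
# the count/denominator pairs quoted in the text.
fracture_counts = {
    "ribs":   (7, 25),   # fractured / observable elements
    "femora": (7, 28),
    "crania": (2, 22),
}

for element, (fractured, observed) in fracture_counts.items():
    print(f"{element:>6}: {fractured}/{observed} = {100 * fractured / observed:.1f}%")
# -> ribs: 28.0%, femora: 25.0%, crania: 9.1%
```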
Battle of the Little Bighorn

In present-day Montana in June of 1876, General George Armstrong Custer and 267 soldiers and civilians were overwhelmed and massacred by a superior force of Native Americans (Scott et al., 2002). Reminiscent of prehistoric and historic conflicts between native groups in the region (compare with Crow Creek, Larson Village site; and those discussed earlier), this
battle is part of an overall pattern of political domination and control of lands and resources in the Great Plains by opposing groups. Within two days of the battle, eyewitness accounts described mutilation (including scalping) and dismemberment not unlike patterns observed in other Plains samples (e.g., Crow Creek, Larson Village site; see earlier) (Scott et al., 2002; Snow & Fitzpatrick, 1989). Temporary graves were hastily prepared at the locations where individuals were killed. Some of the bodies were identified, but owing to decomposition and mutilation, many were not. Skeletal fragments of battle victims, recovered through erosion and limited test excavations, provide the basis for detailed study of battle injuries (Snow & Fitzpatrick, 1989). Analysis of 375 partial and complete bones and 36 teeth from a minimum of 34 individuals indicates three primary types of perimortem trauma: blunt-force trauma, cutmarks, and bullet wounds (Scott et al., 2002; Snow & Fitzpatrick, 1989; Willey & Scott, 1996). Blunt-force trauma involved catastrophic fragmentation of crania and, to a lesser extent, postcranial elements. All 14 partial crania showed massive injuries due to heavy blows. Additionally, the presence of cutmarks on various skeletal elements indicates widespread perimortem mutilation. Several different forms of cutmarks, ranging from fine cuts to pronounced incisions, reflect the use of metal arrows or knives. Cutmarks on a variety of skeletal elements indicate the high degree of mutilation of battle victims. One individual, for example, has cutmarks on a humerus head and sternum. The use of heavy metal-edged weapons (e.g., hatchets) is clearly indicated in several instances. For example, a completely transected cervical vertebra indicates decapitation by a single blow to the neck, and another individual shows distinctive sets of chopping blows to the proximal ends of the left and right femora, indicating purposeful dismemberment. In addition to traditional native weaponry, the presence of gunshot wounds in six individuals indicates the use of firearms by Native Americans at the battle site (Scott et al., 2002; Willey & Scott, 1996). Individual M85, for example, had at least two upper body gunshot wounds, including an entrance wound on a rib margin and shattered ribs from another wound. A third wound is represented by a bullet or bullet fragment embedded in the distal left radius. This individual also displays cutmarks on his clavicle. Three gunshot wounds are located in crania, one entering from the back and two entering from the right side and exiting from the left. In summary, based on the study of human remains from the battle site, a sequence of events can be reconstructed: soldiers were wounded, killed (frequently with blunt-force trauma to the skull), and mutilated (Snow & Fitzpatrick, 1989). Consideration of the direction of entry wounds is consistent with historic records indicating that the battle was chaotic (Willey & Scott, 1996). As would be expected, except for the use of firearms, the pattern of killing and mutilation of victims is strikingly similar to that observed in other North American native populations from the Great Plains and Midwest discussed in this chapter (e.g., Crow Creek, Larson Village site, Norris Farms).
4.4 Medical care and surgical intervention
Depending on the severity of injury – originating from accidental or violent circumstances – the survivor is often debilitated and unable to perform key functions, such as acquisition of food and other essential resources. For purely economic reasons, it is in the best interest of the
social group to ensure that the injured person returns to a functional state of health and well-being. Ethnographic and historic accounts of nonindustrial societies report tremendous variation in the care of traumatic injuries, ranging across alignment of fractures, use of splints, immobilization, oral medicines, and other treatments (Ackerknecht, 1967; Ortner, 2003; Roberts, 1991). For example, lack of angulation or significant difference in length of long bones on the fractured versus the normal side has been documented in the Libben population from the American Midwest (Lovejoy & Heiple, 1981) and in Medieval populations from England (Grauer & Roberts, 1996). Similarly, in prehistoric Australian skeletal series, most fractures show proper union and alignment (Kricun, 1994; Webb, 1989, 1995), which Webb regards as evidence for “a firm commitment to care and concern for the injured patient” (1995:200). The presence of well-healed amputations in two individuals from the central Murray Valley region suggests knowledge of this type of surgical procedure before the arrival of Europeans (Webb, 1995). In five to six thousand Nubian skeletons, some 160 fractures are present, most of which are well healed and aligned, with little evidence of infection (Smith & Jones, 1910). Bark splints were found associated with limb fractures in a couple of instances (Smith, 1908). Some earlier societies appear to have lacked either the ability or the interest to treat fractures. For example, the fourteenth-century Wisby battle victims had a high degree of angulation of healed fractures, suggesting only minimal levels of treatment (Ingelmark, 1939). The battle victims were largely drawn from the peasant class, and may not have had access to the same medical care afforded the nobility. Temporal assessment of a large sample of Roman, Anglo-Saxon, and Medieval long bone fractures reveals changes in injury management in these populations (Roberts, 1991). During the earlier Roman and Anglo-Saxon periods, healing of fractures was generally good, suggesting that fracture sites were correctly reduced and aligned, probably with some type of support (Roberts, 1991). Treatment was so widespread in the Roman period that deformities from poorly healed or misaligned fractures were no more common than they are in living populations. Medieval management of fractures appears to have been less efficient than that of earlier periods: there is a generally higher prevalence of deformation and angulation of long bones (Roberts, 1991), and many fractures (35/59) from this period had associated periosteal infections. If this analysis and the findings based on the study of the Wisby sample are representative, it appears that northern European populations living during the Middle Ages were far less knowledgeable about fracture management than their forebears. Treatment of head injuries is suggested by the association between trephination and cranial trauma in a number of settings. In Denmark, trephinations frequently accompany cranial trauma, including fractures and sword or axe cuts (Bennike, 1985). Most of the Danish trephinations are on the left side of the cranial vault, which coincides with the location of skull injuries received in battle. This pattern is also consistent with the predominance of trephinations in males, the primary participants in battle and interpersonal conflict.
Similarly, over 50% of recorded trephinations from Anatolia documented by Erdal and Erdal (2011) were associated with cranial trauma. Precontact sites in Andean South America also show an abundance of trephination, especially in Peru and Bolivia where scores of cases are reported. Analysis of crania from central and south coastal Peru and regions of the highlands of the Central Andes reveals that
Figure 4.21 Incomplete trephination and depressed skull fracture on right frontal bone; Cinco Cerros, Peru. (Verano, 2007; reproduced with permission of author and University of Arizona Press.)
adult males comprise the majority of trephined individuals, but adult females and some juveniles are also included (Verano, 2003). The association between cranial injuries and trephination indicates that this form of surgery was likely performed as a treatment for head trauma (Figure 4.21). Diachronic comparisons indicate that the frequency of well-healed trephinations increased over a 2000-year period (400 BC–AD 1532). The highest rate of success was from the latest precontact period (Inca), including some individuals with as many as five to seven healed trephinations. The apparent increase in survival may have been due to the reduction in size of the trephination opening as well as to the greater use of the circular grooving technique of excision, thus reducing the risk of dura mater penetration and neurological damage (Verano, 2003). The lack of an association between trephination and head injury in other settings suggests that there may have been other motivations, including treatment of real or imagined ills. For example, none of the few trephined crania from North America are associated with cranial injury (Ortner, 2003; Stone & Miles, 1990). Evidence for treatment of dental disease has been identified in the form of alterations of teeth, namely drilling in tooth roots (Schwartz et al., 1995) and crowns (Bennike, 1985;
Figure 4.22 Anteromedial (above) and scanning electron microscopic (below) perspective of drilled right mandibular canine; Sky Aerie Overlook, Colorado. (White et al., 1997; reproduced with permission of authors and John Wiley & Sons, Inc.)
Koritzer, 1968; Turner, 2004; White et al., 1997). These holes are usually found in association with carious or otherwise diseased teeth, indicating a therapeutic intention (Figure 4.22). In summary, the study of samples worldwide reveals that injuries sustained either by accidental means or during violent confrontations were treated in some earlier societies. These findings indicate that many past societies were aware that proper restoration of function could only be brought about by appropriate treatment protocols (Roberts, 1991).
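Several of the comparisons in this section – healed versus unhealed trephinations across periods, or fractures with and without associated periosteal infection across periods – are, analytically, contingency-table problems. A minimal sketch of such a test follows; the counts are invented for illustration and are not the published tallies:

```python
# Illustrative sketch with invented counts: did the proportion of healed
# trephinations differ between an earlier period and a later one?
from scipy.stats import chi2_contingency

#         healed  unhealed
table = [[12, 18],   # hypothetical earlier-period crania
         [34, 11]]   # hypothetical later (e.g., Inca-period) crania

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```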
4.5 Interpreting skeletal trauma
A variety of injuries are well documented in earlier societies. Accidental injuries generally reflect the hazards of day-to-day living, including food procurement and preparation, specific kinds of occupations, and transportation from one place to another. The analysis presented in this chapter, though preliminary, indicates that walking on difficult terrain or engaging in behaviors requiring high levels of activity tends to result in elevated prevalences of skeletal injuries, such as lower limb, Colles’, and rib fractures. An abundance of skeletal data exists on violence and trauma, much of which is now placed within the context of society, economics, and living circumstances. Violence plays a critical role in human social relations and interactions, and is a strategy for dealing with a range of circumstances, including expansion of political control and gaining access to resources. Bioarchaeology couches its discussions of violence within a contextual record, rendering violence in the past a subject of broader interest to anthropologists and other social scientists seeking a more informed understanding of social relations, the documentation of conflict and warfare, and the circumstances under which they arise. Studies emphasizing the integration of social, political, or economic systems as they affect conflict-related behavior are beginning to emerge for a number of regions. For example, in the American Midwest and Southeast, numerous cases of projectile injuries, scalping and other forms of mutilation, and lethal trauma covering a long temporal span are reported, providing compelling evidence of violence and warfare in prehistoric societies (Milner, 1995). This evidence suggests that violence appeared relatively early in prehistory (e.g., western Tennessee Valley; Smith, 1995, 1997), but temporal comparisons indicate that conflicts leading to injury increased in frequency, especially during the late prehistoric period (post-AD 1000) in the Eastern Woodlands of North America. This regional trend appears to be related to an increase in social tensions due to population increase, sedentism, increasing social complexity, and an increased focus on restricted and valued resources, especially high-yielding domesticated plant foods (Bamforth, 1994; Eisenberg, 1986; Milner, 1995; Milner et al., 1991). This evidence runs counter to the arguments raised by various authors (Ferguson, 1990, 1995; Ferguson & Whitehead, 1992) who contend that violence was either missing or minimal in precontact New World societies, having been a result of disequilibrium arising from Western contact. Certainly, social and cultural disruptions arising from contacts with expanding Western, state-level societies resulted in increased conflicts within some regions (Bamforth, 1994). The skeletal evidence from several well-studied regions indicates, however, that conflicts leading to injury were commonplace in a variety of settings. Skeletal injury resulting from violence, therefore, represents a principal indicator of environmental stress in human populations. It is important to recognize variability in violence-related injury within larger regions. For example, in the Tombigbee Valley of Alabama, a decrease in skeletal injuries due to violence coincided with increased dispersion of human settlement and increased political centralization from about AD 900 to 1550 (Steponaitis, 1991).
Thus, reduced circumscription of the population, perhaps brought about by new political forces, likely had an influence on conflict in this setting. The influence of political factors may explain why some regions undergoing an increase in population density and social complexity do not show an increase in violent trauma (e.g., Mesolithic Europe: Constandse-Westermann & Newell, 1984). Violent
injury occurs in individuals and populations who were the victims of conflict. Thus, cemetery assemblages representing the winning groups would not be expected to exhibit the frequency of injury seen on the losing end of violent encounters. The elevated prevalence of violence and injury mortality in some prehistoric and historic settings, and its possible relationship with increased population density and/or resource circumscription, is similar to patterns in recent, twentieth-century populations. For example, Relethford and Mahoney (1991) documented markedly higher rates of injury mortality in the most densely populated areas of New York State (excluding New York City). These similarities may reflect common themes between past and recent humans, such as high population density and social inequalities that serve to promote violence. Population density is a complex composite of a number of factors, such as the physical and sociocultural environments, demographic, cultural, and social influences, and individual behavior. Therefore, although these apparent similarities are informative, it is important to identify specific causal factors in specific settings before drawing general conclusions regarding the relationship between population distribution and injury mortality in humans. In some regions, clear patterns of violence have begun to emerge. For example, an increasingly hostile landscape in the millennium preceding European contact in the Eastern Woodlands generally is corroborated by archaeological evidence of an increase in defensive construction in later prehistory (Keener, 1999; Milner et al., 1991). This was a time of the rise of chiefdoms, population increase, and competition between neighboring villages. Similar patterns of increasing conflict and aggression in the final centuries before European conquest are indicated in the southern California Santa Barbara Channel islands, the American Southwest, Southeast, Great Plains, and Arctic (Lambert, 2007, 2014). Regional investigations suggest that resource productivity and climatic instability may have had a strong influence on the presence or degree of conflict in the past. This hypothesis is supported by increasing evidence for broad patterns of violence in association with climatic instability and drought in the later prehistory of North America. In particular, some of the most severe violence occurs in settings and periods where climate was unstable and prone to drought. This is not to say that climate causes violence. Rather, periods of drought were triggering events, setting off a chain of events that culminated in violent encounters between groups competing for the same limited resources. Similar increases in the evidence of violence are also documented in a wide range of Old World settings (Kennedy, 1994; various in Knüsel & Smith, 2014; Schulting & Fibiger, 2012). Overall, bioarchaeological studies indicate that violence and conflict are not random events, but are strongly influenced by extrinsic factors, such as resource depletion and competition for important resources. Dietary deprivation may have been a motivation for cannibalism. Historical records in a number of settings are informative: cannibalism may have been symptomatic of a larger pattern of animosity and aggression between groups (e.g., Saunaktuk) or a key part of ritualized violence involving sacrifice (e.g., Teotihuacan).
The study of trauma in skeletal remains reveals that, across societies, the areas of the body targeted in attacks are highly patterned. Walker (1997, 2001a) observed that in modern industrial Western societies (e.g., the United States), the head and neck are highly favored targets of attack, probably for both strategic and symbolic reasons. He argues that the face is an appealing target because the injuries are especially painful. The face and head generally bleed
profusely and bruise easily, which may symbolize the aggressor’s dominance (Walker, 2001a). This probably explains why the most highly traumatized focal point of the body in recent urban populations is the face (Allen et al., 2007; Hussain et al., 1994). Many past societies show a penchant for head injury, but these injuries are usually directed at the vault rather than the face or dentition. Dental fractures are present in archaeological remains, but they are relatively rare (Alexandersen, 1967; and see Leigh, 1929; Lukacs, 2007b; Lukacs & Hemphill, 1990, for regional studies documenting dental trauma). The location of an injury on the body provides insight into some of the details of conflict between the individuals involved. For example, many cranial injuries are found on the left side of the frontal bone or other anterior elements, indicating that a right-handed attacker successfully engaged his/her weapon while facing the victim (e.g., native Australian males). A more haphazard pattern of cranial injury (e.g., prehistoric Michigan) or a higher frequency of trauma on the right side or posterior vault indicates that injuries were sustained while the victim was fleeing the attacker or perhaps while lying prone (e.g., Wisby). This pattern is more common in women than in men in some settings (e.g., Australia, Michigan, Peru), suggesting that aggression was also directed at women. Ethnographic evidence reveals that although the aggressor was often an adult male, attacks by adult females on other females (and on males) occur in no small number in some settings (Burbank, 1994). Historical documentation indicates that children have long been a target of violent injury and death. DeMause (1974), for example, regarded child abuse as widespread in Europe prior to the eighteenth century. Yet, examination of thousands of archaeological skeletons reveals remarkably little evidence of the skeletal trauma – localized trauma-induced subperiosteal lesions in multiple stages of healing, and perimortem fracture – associated with battered-child syndrome (Fibiger, 2014; Lewis, 2014; Walker, 2001b; Walker et al., 1996; Wheeler et al., 2013). Certainly, juveniles in earlier societies were victims of homicide and violence (e.g., Ofnet, Crow Creek, Norris Farms). Juvenile skeletons, however, lack the injuries associated with long-term abuse. This suggests that, as with the pattern of facial and other injuries in twentieth-century Western societies (Love et al., 2011), child abuse resulting in severe skeletal trauma is primarily a modern phenomenon. Walker (2001b) suggests that the rise of childhood abuse is due to the loss of social controls over behavior in largely recent urban settings in comparison with the controls present in earlier and traditional societies. Technological factors are important in interpreting patterns and types of skeletal injuries. The introduction of the bow-and-arrow is linked with an increase in lethal conflict (e.g., southern California). Prior to the invention of firearms, violence-related injuries were caused primarily by projectiles, cutting, and blunt force. The skeletal record shows the presence of both lethal and nonlethal forms of trauma, thus providing essential insight into the history of interpersonal aggression both within and between past societies.
The Wisby, Towton, Crow Creek, and Norris Farms victims display numerous healed injuries (e.g., cranial depressed fractures) that reflect a long and well-established history of conflict well before the event or events resulting in widespread death. In a sense, then, these injuries foreshadowed a later act (e.g., a major battle) that resulted in more widespread violence. The study of human remains from these sites suggests that debilitating injuries, poor health, or generally high levels of physiological stress may have increased the susceptibility of a population to attack and defeat. The Norris Farms and Crow Creek skeletons display numerous
pathological indicators of stress, including iron deficiency anemia, dental defects, tuberculosis, and generalized infection, that reflect compromised health and a reduced ability to perform subsistence and other arduous tasks (Bamforth, 1994; Milner et al., 1991). Although this pattern of poor health does not explain the demise of the population, it suggests that they may have had a reduced ability to successfully mitigate hostile social environments. Some populations display fractures and other debilitating conditions that limited their ability to protect themselves or even flee a more powerful adversary (e.g., Wisby). Although both nonlethal and lethal forms of violent injury are highly prevalent in many of the populations discussed in this chapter, the dominance of one category over the other informs our understanding of the intentions of the attacker. For example, the higher prevalence of nonlethal than lethal injury in a number of settings – Australia, Santa Barbara Channel, Peru, and Easter Island – indicates that injury was meant to maim and not kill the victim. Death of the opponent was clearly the preferred outcome of attack in the Middle Ages of Europe (e.g., Wisby), the late prehistoric Great Plains and Midwest (e.g., Crow Creek, Norris Farms), the Arctic (e.g., Saunaktuk), and historic North America (e.g., Snake Hill). In prehistoric settings, it usually is not possible to determine the reasons for the preference for lethal or nonlethal forms of violence. In California, the shift to lethal forms of injury from projectiles in later prehistory may have been influenced by the change in weapons technology coupled with increasing resource stress. Clear patterns of mutilation of victims are well documented in a number of prehistoric and other New World settings. In North America, the evidence for removal of soft tissue and skin from the cranial region – especially scalping – is abundant. Typically, the scalp was removed by first slicing skin along the frontal and parietal bones and then peeling back the skin (e.g., Norris Farms, Koger’s Island), but other approaches involved removal of facial and other tissues (e.g., Saunaktuk). Mutilation was a highly visible behavior. In addition to scalps, tongues, noses, limbs, and heads were removed from the near-dead or deceased (e.g., Great Plains). A more profound mutilation of the head than scalping was decapitation. Decapitation was practiced in both the New World and the Old World, for a variety of reasons. In Roman Britain (and throughout history), it was a preferred form of execution in some groups. In northern Europe, the head of the victim was sometimes placed between the legs (Denmark, England), perhaps as the ultimate insult. Other unique and highly localized forms of body treatment of the living victim were likely practiced. For example, the gouging of knees at Saunaktuk in the Arctic may have been associated with a practice of dragging the victim through the village prior to his or her death. Trauma data are important for dispelling prejudices and assumptions about past societies. For example, hunter-gatherer societies around the world are often characterized as peaceful inhabitants of stress-free environments living in a state of blissful repose (Lee & DeVore, 1968; Service, 1966; and see discussions by Burbank, 1994; Fienup-Riordan, 1994; Walker, 2001a).
This characterization may reflect the fact that anthropologists doing fieldwork among these societies are guests – after all, what guest is going to go back home and write about the unpleasant things they observed, especially with regard to violent encounters between individuals (Erchak, 1996; Keeley, 1996)? Many anthropologists underplay the negative or offensive, avoiding realistic portrayals of social life. In fact, a number of cultures described as nonviolent or peaceful have homicide rates far exceeding those of some Western nations (Knauft, 1987).
Ethnographic research has undergone a dramatic change, with key developments showing the social, political, and economic contexts of a variety of settings and the relevance of violence to social life (Burbank, 1994; Kapferer, 2012; Stewart & Strathern, 2002). The point is not to replace peaceful characterizations of earlier societies with violent ones. Rather, these findings underscore the importance of substituting an incorrect image with one that fits the evidence, past and present. This approach is critical for informing our perspective on past groups as functioning societies rather than as images of what earlier social behavior must have been like. This newfound precision adds a more complete historical context for the study of recent human behavior. Anthropologists and others seem to employ – either consciously or unconsciously – their own cultural and social assumptions about earlier societies in order to “remember” the past (Keeley, 1996). We project these assumptions onto a past that seems to reflect current and highly biased perspectives on the condition of humankind, be they peaceful or violent. These skeletal data help us to reconstruct and interpret trauma and violence in a more comprehensive and accurate manner. This chapter has discussed the obvious skeletal correlates of violence and trauma in a wide range of societies around the world, namely injuries received in interpersonal conflict, ritual, warfare, and other settings where individuals are intentionally injured or killed. Yet, there is another kind of violence that places people at risk of harm or death, perhaps not immediately, but rather the kind that inhibits a person from maintaining homeostasis owing to social circumstances. This structural violence pertains to members of societies whose needs are not met or who are exploited for economic, political, or other reasons (Farmer, 1996). Borrowing from developments in other social sciences, Klaus (2012, 2014a) makes the case that broken bones and cutmarks are not the only reflection of violence in human remains. Specifically, he documents evidence of the impairment of human needs caused by social structures that prevent persons from reaching their biological, social, and economic potentials in prehistoric societies involving social inequality. Focusing on the colonial-era Lambayeque Valley, Peru, he draws on ethnohistoric and archaeological context in the newly established Spanish colonial state. This hierarchical structure contained strong elements of violent repression for members of the native populations in the region. The dominant European power viewed native peoples as ripe for exploitation, ruling them through a set of repressive laws under which their labor was bought, sold, and (usually) abused. This exploitative system denied certain foods and other resources while demanding heavy physical labor. The predictable outcome, of course, would be evidence of poor health, disease, physiological stress, and biomechanical demand. In these circumstances, this outcome in health is best described as structural violence. In order to test the hypothesis that structural forms of violence were in place in the Lambayeque Valley, a region colonized by Spain beginning in 1534, Klaus (2014a) compared the record of health and diet of precolonial populations with that of colonial-era populations represented by remains recovered from the Chapel of San Pedro de Mórrope, dating from the period of 1536–1750.
This record reveals a compelling picture of the outcomes of repression, including disrupted homeostasis and health decline: a remarkable increase in porotic hyperostosis (154%), enamel hypoplasia (184%), and periosteal reaction owing to infection (471%). Along with the bioarchaeological record of violence and traumatic injury associated with the Spanish conquest elsewhere in the Inca Empire (Gaither & Murphy,
2012; Murphy et al., 2010), the record of structural violence provides a window onto a rapidly changing political landscape involving conquest and population collapse.
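The percent-increase figures quoted above compare lesion prevalences between the precolonial and colonial samples, and the arithmetic is worth making explicit: a 154% increase means the later prevalence is 2.54 times the earlier one. A small worked example follows; the baseline prevalence is hypothetical, since only the relative increases are quoted here.

```python
# Arithmetic behind the percent-increase figures. The baseline value is
# hypothetical; only the relative increases are quoted in the text.
def percent_increase(before: float, after: float) -> float:
    return 100 * (after - before) / before

precolonial = 0.10             # hypothetical baseline prevalence (10%)
colonial = precolonial * 2.54  # a 154% increase over that baseline
print(f"{percent_increase(precolonial, colonial):.0f}% increase")  # -> 154% increase
```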
4.6 Summary and conclusions
A diverse range of bioarchaeological evidence helps to inform our understanding of accidental trauma and violence-related injury and their relationship to behavior and lifestyle in earlier societies. Human populations living in difficult circumstances have an elevated prevalence of skeletal injuries due to accidental causes. The skeletal record of conflict in the past is highly visible in a number of settings worldwide. The study of remains from a wide variety of contexts helps to provide a better understanding of the circumstances of violence, whether due to intra- or inter-group conflicts. Some conflict may result in cannibalism, but based on the study of human remains alone, it is difficult to identify the causes of this practice. Ritualized violence, including sacrifice and elaborate body treatment, is also highly visible in some settings. Although limited in scope, its study provides an important link between culture and treatment of the body both in life and in death. Regardless of the circumstances of death, contextual data are essential for its interpretation, including resource availability, the history of intra- and inter-group social relationships, and weapons technology. Many of the injuries we see in the archaeological past have clear analogs in the present, including cranial blunt-force trauma, knife wounds, and projectile trauma of all kinds. An analog that seems to be largely missing from the past is battered-child syndrome, in which a parent, parents, or guardians physically abuse young children under their care. The violence goes undetected for months, if not years, resulting in skeletal lesions (Walker et al., 1997). This injury, involving a distinctive pattern of periosteal reaction, is not present in archaeological remains (Walker, 2001b), suggesting that some cultural norms of behavior that exist today were not present in the past. The proximate circumstances for violence have been identified in a number of settings. For example, in the American Southwest, the surge in violence and cannibalism coincides with chronic drought. Historically, and in other archaeological settings, violence is often associated with climatic instability that reduces food from crop production. Today, many populations in developed nations live in circumstances where cultural buffering ensures that environment and food supply are not dramatically affected by climatic swings. This underscores the point that climate change and violence are not necessarily linked in a simple cause-and-effect manner. On the whole, however, the record, past and present, documents a clear causal link between climate and conflict (Hsiang et al., 2013). The bioarchaeological record of violence and traumatic injury provides an important dimension to our growing understanding of different levels of interpersonal interactions, ranging from conflict between small groups competing for limited resources (e.g., Santa Barbara Channel Islands, California; Norris Farms, Illinois; Talheim, Germany) to full-scale subjugation, population expansion, and empire building (e.g., Wari, Inca, and Aztec empires). The “success” of these endeavors is difficult to measure, as both winners and losers display evidence of traumatic injury.
However, when viewed in context and drawing upon a range of social, economic, and environmental evidence, the bioarchaeological record reveals key findings that help us develop a more informed understanding of the origins and evolution of violence and its various forms, both overt and structural.
5 Activity patterns: 1. Articular degenerative conditions and musculoskeletal modifications

5.1 Introduction
Physical activity is a defining characteristic of human adaptations. Hunter-gatherers, for example, are often characterized as highly mobile, hard-working, and physically active. In contrast, agriculturalists are sometimes seen as having an easier life – they are settled in one place and have a lighter workload than hunter-gatherers. In his popular and influential archaeology textbook, Robert Braidwood (1967:113) characterized hunter-gatherers as leading “a savage’s existence, and a very tough one...following animals just to kill them to eat, or moving from one berry patch to another (and) living just like an animal.” Ethnographic and other research calls into question these simplistic portrayals of economic systems. Following the publication of Lee and DeVore’s (1968) Man the Hunter conference volume, and especially Lee’s (1979) provocative findings regarding work behavior and resource acquisition among the !Kung in northern Botswana, a consensus emerged that, contrary to the traditional Hobbesian depiction of hunter-gatherer lifeways as “nasty, brutish, and short,” prehistoric foragers were not subject to overbearing amounts of work, and life overall for them was leisurely, plentiful, and confident (Sahlins, 1972). More importantly, these developments fostered a wider discussion by anthropologists and other social scientists of activity, behavior, and lifestyle in both present and past hunter-gatherers (Kelly, 2013). These discussions led to the conclusion that human adaptive systems are highly variable. As a result, it is now clear that it is not possible to make blanket statements about the nature of workloads or other aspects of lifestyle in foragers and farmers (Kelly, 1992, 2013; Larsen, 1995). Rather, workload and lifestyle are highly influenced by the kinds of resources exploited, climate, and sometimes highly localized circumstances. Nevertheless, there are some general patterns that emerge via bioarchaeological study of past human populations, which this chapter will discuss, in part. Workload and activity have enormous implications for the demographic history of a population. The study of living humans indicates, for example, that demanding physical activity in reproductively aged females results in reduced ovarian function and fecundity (Dufour, 2010; Ellison, 1994; Jasienska, 2010). Thus, the identification of workload and patterns of physical activity from the study of human remains may provide indirect reflections of variation in birthrates and fertility in some past populations. Of course, we cannot observe and document levels of ovarian function in past populations directly. However, human remains offer a fund of data for documenting and inferring patterns and level of workload and other aspects of lifestyle that involve physical activity. Specifically, the study of pathological and nonpathological changes of articular joints and behaviorally related modifications of nonarticular regions offers important insights that are not available from any other record derived from archaeological settings.
5.2 Articular joints and their function
Two types of articular joints, amphiarthrodial (symphyses) and diarthrodial (synovial), are important for interpreting pathological and other modifications in a behavioral context. Amphiarthrodial joints are somewhat mobile but serve primarily to stabilize specific regions of the skeleton (e.g., the pubic symphysis for the anterior pelvis, the intervertebral bodies for the spine). The ends of bones constituting diarthrodial joints (e.g., knee, interphalangeal, elbow) articulate with each other within a fibrous capsule, and the articular surfaces are covered with highly lubricated hyaline cartilage. Depending upon the shape of the articular surfaces, the anatomy of the capsule, and the ligamentous connections across the joint, freedom of movement is extensive. Thus, in addition to providing some stability, diarthrodial joints function primarily in mobility roles, such as extension and flexion of the interphalangeal joints for grasping, and extension and flexion of the knees for walking and running. Like any other biological material, the components of these tissues deteriorate, expressing general and specific pathological changes over the course of a person’s lifetime.
5.3 Articular joint pathology: osteoarthritis
Articular joint deterioration involving bone and associated tissues expresses itself through multiple conditions, causes, and circumstances. The most common degenerative condition is a group of joint diseases called osteoarthritis (Felson, 2000; Pritzker, 2003). The clinical, epidemiological, and research literatures present a confusing array of terms, definitions, and little consensus on etiology. All agree that it is a multifactorial degenerative disorder involving focal, progressive loss of articular (hyaline) cartilage, often accompanied by marginal (osteophyte) lipping and articular surface deterioration from direct bone-on-bone contact (Felson, 2000). Osteophyte formation is the most common expression in archaeological settings, and likely represents an adaptive response to joint instability (van den Berg, 1999). All authorities also agree that osteoarthritis and its manifestations represent a pattern of responses to various predisposing factors, including both genetic and environmental/behavioral causes (Corti & Rigon, 2003; Felson, 2000, 2003; Flores & Hochberg, 2003; Issa & Sharma, 2006; Manek & Spector, 2003; Sharma, 2001; Valdes & Spector, 2008; Zhang & Jordan, 2008). There is considerable disagreement on the relative importance of these factors, but a mechanical loading environment due to activity features prominently (Block & Shakoor, 2010; Felson, 2000; Hough, 2001; Jordan, 2000; Jordan et al., 1995; Moskowitz et al., 2004; Radin, 1982, 1983; Radin et al., 1972, 1991). The mechanical influences commanding the most agreement are excessive body weight and activity (Abbate et al., 2006; Felson et al., 1988; Hart & Spector, 1993; Melanson, 2007; Sowers & Karvonen-Gutierrez, 2010; Stürmer et al., 2000). Clinical and epidemiological studies indicate a greater incidence of osteoarthritis in obese individuals, especially in the weight-bearing joints, most often the knee and hip (Abbate et al., 2006; Felson et al., 1988; Jordan et al., 2007; Melanson, 2007; Sharma, 2001; Sowers & Karvonen-Gutierrez, 2010; Stürmer et al., 2000). The mechanical stress argument is supported by various findings. For example, industrial laborers show patterns of articular degeneration in relation to particular physical activities in the workplace. Strenuous lifting by miners causes articular change in the hips, knees, and vertebrae (Anderson et al., 1962; Kellgren & Lawrence, 1958; Lawrence, 1977); use of pneumatic tools
by shipbuilders and others results in similar modifications (Lawrence, 1955, 1961); lifting of long tongs to move hot metals by foundry workers results in degenerative changes in the elbow (Hough & Sokoloff, 1989); and repetitive activity involving the hands in cotton mill workers results in distinctive patterns of osteoarthritis (Hadler, 1977; Hadler et al., 1978). Other findings for manual laborers, farmers, ballet dancers, various types of athletes, and those who engage in rigorous exercise generally support these observations (Coggon et al., 1998; Cooper et al., 1994; Croft, Coggon et al., 1992; Croft, Cooper et al., 1992; Felson et al., 1988; Forsberg & Nilsson, 1992; Lawrence, 1977; McKeag, 1992; Nakamura et al., 1993; Stenlund, 1993).

Epidemiological findings provide important corroboration for conclusions linking mechanical demand and osteoarthritis. Comparisons reveal markedly higher prevalence of knee and hip osteoarthritis in rural North Carolina populations than in the (primarily urban) United States population as a whole (hip: 25.1% vs. 2.7% in the 55–64 age cohort; Jordan et al., 1995). These differences suggest the greater physical demands of the rural lifestyle in the modern United States. The same pattern of variation is expressed in at least one archaeological context: comparison of rural and urban populations from ancient Corinthia, Greece, reveals a generally higher prevalence in the former than the latter, reflecting more strenuous physical labor (Rife, 2012). Rife (2012) suggests that the greater prevalence of shoulder and vertebral osteoarthritis in the rural individuals reflects their exposure to agricultural work, such as field labor, tending livestock, and related tasks.

The links between physical activity and osteoarthritis are not straightforward, however. The hand bones of weavers from the Spitalfields, London skeletal series have no more osteoarthritis than hand bones from the general sample (Waldron, 1994; and see various citations in Jurmain, 1999). Manual laborers in this series have no more or less osteoarthritis than the population as a whole. These findings, together with a survey of inconsistencies in the epidemiological literature, led Waldron to conclude “that there is no convincing evidence of a consistent relationship between a particular occupation and a particular form of osteoarthritis” (1994:94). On the other hand, in some unusual circumstances there appears to be a pattern of articular modifications that links with known and highly specific physical activities (Ciranni & Fornaciari, 2003), or age-related degeneration of a particular joint or joints of the skeleton in relation to a particular activity regimen (Stirland, 2002; and see Waldron, 2012). Thus, while articular pathology relating to activity offers important insight into behavioral characteristics of human populations in a general sense, the identification of specific activities or occupations from individual remains is the rare exception to the general rule.

As with other chronic diseases and disorders, environmental conditions in the prenatal period, infancy, and childhood (e.g., low birth weight, poor nutritional status) have been implicated (Jordan et al., 2005; Melanson, 2007; Peterson et al., 2010). Epidemiologists and anthropologists observe a great deal of worldwide variation in osteoarthritis in relation to age (Corti & Rigon, 2003).
For example, young adults and older juveniles in some human populations express a relatively high frequency of osteoarthritis (Chapman, 1972; Chesterman, 1983; Larsen, Ruff et al., 1995; Rojas-Sepúlveda et al., 2008). In urbanized industrial societies, osteoarthritis rarely occurs before the age of 40 years (Arden & Nevitt, 2006), a pattern that is clearly different from the archaeological record showing a much earlier age of
onset (Larsen & Hutchinson, 2010). Regardless of cause, the record shows consensus on one key point: owing to disability, the economic, social, and behavioral costs of osteoarthritis are substantial, in light of the cascade of negative health outcomes that follows loss of movement (Corti & Rigon, 2003; Sharma, 2001).

In some respects, epidemiological studies provide an important baseline for interpreting osteoarthritis in past human populations. For example, the very large samples present a compelling picture of a huge amount of variation. However, much of the literature is hampered by its focus on the comparison of folk racial groups (African American vs. White vs. Asian), which have little to do with population biology or modern concepts of human variation (Allen, 2010). Moreover, bioarchaeological and epidemiological studies are not strictly comparable in that the latter are almost always based on clinical contexts – radiological examinations or patient interviews – which do not identify the subtle degenerative changes seen in the actual skeletal specimens that bioarchaeologists study. In addition, clinical evaluations include factors such as joint capsule spacing. Thus, hard tissue changes observed in the clinical setting are not strictly comparable to those observed in archaeological or other types of skeletal collections.

The pathophysiology of osteoarthritis is complex and incompletely understood, especially regarding the relationship between hyaline cartilage and bone changes. Some have argued that changes in cartilage – including fibrillation or tearing – precede bony responses; others contend that minute changes in subchondral bone precede cartilaginous changes (Radin, 1982). For archaeological remains, the exact order of tissue response to mechanical stress is immaterial: regardless of the order of events, the skeletal changes arising from osteoarthritis are universal, including proliferative exophytic growths of new bone on joint margins (“osteophytes” or “lipping”) and/or erosion of bone on joint surfaces (Figure 5.1). In some joints, failure of the cartilaginous tissue covering the articular surface results in pitting or rarefaction of the surface. In instances where the cartilage has disintegrated altogether, the articular surface becomes polished due to direct bone-on-bone contact (Figure 5.2). Because the surface has a glistening appearance reminiscent of ivory, the polished area is called eburnation. In the hinge joints of the knee and elbow, deep, parallel grooves may be present on the eburnated surface (Ortner, 2003; Rogers & Waldron, 1995). The presence of eburnation indicates that although the articular cartilage was missing, the joint was still actively used at the time of death (Rogers & Dieppe, 2003).

Osteophytes vary from fine, tuft-like, barely perceptible protrusions to large projections of spiculated bone. Even in the extreme, mobile diarthrodial joints do not usually fuse. In spinal osteoarthritis, however, the marginal osteophytes of two adjacent vertebrae may unite, forming a bridge of continuous bone. This change (ankylosis) is accompanied by reduction in the disk space separating the two vertebral bodies and, hence, marked reduction in mobility of the spine. Compression or crush fracture of anterior vertebral bodies – an occasional concomitant of spinal osteoarthritis – gives them a wedge-shaped appearance (Figure 5.3).
Additionally, herniation of the intervertebral disk results in irregular depressions on intervertebral body surfaces called Schmorl’s depressions (Ortner, 2003; Schmorl & Junghanns, 1971) (Figure 5.4).

Biological anthropologists, anatomists, and others have systematically collected data on osteoarthritis for more than a century. Wells referred to osteoarthritis as “the most useful of all diseases for reconstructing the life style of early populations” (Wells, 1982:152).
Figure 5.1 Pathognomonic indications of osteoarthritis at various joints: deformation of the shoulder joint visible on both the right scapula (left) and humerus (right) (a); new bone growth, marginal osteophytes, and eburnation on the humerus (left), ulnae (center), and radius (right) (b); marginal osteophytes and pitting at the wrist (c); eburnation and osteophytosis on the femoral knee surface (d); pitting and osteophytosis of cervical (e) and lumbar (f) vertebrae; Morropé, Peru. (From Klaus et al., 2009; reproduced with permission of the authors and John Wiley & Sons, Inc.)
Osteoarthritis is present in all human populations, and regardless of etiology, the patterns documented and interpreted by bioarchaeologists provide a picture of the cumulative effects of mechanical stress and age on the body in different human groups. Owing to the lengthy history of study, as well as to its ubiquity in skeletal samples, there is a voluminous literature on frequencies and prevalences in both living and past human groups (Bridges, 1992).
Figure 5.2 Distal right humerus showing eburnation (osteoarthritis); anatomical specimen. (From Larsen, 1987; photograph by Barry Stark; reproduced with permission of Academic Press, Inc.)
There is also considerable disagreement among paleopathologists about the diagnosis and meaning of osteoarthritis in archaeological remains. On the one hand, some authorities argue that osteoarthritis is present only when eburnation is clearly manifested; alternatively, the presence of at least two other pathological conditions (i.e., marginal lipping and surface porosity) may be considered diagnostic (Waldron, 2009). Others regard the condition as present if any of the pathological changes described in the biomedical/pathology literature are visible (Hemphill, 2010). The former approach is influenced by clinical contexts, in which manifestations are viewed indirectly via various means of imagery, whereas the latter is informed by the archaeological context, in which manifestations are observed on the bone directly. Moreover, some have downplayed the significance of osteoarthritis for lifestyle reconstruction. Indeed, it is usually not possible to reconstruct specific habitual activities from osteoarthritic patterns. However, looking at the general picture of lifestyle by observation of multiple joints, prevalence, and severity – all within the context of population and demographics – provides important perspective. The central point is that this data set can provide meaningful perspective on the relative demands of particular lifestyles. For example, it is highly unlikely that sedentary United States populations in the twenty-first century exhibit the same profile of osteoarthritis as Great Basin foragers.
Figure 5.3 Collapsed thoracic vertebrae; Cochiti, New Mexico. (Photograph by Clark Spencer Larsen.)
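To make the competing scoring conventions described above concrete, the short Python sketch below encodes both rules as functions over three binary observations per joint. The function names, the choice of exactly three indicators, and the example record are hypothetical illustrations of the logic, not a published recording protocol.

    def oa_present_strict(eburnation, lipping, porosity):
        """Present only with eburnation, or with at least two other changes
        (e.g., marginal lipping plus surface porosity), in the spirit of the
        operational approach attributed to Waldron (2009)."""
        return eburnation or (lipping and porosity)

    def oa_present_inclusive(eburnation, lipping, porosity):
        """Present if any visible pathological change is observed, in the
        spirit of the approach attributed to Hemphill (2010)."""
        return eburnation or lipping or porosity

    # The same joint can be counted differently under the two rules:
    joint = {"eburnation": False, "lipping": True, "porosity": False}
    print(oa_present_strict(**joint))     # False
    print(oa_present_inclusive(**joint))  # True

Applied across a whole skeletal series, the inclusive rule will necessarily yield higher prevalence estimates than the strict rule, which is one reason prevalences recorded by different researchers are not directly comparable.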
5.3.1 Population-specific patterns of osteoarthritis
Early hominins

Osteoarthritis is present in the earliest hominins, providing an important perspective on activity patterns in the remote past. The three-million-year-old Hadar australopithecine, A.L. 288-1 (“Lucy”), displays a distinctive anteroposterior elongation of thoracic vertebral bodies, marginal lipping, disk space reduction, and intervertebral disk collapse (Cook et al., 1983). These modifications reflect an extraordinarily demanding activity repertoire, including lifting and carrying. The conspicuous anteroposterior vertebral body elongation may be caused by various activities that involve extreme ventral flexion of the body trunk.
Figure 5.4 Schmorl’s depression on the superior surface of a thoracic vertebral body; Lambayeque, Peru. (Photograph by Sam Scholes.)
A number of Neanderthal skeletons have distinctive patterns of osteoarthritis that are useful for reconstructing posture and activity in the late Pleistocene, providing a context for interpreting behavioral antecedents to modern humans. Based on his study of the La Chapelle-aux-Saints skeleton, Boule reconstructed the individual “as an almost hunchbacked creature with head thrust forward, knees habitually bent, and flat, inverted feet, moving along with a shuffling, uncertain gait” (Straus & Cave, 1957:348). This image of Neanderthal locomotion served as a model for behavioral reconstruction, and it reinforced the popular image of Neanderthals as less than human. Straus and Cave (1957) suggested that Boule misinterpreted key aspects of the anatomy of the skeleton and overlooked the possibility that severe osteoarthritis may have prevented the individual from normal perambulation. Analysis of the La Chapelle skeleton reveals the presence of widespread degenerative pathology (especially marginal lipping) involving the temporomandibular joint, the occipital condyles, the lower cervical vertebrae, and the thoracic vertebrae (T1–T2 exhibit eburnation, and T6, T10, and T11 have possible eburnation) (Trinkaus, 1985). The left acetabulum shows extreme lipping and eburnation. Although the right acetabulum is missing, the head of the right femur is normal, suggesting that the hip osteoarthritis was unilateral. The severe osteoarthritis in the left hip suggests that it would have been painful for the individual to walk or run.
The overall pattern of degenerative pathology indicates that locomotor abilities may have been somewhat limited, but certainly not in the manner imagined by Boule (Trinkaus, 1985). Osteoarthritis is also extensive in the Neanderthal adults (n = 6) from Shanidar, Iraq (Trinkaus, 1983). The widespread nature of articular pathology in these individuals reflects a highly physically demanding lifeway for these archaic Homo sapiens. This conclusion is confirmed by other lines of evidence, such as the high overall robusticity and bone strength of these hominins (Lovejoy & Trinkaus, 1980; Ruff et al., 1993; Trinkaus, 1984; Trinkaus & Ruff, 1999, 2012; and see Chapter 6).
Hunter-gatherers in marginal settings: Sadlermiut Eskimos and Great Basin foragers

The most comprehensive and contextually based bioarchaeological study of osteoarthritis is the investigation of Sadlermiut Eskimo (Southampton Island, Northwest Territories) skeletons by Merbs (1983). Skeletons in this series display a distinctive patterning of degenerative articular pathology, which generally matches ethnographically documented activities. Adult males show bilateral osteoarthritis of the acromioclavicular joint, which is involved mostly in the elevation of the arm, and hypertrophy of the deltoid tuberosity of the proximal humerus. A number of potential activities might cause this distinctive pattern of articular pathology and skeletal morphology, but kayak paddling is the most likely: extreme loading of the shoulder and upper arm during kayaking likely contributed to this highly specific pattern of osteoarthritis (Merbs, 1983). Sadlermiut adult females have high levels of degenerative changes in the temporomandibular joint – twice the prevalence of that in males. This pattern suggests heavy loading of the mandible, especially in women. As documented ethnographically, adult females habitually softened animal hides with their dentitions, which may have contributed to deterioration of this joint (Merbs, 1983). Both adult females and males have a high prevalence of postcranial osteoarthritis, which reflects their physically demanding lifestyles. For example, widespread and severe vertebral osteoarthritis indicates that the backs of both sexes were subjected to marked compressive forces, such as those that occur during sledding and tobogganing.

Assessment of osteoarthritis prevalence in prehistoric adult males and females from the American Great Basin similarly contributes to a developing understanding of lifestyle in marginal settings and to the ongoing debate about workload in harsh environments (Larsen, Ruff et al., 1995). Archaeologists have proposed two competing hypotheses regarding subsistence strategies and resource acquisition in the region (Thomas, 1985). One hypothesis states that prehistoric native populations pursued a limnosedentary exploitive strategy whereby food and other resources were obtained primarily in the ecologically rich, circumscribed wetland areas that punctuate the desert landscape, resulting in a sedentary lifeway. Alternatively, the limnomobile hypothesis contends that these wetlands did not provide sufficient resources for the support of native populations, at least on a full-time basis; they are subject to occasional resource crashes arising from droughts and floods. From this point of view, native populations relied on marsh resources in part, but spent significant amounts of time collecting and transporting foods recovered from upland settings in the nearby mountains and elsewhere. The implication of the former hypothesis is that the more sedentary, wetlands-focused adaptation involved less mechanical stress than a nonsedentary lifestyle; the limnomobile hypothesis is built on the premise that carrying of
supplies and long-distance travel were physically demanding, requiring heightened strength and endurance (Larsen, Ruff et al., 1995).

In order to determine which of the two models best characterizes adaptive strategies of native populations in the Great Basin, Larsen, Ruff, and coworkers (1995, 2008) assessed the pattern and prevalence of osteoarthritis in prehistoric human remains recovered from the Stillwater Marsh region, a large wetlands area in western Nevada. Analysis of these remains revealed an abundance of osteoarthritis. Most adults, including all individuals over the age of 30 years, have osteoarthritis in at least one, and usually multiple, articular joints. Articular pathology in older adults involves severe proliferative lipping on joint margins, eburnation, vertebral compression fractures, and Schmorl’s nodes. Contrary to expectations of the limnosedentary model, these findings suggest that hunter-gatherers in this setting led extremely demanding lives. The high prevalence of osteoarthritis suggests elevated mechanical demand, such as in heavy lifting and carrying. These findings also imply that prehistoric groups may not have been tethered to the marsh, but rather exploited a wide range of resources from both the marsh and the surrounding uplands. Beyond concluding that the Great Basin lifeway was physically demanding, however, it is not possible to state whether these populations were sedentary or mobile from osteoarthritis evidence alone. Analysis of long bone structural morphology is more informative on this point (Larsen et al., 2008; Ruff, 2010b; and see Chapter 6).

Similar assessment of osteoarthritis in the Malheur wetlands in the northern Great Basin (Oregon) reveals that, like the Stillwater series, there is a strikingly high level of articular joint pathology, especially in comparison with foragers and later farmers from the Georgia Bight (Hemphill, 2010) (Figure 5.5). Interestingly, the prevalences for each articular joint in the two Great Basin series are statistically indistinguishable (chi-square; P > 0.05). The high values for both series speak to the likelihood that populations in both settings – Stillwater in the western Great Basin and Malheur in the northern Great Basin – pursued very similar adaptive strategies, focusing on largely the same resources, similar means of acquiring them, and comparable levels of activity and workload (Hemphill, 2010).
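As a minimal sketch of the kind of chi-square comparison reported by Hemphill (2010), the Python fragment below tests whether two series differ in the proportion of individuals affected at a given joint. The counts are hypothetical and chosen only for illustration, not drawn from either series.

    from scipy.stats import chi2_contingency

    # rows: Stillwater, Malheur; columns: affected, unaffected (hypothetical)
    table = [[34, 16],
             [29, 18]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
    # A p-value above 0.05, as here, is consistent with the report that the
    # two Great Basin series are statistically indistinguishable joint by joint.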
Population comparisons

These studies of archaic hominins and modern foragers underscore the highly variable nature of osteoarthritis. In order to assess general patterns of and variation in past physical activities on a broad basis, comparisons of many different skeletal samples are necessary. Bridges (1992) attempted a broad-scale analysis by reviewing published studies on appendicular (shoulder, elbow, hip, knee) and axial (vertebral) osteoarthritis in native populations from North America. Among the 25 skeletal samples included in her review, osteoarthritis shows the highest prevalence in the knee for 17 samples; elbow osteoarthritis is either the most or next most prevalent for 15 samples. No clear association between osteoarthritis and subsistence mode emerges in comparing hunter-gatherers and agriculturalists. However, agriculturalists tend to have a low prevalence in the wrists and hands, though not all foraging groups have high levels in these joints. For nearly all populations reviewed, ankle or foot osteoarthritis is less common than hand osteoarthritis.
Figure 5.5 Frequency and distribution of osteoarthritis among males and females in the Great Basin (Malheur Lake and Stillwater Marsh) and Georgia coast preagricultural and agricultural populations. (Separate panels for males and females plot the percentage of individuals affected by joint region, from the cervical vertebrae through the foot.)
The comparison of different populations in published findings (Bridges, 1992) contributes to an understanding of variation in work burdens and activity. However, these comparisons are limited by the variable nature of the methods of data collection used by different researchers (see discussions by Bridges, 1993; Lovell, 1994; Waldron & Rogers, 1991). This factor alone may prevent investigators from identifying clear diachronic trends or population differences in osteoarthritis prevalence when comparing findings reported by different researchers (Cohen, 1989). Data collection and population comparisons by the same researcher, or by researchers sharing the same methods, circumvent this problem. These types of comparisons provide an important perspective on the general characteristics of different lifestyles, especially with regard to workload and level of mechanical demand on the body.
Table 5.1 Frequency of osteoarthritis in right articular joints expressed by severity (Adapted from Jurmain, 1980: Table 5)

                 White              Black              Pecos              Eskimo
Joint            Moderate  Severe   Moderate  Severe   Moderate  Severe   Moderate  Severe
Males
  Knee             27.0     3.0       38.2     4.5       29.3     1.7       32.4    13.5
  Hip              51.0     2.9       47.3     1.8       20.7     2.3       35.2     2.8
  Shoulder         47.3     1.1       50.9     3.8       33.3     1.5       53.6     0.0
  Elbow            12.5     5.8       19.8     5.2       11.4     3.8       31.1    18.0
Females
  Knee             35.6    10.9       31.9    18.6       16.1     0.0       32.0     4.0
  Hip              37.4    13.1       47.8     7.8       20.7     0.0       22.4     1.7
  Shoulder         44.3     8.2       53.6     8.9       22.2     0.0       23.1     2.6
  Elbow            12.7     1.0       21.7     0.9       10.4     3.0       22.0     7.3
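As a simple illustration of how such tabulated values can be summarized, the short Python sketch below encodes the male severe-expression percentages from Table 5.1 and averages them across the four joints. Averaging across joints is purely an illustrative device for ranking the samples, not a procedure used by Jurmain (1980).

    # percent of right joints scored "severe" in males (from Table 5.1)
    male_severe = {
        "White":  {"Knee": 3.0,  "Hip": 2.9, "Shoulder": 1.1, "Elbow": 5.8},
        "Black":  {"Knee": 4.5,  "Hip": 1.8, "Shoulder": 3.8, "Elbow": 5.2},
        "Pecos":  {"Knee": 1.7,  "Hip": 2.3, "Shoulder": 1.5, "Elbow": 3.8},
        "Eskimo": {"Knee": 13.5, "Hip": 2.8, "Shoulder": 0.0, "Elbow": 18.0},
    }

    for group, joints in male_severe.items():
        mean_severe = sum(joints.values()) / len(joints)
        print(f"{group:>7}: mean severe prevalence = {mean_severe:.1f}%")

    # Output ranks Eskimo highest (8.6%) and Pecos lowest (2.3%), matching
    # the contrast drawn in the text.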
In a classic investigation of variation in degenerative articular pathology, Jurmain (1977a, 1977b, 1978, 1980) assessed osteoarthritis patterns in the appendicular skeleton (shoulder, elbow, hip, and knee) in a range of populations, including American Whites and Blacks (Terry Collection), Eskimos (Alaska), and Native Americans (Pecos Pueblo, New Mexico). Eskimos have a higher prevalence and severity of osteoarthritis than do American Whites and Blacks or Pecos Pueblo Native Americans; Pecos Pueblo adults have the lowest prevalence and severity among the four groups (Table 5.1). These population differences reflect the highly variable mechanical demands associated with contrasting lifestyles and subsistence strategies. For example, mechanical demands for the Pecos Pueblo agriculturalists may have been mostly limited to the growing season, whereas Eskimos were subjected to high levels of activity throughout the year (Jurmain, 1977a; and see Merbs, 1983).

The impact of specific lifestyles and occupations on patterns of degenerative articular pathology in various colonial and postcolonial North American populations has received considerable attention from biological anthropologists. These studies reveal that in many settings, physical activities were highly demanding. African Americans from a range of circumstances provide a growing record of lifestyle and activity in urban and rural settings. For example, individuals in the First African Baptist Church (urban Philadelphia) cemetery display extensive degenerative spinal pathology, including osteoarthritis (males, 69%; females, 39%) and Schmorl’s nodes (males, 31%; females, 13%) (Parrington & Roberts, 1990; and see Angel et al., 1987; Davidson et al., 2002). Similarly, in the African Burial Ground (urban New York), osteoarthritis is highly prevalent, with the lumbar vertebrae especially affected (45% of individuals aged 15–24 years; Wilczak et al., 2009). These prevalences are higher than those of a contemporary African American population from a rural setting in Cedar Grove, Arkansas (compare with Rose, 1985), and such differences
suggest that the urban lifestyle was far more mechanically demanding than the rural lifestyle. The differences in degenerative joint pathology between the urban and rural settings may be due to specific differences in habitual activities. For example, historical records indicate that individuals interred in the Philadelphia cemetery held unskilled, physically demanding jobs (see also other settings of African Americans in relation to mechanical environment: Davidson et al., 2002; Kelley & Angel, 1987; Owsley et al., 1987; Rathbun, 1987; Rathbun & Steckel, 2002; Thomas et al., 1977).

Highly demanding circumstances are also inferred from the study of osteoarthritis in pioneer Euroamericans living on the rural frontier of the American Midwest and Great Plains. Euroamerican adults from Illinois and Texas have remarkably elevated prevalences of osteoarthritis and highly developed muscle attachment sites on limb bones (Larsen, Craig et al., 1995; Winchell et al., 1995). Articular degenerative pathology includes extensive marginal lipping on weight-bearing and nonweight-bearing joints, eburnation, and extensions of articular surfaces (e.g., anterior femoral head and neck). High prevalence of nonspecific physiological stress indicators (e.g., enamel defects) and historical evidence indicate that life on the early American frontier was generally unhealthy and physically demanding. Numerous historical accounts from the early to mid-nineteenth century discuss the extremely hard physical labor that pioneer families endured, especially in preparing fields and tending and harvesting crops (see discussion in Larsen, Craig et al., 1995).

Degenerative joint pathology among war casualties is especially revealing about physical activity in military contexts, mostly drawn from rural, frontier circumstances. Many of the Euroamerican skeletal remains from the War of 1812 Snake Hill cemetery near Fort Erie, Ontario, and from the Little Bighorn battlefield, Montana, display Schmorl’s nodes on vertebral bodies at an unusually high prevalence, affecting the majority of individuals (Owsley, 1991; Scott & Willey, 1997). Some individuals from Snake Hill have multiple Schmorl’s nodes: six individuals have five or more vertebrae with nodes, and one soldier has pronounced nodes in 11 vertebrae. In addition, several individuals have vertebral compression fractures resulting from excessive mechanical loading of the back. Similarly, military recruits serving aboard the mid-sixteenth-century warship Mary Rose display a high prevalence of Schmorl’s depressions (thoracic vertebrae: 26.7%) (Stirland, 2002; and compare with Knüsel, 2000). These recruits, composed mostly of adolescents and young adults, were subjected to heavy mechanical loading in general, and of the back specifically, in a range of activities, including lifting in quite confined spaces. In these military settings, the elevated level of vertebral pathology indicates that pre-modern military recruits were subjected to excessive loading of their spines, such as from lifting heavy military hardware, carrying heavy loads, and participating in rigorous activity regimens.

In sharp contrast to these settings, the largely sedentary urban population from nineteenth-century Belleville, Ontario, presents a remarkably low prevalence of osteoarthritis, including in the spine (Saunders et al., 2002). This speaks to the relatively low amount of physical activity in comparison with the rural and military individuals described earlier.
In addition, a poorhouse population from Rochester (New York) displays a relatively low prevalence and severity of osteoarthritis, although not as low as that of Belleville (Higgins et al., 2002).
Weaponry, food acquisition, and food processing

Ortner’s (1968) classic study of elbow (distal humerus) osteoarthritis in Arctic and Peruvian Indian (Chicama valley) populations reveals highly contrasting patterns that reflect different uses of the upper limb in food acquisition. Arctic populations display a greater prevalence of degenerative changes – marginal proliferation and articular surface destruction – than Peruvian Indians (18% vs. 5%). Arctic samples also show a distinctive bilateral asymmetry in degenerative pathology: right elbows are far more arthritic than left elbows. Right-sided dominance of osteoarthritis is due to the greater use of the right arm than the left, such as in spear throwing with throwing boards (atlatls) by predominantly right-handed hunters (see also Kricun, 1994; Merbs, 1983; Webb, 1989, 1995).

The prolonged use of weapons over the course of an individual’s lifetime, such as the bow-and-arrow or atlatl, may also contribute to degeneration of the elbow joint. Angel (1966b) first described the “atlatl elbow” in a skeletal series from the Tranquility site, California. He speculated that the atlatl facilitates a faster spear throw without involving extension and abduction of the shoulder; extension is primarily limited to the elbow. Consistent with his hypothesis, Tranquility shoulder joints display very little degenerative pathology, but elbow osteoarthritis is severe (Angel, 1966b).

In order to document the shift in weapons technology from the atlatl to the bow-and-arrow, Bridges (1990) assessed patterns of upper limb osteoarthritis in early (Archaic) and late (Mississippian) prehistoric populations from the Pickwick Basin, Alabama. She suggested that only one upper limb, and specifically the elbow joint, is involved in use of the atlatl (Angel, 1966b), whereas both left and right upper limbs, and the elbow and shoulder joints of each, are involved in use of the bow-and-arrow. Thus, the joints of the upper limb should show different distributions of osteoarthritis reflecting either an atlatl pattern (unilateral, elbow) or a bow-and-arrow pattern (bilateral, elbow and shoulder). Because males in most human societies are responsible for hunting, they should show a higher prevalence of osteoarthritis than females. As expected, early prehistoric males have a higher prevalence of elbow osteoarthritis than late prehistoric males, a pattern that probably reflects the use of the atlatl in the earlier group and the bow-and-arrow in the later group. Contrary to expectations, both temporal groups display slight right dominance of osteoarthritis, and early prehistoric females have the highest frequency of right-dominant elbow osteoarthritis. These findings provide mixed support in this setting for the link between weapons use and degenerative articular pathology.

Angel’s and Bridges’s studies indicate that some groups using the atlatl have a distinctive pattern of elbow osteoarthritis (e.g., Eskimos), whereas others do not (e.g., Pickwick Basin). These differences may reflect the relative importance or intensity of specific activities (Bridges, 1990). For example, traditional Eskimo diets are heavily dominated by meat, acquired exclusively (or nearly so) by hunting over the course of the entire year; atlatl use was thus highly intensive. Early prehistoric Indians living in the Pickwick Basin had a far more diverse diet that was acquired only partially by hunting.
For much of the summer and spring, native populations utilized riverine resources (e.g., fish) and various flood-plain plants (e.g., edible seeds); hunting was practiced mostly during the winter. Therefore, the very different pattern of elbow osteoarthritis in the Pickwick Basin populations cannot be attributed solely to use of the atlatl or the bow-and-arrow. Rather, a range of activities likely contributed to the patterns of upper limb osteoarthritis (Bridges, 1990).
In contrast to the pattern of right-side dominance of osteoarthritis in upper limbs (Merbs, 1983; Webb, 1995), some groups display bilateral symmetry. Elbow osteoarthritis in native populations from Chavez Pass, Arizona, is highly prevalent and bilaterally symmetric (Miller, 1985). In this setting, mechanical loading of both elbows while processing maize with grinding implements – pushing manos against metates with the hands – involves equal use of the left and right upper limbs (Miller, 1985; and see Merbs, 1980). In traditional Southwestern native societies, females are responsible for this activity. Thus, the relatively higher frequency of such arthritis in adult females in the Chavez Pass series reflects the role of women in food preparation.
Horseback riding

The horse was an important mode of transport for many Holocene societies, in the Old World and later in the New World following European contact, and into the early twentieth century. Some populations show articular degenerative sequelae of an equestrian lifestyle in the limited number of settings studied by bioarchaeologists (Dutour & Buzhilova, 2014; Edynak, 1976; Larsen, Craig et al., 1995; Owsley et al., 2006; Pálfi, 1992; Reinhard et al., 1994). Following the introduction of the horse to the American Great Plains by Europeans, native populations relied on this animal as the key element in the acquisition of resources. Patterns of osteoarthritis attributed to horseback riding include a high frequency of degenerative changes in the vertebrae and pelves of adult males in historic-era Omaha and Ponca from northeastern Nebraska, along with other skeletal features that are best explained by mechanical loading of specific joints during horseback riding (Reinhard et al., 1994). Features associated with horseback riding are especially diagnostic in the hip joint (innominates, proximal femora). These features include superior elongation of the acetabulum, extension of the femoral head articular surface onto the anterior femoral neck (Poirier’s facets), and hypertrophy of attachment sites for the adductor magnus, adductor brevis, vastus lateralis, and gastrocnemius (medial head) muscles (Dutour & Buzhilova, 2014; Erikson et al., 2000; Reinhard et al., 1994). The development of these hip (and knee) muscles reflects the emphasis on stabilizing the hip and keeping the rider upright. Extensive osteoarthritis of the first metatarsals is suggestive of mechanical stresses associated with the placement of the first toe in a leather thong stirrup (Reinhard et al., 1994). In all settings studied, more males than females have pathological changes associated with horseback riding, indicating that men were more habitually engaged in behaviors involving the use of the horse than were women.
Vertebral osteoarthritis

The vertebral column has been studied in a large number of settings in the Americas (summarized in Bridges, 1992) and elsewhere. For prehistoric North America, these comparisons reveal a number of tendencies. First, prevalence is always greatest in the articular region between the fifth and sixth cervical vertebrae; second, there is a tendency for the lower thoracic vertebrae to be affected more than the upper thoracic vertebrae; third, the second to fourth lumbar vertebrae usually show the greatest degree of marginal lipping in comparison with other vertebrae; and finally, the region encompassing the seventh cervical vertebra to the upper thoracic vertebrae (to about the third thoracic vertebra) is always least affected by the disorder (Bridges, 1992). The relatively minimal amount of osteoarthritis in the thoracic vertebrae is due to the lower degree of movement in this region of the back (Waldron, 1993).
For a wide range of populations globally, the highest prevalence of vertebral osteoarthritis is in the lumbar spine, followed by the cervical spine (Bennike, 1985; Bridges, 1994; Gunness-Hey, 1980; Jurmain, 1990; Klaus et al., 2009; Merbs, 1983; Snow, 1974). Some human populations, however, show relatively higher levels of osteoarthritis in the cervical vertebrae. For example, cervical vertebral osteoarthritis is relatively elevated in the Spitalfields, London industrial urban group (Waldron, 1993). Similarly, Harappan populations from the Indus Valley display higher frequencies of osteophytes and articular surface pitting on cervical vertebral bodies than in either the lumbar or thoracic spine (Lovell, 1994). This pattern suggests an activity-related cause, such as the carrying of heavy loads on the head. Individuals in traditional agricultural communities and lower socioeconomic groups in urban settings in South Asia habitually carry loads on their heads (Lovell, 1994), including laundry bundles, water jars, firewood, and dirt-filled containers at construction sites. Clinical and observational studies confirm that the upper (cervical) spine is susceptible to injury and cumulative degenerative change in persons carrying heavy loads on their heads (Allison, 1984; Levy, 1968; Lovell, 1994). The greater severity of osteoarthritis in the cervical spine in women than in men suggests that the practice of burden-carrying with the use of the head is gender specific. For example, severity of cervical osteoarthritis is greater in adult females than adult males in the Romano-British Bath Gate population from Cirencester, England (Wells, 1982; and see Lovell, 1994).
5.3.2 Sexual dimorphism in osteoarthritis
Adult males and females show a wide range of variation in osteoarthritis prevalence in the prehistoric New World and other settings (Bridges, 1992). Some series show a greater prevalence in females than males (Rife, 2012) or no difference (Rojas-Sepúlveda et al., 2008), but in general, males show a consistently greater prevalence of osteoarthritis than females, regardless of subsistence strategy or sociopolitical complexity (Hemphill, 2010; Klaus et al., 2009; Larsen, 1982; Larsen & Hutchinson, 2010; Novak & Šlaus, 2011; Sofaer Derevenski, 2000; Waldron, 1992; Webb, 1995; Woo & Sciulli, 2013; and many others). Sex comparisons for prehistoric foragers from coastal Georgia reveal statistically significant differences between males and females for lumbar (69.2% vs. 32.1%) and shoulder (10.5% vs. 2.4%) joints (Larsen, 1982). In later prehistoric agriculturalists, more articulations show significant differences, including the cervical, thoracic, and lumbar vertebrae, elbow, and knee joints. A similar pattern of increase in sexual dimorphism has been documented in prehistoric northwest Alabama (Bridges, 1991a). In this setting, differences in osteoarthritis prevalence between Archaic-period males and females are not statistically significant, whereas later Mississippian-period males have more severe osteoarthritis than females (Bridges, 1991a). These patterns in Georgia and Alabama do not specifically define behaviors associated with either sex, but they are suggestive of contrasting patterns of physical activity (see later). The presence of more significant differences between agriculturalist males and females in both settings suggests the possibility that sex differences in labor demands were greater in later than in earlier prehistory. Similarly, comparisons of foragers from Indian Knoll, Kentucky, with maize agriculturalists from Averbuch, Tennessee, indicate different prevalence of osteoarthritis between adult males and females (Pierce, 1987). For example, Indian Knoll males have significantly greater frequencies of shoulder, hip, and knee osteoarthritis than females;
Averbuch males have significantly greater frequencies of osteoarthritis for the shoulder and hip, but not the knee. This pattern is suggestive of change in workload and activity with the adoption of agriculture. Unlike males, agriculturalist females from the lower Illinois River valley have a higher prevalence of vertebral osteoarthritis than forager females from the same region (Pickering, 1984). These differences are especially pronounced in the cervical vertebrae, which may be related to an increase in mechanical demand on this region of the skeleton with the shift to agriculture (Pickering, 1984).

Fahlström (1981) identified an unusually high prevalence and severity of shoulder osteoarthritis in adult males in the Medieval skeletal series from Westerhus, Sweden. Historical analysis of this population suggests that the high frequency in males reflects work and activity practices that were exclusive to men, including parrying in sword fighting, spear throwing, timber cutting, and other activities associated with repetitive, heavy loading of the shoulder joint (Fahlström, 1981).

Some analyses reveal no appreciable differences between males and females. For example, males and females in the Dickson Mounds, Illinois, series show no differences in prevalence of appendicular osteoarthritis (Goodman, Lallo et al., 1984; Lallo, 1973). The similarity between the sexes implies that mechanical loading of most articular joints in this setting was broadly the same in adults regardless of sex, in contrast to most other prehistoric Eastern Woodlands populations (compare with Bridges, 1992). Similarly, documentation of osteoarthritis prevalence in two series of African American adults from nineteenth- and twentieth-century Washington, DC shows no appreciable differences between adult males and females, suggesting that labor demands were similar for men and women in this setting (Watkins, 2012).

Two clear trends emerge when examining sex differences (Bridges, 1992). First, where there are statistically significant differences between males and females, males nearly universally show a higher prevalence of osteoarthritis than females. Second, when looking at specific regions of the New World, maize agriculturalists tend to display more sexual dimorphism in degenerative pathology than foragers. This suggests a difference in behavior leading to degeneration of articular joints in agriculturalists but not in earlier foragers, and the change in pattern of sexual dimorphism suggests that there was a fundamental shift in the division of labor once agriculture was adopted (Bridges, 1992).
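The significance tests cited in this section are typically simple contingency-table comparisons of affected versus unaffected counts by sex. The Python sketch below runs a Fisher's exact test on hypothetical counts chosen only to mirror the Georgia forager lumbar proportions quoted earlier (69.2% of affected males vs. 32.1% of affected females); the counts themselves are not Larsen's (1982) data.

    from scipy.stats import fisher_exact

    # (affected, unaffected) by sex; hypothetical counts approximating
    # 69.2% affected males and 32.1% affected females
    males = (18, 8)
    females = (9, 19)

    odds_ratio, p = fisher_exact([males, females])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
    # A small p-value would support a real sex difference in prevalence.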
5.3.3 Age variation
The documentation of age-at-onset of osteoarthritis should provide an indication of when individuals enter the work force. In the late prehistoric Ledders series from the lower Illinois River valley, elbow and wrist osteoarthritis commences earlier in females than in males, which may indicate that women were subjected to the mechanical demands of adulthood earlier than men (Pickering, 1984). Eskimos have the earliest age-at-onset in comparison with Southwestern (Pecos Pueblo) agriculturalists and urbanized American Whites and Blacks (Jurmain, 1977a). These differences reflect the relatively greater mechanical demands on the Eskimos in comparison with other human populations. Interpretation of intra- and inter-population differences in osteoarthritis prevalence must consider age structure as it is such an important predisposing factor. For example, females have a greater prevalence of osteoarthritis than males in all but three of 16 joints in a series of human
remains from coastal British Columbia (Cybulski, 1992). Adult females are older than the adult males in the assemblage. Thus, an unusually high prevalence in females relative to males is likely due to the difference in age composition rather than variation in mechanical environment.
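One standard way to control for the age-composition effect just described is direct age standardization, in which each group's age-specific prevalences are weighted by a common reference age structure. The sketch below is a minimal illustration with entirely hypothetical numbers, not data from Cybulski (1992).

    # age-specific prevalence (proportion affected), hypothetical values
    female_prev = {"20-34": 0.10, "35-49": 0.35, "50+": 0.70}
    male_prev   = {"20-34": 0.12, "35-49": 0.33, "50+": 0.68}

    # a common reference age structure (proportions summing to 1.0)
    reference = {"20-34": 0.40, "35-49": 0.35, "50+": 0.25}

    def standardized(prev, ref):
        """Weight each age-specific prevalence by the reference age structure."""
        return sum(prev[age] * ref[age] for age in ref)

    print(f"females: {standardized(female_prev, reference):.3f}")  # 0.338
    print(f"males:   {standardized(male_prev, reference):.3f}")    # 0.334
    # Nearly identical standardized values would indicate that a raw female
    # excess reflects an older female age composition rather than heavier
    # mechanical loading.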
5.3.4 Social rank and work pattern
There is a clear and growing record of inequality in health and quality-of-life in hierarchical societies: around the world, individuals of greater prestige consistently display better health than individuals of lower prestige (Farmer, 2003; Strickland & Shetty, 1998). The study of human remains from archaeological contexts provides a crucial perspective for examining the origins and evolution of hierarchy and inequality, especially as they relate to health outcomes (Cohen, 1998), including outcomes related to labor and workload (Klaus, 2012). Comparison of osteoarthritis prevalence and severity between social classes in prehistoric stratified societies suggests that higher-status individuals were exposed to less demanding activities than lower-status individuals.

Archaeological evidence indicates that Middle Woodland populations in the lower Illinois River valley were hierarchical and organized on the basis of ascribed (hereditary) statuses (Tainter, 1980). The hierarchy of different social ranks is clearly displayed in the contrasting levels of energy expenditure in tomb construction: a great deal of energy and resources were devoted to the construction of elaborate tombs for high-status individuals. The highest-rank graves include individuals who were either interred in or processed through large, log-roofed tombs located at the centers of individual mounds. Little energy was expended on the construction of tombs for low-status individuals; their graves are simple and unadorned. Analysis of shoulder, elbow, and knee osteoarthritis in skeletons from the Pete Klunk and Gibson mound groups reveals that the highest-ranking adults over the age of 35 years display less severe elbow osteoarthritis than lower-ranked individuals, and high-ranking females have less severe knee osteoarthritis than females from the other ranks (Tainter, 1980).

Similarly, in the Middle Sicán culture (AD 900–1100) of the Lambayeque Valley on the Pacific coast of Peru, Klaus (2012) tested the hypothesis that the social hierarchy would display marked differences in workload. This hypothesis was based on mortuary analysis, which found evidence for a small dominant ruling class interred with artifacts associated with remarkable wealth and with access to food obtained through the labor of individuals from lower social classes. The comparison of osteoarthritis between elite and non-elite individuals in this setting is striking: non-elite individuals are 3.3 times more likely than the elite to have osteoarthritis of the shoulder, 7.1 times more likely for the elbow, 3.6 times more likely for the thoracic vertebrae, 3.8 times more likely for the lumbar vertebrae, and 4.2 times more likely for the hip. Overall, then, there is a strong association between status and workload in the Middle Sicán period of coastal Peru. These findings strongly suggest that non-elite individuals were involved in activity regimens that increased the likelihood of excessive joint loading in a range of motions.
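Statements of the form “non-elite individuals are 3.3 times more likely to have shoulder osteoarthritis” usually rest on an odds ratio computed from a 2 x 2 table of affected versus unaffected individuals by class. The Python sketch below shows the calculation, with a Woolf-type 95% confidence interval; the counts are hypothetical, contrived only to yield an odds ratio of 3.3, and are not Klaus's (2012) data.

    import math

    def odds_ratio(a, b, c, d):
        """Odds ratio for affected/unaffected counts: non-elite (a, b) vs. elite (c, d)."""
        return (a / b) / (c / d)

    a, b = 33, 17   # non-elite: affected, unaffected (hypothetical)
    c, d = 10, 17   # elite: affected, unaffected (hypothetical)

    or_value = odds_ratio(a, b, c, d)

    # Woolf approximation for a 95% confidence interval on ln(OR)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_value) - 1.96 * se)
    high = math.exp(math.log(or_value) + 1.96 * se)
    print(f"OR = {or_value:.1f} (95% CI {low:.1f}-{high:.1f})")  # OR = 3.3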
5.3.5 Temporal trends and adaptive shifts
The prior discussion underscores the tremendous range of variation in osteoarthritis prevalence and pattern, linking the condition to lifestyle, food acquisition, food preparation, age, social status, and other circumstances. Comparisons of prehistoric foragers and farmers from different settings (Jurmain, 1977a, 1977b, 1978, 1980) indicate differences in osteoarthritis – and presumably workload and activity – in relation to subsistence. Regionally based temporal studies of osteoarthritis give additional perspective on change in functional demand as populations underwent adaptive shifts in the past. Based on comparisons of earlier and later societies from the same region, it has become possible to assess the relative labor costs of change in economic focus, at least as these costs are measured by mechanical stress. The most extensive temporal studies of osteoarthritis have been completed for several settings in North America.

The study of osteoarthritis prevalence in Archaic-period hunter-gatherers and later Mississippian-period maize agriculturalists from northwestern Alabama suggests changes in activity and workload, especially when viewed in the context of diet and lifeway (Bridges, 1991a). Archaic-period populations exploited a range of terrestrial and riverine animals and plants, including deer, raccoon, beaver, fish and shellfish, and wild plants, along with limited cultivation of sunflower, sumpweed, chenopod, squash, and bottle gourd (Dye, 1977). These populations moved seasonally from river valleys to nearby uplands. Later prehistoric groups were intensive maize agriculturalists, but also exploited a limited number of species of nondomesticated plants and animals (Smith, 1986). These later groups were largely sedentary and lived primarily in villages on river floodplains, although smaller temporary upland habitations were used seasonally for hunting deer and other animals (e.g., small mammals, turkey, waterfowl). In summary, although sharing some features, the subsistence strategies and settlement patterns of the earlier and later periods were very different. Because foraging and farming involved very different kinds of physical activity, the respective populations should display different prevalences and patterns of osteoarthritis.

Comparisons of shoulder, elbow, wrist, hip, knee, and ankle osteoarthritis show a number of important temporal trends in the Alabama series (Bridges, 1991a). For individuals 30–49 years of age-at-death, the Archaic group generally has more osteoarthritis than the Mississippian group, and these differences are especially consistent for males (Table 5.2). Statistically significant differences between periods are present in only a few of the joints; however, the overall greater prevalence in the Archaic sample is clear. The severity of osteoarthritis tells the same story: Archaic populations generally show greater severity of the disorder than Mississippian populations. The pattern of degenerative pathology is remarkably similar in the prehistoric foragers and farmers and in the males and females within each group – for all samples, osteoarthritis is most common in the elbow, shoulder, and knee, and least common in the hip, ankle, and wrist.

Prehistoric and contact-period human remains representing a temporal succession of Native American populations living in the Georgia Bight have been the focus of research on physical activity and behavioral change by Larsen and coworkers (Fresia et al., 1990; Larsen, 1982, 1984, 1998; Larsen & Ruff, 1991, 1994, 2011; Larsen et al., 2007; Ruff & Larsen, 1990; Ruff et al., 1984). Temporal comparison of osteoarthritis prevalence shows a distinctive decline in prehistoric farmers relative to earlier foragers.
For the series as a whole (sexes combined), statistically significant reductions occur for the lumbar vertebrae (26.2%), elbow (6.8%), wrist (4.5%), hip (3.8%), knee (7.2%), and ankle (4.0%). The frequency of osteoarthritis either declines or does not change in all other joints. Both adult females and adult males show the same trend of reduction, with significant reductions in more joints in females than in males (six joints versus three; and see earlier). The pattern of osteoarthritis prevalence in the
Table 5.2 Percentage of individuals with moderate to severe osteoarthritis, aged 30–49 years (Adapted from Bridges, 1991a: Table 2). Values are percentages, with the number of individuals observed (n) in parentheses.

                      Males                                               Females
            Archaic                  Mississippian            Archaic                  Mississippian
Joint       Left         Right       Left         Right       Left         Right       Left         Right
Shoulder    36.8 (19)    42.1 (19)   30.0b (20)   30.4 (23)   7.7 (13)     28.6 (14)   10.0 (20)    17.6 (17)
Elbow       27.3 (22)    40.9 (22)   28.0b (25)   24.0 (25)   26.4 (19)    37.6 (16)   15.8 (19)    20.0 (20)
Wrist       9.5 (21)     15.8 (19)   0.0 (23)     17.4 (23)   0.0 (13)     6.7 (15)    5.6 (18)     0.0 (14)
Hip         5.0 (20)     5.0 (20)    0.0 (21)     0.0 (21)    0.0 (13)     0.0 (10)    7.1 (14)     0.0 (17)
Knee        27.3 (22)    31.8 (22)   21.7 (23)    8.6 (23)    15.8 (19)    22.3 (18)   21.1 (19)    23.5 (17)
Ankle       23.8a–c (21) 0.0 (22)    0.0b (24)    4.8b (21)   0.0d (18)    5.9 (17)    0.0 (16)     0.0 (19)

a Frequency significantly greater in males than in females (chi-square: P
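The chi-square comparisons flagged in the table footnotes can be approximated from the percentages and sample sizes shown above by back-calculating counts of affected and unaffected individuals. A minimal sketch follows, assuming Python with scipy; the reconstruction is illustrative and may not reproduce Bridges' (1991a) original tests exactly.

# Back-calculate a 2x2 table (affected vs. unaffected, period vs. period)
# from a reported prevalence and sample size, then test it.
from scipy.stats import chi2_contingency

def cell_counts(prevalence_pct, n):
    """Return [affected, unaffected] implied by a percentage and sample size."""
    affected = round(prevalence_pct / 100 * n)
    return [affected, n - affected]

# Male left ankle: 23.8% of 21 (Archaic) vs. 0.0% of 24 (Mississippian)
table = [cell_counts(23.8, 21), cell_counts(0.0, 24)]
print(table)  # [[5, 16], [0, 24]]

chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 2), round(p, 4))  # small p suggests a real period difference

Note that several cells in Table 5.2 have expected frequencies below five, so exact tests (e.g., Fisher's) would arguably be more appropriate for the sparsest comparisons.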